3,167 Matching Results

Search Results


Assessment of Energy Storage Alternatives in the Puget Sound Energy System Volume 2: Energy Storage Evaluation Tool

Description: This volume presents the battery storage evaluation tool developed at Pacific Northwest National Laboratory (PNNL), which is used to evaluate the benefits of battery storage for multiple grid applications, including energy arbitrage, balancing services, capacity value, distribution system equipment deferral, and outage mitigation. The tool is based on an optimal control strategy designed to capture multiple services from a single energy storage device: at each hour, a look-ahead optimization is formulated and solved to determine the battery's base operating point, and a minute-by-minute simulation is then performed to model the actual battery operation. This volume provides background material and a user manual for the evaluation tool.
Date: December 1, 2013
Creator: Wu, Di; Jin, Chunlian; Balducci, Patrick J. & Kintner-Meyer, Michael CW
Partner: UNT Libraries Government Documents Department
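The hour-by-hour control loop the abstract describes can be sketched as a receding-horizon dispatch. Everything concrete below (prices, battery sizes, the brute-force inner "optimizer") is an invented stand-in for illustration, not the PNNL tool itself:

```python
# Hypothetical receding-horizon ("look-ahead") arbitrage dispatch sketch.
# Each hour: pick a first-hour charge/discharge decision by scoring it over a
# short price window, apply only that decision, then roll forward.

def lookahead_dispatch(prices, horizon=4, capacity=10.0, power=2.0, soc0=5.0):
    soc, plan = soc0, []
    levels = [-power, 0.0, power]          # discharge / idle / charge (MW)
    for t in range(len(prices)):
        window = prices[t:t + horizon]
        avg = sum(window) / len(window)
        best, best_cost = 0.0, float("inf")
        for p0 in levels:                  # candidate first-hour decision
            s = soc + p0
            if not 0.0 <= s <= capacity:
                continue                   # respect state-of-charge bounds
            cost, s2 = prices[t] * p0, s
            for price in window[1:]:       # greedy completion as a cheap proxy
                if price > avg and s2 >= power:
                    step = -power          # sell when price is above average
                elif s2 + power <= capacity:
                    step = power           # otherwise buy if room remains
                else:
                    step = 0.0
                cost += price * step
                s2 += step
            if cost < best_cost:
                best_cost, best = cost, p0
        soc += best                        # apply only the first decision
        plan.append(best)
    return plan

plan = lookahead_dispatch([20, 35, 15, 40, 25, 30, 10, 45])
```

The real tool follows each hourly base point with a minute-by-minute simulation; here the hourly loop alone conveys the rolling-horizon structure.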

Generalized Monge-Kantorovich optimization for grid generation and adaptation in Lₚ

Description: The Monge-Kantorovich grid generation and adaptation scheme is generalized from a variational principle based on L₂ to a variational principle based on Lₚ. A generalized Monge-Ampere (MA) equation is derived and its properties are discussed. Results for p > 1 are obtained and compared in terms of the quality of the resulting grid. We conclude that for the grid generation application, the formulation based on Lₚ with p close to unity leads to serious problems associated with the boundary. Results for 1.5 ≲ p ≲ 2.5 are quite good, but there is a fairly narrow range around p = 2 where the results are closest to optimal with respect to grid distortion. Furthermore, the Newton-Krylov methods used to solve the generalized MA equation perform best for p = 2.
Date: January 1, 2009
Creator: Delzanno, G L & Finn, J M
Partner: UNT Libraries Government Documents Department
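For orientation, the p = 2 case of this kind of variational principle takes a standard form; the sketch below simplifies boundary conditions and uses illustrative density names, with the Lₚ generalization replacing the squared distance by a p-th power:

```latex
% Sketch of the p = 2 Monge-Kantorovich principle (the L_p version replaces
% |\xi - x|^2 by |\xi - x|^p); \rho, \sigma are illustrative density names.
\min_{\xi}\; \int_{\Omega} \lvert \xi(x) - x \rvert^{2}\, dx
\qquad \text{subject to} \qquad
\rho\bigl(\xi(x)\bigr)\, \det \nabla \xi(x) = \sigma(x).
% Writing \xi = \nabla\Phi turns the constraint into a Monge-Ampere equation:
% \rho(\nabla\Phi)\, \det D^{2}\Phi = \sigma(x).
```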

Final Report---Next-Generation Solvers for Mixed-Integer Nonlinear Programs: Structure, Search, and Implementation

Description: The mathematical modeling of systems often requires the use of both nonlinear and discrete components. Problems involving both discrete and nonlinear components are known as mixed-integer nonlinear programs (MINLPs) and are among the most challenging computational optimization problems. This research project added to the understanding of this area by making a number of fundamental advances. First, the work demonstrated many novel, strong, tractable relaxations designed to deal with non-convexities arising in mathematical formulation. Second, the research implemented the ideas in software that is available to the public. Finally, the work demonstrated the importance of these ideas on practical applications and disseminated the work through scholarly journals, survey publications, and conference presentations.
Date: May 30, 2013
Creator: Linderoth, Jeff T. & Luedtke, James R.
Partner: UNT Libraries Government Documents Department

An Adaptive Linearization Method for a Constraint Satisfaction Problem in Semiconductor Device Design Optimization

Description: Device optimization is an important element of semiconductor technology advancement. Its objective is to find a design point for a semiconductor device such that the optimized design goal meets all specified constraints. As in other engineering fields, a nonlinear optimizer is often used for design optimization. One major drawback of using a nonlinear optimizer is that it can only partially explore the design space and returns a locally optimal solution. This dissertation provides an adaptive optimization design methodology that allows the designer to explore the design space and obtain a globally optimal solution. One key element of our method is quickly computing the set of all feasible solutions, also called the acceptability region. We describe a polytope-based representation for the acceptability region and an adaptive linearization technique for approximating device performance models. These efficiency enhancements enable significant speed-up in estimating acceptability regions and allow acceptability regions to be estimated for a larger class of device design tasks. Our linearization technique also provides an efficient mechanism to guarantee the global accuracy of the computed acceptability region. To visualize the acceptability region, we study the orthogonal projection of high-dimensional convex polytopes and propose an output-sensitive algorithm for projecting polytopes into two dimensions.
Date: May 1999
Creator: Chang, Chih-Hui, 1967-
Partner: UNT Libraries
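The polytope representation of an acceptability region reduces to a set of linear inequalities; a minimal sketch with invented constraints (not a real device model) shows membership testing and a Monte Carlo size estimate:

```python
# Hypothetical sketch: acceptability region kept as the polytope {x : A x <= b}
# built from linearized performance constraints, with a membership test and a
# Monte Carlo estimate of how much of a design box is acceptable.
import random

def inside(A, b, x, tol=1e-9):
    """Point membership in the polytope {x : A x <= b}."""
    return all(sum(ai * xi for ai, xi in zip(row, x)) <= bi + tol
               for row, bi in zip(A, b))

def volume_fraction(A, b, box, n=20000, seed=1):
    """Fraction of a bounding box lying inside the acceptability region."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        hits += inside(A, b, x)
    return hits / n

# Unit square cut by x + y <= 1: the region is a triangle of area 1/2.
A = [[1.0, 1.0]]
b = [1.0]
frac = volume_fraction(A, b, [(0.0, 1.0), (0.0, 1.0)])
```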

Final Report of the Simulation Optimization Task Force

Description: This is the final report of the ATLAS Simulation Optimization Task Force, established in June of 2007. This note justifies the selected Geant4 version, physics list, and range cuts to be used by the default ATLAS simulation for initial data taking and beyond. The current status of several projects, including detector description, simulation validation, studies of additional Geant4 parameters, and cavern background, is reported.
Date: December 15, 2008
Creator: Collaboration, ATLAS; Rimoldi, A.; Carli, T.; Dell'Acqua, A.; Froidevaux, D.; Gianotti, F. et al.
Partner: UNT Libraries Government Documents Department

Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures

Description: Understanding the most efficient design and utilization of emerging multicore systems is one of the most challenging questions faced by the mainstream and scientific computing industries in several decades. Our work explores multicore stencil (nearest-neighbor) computations -- a class of algorithms at the heart of many structured grid codes, including PDE solvers. We develop a number of effective optimization strategies, and build an auto-tuning environment that searches over our optimizations and their parameters to minimize runtime, while maximizing performance portability. To evaluate the effectiveness of these strategies we explore the broadest set of multicore architectures in the current HPC literature, including the Intel Clovertown, AMD Barcelona, Sun Victoria Falls, IBM QS22 PowerXCell 8i, and NVIDIA GTX280. Overall, our auto-tuning optimization methodology results in the fastest multicore stencil performance to date. Finally, we present several key insights into the architectural trade-offs of emerging multicore designs and their implications on scientific algorithm development.
Date: August 22, 2008
Creator: Datta, Kaushik; Murphy, Mark; Volkov, Vasily; Williams, Samuel; Carter, Jonathan; Oliker, Leonid et al.
Partner: UNT Libraries Government Documents Department
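The auto-tuning loop described above, searching optimization parameters to minimize measured runtime, can be illustrated with one knob: cache-block size for a 2D 5-point Jacobi stencil. Pure-Python lists stand in for the tuned C/CUDA kernels of the paper:

```python
# Illustrative auto-tuning sketch: time several block sizes for a 5-point
# stencil sweep and keep the fastest. All candidate sizes produce identical
# numerical results; only traversal order (and thus runtime) differs.
import time

def jacobi(grid, blocks=1):
    n = len(grid)
    out = [row[:] for row in grid]         # boundary rows/cols copied as-is
    step = max(1, (n - 2) // blocks)
    for bi in range(1, n - 1, step):       # loop over row blocks
        for i in range(bi, min(bi + step, n - 1)):
            for j in range(1, n - 1):
                out[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1])
    return out

def autotune(grid, candidates=(1, 2, 4, 8)):
    best, best_t = None, float("inf")
    for blocks in candidates:              # search the optimization parameter
        t0 = time.perf_counter()
        jacobi(grid, blocks)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best, best_t = blocks, dt
    return best

g = [[float(i * j % 7) for j in range(34)] for i in range(34)]
best_blocks = autotune(g)
```

In Python the timing differences are noise; the point is the search structure, which in the paper spans many optimizations and architectures.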

The Coupling of ESP-R and Genopt: A Simple Case Study

Description: This paper describes and demonstrates how to use the optimization program GenOpt with the building energy simulation program ESP-r. GenOpt, a generic optimization program, minimises an objective function that is evaluated by an external simulation program. It has been developed for optimization problems that are computationally expensive and that may have nonsmooth objective functions. ESP-r is a research-oriented building simulation program that is well validated and has been used to conduct various building energy analysis studies. In this paper, the necessary file preparations are described and a simple optimization example is presented.
Date: July 1, 2010
Creator: Peeters, Leen; D'haeseleer, William; Ferguson, Alex & Wetter, Michael
Partner: UNT Libraries Government Documents Department
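The coupling pattern, an optimizer that only ever sees objective values produced by an external program, can be sketched generically. The "simulation" below is a stub one-liner, not ESP-r, and the optimizer is plain interval refinement rather than GenOpt's algorithms:

```python
# Sketch of the GenOpt coupling pattern: evaluate the objective by launching
# an external process and parsing its output, then optimize over that.
import subprocess, sys

def simulate(x):
    """Run the external 'simulation' and read the objective from stdout."""
    cmd = [sys.executable, "-c", f"print(({x} - 3.0) ** 2 + 1.0)"]
    return float(subprocess.run(cmd, capture_output=True, text=True).stdout)

def minimize(lo, hi, rounds=4, pts=9):
    for _ in range(rounds):                # successive interval refinement
        xs = [lo + (hi - lo) * k / (pts - 1) for k in range(pts)]
        best = min(xs, key=simulate)
        span = (hi - lo) / (pts - 1)
        lo, hi = best - span, best + span
    return best

x_opt = minimize(0.0, 10.0)
```

In the real coupling, GenOpt writes ESP-r input files from templates and parses result files; the subprocess boundary above is the essential idea.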

Steepest Descent

Description: The steepest descent method has a rich history and is one of the simplest and best known methods for minimizing a function. While the method is not commonly used in practice due to its slow convergence rate, understanding the convergence properties of this method can lead to a better understanding of many of the more sophisticated optimization methods. Here, we give a short introduction and discuss some of the advantages and disadvantages of this method. Some recent results on modified versions of the steepest descent method are also discussed.
Date: February 12, 2010
Creator: Meza, Juan C.
Partner: UNT Libraries Government Documents Department
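A minimal steepest descent iteration makes the slow, zigzagging behavior mentioned above concrete. The quadratic objective and fixed step size are chosen only for this example:

```python
# Steepest descent on f(x, y) = x^2 + 10*y^2: elongated contours force the
# iterates to zigzag in y while creeping along x, the classic slow-convergence
# picture. Step size fixed at 0.09 to keep this particular example stable.

def grad(p):
    x, y = p
    return (2.0 * x, 20.0 * y)

def steepest_descent(p, step=0.09, iters=200):
    path = [p]
    for _ in range(iters):
        g = grad(p)
        p = (p[0] - step * g[0], p[1] - step * g[1])   # move along -gradient
        path.append(p)
    return path

path = steepest_descent((5.0, 1.0))
```

The y-coordinate flips sign each step (factor 1 - 0.09*20 = -0.8) while shrinking, which is the zigzag; line search or the modified variants the note surveys damp this.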

Computing Criticality of Lines in Power Systems

Description: We propose a computationally efficient method based on nonlinear optimization to identify critical lines, the failure of which can cause severe blackouts. Our method computes a criticality measure for all lines at a time, as opposed to detecting a single vulnerability, providing a global view of the system. This information on the criticality of lines can be used to identify multiple contingencies by selectively exploring multiple combinations of broken lines. The effectiveness of our method is demonstrated on the IEEE 30 and 118 bus systems, where we can very quickly detect the most critical lines in the system and identify severe multiple contingencies.
Date: October 13, 2006
Creator: Pinar, Ali; Reichert, Adam & Lesieutre, Bernard
Partner: UNT Libraries Government Documents Department

Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments

Description: Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is the exploration of an unknown environment with the goal of finding one or more targets at unknown locations using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive received-signal-strength (RSS) weighting factor to guide robots in locating targets in high-risk environments. The approach was developed and analyzed for multi-robot single- and multiple-target search, and further extended to multi-robot, multi-target search in noisy environments. The experimental results demonstrate how the availability of the radio frequency signal can significantly affect the time robots need to reach a target.
Date: May 1, 2009
Creator: Derr, Kurt & Manic, Milos
Partner: UNT Libraries Government Documents Department
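The underlying PSO update, each "robot" steered by its own best reading and the swarm's best, can be sketched in a few lines. A plain distance-based fitness and fixed weights stand in for the paper's RSS model and adaptive weighting factor:

```python
# Bare-bones PSO sketch: particles mix inertia, a pull toward their personal
# best, and a pull toward the swarm best. Fitness here is squared distance to
# a known target, an invented proxy for received signal strength.
import random

def pso(target, n=12, iters=120, seed=3):
    rng = random.Random(seed)
    def fitness(p):                        # proxy for "signal strength" at p
        return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.6 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

found = pso((4.0, -2.0))
```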

APPSPACK 4.0 : asynchronous parallel pattern search for derivative-free optimization.

Description: APPSPACK is software for solving unconstrained and bound constrained optimization problems. It implements an asynchronous parallel pattern search method that has been specifically designed for problems characterized by expensive function evaluations. Using APPSPACK to solve optimization problems has several advantages: No derivative information is needed; the procedure for evaluating the objective function can be executed via a separate program or script; the code can be run in serial or parallel, regardless of whether or not the function evaluation itself is parallel; and the software is freely available. We describe the underlying algorithm, data structures, and features of APPSPACK version 4.0 as well as how to use and customize the software.
Date: December 1, 2004
Creator: Gray, Genetha Anne & Kolda, Tamara Gibson
Partner: UNT Libraries Government Documents Department
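The pattern-search idea behind APPSPACK, poll trial points along a fixed set of directions, accept improvement, contract on failure, fits in a short serial sketch. The asynchronous parallel evaluation and the actual APPSPACK interfaces are not reproduced here:

```python
# Serial compass-search sketch of derivative-free pattern search: no gradients,
# only objective comparisons at points on a shrinking pattern.

def pattern_search(f, x, step=1.0, tol=1e-6, max_iter=10000):
    n = len(x)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for s in (+step, -step):       # compass directions +/- e_i
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:                # accept the first improvement
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                    # contract the pattern
            if step < tol:
                break
    return x, fx

x_opt, f_opt = pattern_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                              [5.0, 5.0])
```

Because each poll point is an independent function evaluation, the polls parallelize naturally, which is what APPSPACK exploits asynchronously for expensive objectives.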

Flux Control in Networks of Diffusion Paths

Description: A class of optimization problems in networks of intersecting diffusion domains having the special form of thin paths is considered. The system of equations describing stationary solutions is equivalent to an electrical circuit built of intersecting conductors. The solution of an optimization problem has been obtained and extended to the analogous electrical circuit. The interest in this network arises from, among other applications, wave-particle diffusion through resonant interactions in plasma.
Date: July 8, 2009
Creator: Zhmoginov, A. I. & Fisch, N. J.
Partner: UNT Libraries Government Documents Department

Derivative-free optimization methods for surface structuredetermination of nanosystems

Description: Many properties of nanostructures depend on the atomic configuration at the surface. One common technique used for determining this surface structure is based on the low energy electron diffraction (LEED) method, which uses a high-fidelity physics model to compare experimental results with spectra computed via a computer simulation. While this approach is highly effective, the computational cost of the simulations can be prohibitive for large systems. In this work, we propose the use of a direct search method in conjunction with an additive surrogate. This surrogate is constructed from a combination of a simplified physics model and an interpolation based on the differences between the simplified physics model and the full-fidelity model.
Date: October 18, 2007
Creator: Meza, Juan C.; Garcia-Lekue, Arantzazu; Abramson, Mark A. & Dennis, John E.
Partner: UNT Libraries Government Documents Department
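The additive-surrogate construction, a cheap model plus an interpolant of its discrepancy from the expensive model at a few samples, can be sketched in one dimension. Linear interpolation stands in for whatever correction model one would actually use, and both models below are invented:

```python
# Additive surrogate sketch: surrogate(x) = g(x) + interp(f - g), where f is
# the expensive full-fidelity model, g the cheap simplified model, and the
# discrepancy f - g is sampled once at a handful of points.
import bisect

def make_surrogate(f, g, xs):
    diffs = [f(x) - g(x) for x in xs]      # expensive evaluations, done once
    def surrogate(x):
        if x <= xs[0]:  return g(x) + diffs[0]
        if x >= xs[-1]: return g(x) + diffs[-1]
        i = bisect.bisect_left(xs, x)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return g(x) + (1 - t) * diffs[i - 1] + t * diffs[i]
    return surrogate

f = lambda x: x ** 2 + 0.3 * (x ** 3)      # stand-in "full-fidelity" model
g = lambda x: x ** 2                       # stand-in simplified physics
s = make_surrogate(f, g, [0.0, 0.5, 1.0, 1.5, 2.0])
```

The surrogate reproduces f exactly at the samples and tracks it between them better than g alone, so a direct search can run mostly on s, consulting f sparingly.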

Analysis of the structure of complex networks at different resolution levels

Description: Modular structure is ubiquitous in real-world complex networks, and its detection is important because it gives insight into the structure-functionality relationship. The standard approach is based on the optimization of a quality function, modularity, which is a relative quality measure for a partition of a network into modules. Recently, some authors have pointed out that the optimization of modularity has a fundamental drawback: the existence of a resolution limit beyond which no modular structure can be detected, even though these modules might have an identity of their own. The reason is that several topological descriptions of the network coexist at different scales, which is, in general, a fingerprint of complex systems. Here we propose a method that allows for multiple-resolution screening of the modular structure. The method has been validated using synthetic networks, discovering the predefined structures at all scales. Its application to two real social networks allows us to find the exact splits reported in the literature, as well as the substructure beyond the actual split.
Date: February 28, 2008
Creator: Arenas, A.; Fernandez, A. & Gomez, S.
Partner: UNT Libraries Government Documents Department
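The quality function at the center of this line of work is Newman-Girvan modularity, which can be computed directly from its definition, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ). The toy graph below (two triangles joined by one edge) is illustrative:

```python
# Direct-from-definition modularity of a partition; O(n^2 m), fine for toys.

def modularity(edges, comm):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    nodes = list(deg)
    for i in nodes:
        for j in nodes:                    # ordered pairs, A symmetric
            a = sum((u, v) in ((i, j), (j, i)) for u, v in edges)
            if comm[i] == comm[j]:
                q += a - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split = modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})  # per-triangle
lump  = modularity(edges, {n: 0 for n in range(6)})              # one module
```

Splitting the two triangles scores higher than lumping everything together, which is what a modularity optimizer exploits; the resolution limit discussed above concerns when such gains vanish for small modules in large networks.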

Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization.

Description: A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks ''Is that alternative better or worse than this one?'' rather than ''How much better or worse is that alternative than this one?'' The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling the likelihood of error in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, and so could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
Date: November 1, 2006
Creator: Romero, Vicente José & Chen, Chun-Hung (George Mason University, Fairfax, VA)
Partner: UNT Libraries Government Documents Department
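The "better or worse, not how much" question admits a bare-bones sketch: rank two noisy designs by majority vote over paired samples, never estimating magnitudes. The two noisy "designs" below are invented stand-ins:

```python
# Ordinal-comparison sketch: decide which design is better purely by relative
# ranking of sampled outcomes, the core move of ordinal optimization.
import random

def ordinal_better(draw_a, draw_b, votes=301, seed=7):
    rng = random.Random(seed)
    wins = sum(draw_a(rng) < draw_b(rng) for _ in range(votes))
    return wins > votes // 2               # majority rank, no magnitude

a = lambda rng: 1.0 + rng.gauss(0.0, 2.0)  # lower-mean objective, very noisy
b = lambda rng: 2.0 + rng.gauss(0.0, 2.0)
a_wins = ordinal_better(a, b)
```

Even with noise twice the mean gap, the ranking comes out right; the report's method additionally adapts the sampling effort and exploits spatial correlation, which this sketch omits.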

Optimization of the cooling profile to achieve crack-free Yb:S-FAP crystals

Description: Yb:S-FAP [Yb³⁺:Sr₅(PO₄)₃F] crystals are an important gain medium for diode-pumped laser applications. Growth of 7.0 cm diameter Yb:S-FAP crystals by the Czochralski (CZ) method from SrF₂-rich melts often encounters cracking during the post-growth cool-down stage. To suppress cracking during cool down, a numerical simulation of the growth system was used to understand the correlation between the furnace power during cool down and the radial temperature differences within the crystal. The critical radial temperature difference, above which the crystal cracks, was determined by benchmarking the simulation results against experimental observations. Based on this comparison, an optimal three-stage ramp-down profile was implemented and produced high-quality, crack-free Yb:S-FAP crystals.
Date: August 20, 2007
Creator: Fang, H; Qiu, S; Kheng, L; Schaffers, K; Tassano, J; Caird, J et al.
Partner: UNT Libraries Government Documents Department

Nonlinearly-constrained optimization using asynchronous parallel generating set search.

Description: Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are discontinuous and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
Date: May 1, 2007
Creator: Griffin, Joshua D. & Kolda, Tamara Gibson
Partner: UNT Libraries Government Documents Department
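The contrast between the smooth ℓ₂² penalty and an exact ℓ₁ penalty shows up even on a one-variable toy. The solver below is a crude grid search, not GSS, and the test problem (min x² subject to x ≥ 1) is invented for illustration:

```python
# Penalty-method sketch: fold the constraint x >= 1 into the objective and
# compare the classical smooth l2-squared penalty with the exact l1 penalty
# at a finite penalty weight rho.

def viol(x):                               # constraint violation of x >= 1
    return max(0.0, 1.0 - x)

def solve(penalty, rho=10.0, lo=-2.0, hi=3.0, n=50001):
    xs = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    return min(xs, key=lambda x: x * x + rho * penalty(viol(x)))

x_sq = solve(lambda v: v * v)              # smooth l2^2: minimizer undershoots
x_l1 = solve(lambda v: v)                  # exact l1: recovers x = 1 exactly
```

With finite rho the ℓ₂² penalty settles at x = 10/11, strictly infeasible, while the exact ℓ₁ penalty lands on the constrained optimum; its kink at the boundary is the nondifferentiability the abstract's smoothed variants trade away.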

Risk-Informed Monitoring, Verification and Accounting (RI-MVA). An NRAP White Paper Documenting Methods and a Demonstration Model for Risk-Informed MVA System Design and Operations in Geologic Carbon Sequestration

Description: This white paper accompanies a demonstration model that implements methods for the risk-informed design of monitoring, verification and accounting (RI-MVA) systems in geologic carbon sequestration projects. The intent is that this model will ultimately be integrated with, or interfaced with, the National Risk Assessment Partnership (NRAP) integrated assessment model (IAM). The RI-MVA methods described here apply optimization techniques in the analytical environment of NRAP risk profiles to allow systematic identification and comparison of the risk and cost attributes of MVA design options.
Date: September 30, 2011
Creator: Unwin, Stephen D.; Sadovsky, Artyom; Sullivan, E. C. & Anderson, Richard M.
Partner: UNT Libraries Government Documents Department

SAR Imagery Segmentation by Statistical Region Growing and Hierarchical Merging

Description: This paper presents an approach to segmenting synthetic aperture radar (SAR) images, which are corrupted by speckle noise. Ordinary segmentation techniques may require prior speckle filtering. Our approach performs radar image segmentation using the original noisy pixels as input data, eliminating preprocessing steps, an advantage over most current methods. The algorithm combines a statistical region growing procedure with hierarchical region merging to extract regions of interest from SAR images. The region growing step over-segments the input image to enable region aggregation, employing a combination of the Kolmogorov-Smirnov (KS) test with a hierarchical stepwise optimization (HSWO) algorithm to coordinate the process. We have tested and assessed the proposed technique on an artificially speckled image and real SAR data containing different types of targets.
Date: May 22, 2010
Creator: Ushizima, Daniela Mayumi; Carvalho, E.A.; Medeiros, F.N.S.; Martins, C.I.O.; Marques, R.C.P. & Oliveira, I.N.S.
Partner: UNT Libraries Government Documents Department
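The merge criterion at the heart of the aggregation step, a two-sample KS test on raw pixel values, can be sketched directly. The threshold and the toy "regions" are illustrative, and the hierarchical stepwise optimization is not reproduced:

```python
# KS-based merge test sketch: two adjacent regions merge when the sup-distance
# between their empirical CDFs of pixel values is small.
import random

def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a + b)):           # sup over |F_a(x) - F_b(x)|
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

def should_merge(region1, region2, threshold=0.25):
    return ks_statistic(region1, region2) < threshold

rng = random.Random(5)
same  = [rng.gauss(100, 10) for _ in range(200)]   # same "texture"
same2 = [rng.gauss(100, 10) for _ in range(200)]
diff  = [rng.gauss(160, 10) for _ in range(200)]   # distinct "target"
```

Because the test compares whole value distributions rather than means, it tolerates speckle without prior filtering, the property the abstract emphasizes.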

Metamodeling-based Fast Optimization of Nanoscale Ams-socs

Description: Modern consumer electronic systems are mostly based on analog and digital circuits and are designed as analog/mixed-signal systems on chip (AMS-SoCs). The integration of analog and digital circuits on the same die makes the system cost effective. In AMS-SoCs, analog and mixed-signal portions have not traditionally received much attention due to their complexity. As fabrication technology advances, simulations of AMS-SoC circuits become more complex and take significant amounts of time, and the time allocated for circuit design and optimization creates a need to reduce simulation time. The time constraints placed on designers are imposed by the ever-shortening time to market and the non-recurrent cost of the chip. This dissertation proposes the use of a novel method, called metamodeling, and intelligent optimization algorithms to reduce the design time. Metamodel-based ultra-fast design flows are proposed and investigated. Metamodel creation is a one-time process and relies on fast sampling through accurate parasitic-aware simulations. One of the targets of this dissertation is to minimize the sample size while retaining the accuracy of the model. In order to achieve this goal, different statistical sampling techniques are explored and applied to various AMS-SoC circuits. Different metamodel functions are also explored for their accuracy and applicability to AMS-SoCs. Several optimization algorithms are compared for global optimization accuracy and convergence. Three AMS circuits present in many AMS-SoCs, a ring oscillator, an inductor-capacitor voltage-controlled oscillator (LC-VCO), and a phase-locked loop (PLL), are used in this study for design flow application. Metamodels created in this dissertation provide accuracy with an error of less than 2% relative to physical layout simulations. After the optimal sampling investigation, metamodel functions and optimization algorithms are ranked in terms of speed and accuracy.
Experimental results show that the proposed design flow provides roughly 5,000x speedup over conventional design flows. Thus, ...
Date: May 2012
Creator: Garitselov, Oleg
Partner: UNT Libraries
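The metamodel flow, sample the expensive simulation a few times, fit a cheap surrogate, then optimize the surrogate, can be sketched with a one-variable quadratic fit. The "circuit response" is a made-up function, not a parasitic-aware SPICE run:

```python
# Metamodeling sketch: least-squares quadratic metamodel y = a + b*x + c*x^2
# fitted via 3x3 normal equations, then minimized in closed form.

def fit_quadratic(xs, ys):
    n = 3
    s = [sum(x ** k for x in xs) for k in range(5)]           # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for col in range(n):                   # Gauss-Jordan with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    a, b, c = (m[i][3] / m[i][i] for i in range(n))
    return a, b, c

expensive = lambda x: (x - 1.2) ** 2 + 0.5   # stand-in for a slow simulation
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
a, b, c = fit_quadratic(xs, [expensive(x) for x in xs])
x_opt = -b / (2.0 * c)                       # optimize the metamodel instead
```

Six "simulations" produce a model that can then be queried millions of times for free, which is the source of the large speedups the dissertation reports.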