Search Results

Accelerating DSMC data extraction.

Description: In many direct simulation Monte Carlo (DSMC) simulations, the majority of computation time is consumed after the flowfield reaches a steady state. This situation occurs when the desired output quantities are small compared to the background fluctuations. For example, gas flows in many microelectromechanical systems (MEMS) have mean speeds more than two orders of magnitude smaller than the thermal speeds of the molecules themselves. The current solution to this problem is to collect sufficient samples to achieve the desired resolution. This can be an arduous process because the error is inversely proportional to the square root of the number of samples, so we must, for example, quadruple the number of samples to cut the error in half. This work is intended to improve this situation by employing more advanced techniques, drawn from fields beyond statistics alone, for determining the output quantities. Our strategy centers on exploiting information neglected by current techniques, which collect moments in each cell without regard to one another, to values in neighboring cells, or to their evolution in time. Unlike many previous acceleration techniques that modify the method itself, the techniques examined in this work are strictly post-processing, so they may be applied to any DSMC code without affecting its fidelity or generality. Many potential methods are drawn from successful applications in a diverse range of areas, from ultrasound imaging to financial market analysis. The most promising methods exploit relationships between variables in space, which always exist in DSMC due to the absence of shocks. Disparate techniques were shown to produce similar error reductions, suggesting that the results shown in this report may be typical of what is possible using these methods. Sample count reduction factors of approximately three to five were found to be typical, although factors exceeding ten were achieved for some variables with some techniques.
Date: October 1, 2006
Creator: Gallis, Michail A. & Piekos, Edward Stanley
Partner: UNT Libraries Government Documents Department
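
A minimal illustration (not taken from the report) of the sampling-error argument in the abstract above: when the quantity of interest is small relative to the background fluctuations, the statistical error of the sample mean falls as 1/sqrt(N), so quadrupling the sample count roughly halves the error. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

mean_speed = 1.0        # "signal": mean flow speed (arbitrary units)
thermal_speed = 100.0   # "noise": thermal fluctuations ~100x larger, as in many MEMS flows

def sample_error(n_samples, n_trials=200):
    """Average absolute error of the sample mean over many independent trials."""
    errors = []
    for _ in range(n_trials):
        samples = rng.normal(mean_speed, thermal_speed, size=n_samples)
        errors.append(abs(samples.mean() - mean_speed))
    return np.mean(errors)

for n in (10_000, 40_000, 160_000):
    print(f"N = {n:7d}   mean |error| = {sample_error(n):.3f}")
# Each 4x increase in N cuts the error roughly in half (error ~ 1/sqrt(N)).
```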

Homotopy optimization methods for global optimization.

Description: We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating the performance of HOM and HOPE.
Date: December 1, 2005
Creator: Dunlavy, Daniel M. & O'Leary, Dianne P. (University of Maryland, College Park, MD)
Partner: UNT Libraries Government Documents Department
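
A hedged sketch of the homotopy-optimization idea described in the abstract above: minimize H(x, lambda) for each of a set of lambda values, warm-starting each local minimization from the previous minimizer. The convex-combination form of H and the test functions are illustrative assumptions, not the authors' formulation, and HOPE's perturbed-ensemble step is omitted.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Target objective (illustrative): a multimodal function."""
    return np.sum(x**2) + 2.0 * np.sum(np.cos(3.0 * x))

def g(x):
    """Easy template objective (illustrative): a convex bowl."""
    return np.sum((x - 1.0)**2)

def H(x, lam):
    # Homotopy between the easy problem (lam=0) and the target (lam=1);
    # this convex-combination form is an assumption made for illustration.
    return (1.0 - lam) * g(x) + lam * f(x)

def hom(x0, n_steps=20):
    """HOM-style loop: find a minimizer for each lambda, warm-started from the previous one."""
    x = np.asarray(x0, dtype=float)
    for lam in np.linspace(0.0, 1.0, n_steps):
        x = minimize(H, x, args=(lam,), method="BFGS").x
    return x

x_star = hom(x0=np.zeros(2))
print("approximate minimizer of f:", x_star, "  f value:", f(x_star))
```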

A digital accelerometer array utilizing suprathreshold stochastic resonance for detection of sub-Brownian noise floor accelerations.

Description: The goal of this LDRD project was to evaluate the possibility of utilizing stochastic resonance in micromechanical sensor systems as a means of increasing the signal-to-noise ratio of physical sensors. A careful study of this field reveals that in the case of a single sensing element, stochastic resonance offers no real advantage. We have, however, identified a system that can utilize concepts very similar to stochastic resonance in order to achieve an arrayed sensor system that could be superior to existing technologies in the field of inertial sensors and could offer a very low power technique for achieving navigation-grade inertial measurement units.
Date: December 1, 2004
Creator: Carr, Dustin Wade & Olsson, Roy H.
Partner: UNT Libraries Government Documents Department

Operational Impacts of Wind Energy Resources in the Bonneville Power Administration Control Area - Phase I Report

Description: This report presents a methodology developed to study the future impact of wind on load following and regulation requirements in the BPA power system. The methodology uses historical data and stochastic processes to simulate the load balancing processes in the BPA power system, mimicking actual power system operations; the results are therefore close to reality, yet a study based on this methodology remains convenient to conduct. Existing methodologies for similar analyses include dispatch model simulation and standard deviation evaluation of load and wind data. Dispatch model simulation is constrained by the design of the dispatch program, and standard deviation evaluation is artificial in how it separates load following from regulation requirements; neither usually reflects actual operational practice. The methodology used in this study provides not only capacity requirement information but also the ramp rate requirements for the system load following and regulation processes. The ramp rate data can be used to evaluate generator response and maneuverability requirements, another capability the generation fleet needs for the smooth integration of wind energy. The study results are presented so that the increased generation capacity or ramp requirements can be compared for two different years, across 24 hours a day, making it easy to visualize the impact of different levels of wind energy on generation requirements at different times.
Date: July 15, 2008
Creator: Makarov, Yuri V. & Lu, Shuai
Partner: UNT Libraries Government Documents Department
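
An illustrative (hypothetical) calculation of the kind of capacity and ramp-rate requirements discussed in the abstract above, computed from a synthetic one-day balancing time series; the report's actual methodology, which mimics BPA operating practice, is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-minute balancing requirement (MW) for one day: load minus wind output.
t = np.arange(24 * 60)
load = 6000 + 800 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 30, t.size)
wind = 1000 + 300 * np.sin(2 * np.pi * t / (6 * 60) + 1.0) + rng.normal(0, 60, t.size)
balancing = load - wind                             # MW the rest of the fleet must supply

capacity_req = balancing.max() - balancing.min()    # MW of maneuverable capacity needed
ramp = np.diff(balancing)                           # MW per minute
ramp_req = np.percentile(np.abs(ramp), 99.5)        # near-worst-case ramp rate

print(f"capacity requirement ~ {capacity_req:.0f} MW")
print(f"ramp-rate requirement ~ {ramp_req:.0f} MW/min (99.5th percentile)")
```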

Nonlinear optimization for stochastic simulations.

Description: This report describes research targeting development of stochastic optimization algorithms and their application to mission-critical optimization problems in which uncertainty arises. The first section of this report covers the enhancement of the Trust Region Parallel Direct Search (TRPDS) algorithm to address stochastic responses and the incorporation of the algorithm into the OPT++ optimization library. The second section describes the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC) suite of systems analysis tools and motivates the use of stochastic optimization techniques in such non-deterministic simulations. The third section details a batch programming interface designed to facilitate criteria-based or algorithm-driven execution of system-of-system simulations. The fourth section outlines the use of the enhanced OPT++ library and batch execution mechanism to perform systems analysis and technology trade-off studies in the WMD detection and response problem domain.
Date: December 1, 2003
Creator: Johnson, Michael M.; Yoshimura, Ann S.; Hough, Patricia Diane & Ammerlahn, Heidi R.
Partner: UNT Libraries Government Documents Department

A short course on measure and probability theories.

Description: This brief introduction to measure theory and its applications to probability corresponds to the lecture notes of a seminar series given at Sandia National Laboratories in Livermore during the spring of 2003. The goal of these seminars was to provide a minimal background to computational combustion scientists interested in using more advanced stochastic concepts and methods, e.g., in the context of uncertainty quantification. Indeed, most mechanical engineering curricula do not provide students with formal training in the field of probability, and even less in measure theory. However, stochastic methods have been used increasingly over the past decade and have provided successful computational tools. Scientists at the Combustion Research Facility of Sandia National Laboratories have been using computational stochastic methods for years. Addressing increasingly complex applications, and facing the difficult problems that arose in them, showed the need for a better understanding of the theoretical foundations. This is why the seminar series was launched, and these notes summarize most of the concepts that were discussed. The goal of the seminars was to bring a group of mechanical engineers and computational combustion scientists to a full understanding of N. Wiener's polynomial chaos theory. These lecture notes are therefore built along those lines and are not intended to be exhaustive. In particular, the author welcomes any comments or criticisms.
Date: February 1, 2004
Creator: Pébay, Philippe Pierre
Partner: UNT Libraries Government Documents Department
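
For reference, the Wiener polynomial chaos representation toward which the seminar series builds expands a finite-variance random quantity in orthogonal (Hermite) polynomials of standard Gaussian variables; the truncation and projection formulas below are the standard ones rather than material quoted from the notes.

```latex
% Polynomial chaos expansion of a random quantity u(\theta) in Hermite
% polynomials \Psi_k of standard Gaussian variables \xi(\theta):
u(\theta) \;=\; \sum_{k=0}^{\infty} u_k\, \Psi_k\!\bigl(\xi(\theta)\bigr)
\;\approx\; \sum_{k=0}^{P} u_k\, \Psi_k\!\bigl(\xi(\theta)\bigr),
\qquad
u_k \;=\; \frac{\langle u\, \Psi_k \rangle}{\langle \Psi_k^2 \rangle},
```

where the brackets denote expectation with respect to the Gaussian measure, so the coefficients follow from the orthogonality of the Hermite polynomials.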

EARLY GUIDANCE FOR ASSIGNING DISTRIBUTION PARAMETERS TO GEOCHEMICAL INPUT TERMS TO STOCHASTIC TRANSPORT MODELS

Description: Stochastic modeling is being used in the Performance Assessment program to provide a probabilistic estimate of the range of risk that buried waste may pose. The objective of this task was to provide early guidance to stochastic modelers for selecting the range and distribution (e.g., normal, log-normal) of distribution coefficients (K_d) and solubility values (K_sp) to be used in modeling subsurface radionuclide transport in E- and Z-Area on the Savannah River Site (SRS). Due to the project's schedule, some modeling had to be started prior to collecting the necessary field and laboratory data needed to fully populate these models. For the interim, the project will rely on literature values and some statistical analyses of literature data as inputs. Based on statistical analyses of some literature sorption tests, the following early guidance was provided: (1) Set the range to an order of magnitude for radionuclides with K_d values >1000 mL/g and to a factor of two for K_d values <1000 mL/g. This decision is based on the literature. (2) Set the range to an order of magnitude for radionuclides with K_sp values <10^-6 M and to a factor of two for K_sp values >10^-6 M. This decision is based on the literature. (3) K_d values with a mean >1000 mL/g will be assigned a log-normal distribution; those with a mean <1000 mL/g will be assigned a normal distribution. This is based on statistical analysis of non-site-specific data. Results from ongoing site-specific field and laboratory research involving E-Area sediments will supersede this guidance; these results are expected in 2007.
Date: June 30, 2006
Creator: Kaplan, D. & Millings, M.
Partner: UNT Libraries Government Documents Department
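
A small sketch of how the three guidance rules in the abstract above might be encoded when assigning distribution parameters to a stochastic transport input. The thresholds and distribution choices come directly from the abstract; the function names, the return format, and the reading of "an order of magnitude" as a total geometric span of 10x (and "a factor of two" as a span of 2x) centered on the mean are assumptions.

```python
def kd_distribution(kd_mean_mL_per_g):
    """Rules (1) and (3): range and distribution for a sorption coefficient K_d."""
    if kd_mean_mL_per_g > 1000.0:
        # Rule 1: order-of-magnitude range; Rule 3: log-normal distribution.
        span = 10.0
        dist = "log-normal"
    else:
        # Rule 1: factor-of-two range; Rule 3: normal distribution.
        span = 2.0
        dist = "normal"
    return {"distribution": dist,
            "low": kd_mean_mL_per_g / span**0.5,
            "high": kd_mean_mL_per_g * span**0.5}

def ksp_range(ksp_mean_M):
    """Rule (2): range for a solubility value K_sp (distribution not specified in the guidance)."""
    span = 10.0 if ksp_mean_M < 1e-6 else 2.0
    return {"low": ksp_mean_M / span**0.5, "high": ksp_mean_M * span**0.5}

print(kd_distribution(5000.0))   # strongly sorbing: log-normal, order-of-magnitude range
print(kd_distribution(50.0))     # weakly sorbing: normal, factor-of-two range
print(ksp_range(1e-8))           # sparingly soluble: order-of-magnitude range
```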

Power Minimization techniques for Networked Data Centers.

Description: Our objective is to develop mathematical models to optimize energy consumption at multiple levels in networked data centers, and to develop algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, minimizing an enterprise's overall energy cost and brown-energy consumption. In this project we have formulated a variety of optimization models, some stochastic and others deterministic, and have obtained qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumed by a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented in different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.
Date: September 28, 2011
Creator: Low, Steven & Tang, Kevin
Partner: UNT Libraries Government Documents Department
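
A toy sketch (not the project's actual formulation) of the first conclusion above: route flexible load toward data centers where electricity is currently cheap or renewable generation is available, subject to capacity, and tally the resulting cost and brown-energy consumption. All sites, prices, and capacities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_centers, n_hours = 3, 24
price = rng.uniform(20, 80, size=(n_hours, n_centers))     # $/MWh, varies by site and hour
renewable = rng.uniform(0, 40, size=(n_hours, n_centers))  # MW of free renewable generation
capacity = np.array([100.0, 80.0, 120.0])                  # MW per data center
demand = rng.uniform(120, 220, size=n_hours)               # MW of total flexible load per hour

total_cost = total_brown = 0.0
for h in range(n_hours):
    order = np.argsort(price[h])          # greedy: fill the cheapest sites first
    remaining = demand[h]
    for i in order:
        served = min(remaining, capacity[i])
        brown = max(0.0, served - renewable[h, i])   # MW not covered by renewables
        total_cost += brown * price[h, i]
        total_brown += brown
        remaining -= served
        if remaining <= 0:
            break

print(f"brown energy: {total_brown:.0f} MWh, cost: ${total_cost:,.0f} over one day")
```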

Stochastic models: theory and simulation.

Description: Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
Date: March 1, 2008
Creator: Field, Richard V., Jr.
Partner: UNT Libraries Government Documents Department
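
A minimal example of the kind of algorithm the report discusses for generating independent samples of a stochastic model: a zero-mean, stationary Gaussian process with exponential covariance, sampled by Cholesky factorization of its covariance matrix. This is a standard textbook method and is not claimed to be one of the report's specific algorithms.

```python
import numpy as np

def gaussian_process_samples(t, correlation_length, sigma, n_samples, seed=0):
    """Independent samples of a zero-mean stationary Gaussian process with
    covariance C(tau) = sigma^2 * exp(-|tau| / correlation_length)."""
    rng = np.random.default_rng(seed)
    tau = np.abs(t[:, None] - t[None, :])
    cov = sigma**2 * np.exp(-tau / correlation_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))  # small jitter for stability
    return L @ rng.standard_normal((t.size, n_samples))

t = np.linspace(0.0, 10.0, 200)
samples = gaussian_process_samples(t, correlation_length=1.0, sigma=2.0, n_samples=5)
print(samples.shape)   # (200, 5): five independent sample paths on the grid t
```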

Survey of Bayesian Models for Modelling of Stochastic Temporal Processes

Description: This survey gives an overview of popular generative models used in the modeling of stochastic temporal systems. In particular, this survey is organized into two parts. The first part discusses the discrete-time representations of dynamic Bayesian networks and dynamic relational probabilistic models, while the second part discusses the continuous-time representation of continuous-time Bayesian networks.
Date: October 12, 2006
Creator: Ng, B
Partner: UNT Libraries Government Documents Department
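
As a concrete instance of the discrete-time models surveyed above, a two-slice dynamic Bayesian network can be simulated by sampling each variable at time t from its conditional distribution given its parents at times t and t-1. The two binary variables and their probability tables below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two binary variables, W ("weather bad?") and T ("traffic heavy?"); illustrative CPTs.
p_W_next = {0: 0.2, 1: 0.7}                       # P(W_t = 1 | W_{t-1})
p_T_next = {(0, 0): 0.1, (0, 1): 0.3,             # P(T_t = 1 | W_t, T_{t-1})
            (1, 0): 0.6, (1, 1): 0.8}

def simulate(n_steps, w0=0, t0=0):
    """Ancestral sampling of the DBN: roll the two-slice model forward in time."""
    w, t = w0, t0
    trajectory = [(w, t)]
    for _ in range(n_steps):
        w = int(rng.random() < p_W_next[w])        # sample W_t given W_{t-1}
        t = int(rng.random() < p_T_next[(w, t)])   # sample T_t given W_t and T_{t-1}
        trajectory.append((w, t))
    return trajectory

print(simulate(10))
```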

Review of Upscaling Methods for Describing Unsaturated Flow

Description: Representing small-scale features can be a challenge when one wants to model unsaturated flow in large domains. In this report, various upscaling techniques are reviewed. The following upscaling methods have been identified from the literature: stochastic methods, renormalization methods, volume averaging, and homogenization methods. In addition, a final technique, full-resolution numerical modeling, is also discussed.
Date: September 26, 2000
Creator: Wood, Brian D.
Partner: UNT Libraries Government Documents Department
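
A small worked example of the simplest kind of upscaling behind the methods listed above: for steady, linear (saturated) Darcy flow through equal-thickness layers, averaging the fine-scale conductivities gives an arithmetic-mean effective value for flow parallel to the layering and a harmonic mean for flow across it. The conductivity values are illustrative, and unsaturated flow generally requires the more elaborate methods the report reviews.

```python
import numpy as np

# Hydraulic conductivities (m/day) of equal-thickness layers -- illustrative values.
k = np.array([1.0, 0.01, 5.0, 0.5])

k_parallel = k.mean()                       # arithmetic mean: flow parallel to layering
k_perpendicular = 1.0 / (1.0 / k).mean()    # harmonic mean: flow across the layering
k_geometric = np.exp(np.log(k).mean())      # geometric mean: common for 2-D random media

print(f"arithmetic {k_parallel:.3f}, harmonic {k_perpendicular:.3f}, "
      f"geometric {k_geometric:.3f} m/day")
```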

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Date: October 1, 2006
Creator: Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Laboratories, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew et al.
Partner: UNT Libraries Government Documents Department
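
One way to picture the "optimization under uncertainty" strategy listed in the abstract above, independent of DAKOTA's actual implementation: an outer optimizer adjusts design variables while an inner sampling loop propagates uncertain parameters through the simulation, and the optimizer works on a statistic of the output (here, mean plus a standard-deviation penalty). Everything below is a generic conceptual sketch, not DAKOTA code or its algorithms.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
u_samples = rng.normal(0.0, 0.3, size=200)   # fixed sample of the uncertain parameter

def simulation(design, u):
    """Stand-in for an expensive simulation: response depends on design x and uncertainty u."""
    x1, x2 = design
    return (x1 - 2.0)**2 + (x2 + 1.0)**2 + u * x1 * x2

def robust_objective(design):
    """Inner UQ loop: propagate the uncertain parameter and penalize output variability."""
    y = np.array([simulation(design, u) for u in u_samples])
    return y.mean() + 2.0 * y.std()

result = minimize(robust_objective, x0=[0.0, 0.0], method="Nelder-Mead")
print("robust design:", np.round(result.x, 3))
```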

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Date: October 1, 2006
Creator: Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Adams, Brian M. et al.
Partner: UNT Libraries Government Documents Department

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Date: October 1, 2006
Creator: Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Giunta, Anthony Andrew et al.
Partner: UNT Libraries Government Documents Department

Final report for "Development of generalized mapping tools to improve implementation of data driven computer simulations" (LDRD 04-ERD-083)

Description: Probabilistic inverse techniques, like the Markov Chain Monte Carlo (MCMC) algorithm, have had recent success in combining disparate data types into a consistent model. The Stochastic Engine (SE) initiative developed this method and applied it to a number of earth science and national security applications. For instance, while the method was originally developed to solve ground flow problems (Aines et al.), it has also been applied to atmospheric modeling and engineering problems. The investigators of this proposal have applied the SE to regional-scale lithospheric earth models, which have applications to hazard analysis and nuclear explosion monitoring. While this broad applicability is appealing, tailoring the method for each application is inefficient and time-consuming. Stochastic methods invert data by probabilistically sampling the model space, comparing observations predicted by the proposed model to observed data, and preferentially accepting models that produce a good fit, generating a posterior distribution. In other words, the method "inverts" for a model or, more precisely, a distribution of models, by a series of forward calculations. While powerful, the technique is often challenging to implement, as the mapping from model space to data needs to be "customized" for each data type. For example, all proposed models might need to be transformed through sensitivity kernels from 3-D models to 2-D models in one step in order to compute path integrals, and transformed in a completely different manner in the next step. We seek technical enhancements that widen the applicability of the Stochastic Engine by generalizing some aspects of the method (i.e., model-to-data transformation types, configuration, and model representation). Initially, we wish to generalize the transformations that are necessary to match the observations to proposed models. These transformations are sufficiently general that they do not pertain to any single application. This is a new and innovative approach to the ...
Date: February 4, 2005
Creator: Pasyanos, M; Ramirez, A & Franz, G
Partner: UNT Libraries Government Documents Department
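
A bare-bones Metropolis sampler illustrating the probabilistic inversion loop described in the abstract above: propose a model, run the forward calculation, and preferentially accept models that fit the data, accumulating a posterior distribution. The one-parameter forward model, noise level, and flat prior are invented for illustration; the Stochastic Engine handles far more general model-to-data transformations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "observed" data from a hypothetical forward model d = m * t, plus noise.
t = np.linspace(0.0, 1.0, 20)
m_true, sigma = 3.0, 0.1
d_obs = m_true * t + rng.normal(0.0, sigma, t.size)

def log_likelihood(m):
    """Misfit between data predicted by model m and the observations."""
    residual = d_obs - m * t
    return -0.5 * np.sum((residual / sigma) ** 2)

def metropolis(n_iter=20_000, step=0.2):
    m = 0.0                                            # starting model
    samples, logL = [], log_likelihood(m)
    for _ in range(n_iter):
        m_prop = m + rng.normal(0.0, step)             # propose a perturbed model
        logL_prop = log_likelihood(m_prop)             # forward calculation + misfit
        if np.log(rng.random()) < logL_prop - logL:    # preferentially accept good fits
            m, logL = m_prop, logL_prop
        samples.append(m)
    return np.array(samples)

posterior = metropolis()[5_000:]                       # discard burn-in
print(f"posterior mean {posterior.mean():.2f} +/- {posterior.std():.2f} (true value {m_true})")
```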

The market viability of nuclear hydrogen technologies.

Description: The Department of Energy Office of Nuclear Energy is supporting system studies to gain a better understanding of nuclear power's potential role in a hydrogen economy and what hydrogen production technologies show the most promise. This assessment includes identifying commercial hydrogen applications and their requirements, comparing the characteristics of nuclear hydrogen systems to those market requirements, evaluating nuclear hydrogen configuration options within a given market, and identifying the key drivers and thresholds for market viability of nuclear hydrogen options. One of the objectives of the current analysis phase is to determine how nuclear hydrogen technologies could evolve under a number of different futures. The outputs of our work will eventually be used in a larger hydrogen infrastructure and market analysis conducted for DOE-EE using a system-level market simulation tool now underway. This report expands on our previous work by moving beyond simple levelized cost calculations and looking at profitability, risk, and uncertainty from an investor's perspective. We analyze a number of technologies and quantify the value of certain technology and operating characteristics. Our model to assess the profitability of the above technologies is based on Real Options Theory and calculates the discounted profits from investing in each of the production facilities. We use Monte Carlo simulations to represent the uncertainty in hydrogen and electricity prices. The model computes both the expected value and the distribution of discounted profits from a production plant. We also quantify the value of the option to switch between hydrogen and electricity production in order to maximize investor profits. Uncertainty in electricity and hydrogen prices can be represented with two different stochastic processes: Geometric Brownian Motion (GBM) and Mean Reversion (MR). Our analysis finds that the flexibility to switch between hydrogen and electricity leads to significantly different results with regard to the relative profitability of the different ...
Date: April 6, 2007
Creator: Botterud, A.; Conzelmann, G.; Petri, M. C. & Yildiz, B.
Partner: UNT Libraries Government Documents Department
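
A toy Monte Carlo sketch of the valuation idea described in the abstract above: simulate electricity and hydrogen prices as geometric Brownian motion and compare the discounted margin of dedicated plants with one that can switch each period to whichever product is more profitable. All parameter values and the simple margin model are illustrative and are not taken from the report, which also considers mean-reverting prices.

```python
import numpy as np

rng = np.random.default_rng(6)

n_paths, n_years, dt, r = 5_000, 20, 1.0, 0.08
mu, sigma_e, sigma_h = 0.02, 0.15, 0.25
p_elec, p_h2 = 50.0, 2.0         # starting prices: $/MWh and $/kg (illustrative)
cost_elec, cost_h2 = 35.0, 1.4   # unit production costs (illustrative)
q_elec, q_h2 = 1.0, 20.0         # units produced per period in each mode (illustrative)

def gbm_paths(p0, sigma):
    """Geometric Brownian motion price paths."""
    z = rng.standard_normal((n_paths, n_years))
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return p0 * np.exp(np.cumsum(increments, axis=1))

pe, ph = gbm_paths(p_elec, sigma_e), gbm_paths(p_h2, sigma_h)
disc = np.exp(-r * dt * np.arange(1, n_years + 1))

margin_elec = q_elec * (pe - cost_elec)                # dedicated electricity plant
margin_h2 = q_h2 * (ph - cost_h2)                      # dedicated hydrogen plant
margin_switch = np.maximum(margin_elec, margin_h2)     # switch to the better product each period

for name, m in [("electricity only", margin_elec), ("hydrogen only", margin_h2),
                ("switchable", margin_switch)]:
    print(f"{name:17s}: expected discounted profit ${np.mean(m @ disc):,.0f}")
```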

Regional Stochastic Generation of Streamflows using an ARIMA (1, 0, 1) Process and Disaggregation

Description: From abstract: An ARIMA (1, 0, 1) model is used to generate annual flow sequences at three sites in the Juniata River basin, Pennsylvania. The study was designed to analyze low-flow frequency characteristics of a basin. The model preserves the mean, variance, and cross-correlations of the observed station data.
Date: May 1979
Creator: Armbruster, Jeffrey T.
Partner: UNT Libraries Government Documents Department
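
A short sketch of generating synthetic annual flows with an ARMA(1,1) process, i.e., the ARIMA (1, 0, 1) model form named above; the coefficients and flow statistics are illustrative rather than fitted to the Juniata River data, and the disaggregation step is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

def arma11_flows(n_years, mean, phi, theta, sigma_eps, burn_in=100):
    """Generate flows from x_t - mu = phi*(x_{t-1} - mu) + eps_t + theta*eps_{t-1}."""
    x, eps_prev, flows = mean, 0.0, []
    for t in range(n_years + burn_in):
        eps = rng.normal(0.0, sigma_eps)
        x = mean + phi * (x - mean) + eps + theta * eps_prev
        eps_prev = eps
        if t >= burn_in:            # drop the spin-up period
            flows.append(x)
    return np.array(flows)

flows = arma11_flows(n_years=1000, mean=1500.0, phi=0.5, theta=-0.2, sigma_eps=300.0)
print(f"mean {flows.mean():.0f}, std {flows.std():.0f}, lag-1 autocorrelation "
      f"{np.corrcoef(flows[:-1], flows[1:])[0, 1]:.2f}")
```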

Nonlinear Dynamic Systems Response to Non-Stationary Excitation Using the Wavelet Transform: Final Report

Description: The objective of this research project has been the development of techniques for estimating the power spectra of stochastic processes using wavelet transform, and the development of related techniques for determining the response of linear/nonlinear systems to excitations which are described via the wavelet transform. Both of the objectives have been achieved, and the research findings have been disseminated in papers in archival journals and technical conferences.
Date: January 15, 2006
Creator: Spanos, Pol D.
Partner: UNT Libraries Government Documents Department
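
A compact illustration in the spirit of the wavelet-based spectral estimation described above: convolve a non-stationary signal with Morlet wavelets at several scales and average the squared coefficients over time to obtain a scale-by-scale (frequency-by-frequency) power estimate. This is a generic estimator with an unnormalized Morlet wavelet, not the specific techniques developed in the project.

```python
import numpy as np

rng = np.random.default_rng(8)

fs = 200.0
t = np.arange(0, 10, 1 / fs)
# Non-stationary test signal: a 5 Hz tone that becomes a 20 Hz tone, plus noise.
signal = np.where(t < 5, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))
signal = signal + 0.3 * rng.standard_normal(t.size)

def morlet_power(x, freqs, fs, w0=6.0):
    """Time-averaged squared modulus of a Morlet wavelet transform at each frequency."""
    power = []
    for f in freqs:
        scale = w0 / (2 * np.pi * f)                    # scale corresponding to frequency f
        tt = np.arange(-4 * scale, 4 * scale, 1 / fs)   # truncated wavelet support
        wavelet = np.exp(1j * w0 * tt / scale) * np.exp(-0.5 * (tt / scale) ** 2) / np.sqrt(scale)
        coeffs = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
        power.append(np.mean(np.abs(coeffs) ** 2))
    return np.array(power)

freqs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
for f, p in zip(freqs, morlet_power(signal, freqs, fs)):
    print(f"{f:5.1f} Hz : relative power {p:10.1f}")
```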

DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Date: May 1, 2010
Creator: Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA) et al.
Partner: UNT Libraries Government Documents Department

DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Date: May 1, 2010
Creator: Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA) et al.
Partner: UNT Libraries Government Documents Department

DAKOTA: a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

Description: The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Date: May 1, 2010
Creator: Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA) et al.
Partner: UNT Libraries Government Documents Department

Instability of collective strong-interaction phenomena in hadron production as a possible origin of the weak and electromagnetic interactions

Description: A systematic calculus of long-range Regge cut effects in multiparticle production is constructed in the form of an infrared-divergent stochastic field theory. Total cross sections and two-body overlap integrals in such a theory may depend very sensitively upon internal quantum-numbers of incident particles, resulting in a strong symmetry breaking at ultra-high energies. Such symmetry violations will influence low energy processes through dispersion relations, and a bootstrap of weak interactions becomes possible. A rough analytic estimate of the scale of thresholds for such effects yields a BCS-type gap equation, which expresses the scale of weak and electromagnetic couplings in terms of purely strong-interaction parameters. (auth)
Date: December 1, 1975
Creator: Arnold, R.C.
Partner: UNT Libraries Government Documents Department