Search Results

Tables and charts for the evaluation of profile drag from wake surveys at high subsonic speeds

Description: Report presents tables and charts for the evaluation of profile drag from wake surveys at high subsonic speeds. Two methods of evaluation are presented: an exact method that can be used on a wake of any shape and a simple approximate method that can be used when the variation of total-pressure loss across the wake has a typical form.
Date: July 1945
Creator: Block, Myron J. & Katzoff, S.
Partner: UNT Libraries Government Documents Department
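
The report above tabulates exact and approximate wake-integral evaluations for compressible, high-subsonic flow. As a point of reference only, the sketch below numerically evaluates the classical incompressible Jones wake-survey integral, a simplification of the same idea rather than the report's compressible method; the survey profile, pressures, and chord are purely illustrative values.

```python
import numpy as np

# Incompressible Jones wake-survey integral (illustrative simplification, NOT
# the compressible tables/charts of the report):
#   c_d = (2/c) * integral over the wake of sqrt(g_w) * (1 - sqrt(g)) dy
# with g_w = (H_w - p_w)/(H_inf - p_inf) using the local static pressure at the
# survey plane and g = (H_w - p_inf)/(H_inf - p_inf) for the far-downstream state.

def jones_profile_drag(y, H_w, p_w, H_inf, p_inf, chord):
    g_w = (H_w - p_w) / (H_inf - p_inf)
    g = (H_w - p_inf) / (H_inf - p_inf)
    integrand = np.sqrt(np.clip(g_w, 0.0, None)) * (1.0 - np.sqrt(np.clip(g, 0.0, None)))
    dy = np.diff(y)
    return (2.0 / chord) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dy)

# purely illustrative survey: a Gaussian total-pressure deficit across the wake
y = np.linspace(-0.05, 0.05, 201)                  # survey coordinate [m]
H_inf, p_inf = 101800.0, 101325.0                  # free-stream pressures [Pa] (hypothetical)
H_w = H_inf - 300.0 * np.exp(-(y / 0.01) ** 2)     # measured wake total pressure
p_w = np.full_like(y, p_inf)                       # assume static pressure has recovered
print(f"c_d ~ {jones_profile_drag(y, H_w, p_w, H_inf, p_inf, chord=0.3):.4f}")
```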

A divide-and-conquer linear scaling three dimensional fragment method for large scale electronic structure calculations

Description: We present a new linear scaling ab initio total energy electronic structure calculation method based on the divide-and-conquer strategy. This method is simple to implement, easy to parallelize, and produces very accurate results when compared with the direct ab initio method. The method has been tested using up to 8,000 processors, and has been used to calculate nanosystems of up to 15,000 atoms.
Date: July 11, 2008
Creator: Wang, Lin-Wang; Zhao, Zhengji & Meza, Juan
Partner: UNT Libraries Government Documents Department

Sixth-Order Lie Group Integrators

Description: In this paper we present the coefficients of several 6th order symplectic integrators of the type developed by R. Ruth. To get these results we fully exploit the connection with Lie groups. These integrators, like all of Ruth's explicit integrators, may be used for any equation in which some sort of Lie bracket is preserved. In fact, if the Lie operator governing the equation of motion is separable into two solvable parts, the Ruth integrators can be used.
Date: March 1, 1990
Creator: Forest, E.
Partner: UNT Libraries Government Documents Department
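
The report above gives 6th-order coefficient sets, which are not reproduced here. The sketch below only illustrates the underlying composition idea, using the widely known 4th-order Yoshida coefficients applied to a second-order leapfrog base step for a separable Hamiltonian; it is a generic illustration, not the paper's integrators.

```python
def leapfrog(q, p, h, force):
    """Second-order symplectic base step for H = p^2/2 + V(q) (kick-drift-kick)."""
    p = p + 0.5 * h * force(q)
    q = q + h * p
    p = p + 0.5 * h * force(q)
    return q, p

def yoshida4(q, p, h, force):
    """Fourth-order scheme from composing three leapfrog sub-steps with the
    standard Yoshida coefficients (w1, w0, w1)."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, w * h, force)
    return q, p

# quick check on the harmonic oscillator: the energy error stays bounded and small
q, p, h = 1.0, 0.0, 0.1
for _ in range(10000):
    q, p = yoshida4(q, p, h, force=lambda x: -x)
print("energy drift:", 0.5 * (p * p + q * q) - 0.5)
```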

Application of the Yoshida-Ruth Techniques to Implicit Integration and Multi-Map Explicit Integration

Description: The full power of Yoshida's technique is exploited to produce an arbitrary-order implicit symplectic integrator and a multi-map explicit integrator. This implicit integrator uses a characteristic function involving the force term alone. We also point out the usefulness of the plain Ruth algorithm in computing Taylor series maps using the techniques first introduced by Berz in his 'COSY-INFINITY' code.
Date: April 1, 1991
Creator: Forest, E.; Bengtsson, J. & Reusch, M.F.
Partner: UNT Libraries Government Documents Department

Robust Bearing Estimation for 3-Component Stations

Description: A robust bearing estimation process for 3-component stations has been developed and explored. The method, called SEEC for Search, Estimate, Evaluate and Correct, intelligently exploits the inherent information in the arrival at every step of the process to achieve near-optimal results. In particular, the approach uses a consistent framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, to construct metrics helpful in choosing the better estimates or admitting that the bearing is immeasurable, and finally to apply bias corrections when calibration information is available to yield a single final estimate. The method was applied to a small but challenging set of events in a seismically active region. The method demonstrated remarkable utility by providing better estimates and insights than previously available. Various monitoring implications are noted from these findings.
Date: June 3, 1999
Creator: Claassen, John P.
Partner: UNT Libraries Government Documents Department
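
The SEEC algorithm itself is not spelled out in the abstract above. The sketch below shows only a textbook polarization (covariance) bearing estimate for a 3-component P-wave window, with the 180-degree ambiguity resolved using the vertical component; it is a generic baseline, not the SEEC procedure, and the sign convention is one common choice.

```python
import numpy as np

def pwave_bearing(z, n, e):
    """Textbook polarization estimate of back-azimuth from a 3-component P-wave
    window: the principal eigenvector of the horizontal covariance gives the
    bearing modulo 180 degrees, and the sign of the radial/vertical correlation
    resolves the ambiguity (generic baseline, not SEEC)."""
    hn, he = n - n.mean(), e - e.mean()
    cov = np.cov(np.vstack([hn, he]))
    w, v = np.linalg.eigh(cov)
    vn, ve = v[:, -1]                      # eigenvector of the larger eigenvalue
    radial = vn * hn + ve * he
    if np.dot(radial, z - z.mean()) > 0:   # upward P motion points away from the source
        vn, ve = -vn, -ve
    return np.degrees(np.arctan2(ve, vn)) % 360.0

# synthetic up-going P arrival from a back-azimuth of 60 degrees (illustrative)
t = np.linspace(0.0, 1.0, 400)
pulse = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.2) / 0.05) ** 2)
baz = np.radians(60.0)
z, n, e = pulse, -pulse * np.cos(baz), -pulse * np.sin(baz)
print(pwave_bearing(z, n, e))              # ~60.0
```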

Improved characterization through joint hydrogeophysical inversion: Examples of three different approaches

Description: With the increasing application of geophysical methods to hydrogeological problems, approaches for obtaining quantitative estimates of hydrogeological parameters using geophysical data are in great demand. A common approach to hydrogeological parameter estimation using geophysical and hydrogeological data is to first invert the geophysical data using a geophysical inversion procedure, and subsequently use the resulting estimates together with available hydrogeological information to estimate a hydrogeological parameter field. This approach does not allow us to constrain the geophysical inversion by hydrogeological data and prior information, and thus decreases our ability to make valid estimates of the hydrogeological parameter field. Furthermore, it is difficult to quantify the uncertainty in the corresponding estimates and to validate the assumptions made. We are developing alternative approaches that allow for the joint inversion of all available hydrological and geophysical data. In this presentation, we consider three studies and draw various conclusions, such as on the potential benefits of estimating the petrophysical relationships within the inversion framework and of constraining the geophysical estimates with hydrogeological as well as geophysical data.
Date: July 1, 2004
Creator: Linde, Niklas; Chen, Jinsong; Kowalsky, Michael; Finsterle,Stefan; Rubin, Yoram & Hubbard, Susan
Partner: UNT Libraries Government Documents Department

Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

Description: In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to the various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to use a combination of bounds to arrive at good solutions. This approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of some significance to industries in which the cost associated with creating a new station is not the same for every station. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion ...
Date: December 17, 2005
Creator: Bhagwat, Nikhil V.
Partner: UNT Libraries Government Documents Department
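
The thesis above encodes each random sample as a complete task-to-station assignment. The sketch below is only a generic Nested Partitions skeleton on a toy binary problem, showing the loop structure the method relies on (partition the promising region, sample its subregions and the surrounding region, then descend or backtrack); it is not the U-line balancing encoding, and the objective and sampling scheme are illustrative assumptions.

```python
import random

def nested_partitions(n, f, samples=20, iters=300, seed=1):
    """Generic Nested Partitions skeleton on binary vectors of length n.
    A region is the set of vectors sharing a fixed prefix; each iteration
    partitions the most promising region, samples its subregions and the
    surrounding region, and backtracks when the surrounding region wins.
    (Toy illustration only, not the paper's U-line balancing encoding.)"""
    rng = random.Random(seed)
    prefix = []                              # current most-promising region
    best_x, best_val = None, float("inf")

    def draw(p):                             # random solution extending prefix p
        return p + [rng.randint(0, 1) for _ in range(n - len(p))]

    for _ in range(iters):
        if len(prefix) == n:                 # singleton region reached: restart
            prefix = []
        candidates = {"fix0": prefix + [0], "fix1": prefix + [1]}
        if prefix:                           # a surrounding region exists only
            candidates["surround"] = None    # once some choices are fixed
        scores = {}
        for name, region in candidates.items():
            best_here = float("inf")
            for _ in range(samples):
                x = draw(region) if region is not None else draw([])
                if region is None and x[:len(prefix)] == prefix:
                    x[0] ^= 1                # crudely push the draw outside the region
                val = f(x)
                best_here = min(best_here, val)
                if val < best_val:           # keep the incumbent solution
                    best_x, best_val = x, val
            scores[name] = best_here
        winner = min(scores, key=scores.get)
        if winner == "surround":
            prefix = prefix[:-1]             # backtrack one level
        else:
            prefix = candidates[winner]      # descend into the better subregion
    return best_x, best_val

# toy objective: Hamming distance to a hidden target assignment
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
print(nested_partitions(len(target), lambda x: sum(a != b for a, b in zip(x, target))))
```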

Analytic electrostatic solution of an axisymmetric accelerator gap

Description: Numerous computer codes calculate beam dynamics of particles traversing an accelerating gap. In order to carry out these calculations the electric field of a gap must be determined. The electric field is obtained from derivatives of the scalar potential which solves Laplace's equation and satisfies the appropriate boundary conditions. An integral approach for the solution of Laplace's equation is used in this work since the objective is to determine the potential and fields without solving on a traditional spatial grid. The motivation is to quickly obtain forces for particle transport, and eliminate the need to keep track of a large number of grid point fields. The problem then becomes one of how to evaluate the appropriate integral. In this work the integral solution has been converted to a finite sum of easily computed functions. Representing the integral solution in this manner provides a readily calculable formulation and avoids a number of difficulties inherent in dealing with an integral that can be weakly convergent in some regimes, and is, in general, highly oscillatory.
Date: March 15, 1995
Creator: Boyd, J. K.
Partner: UNT Libraries Government Documents Department

Shell model the Monte Carlo way

Description: The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, and an extrapolation method for realistic Hamiltonians is described. In addition, applications at finite temperature are outlined.
Date: March 1, 1995
Creator: Ormand, W.E.
Partner: UNT Libraries Government Documents Department
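
The linearization mentioned above rests on the Hubbard-Stratonovich transformation. The sketch below merely verifies its scalar prototype numerically, exp(a^2/2) = (2*pi)^(-1/2) * integral dsigma exp(-sigma^2/2 + sigma*a), which is the mechanism by which a quadratic (two-body) term is traded for a linear coupling to an auxiliary field; this is a schematic check, not the shell-model implementation, which involves operators and one field per time slice and channel.

```python
import numpy as np

# Scalar prototype of the Hubbard-Stratonovich identity used to linearize the
# two-body part of the Hamiltonian:
#   exp(a**2 / 2) = (2*pi)**-0.5 * integral dsigma exp(-sigma**2/2 + sigma*a)
a = 0.7
sigma, d = np.linspace(-12.0, 12.0, 200001, retstep=True)
lhs = np.sum(np.exp(-0.5 * sigma**2 + sigma * a)) * d / np.sqrt(2.0 * np.pi)
print(lhs, np.exp(0.5 * a**2))   # the two numbers agree to high precision
```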

Online calculation of the Tevatron collider luminosity using accelerator instrumentation

Description: The luminosity of a collision region may be calculated if one understands the lattice parameters and measures the beam intensities, the transverse and longitudinal emittances, and the individual proton and antiproton beam trajectories (space and time) through the collision region. This paper explores an attempt to make this calculation using beam instrumentation during Run 1b of the Tevatron. The instrumentation used is briefly described. The calculations and their uncertainties are compared to luminosities calculated independently by the Collider Experiments (CDF and D0).
Date: July 1, 1997
Creator: Hahn, A.A.
Partner: UNT Libraries Government Documents Department
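
For orientation only, the sketch below evaluates the standard head-on Gaussian-beam luminosity formula L = f * B * N_p * N_pbar / (4 * pi * sigma_x * sigma_y). The beam numbers are illustrative placeholders, not Run 1b measurements, and the paper's calculation additionally uses measured emittances, bunch lengths, and trajectories; hourglass and offset effects are ignored here.

```python
import math

# Head-on, equal-size Gaussian-beam luminosity (no hourglass or offset
# corrections); all numbers below are illustrative placeholders, not
# measured Tevatron Run 1b values.
f_rev   = 47.7e3      # revolution frequency [Hz]
bunches = 6           # colliding bunch pairs
N_p     = 2.0e11      # protons per bunch
N_pbar  = 5.0e10      # antiprotons per bunch
sigma_x = 40e-4       # horizontal rms beam size at the IP [cm]
sigma_y = 40e-4       # vertical rms beam size at the IP [cm]

L = f_rev * bunches * N_p * N_pbar / (4.0 * math.pi * sigma_x * sigma_y)
print(f"L ~ {L:.2e} cm^-2 s^-1")   # order 1e31 for these placeholder numbers
```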

Unitarity-based techniques for one-loop calculations in QCD

Description: Perturbative QCD, and jet physics in particular, have matured sufficiently that rather than being merely subjects of experimental studies, they are now tools in the search for new physics. This role can be seen in the search for the top quark, as well as in recent speculations about the implications of supposed high-E{sub T} deviations of the inclusive-jet differential cross section at the Tevatron. One of the important challenges to both theorists and experimenters in coming years will be to hone jet physics as a tool in the quest for physics underlying the standard model. As such, it will be important to measurements of parameters of the theory or of non-perturbative quantities such as the parton distribution functions, as well as to searches for new physics at the LHC. Jet production, or jet production in association with identified photons or electroweak vector bosons, appears likely to provide the best information on the gluon distribution in the proton, and may also provide useful information on the strong coupling {alpha}{sub s}. In order to make use of these final states, the authors need a wider variety of higher-order calculations of matrix elements. Indeed, as the authors shall review, next-to-leading order calculations are in a certain sense the minimal useful ones. On the other hand, these calculations are quite difficult with conventional Feynman rules. In the following sections, the authors will discuss some of the techniques developed in recent years to simplify such calculations.
Date: June 1, 1996
Creator: Bern, Z.; Dixon, L. & Kosower, D.A.
Partner: UNT Libraries Government Documents Department

New computational method for non-LTE, the linear response matrix

Description: We investigate non-local thermodynamic equilibrium atomic kinetics using nonequilibrium thermodynamics and linear response theory. This approach gives a rigorous general framework for exploiting results from non-LTE kinetic calculations and offers a practical data-tabulation scheme suitable for use in plasma simulation codes. We describe how this method has been implemented to supply a fast and accurate non-LTE option in Lasnex.
Date: October 1, 1998
Creator: Fournier, K. B.; Graziani, F. R.; Harte, J. A.; Libby, S. B.; More, R. M.; Rathkopf, J. et al.
Partner: UNT Libraries Government Documents Department

High accuracy capture of curved shock fronts using the method of space-time conservation element and solution element

Description: Split numerical methods have been commonly used in computational physics for many years due to their speed, simplicity, and the accessibility of shock capturing methods in one dimension. For a variety of reasons, it has been challenging to determine just how accurate operator split methods are, especially in the presence of curved wave features. One of these difficulties has been the lack of multidimensional shock capturing methods. Another is the difficulty of mathematical analysis of discontinuous flow phenomena. Also, computational studies have been limited due to a lack of multidimensional model problems with analytic solutions that probe the nonlinear features of the flow equations. However, a new genuinely unsplit numerical method has been invented. With the advent of the Space-Time Conservation Element/Solution Element (CE/SE) method, it has become possible to attain high accuracy in multidimensional flows, even in the presence of curved shocks. Examples presented here provide some new evidence of the errors committed in the use of operator split techniques, even those employing 'unsplit' corrections. In these problems, the CE/SE method is able to maintain the original cylindrical symmetry of the problem and track the main features of the flow, while the operator split methods fail to maintain symmetry and position the shocks incorrectly, particularly near the focal point of the incoming shock.
Date: October 23, 1998
Creator: Cook, Jr., G
Partner: UNT Libraries Government Documents Department
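
The CE/SE scheme itself is not reproduced here. The sketch below only illustrates the operator-split (Strang, dimension-by-dimension) approach that the report uses as its point of comparison, applied to 2-D linear advection with a first-order upwind sweep in each direction; the grid, speeds, and initial blob are generic choices with periodic boundaries.

```python
import numpy as np

def upwind_sweep(u, c, dt, dx, axis):
    """First-order upwind update along one coordinate direction (c > 0),
    periodic boundaries via np.roll."""
    return u - (c * dt / dx) * (u - np.roll(u, 1, axis=axis))

def strang_split_step(u, cx, cy, dt, dx, dy):
    """One Strang-split step: half x-sweep, full y-sweep, half x-sweep."""
    u = upwind_sweep(u, cx, 0.5 * dt, dx, axis=0)
    u = upwind_sweep(u, cy, dt,       dy, axis=1)
    u = upwind_sweep(u, cx, 0.5 * dt, dx, axis=0)
    return u

# advect a Gaussian blob diagonally across a periodic unit square
n = 128
dx = dy = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-((X - 0.3) ** 2 + (Y - 0.3) ** 2) / 0.01)
cx = cy = 1.0
dt = 0.4 * dx / max(cx, cy)          # CFL-limited time step
for _ in range(int(0.5 / dt)):       # advect for t = 0.5
    u = strang_split_step(u, cx, cy, dt, dx, dy)
print("peak after advection (smeared by upwind dissipation):", u.max())
```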

Seismic Event Location Using Levenberg-Marquardt Least Squares Inversion

Description: The most widely used algorithm for estimating seismic event hypocenters and origin times is iterative linear least squares inversion. In this paper we review the mathematical basis of the algorithm and discuss the major assumptions made during its derivation. We go on to explore the utility of using Levenberg-Marquardt damping to improve the performance of the algorithm in cases where some of these assumptions are violated. We also describe how location parameter uncertainties are calculated. A technique to estimate an initial seismic event location is described in an appendix.
Date: October 2002
Creator: Ballard, Sanford
Partner: UNT Libraries Government Documents Department
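
The paper above derives the standard iterative least-squares location with Levenberg-Marquardt damping. The sketch below is a minimal version of that idea for a uniform-velocity, straight-ray travel-time model; the velocity, starting model, and damping schedule are illustrative assumptions rather than the paper's implementation, and no uncertainty estimate is computed.

```python
import numpy as np

def travel_time(hypo, station, v):
    """Straight-ray travel time in a uniform medium (illustrative model only)."""
    return np.linalg.norm(hypo - station) / v

def locate_lm(stations, t_obs, v=6.0, lam=1.0, iters=30):
    """Damped (Levenberg-Marquardt) least-squares event location.
    Model vector m = (x, y, z, t0) in km and s."""
    m = np.array([0.0, 0.0, 10.0, 0.0])          # crude starting model

    def residuals(m):
        pred = np.array([m[3] + travel_time(m[:3], s, v) for s in stations])
        return t_obs - pred

    r = residuals(m)
    for _ in range(iters):
        # Jacobian of predicted arrival times w.r.t. (x, y, z, t0)
        G = np.zeros((len(stations), 4))
        for i, s in enumerate(stations):
            d = np.linalg.norm(m[:3] - s)
            G[i, :3] = (m[:3] - s) / (d * v)
            G[i, 3] = 1.0
        dm = np.linalg.solve(G.T @ G + lam * np.eye(4), G.T @ r)
        r_new = residuals(m + dm)
        if r_new @ r_new < r @ r:
            m, r, lam = m + dm, r_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                            # reject step, damp harder
    return m

# synthetic example: four surface stations and a true source at (5, 8, 12) km, t0 = 0
stations = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0],
                     [0.0, 30.0, 0.0], [30.0, 30.0, 0.0]])
true = np.array([5.0, 8.0, 12.0])
t_obs = np.array([travel_time(true, s, 6.0) for s in stations])
print(locate_lm(stations, t_obs))   # ~(5, 8, 12, 0), up to the mirror solution in depth
```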

The linear parameters and the decoupling matrix for linearly coupled motion in 6 dimensional phase space. Informal report

Description: It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 {times} 6 matrix, R. R will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 {times} 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one period transfer matrix. This set of equations also allows the linear parameters, {beta}{sub i}, {alpha}{sub i}, i = 1, 3, for the uncoupled coordinates, to be computed from the one period transfer matrix. An alternative procedure for computing the linear parameters {beta}{sub i}, {alpha}{sub i}, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given which depends on computing the eigenvectors of the one period transfer matrix. These results can be used in a tracking program, where the one period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters {alpha}{sub i} and {beta}{sub i}, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta ...
Date: March 1, 1995
Creator: Parzen, G.
Partner: UNT Libraries Government Documents Department
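
As a small companion to the eigenvector-based procedure mentioned above, the sketch below builds a coupled one-period matrix in 4-dimensional phase space (the simpler case the report contrasts with) by conjugating two normalized 2x2 rotations with a symplectic coupling rotation, and then recovers the two phase advances from the eigenvalues of the one-period matrix. It does not construct the decoupling matrix R or the {beta}{sub i}, {alpha}{sub i} themselves; the phase advances and coupling angle are arbitrary illustrative values.

```python
import numpy as np

def rot(mu):
    """Normalized 2x2 one-period map (beta = 1, alpha = 0, phase advance mu)."""
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c, s], [-s, c]])

mu1, mu2 = 0.31 * 2 * np.pi, 0.18 * 2 * np.pi        # uncoupled phase advances
M0 = np.block([[rot(mu1), np.zeros((2, 2))],
               [np.zeros((2, 2)), rot(mu2)]])         # coordinate ordering (x, px, y, py)

# couple the planes with a symplectic "rotator": rotate (x, y) and (px, py)
# simultaneously by the same angle theta
theta = 0.4
c, s = np.cos(theta), np.sin(theta)
C = np.array([[ c, 0, s, 0],
              [ 0, c, 0, s],
              [-s, 0, c, 0],
              [ 0,-s, 0, c]])
M = C @ M0 @ C.T                                      # coupled one-period matrix

# eigenvalues of the one-period matrix come in pairs exp(+/- i mu_i);
# their phases recover the two phase advances despite the coupling
phases = np.sort(np.abs(np.angle(np.linalg.eigvals(M))))
print(phases[::2] / (2 * np.pi))                      # ~ [0.18, 0.31]
```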

Charged Local Defects in Extended Systems

Description: The conventional approach to treating charged defects in extended systems in first principles calculations is via the supercell approximation using a neutralizing jellium background charge. I explicitly demonstrate shortcomings of this standard approach and discuss the consequences. Errors in the electrostatic potential surface over the volume of a supercell are shown to be comparable to a band gap energy in semiconductor materials, for cell sizes typically used in first principles simulations. I present an alternate method for eliminating the divergence of the Coulomb potential in supercell calculations of charged defects in extended systems that embodies a correct treatment of the electrostatic potential in the local vicinity of a charged defect, via a mixed boundary condition approach. I present results of first principles calculations of charged vacancies in NaCl that illustrate the importance of polarization effects once an accurate representation of the local potential is obtained. These polarization effects, poorly captured in small supercells, also impact the energetics on the scale of typical band gap energies.
Date: May 25, 1999
Creator: Schultz, Peter A.
Partner: UNT Libraries Government Documents Department

Estimates and computations for melting and solidification problems

Description: In this paper we focus on melting and solidification processes described by phase-field models and obtain rigorous estimates for such processes. These estimates are derived in Section 2 and guarantee the convergence of solutions to non-constant equilibrium patterns. The most basic results conclude with the inequality (2.31). The estimates in the remainder of Section 2 illustrate what obtains if the initial data is progressively more regular and may be omitted on first reading. We also present some interesting numerical simulations which demonstrate the equilibrium structures and the approach of the system to non-constant equilibrium patterns. The novel feature of these calculations is the linking of the small parameter in the system, {delta}, to the grid spacing, thereby producing solutions with approximate sharp interfaces. Similar ideas have been used by Caginalp and Sokolovsky [1]. A movie of these simulations may be found at http:www.math.cmu.edu/math/people/greenberg.html.
Date: July 16, 2003
Creator: Greenberg, J. M.
Partner: UNT Libraries Government Documents Department
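
The simulations described above tie the small parameter {delta} to the grid spacing to obtain approximate sharp interfaces. The sketch below does the same thing for the plain 1-D Allen-Cahn equation phi_t = delta^2 * phi_xx + phi - phi^3, a generic scalar stand-in rather than the coupled phase-field system of the paper; the mesh, delta-to-dx ratio, and run length are illustrative.

```python
import numpy as np

# 1-D Allen-Cahn sketch: phi_t = delta^2 * phi_xx + phi - phi^3, with the
# interface-width parameter delta tied to the grid spacing so the computed
# profile is an "approximate sharp interface" of width O(delta).
# (Generic scalar model; the paper's system also evolves a temperature field.)
n = 400
dx = 1.0 / n
delta = 2.0 * dx                      # link the small parameter to the mesh
x = np.linspace(0.0, 1.0, n)
phi = np.where(x < 0.5, -1.0, 1.0)    # start from a sharp interface at x = 0.5
dt = 0.2 * dx**2 / delta**2           # explicit diffusion stability limit

for _ in range(5000):
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2
    lap[0] = lap[-1] = 0.0            # crude zero-flux treatment of the ends
    phi = phi + dt * (delta**2 * lap + phi - phi**3)

# phi now carries a smooth tanh-like equilibrium profile of width ~delta
print("interface width in cells:", int(np.sum(np.abs(phi) < 0.9)))
```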

An Incompressible Rayleigh-Taylor Problem in KULL

Description: The goal of the EZturb mix model in KULL is to predict the turbulent mixing process as it evolves from Rayleigh-Taylor, Richtmyer-Meshkov, or Kelvin-Helmholtz instabilities. In this report we focus on a simple example of the Rayleigh-Taylor instability (which occurs when a heavy fluid lies above a light fluid, and we perturb the interface separating them). It is well known that the late time asymptotic, fully self-similar form for the growth of the mixing zone scales quadratically with time.
Date: September 22, 2005
Creator: Ulitsky, M
Partner: UNT Libraries Government Documents Department
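
For reference, the quadratic late-time scaling mentioned above is usually written h ~ alpha * A * g * t^2, with A the Atwood number and alpha an empirically determined growth constant. The sketch below evaluates it for purely illustrative values, not KULL results.

```python
# Commonly quoted self-similar Rayleigh-Taylor bubble-front scaling:
#   h ~ alpha * A * g * t**2,  A = (rho_heavy - rho_light)/(rho_heavy + rho_light)
# alpha and the inputs below are illustrative values, not KULL results.
alpha = 0.05        # empirical bubble growth constant (order of magnitude only)
rho_h, rho_l = 3.0, 1.0
A = (rho_h - rho_l) / (rho_h + rho_l)
g = 980.0           # cm/s^2
t = 0.1             # s
print(f"bubble-front height ~ {alpha * A * g * t**2:.3f} cm")
```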