237 Matching Results

A 3D Contact Smoothing Method

Description: Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node-on-facet contact and to give a better representation of the actual contact surface. The latter effect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node-on-facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node-on-facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy norm of the error. In this work we demonstrate the superiority of the smoothed versus the non-smoothed node-on-facet implementations. We also show where the node-on-facet method fails, and we present some results from the smoothed mortar method implementation.
Date: May 2, 2002
Creator: Puso, M A & Laursen, T A
Partner: UNT Libraries Government Documents Department
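
A note on the smoothing ingredient above: the following is a minimal sketch of evaluating a tensor-product Bezier patch by de Casteljau's algorithm, the kind of smooth interpolant that can stand in for a flat facet. The construction of the control net from facet data (and the Gregory-patch machinery) is omitted, and all data here are illustrative.

```python
import numpy as np

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch(ctrl, u, v):
    """Evaluate a tensor-product Bezier patch at (u, v); ctrl is (m, n, 3)."""
    rows = np.array([de_casteljau(row, u) for row in ctrl])
    return de_casteljau(rows, v)

# Illustrative bicubic control net smoothing a single, slightly bent facet.
ctrl = np.array([[[i, j, 0.1 * np.sin(i + j)] for j in range(4)]
                 for i in range(4)], dtype=float)
print(bezier_patch(ctrl, 0.5, 0.5))  # smooth surface point replacing the facet plane
```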

General MoM Solutions for Large Arrays

Description: This paper focuses on a numerical procedure that addresses the difficulties of dealing with large, finite arrays while preserving the generality and robustness of full-wave methods. We present a fast method based on approximating interactions between sufficiently separated array elements via a relatively coarse interpolation of the Green's function on a uniform grid commensurate with the array's periodicity. The interaction between the basis and testing functions is reduced to a three-stage process. The first stage is a projection of standard (e.g., RWG) subdomain bases onto a set of interpolation functions that interpolate the Green's function on the array face. This projection, which is used in a matrix/vector product for each array cell in an iterative solution process, need only be carried out once for a single cell and results in a low-rank matrix. An intermediate stage matrix/vector product computation involving the uniformly sampled Green's function is of convolutional form in the lateral (transverse) directions so that a 2D FFT may be used. The final stage is a third matrix/vector product computation involving a matrix resulting from projecting testing functions onto the Green's function interpolation functions; the low-rank matrix is either identical to (using Galerkin's method) or similar to that for the bases projection. An effective MoM solution scheme is developed for large arrays using a modification of the Adaptive Integral Method (AIM). The method permits the analysis of arrays with arbitrary contours and nonplanar elements. Both fill and solve times within the MoM method are improved with respect to more standard MoM solvers.
Date: July 22, 2003
Creator: Fasenfest, B; Capolino, F; Wilton, D R; Jackson, D R & Champagne, N
Partner: UNT Libraries Government Documents Department
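
The middle stage of the three-stage product above is a lateral convolution that a 2D FFT accelerates. The sketch below shows that stage in isolation, assuming a scalar, translation-invariant kernel on a uniform grid; the Green's function samples and projected coefficients are random stand-ins, not the paper's RWG projections.

```python
import numpy as np

def fft_convolve2d(green, coeffs):
    """Apply a translation-invariant (convolutional) interaction with a 2D FFT.
    green: Green's function sampled on the uniform array-face grid (stand-in);
    coeffs: basis coefficients projected onto the same grid (stand-in)."""
    ny, nx = coeffs.shape
    shape = (2 * ny, 2 * nx)             # zero-pad to avoid circular wrap-around
    G = np.fft.rfft2(green, shape)
    C = np.fft.rfft2(coeffs, shape)
    return np.fft.irfft2(G * C, shape)[:ny, :nx]

# Toy 16 x 16 array of cells: O(N log N) instead of O(N^2) cell interactions.
yy, xx = np.mgrid[0:16, 0:16]
green = 1.0 / (1.0 + xx**2 + yy**2)      # placeholder interpolated Green's function
coeffs = np.random.rand(16, 16)          # placeholder projected excitations
fields = fft_convolve2d(green, coeffs)
```

Only this stage touches the whole array at once; the projection stages act cell by cell with a single low-rank matrix, which is where the fill-time savings come from.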

Interpolation and Approximation

Description: This paper consists of three chapters. The first chapter discusses interpolation. Here a theorem about the uniqueness of the solution to the general interpolation problem is proven. Then the problem of how to represent this unique solution is discussed. Finally, the error involved in the interpolation and the convergence of the interpolation process are developed. In the second chapter a theorem about the uniform approximation to continuous functions is proven. Then the best approximation and the least squares approximation (a special case of best approximation) are discussed. In the third chapter orthogonal polynomials are discussed, as well as bounded linear functionals in Hilbert spaces and interpolation and approximation in Hilbert space.
Date: May 1977
Creator: Lal, Ram
Partner: UNT Libraries
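
A worked sketch of the uniqueness result treated in the first chapter: through n+1 distinct nodes there is exactly one polynomial of degree at most n, because the Vandermonde system is nonsingular; the Lagrange form is a different representation of the same interpolant. The nodes and data below are illustrative.

```python
import numpy as np

# Through n+1 distinct nodes there is exactly one polynomial of degree <= n:
# the Vandermonde matrix is nonsingular, so the coefficients are unique.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
V = np.vander(x, increasing=True)        # columns 1, x, x^2, x^3
c = np.linalg.solve(V, y)                # the unique coefficient vector

# The Lagrange form represents the same unique interpolant.
lagrange = sum(y[i] * np.prod([(1.5 - x[j]) / (x[i] - x[j])
                               for j in range(4) if j != i])
               for i in range(4))
assert np.isclose(np.polyval(c[::-1], 1.5), lagrange)
```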

Objective analysis of the ARM IOP data: method and sensitivity

Description: Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken during the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
Date: April 1, 1999
Creator: Cedarwall, R; Lin, J L; Xie, S C; Yio, J J & Zhang, M H
Partner: UNT Libraries Government Documents Department
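
A minimal sketch of the variational constraining idea, assuming the budgets can be written as a small linear system: the analyzed values are adjusted by the smallest least-squares correction that makes the column-integrated budgets balance. The vectors and matrices here are illustrative toys, not the ARM analysis operators.

```python
import numpy as np

def constrain(v, A, b):
    """Minimally adjust the analysis v so that A @ v = b holds exactly.
    Rows of A encode linear budget constraints (e.g., a column-integrated
    mass budget); the Lagrange-multiplier solution is the smallest L2 change."""
    lam = np.linalg.solve(A @ A.T, b - A @ v)
    return v + A.T @ lam

v = np.array([1.0, 2.0, 3.0, 4.0])      # e.g., an analyzed divergence profile
A = np.array([[1.0, 1.0, 1.0, 1.0]])    # the column integral must match the budget
b = np.array([9.0])
v_adj = constrain(v, A, b)
assert np.allclose(A @ v_adj, b)        # constraint met with the smallest change to v
```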

List mode reconstruction for PET with motion compensation: A simulation study

Description: Motion artifacts can be a significant factor that limits the image quality in high-resolution PET. Surveillance systems have been developed to track the movements of the subject during a scan. Development of reconstruction algorithms that are able to compensate for the subject motion will increase the potential of PET. In this paper we present a list mode likelihood reconstruction algorithm with the ability of motion compensation. The subject motion is explicitly modeled in the likelihood function. The detections of each detector pair are modeled as a Poisson process with time-varying rate function. The proposed method has several advantages over the existing methods. It uses all detected events and does not introduce any interpolation error. Computer simulations show that the proposed method can compensate simulated subject movements and that the reconstructed images have no visible motion artifacts.
Date: July 1, 2002
Creator: Qi, Jinyi & Huesman, Ronald H.
Partner: UNT Libraries Government Documents Department
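
A schematic of the likelihood construction described above, assuming rigid motion known from the tracking system: the time-varying Poisson rate of each detector pair is obtained by applying the motion at the event's time stamp, so every detected event contributes and none is interpolated away. All operators below are toy stand-ins for the real system matrix.

```python
import numpy as np

def list_mode_loglik(image, events, sensitivity, p_detect):
    """List-mode Poisson log-likelihood with motion compensation (schematic).
    p_detect(pair, t) returns the system-matrix row for a detector pair at
    time t, i.e. detection probabilities after applying the tracked motion."""
    ll = -np.dot(sensitivity, image)              # integral of the rate function
    for pair, t in events:
        rate = np.dot(p_detect(pair, t), image)   # time-varying Poisson rate
        ll += np.log(max(rate, 1e-30))
    return ll

# Toy 1D "image" whose motion is a one-pixel shift after t = 0.5.
img = np.array([0.0, 5.0, 1.0, 0.0])
def p_detect(pair, t):
    row = np.zeros(4)
    row[(pair + (1 if t > 0.5 else 0)) % 4] = 1.0
    return row
events = [(1, 0.2), (0, 0.8)]
print(list_mode_loglik(img, events, np.ones(4), p_detect))
```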

Spatially Interpolated Nonlinear Apodization in Synthetic Aperture Radar Imagery

Description: The original formulation of spatially variant apodization (SVA) for complex synthetic aperture radar (SAR) imagery assumes oversampling at twice the Nyquist rate (2.0x). Here we report a spatially interpolating, noninteger-oversampled SVA sidelobe-reduction method. A pixel's apparent IPR location is assessed by comparing its value to the sum of its value plus weighted neighbor values. Exact interpolation implies an ideal sinc interpolator, but large interpolators may not be necessary: the value of a sinc IPR on the diagonals is sinc-squared, much smaller than the values in the cardinal directions (m, n), which implies that cardinal-direction interpolation requires higher precision than diagonal interpolation. Consequently, we employed a smaller interpolator set for the diagonals; the spatially interpolated SVA used an 8-point/4-point sinc interpolator. Results show a two-times speed-up for the 1.3x-oversampled, spatially interpolated SVA relative to the 2.0x formulation.
Date: June 29, 1999
Creator: Eichel, Paul H.; Jakowatz, Jr., Charles V. & Yocky, David A.
Partner: UNT Libraries Government Documents Department
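
For background, a sketch of the classical spatially variant apodization rule on which the paper builds: for complex imagery oversampled at 2x Nyquist, each sample gets the raised-cosine weight in [0, 0.5] that minimizes its apodized magnitude. The paper's contribution (noninteger oversampling with spatial sinc interpolation) is not reproduced here, and the data are illustrative.

```python
import numpy as np

def sva_1d(x):
    """Classical 1D spatially variant apodization for complex data sampled at
    2x the Nyquist rate: choose, per sample, the weight w in [0, 0.5] that
    minimizes |x[n] + w*(x[n-1] + x[n+1])|."""
    y = x.copy()
    for n in range(1, len(x) - 1):
        s = x[n - 1] + x[n + 1]
        if s != 0:
            w = np.clip(-np.real(x[n] * np.conj(s)) / np.abs(s) ** 2, 0.0, 0.5)
            y[n] = x[n] + w * s
    return y

# Sinc-like impulse response with a fractional peak offset: sidelobe
# magnitudes shrink while the mainlobe samples are preserved.
n = np.arange(-16, 17)
ipr = np.sinc((n - 0.5) / 2.0).astype(complex)
print(np.max(np.abs(ipr[:14])), np.max(np.abs(sva_1d(ipr)[:14])))
```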

Are Bilinear Quadrilaterals Better Than Linear Triangles?

Description: This paper compares the theoretical effectiveness of bilinear approximation over quadrilaterals with linear approximation over triangles. Anisotropic mesh transformation is used to generate asymptotically optimally efficient meshes for piecewise linear interpolation over triangles and bilinear interpolation over quadrilaterals. For approximating a convex function, although bilinear quadrilaterals are more efficient, linear triangles are more accurate and may be preferred in finite element computations; whereas for saddle-shaped functions, quadrilaterals may offer a higher order approximation on a well-designed mesh. A surprising finding is that different grid orientations may yield an order-of-magnitude improvement in approximation accuracy.
Date: January 1, 1993
Creator: D'Azevedo, E.F.
Partner: UNT Libraries Government Documents Department
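
A small numerical experiment in the spirit of the comparison above: interpolate a convex function on the unit square with one bilinear quadrilateral versus two linear triangles and compare maximum errors. This single-element illustration only shows that the two element types carry different error constants; the paper's conclusions concern asymptotically optimal anisotropic meshes.

```python
import numpy as np

f = lambda x, y: (x + 2 * y) ** 2     # a convex test function with a cross term

def bilinear(x, y):
    """Bilinear interpolant of f from the four corners of the unit square."""
    return (f(0, 0) * (1 - x) * (1 - y) + f(1, 0) * x * (1 - y) +
            f(0, 1) * (1 - x) * y + f(1, 1) * x * y)

def triangles(x, y):
    """Linear interpolant on the two triangles cut by the diagonal y = x."""
    if y <= x:   # plane through (0,0), (1,0), (1,1)
        return f(0, 0) + (f(1, 0) - f(0, 0)) * x + (f(1, 1) - f(1, 0)) * y
    else:        # plane through (0,0), (1,1), (0,1)
        return f(0, 0) + (f(1, 1) - f(0, 1)) * x + (f(0, 1) - f(0, 0)) * y

pts = np.linspace(0.0, 1.0, 101)
err_q = max(abs(bilinear(x, y) - f(x, y)) for x in pts for y in pts)
err_t = max(abs(triangles(x, y) - f(x, y)) for x in pts for y in pts)
print(err_q, err_t)   # both O(h^2), but with different constants
```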

Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization

Description: Adaptive Mesh Refinement (AMR) is a highly effective method for simulations spanning a large range of spatiotemporal scales such as those encountered in astrophysical simulations. Combining research in novel AMR visualization algorithms and basic infrastructure work, the Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) has extended VisIt, an open source visualization tool that can handle AMR data without converting it to alternate representations. This paper focuses on two recent advances in the development of VisIt. First, we have developed streamline computation methods that properly handle multi-domain data sets and effectively utilize multiple processors on parallel machines. Furthermore, we are working on streamline calculation methods that consider the AMR hierarchy, detect transitions from a lower-resolution patch into a finer patch, and improve interpolation at level boundaries. Second, we focus on visualization of large-scale particle data sets. By integrating the DOE Scientific Data Management (SDM) Center's FastBit indexing technology into VisIt, we are able to reduce particle counts effectively by thresholding and by loading only those particles from disk that satisfy the thresholding criteria. Furthermore, using FastBit it becomes possible to compute parallel coordinate views efficiently, thus facilitating interactive data exploration of massive particle data sets.
Date: November 12, 2009
Creator: Weber, Gunther; Ahern, Sean; Bethel, Wes; Borovikov, Sergey; Childs, Hank; Deines, Eduard et al.
Partner: UNT Libraries Government Documents Department

VISCOSITY OF AQUEOUS SODIUM CHLORIDE SOLUTIONS FROM 0 - 150°C

Description: A critical evaluation of data on the viscosity of aqueous sodium chloride solutions is presented. The literature was screened through October 1977, and a databank of evaluated data was established. Viscosity values were converted when necessary to units of centigrade, centipoise, and molal concentration. The data were correlated with the aid of an empirical equation to facilitate interpolation and computer calculations. The result of the evaluation includes a table containing smoothed values for the viscosity of NaCl solutions to 150°C.
Date: October 1, 1977
Creator: Ozbek, H.; Fair, J.A. & Phillips, S.L.
Partner: UNT Libraries Government Documents Department
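
A sketch of how such an empirical correlation can be fit and then used for interpolation, assuming an illustrative functional form in molality m and temperature T; the report's actual equation is not reproduced here, and the data values below are synthetic placeholders, not the evaluated databank.

```python
import numpy as np

# Synthetic placeholder data: (molality, temperature in C, viscosity in cP).
data = np.array([
    [0.5, 25.0, 0.93], [1.0, 25.0, 0.97], [2.0, 25.0, 1.08],
    [0.5, 75.0, 0.40], [1.0, 75.0, 0.42], [2.0, 75.0, 0.47],
])
m, T, eta = data.T

# Illustrative (hypothetical) form: log(eta) = c0 + c1*m + c2*m^2 + c3/(T + 133).
X = np.column_stack([np.ones_like(m), m, m**2, 1.0 / (T + 133.0)])
c, *_ = np.linalg.lstsq(X, np.log(eta), rcond=None)

def eta_fit(m, T):
    """Smoothed viscosity (cP) interpolated from the fitted correlation."""
    return np.exp(c[0] + c[1] * m + c[2] * m**2 + c[3] / (T + 133.0))

print(eta_fit(1.5, 50.0))   # a smoothed value between the tabulated points
```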

Possible problems in ENDF/B-VI.r8

Description: This document lists the problems that we encountered in processing ENDF/B-VI.r8 that we suspect are problems with ENDF/B-VI.r8 itself. It also contains a comparison of linear interpolation methods. Finally, this document proposes an alternative to the current scheme of reporting problems to the ENDF community.
Date: October 30, 2003
Creator: Brown, D & Hedstrom, G
Partner: UNT Libraries Government Documents Department
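
As background to the note's comparison of interpolation methods, a sketch contrasting linear-linear and log-log interpolation of a tabulated cross section (a pure illustration, not the document's own comparison): for data that fall off like a power law, the choice of interpolation law changes the result dramatically.

```python
import numpy as np

E = np.array([1.0, 10.0])         # tabulated energies (eV)
s = np.array([100.0, 1.0])        # tabulated cross sections (barns), ~1/E^2

Ei = 3.0
lin = np.interp(Ei, E, s)                                   # lin-lin law
log = np.exp(np.interp(np.log(Ei), np.log(E), np.log(s)))   # log-log law
print(lin, log)   # 78.0 vs ~11.1 (the true 1/E^2 value at 3 eV is 100/9 = 11.1)
```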

EVIDENCE FOR THE ITINERANT ELECTRON MODEL OF FERROMAGNETISM AND FOR SURFACE PHOTOEMISSION FROM ANGLE-RESOLVED PHOTOEMISSION STUDIES OF IRON

Description: Angle-resolved HeI photoemission spectra of Fe(001) are reported and interpreted within the framework of a direct transition model using Callaway's ferromagnetic band structure. The generally good agreement between predicted and experimental peak positions is taken to be strong support for the itinerant electron theory of ferromagnetism. Spectra taken with nearly grazing incidence p-polarized light emphasize the one-dimensional density of states peaks, supporting Kliewer's theoretical predictions of surface photoemission. The importance of electron refraction is noted, as is the value of interpolation calculations for interpreting ARP spectra.
Date: October 1, 1977
Creator: Kevan, S.D.; Wehner, P.S. & Shirley, D.A.
Partner: UNT Libraries Government Documents Department

A New Stabilized Nodal Integration Approach

Description: A new stabilized nodal integration scheme is proposed and implemented. In this work, the focus is on natural neighbor meshless interpolation schemes. The approach is a modification of the stabilized conforming nodal integration (SCNI) scheme and is shown to perform well in several benchmark problems.
Date: February 8, 2006
Creator: Puso, M; Zywicz, E & Chen, J S
Partner: UNT Libraries Government Documents Department

Spectral Interpolation on 3 x 3 Stencils for Prediction and Compression

Description: Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show through several applications that predictive coding using our spectral predictor improves compression for various sources of high-precision data.
Date: June 25, 2007
Creator: Ibarria, L; Lindstrom, P & Rossignac, J
Partner: UNT Libraries Government Documents Department
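
A sketch of the lookup-table idea, under the assumption that "optimal" means a least-squares fit to a small smooth (low-frequency cosine) basis: for each of the 2^8 configurations of known neighbors on the 3x3 stencil, prediction weights for the center are precomputed once and cached. The basis choice here is illustrative, not the paper's exact spectral construction.

```python
import numpy as np
from itertools import product

def dct_basis():
    """Low-frequency 2D cosine basis sampled on the 3x3 stencil (9 rows)."""
    pos = [(i, j) for i in range(3) for j in range(3)]   # index 4 is the center
    modes = [(p, q) for p in range(2) for q in range(2)]
    return np.array([[np.cos(np.pi * p * (2 * i + 1) / 6) *
                      np.cos(np.pi * q * (2 * j + 1) / 6)
                      for (p, q) in modes] for (i, j) in pos])

B = dct_basis()
NEIGHBORS = [0, 1, 2, 3, 5, 6, 7, 8]     # stencil positions other than the center

def weights_for_mask(mask):
    """Prediction weights for the center given one configuration of knowns:
    least-squares fit of the smooth model to the known samples, then evaluation
    of that fit at the center position."""
    idx = [k for k, known in zip(NEIGHBORS, mask) if known]
    return np.linalg.pinv(B[idx]).T @ B[4] if idx else None

# Precompute the small lookup table over all 2^8 neighbor configurations.
table = {m: weights_for_mask(m) for m in product([False, True], repeat=8)}
print(table[(True,) * 8])   # with all neighbors known: a fixed 8-tap predictor
```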

LIP: The Livermore Interpolation Package, Version 1.4

Description: This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an ...
Date: July 6, 2011
Creator: Fritsch, F N
Partner: UNT Libraries Government Documents Department
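
A sketch of two of the facilities named above: piecewise-bilinear interpolation on a rectangular (x, y) mesh and inverse interpolation in the second independent variable, done here by bisection under a monotonicity assumption. The function names are illustrative Python, not LIP's C API.

```python
import numpy as np

def bilinear(xg, yg, F, x, y):
    """Piecewise-bilinear interpolation of F (shape len(xg) x len(yg)) at (x, y)."""
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    j = np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2)
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * F[i, j] + tx * (1 - ty) * F[i + 1, j] +
            (1 - tx) * ty * F[i, j + 1] + tx * ty * F[i + 1, j + 1])

def inverse_y(xg, yg, F, x, f_target, tol=1e-10):
    """Solve F(x, y) = f_target for y by bisection (F assumed monotone in y)."""
    lo, hi = yg[0], yg[-1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        same_side = ((bilinear(xg, yg, F, x, mid) - f_target) *
                     (bilinear(xg, yg, F, x, lo) - f_target) > 0)
        lo, hi = (mid, hi) if same_side else (lo, mid)
    return 0.5 * (lo + hi)

xg = np.linspace(0.0, 1.0, 11); yg = np.linspace(0.0, 2.0, 21)
F = np.add.outer(xg**2, yg)           # monotone in y, like pressure vs temperature
y = inverse_y(xg, yg, F, 0.3, 1.0)    # find y with F(0.3, y) = 1.0
print(y, 1.0 - 0.3**2)                # both ~0.91
```

A production inverse would bracket within mesh cells rather than over the whole y range, but the bracketing-plus-bisection structure is the same.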

Controlled-aperture wave-equation migration

Description: We present a controlled-aperture wave-equation migration method that not only can reduce migration artifacts due to limited recording apertures and determine image weights to balance the effects of limited-aperture illumination, but also can improve the migration accuracy by reducing the slowness perturbations within the controlled migration regions. The method consists of two steps: migration aperture scan and controlled-aperture migration. Migration apertures for a sparse distribution of shots are determined using wave-equation migration, and those for the other shots are obtained by interpolation. During the final controlled-aperture migration step, we can select a reference slowness in controlled regions of the slowness model to reduce slowness perturbations, and consequently increase the accuracy of wave-equation migration methods that make use of reference slownesses. In addition, the computation in the space domain during wavefield downward continuation needs to be conducted only within the controlled apertures; therefore, the computational cost of the controlled-aperture migration step (without including the migration aperture scan) is less than that of the corresponding uncontrolled-aperture migration. Finally, we can use the efficient split-step Fourier approach for the migration-aperture scan, then use other, more accurate though more expensive, wave-equation migration methods to perform the final controlled-aperture migration to produce the most accurate image.
Date: January 1, 2003
Creator: Huang, L. (Lian-Jie); Fehler, Michael C.; Sun, H. (Hongchuan) & Li, Z. (Zhiming)
Partner: UNT Libraries Government Documents Department
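
For context on the reference-slowness idea above, a sketch of one downward-continuation step of split-step Fourier migration: a phase shift in the wavenumber domain using the reference slowness, followed by a space-domain correction for the slowness perturbation. All parameters and fields below are toy values, not the paper's data.

```python
import numpy as np

def split_step_fourier_step(wavefield, dz, omega, s_ref, s_model, dx):
    """One downward-continuation step of split-step Fourier migration:
    wavenumber-domain phase shift with the reference slowness s_ref, then a
    space-domain correction for the perturbation s_model - s_ref."""
    kx = 2 * np.pi * np.fft.fftfreq(wavefield.size, d=dx)
    kz = np.sqrt(np.maximum((omega * s_ref) ** 2 - kx**2, 0.0))  # evanescent cut
    W = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
    return W * np.exp(1j * omega * (s_model - s_ref) * dz)

# Toy example: 128 traces, one frequency, mild lateral slowness variation.
x = np.arange(128) * 10.0
s_model = 1 / 2000.0 + 1e-6 * np.sin(2 * np.pi * x / 640.0)
wf = np.exp(-((x - 640.0) / 80.0) ** 2).astype(complex)
out = split_step_fourier_step(wf, 10.0, 2 * np.pi * 20.0, 1 / 2000.0, s_model, 10.0)
```

The smaller the perturbation from the reference slowness, the more accurate the step, which is exactly why restricting migration to controlled regions with a well-chosen reference helps.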

A crust and upper mantle model of Eurasia and North Africa for Pn travel time calculation

Description: We develop a Regional Seismic Travel Time (RSTT) model and methods to account for the first-order effect of the three-dimensional crust and upper mantle on travel times. The model parameterization is a global tessellation of nodes with a velocity profile at each node. Interpolation of the velocity profiles generates a 3-dimensional crust and laterally variable upper mantle velocity. The upper mantle velocity profile at each node is represented as a linear velocity gradient, which enables travel time computation in approximately 1 millisecond. This computational speed allows the model to be used in routine analyses in operational monitoring systems. We refine the model using a tomographic formulation that adjusts the average crustal velocity, mantle velocity at the Moho, and the mantle velocity gradient at each node. While the RSTT model is inherently global and our ultimate goal is to produce a model that provides accurate travel time predictions over the globe, our first RSTT tomography effort covers Eurasia and North Africa, where we have compiled a data set of approximately 600,000 Pn arrivals that provide path coverage over this vast area. Ten percent of the tomography data are randomly selected and set aside for testing purposes. Travel time residual variance for the validation data is reduced by 32%. Based on a geographically distributed set of validation events with epicenter accuracy of 5 km or better, epicenter error using 16 Pn arrivals is reduced by 46% from 17.3 km (ak135 model) to 9.3 km after tomography. Relative to the ak135 model, the median uncertainty ellipse area is reduced by 68% from 3070 km{sup 2} to 994 km{sup 2}, and the number of ellipses with area less than 1000 km{sup 2}, which is the area allowed for onsite inspection under the Comprehensive Nuclear Test Ban Treaty, is increased from 0% to 51%.
Date: March 19, 2009
Creator: Myers, S; Begnaud, M; Ballard, S; Pasyanos, M; Phillips, W S; Ramirez, A et al.
Partner: UNT Libraries Government Documents Department
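
A worked illustration of why the linear velocity gradient permits roughly 1 ms travel-time computation: for v(z) = v0 + g*z, rays are circular arcs and the surface-to-surface travel time is closed-form, so no numerical ray tracing is needed. The formula below is the textbook whole-space result, not RSTT's layered crust-plus-mantle calculation, and the parameter values are illustrative.

```python
import numpy as np

def traveltime_linear_gradient(x, v0, g):
    """Surface-to-surface travel time at offset x (km) in a medium with
    v(z) = v0 + g*z: rays are circular arcs, so T has the closed form
    T = (2/g) * arcsinh(g*x / (2*v0))."""
    return (2.0 / g) * np.arcsinh(g * x / (2.0 * v0))

# Pn-like illustration: 8 km/s at the Moho, gradient 0.002 1/s.
for x in (200.0, 600.0, 1000.0):                          # epicentral distance, km
    print(x, traveltime_linear_gradient(x, 8.0, 0.002))   # seconds
```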

Interpolation of probability densities in ENDF and ENDL

Description: Suppose that we are given two probability densities p{sub 0}(E{prime}) and p{sub 1}(E{prime}) for the energy E{prime} of an outgoing particle, p{sub 0}(E{prime}) corresponding to energy E{sub 0} of the incident particle and p{sub 1}(E{prime}) corresponding to incident energy E{sub 1}. If E{sub 0} < E{sub 1}, the problem is how to define p{sub {alpha}}(E{prime}) for intermediate incident energies E{sub {alpha}} = (1 - {alpha})E{sub 0} + {alpha}E{sub 1} with 0 < {alpha} < 1. In this note the author considers three ways to do it. The note begins with unit-base interpolation, which is standard in ENDL and is sometimes used in ENDF, then describes the equiprobable bins used by some Monte Carlo codes, and closes with a discussion of interpolation by corresponding points, which is commonly used in ENDF.
Date: January 27, 2006
Creator: Hedstrom, G
Partner: UNT Libraries Government Documents Department
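
A schematic of unit-base interpolation, the first of the three schemes discussed in the note: each density is mapped onto the unit interval, the shapes are interpolated there, and the result is mapped back to the interpolated outgoing-energy range, which preserves normalization. The grids and densities below are toy examples.

```python
import numpy as np

def unit_base_interpolate(E0, p0, E1, p1, alpha, n=200):
    """Unit-base interpolation between two outgoing-energy densities.
    (E0, p0), (E1, p1): energy grids and density values at incident energies
    E_0 and E_1; alpha in (0, 1) selects the intermediate incident energy."""
    x = np.linspace(0.0, 1.0, n)   # unit-base variable x = (E' - Emin)/(Emax - Emin)
    q0 = np.interp(x, (E0 - E0[0]) / (E0[-1] - E0[0]), p0) * (E0[-1] - E0[0])
    q1 = np.interp(x, (E1 - E1[0]) / (E1[-1] - E1[0]), p1) * (E1[-1] - E1[0])
    q = (1 - alpha) * q0 + alpha * q1              # interpolate the unit-base shapes
    lo = (1 - alpha) * E0[0] + alpha * E1[0]       # interpolate the E' endpoints
    hi = (1 - alpha) * E0[-1] + alpha * E1[-1]
    return lo + x * (hi - lo), q / (hi - lo)       # rescale back; integral stays 1

E0 = np.linspace(0.0, 2.0, 50); p0 = np.full_like(E0, 0.5)   # flat on [0, 2]
E1 = np.linspace(0.0, 4.0, 50); p1 = E1 / 8.0                # triangular on [0, 4]
E, p = unit_base_interpolate(E0, p0, E1, p1, 0.5)
print(np.trapz(p, E))   # ~1.0: normalization is preserved
```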

LIP: The Livermore Interpolation Package, Version 1.3

Description: This report describes LIP, the Livermore Interpolation Package. Because LIP is a stand-alone version of the interpolation package in the Livermore Equation of State (LEOS) access library, the initials LIP alternatively stand for the 'LEOS Interpolation Package'. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since the package need not be restricted to equation of state data, which uses variables {rho} (density) and T (temperature). LIP is primarily concerned with the interpolation of two-dimensional data on a rectangular mesh. The interpolation methods provided include piecewise bilinear, reduced (12-term) bicubic, and bicubic Hermite (biherm). There is a monotonicity-preserving variant of the latter, known as bimond. For historical reasons, there is also a biquadratic interpolator, but this option is not recommended for general use. A birational method was added at version 1.3. In addition to direct interpolation of two-dimensional data, LIP includes a facility for inverse interpolation (at present, only in the second independent variable). For completeness, however, the package also supports a compatible one-dimensional interpolation capability. Parametric interpolation of points on a two-dimensional curve can be accomplished by treating the components as a pair of one-dimensional functions with a common independent variable. LIP has an object-oriented design, but it is implemented in ANSI Standard C for efficiency and compatibility with existing applications. First, a 'LIP interpolation object' is created and initialized with the data to be interpolated. Then the interpolation coefficients for the selected method are computed and added to the object. Since version 1.1, LIP has options to instead estimate derivative values or merely store data in the object. (These are referred to as 'partial setup' options.) It is then possible to pass the object to functions that interpolate or invert the interpolant at an ...
Date: January 4, 2011
Creator: Fritsch, F N
Partner: UNT Libraries Government Documents Department

SYMPLECTIC INTERPOLATION.

Description: It is important to have symplectic maps for the various electromagnetic elements in an accelerator ring. For some tracking problems we must consider elements which evolve during a ramp. Rather than performing a computationally intensive numerical integration for every turn, it should be possible to integrate the trajectory for a few sets of parameters, and then interpolate the transport map as a function of one or more parameters, such as energy. We present two methods for interpolation of symplectic matrices as a function of parameters: one method is based on the calculation of a representation in terms of a basis of group generators [2, 3] and the other is based on the related but simpler symplectification method of Healy [1]. Both algorithms guarantee a symplectic result.
Date: June 23, 2006
Creator: MACKAY, W.W. & LUCCIO, A.U.
Partner: UNT Libraries Government Documents Department
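
A sketch of the generator-based interpolation idea, assuming maps for which the matrix logarithm is well behaved: the logarithm of a symplectic matrix is a Hamiltonian matrix, Hamiltonian matrices form a linear space, and the matrix exponential maps back into the symplectic group, so the interpolant is symplectic for every parameter value. This is a generic Lie-algebra sketch, not Healy's symplectification method, and the transport matrices are toys.

```python
import numpy as np
from scipy.linalg import expm, logm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # the 2x2 symplectic form

def is_symplectic(M, tol=1e-10):
    return np.allclose(M.T @ J @ M, J, atol=tol)

def interp_symplectic(M0, M1, s):
    """Interpolate symplectic maps via their Hamiltonian generators:
    interpolate linearly in the Lie algebra, then exponentiate back."""
    A0, A1 = logm(M0), logm(M1)
    return expm((1.0 - s) * A0 + s * A1).real

# Two thin-lens-like transport matrices (2x2 symplectic: unit determinant).
M0 = np.array([[1.0, 0.5], [0.0, 1.0]])
M1 = np.array([[1.0, 0.0], [-0.8, 1.0]])
Mh = interp_symplectic(M0, M1, 0.5)
print(is_symplectic(M0), is_symplectic(M1), is_symplectic(Mh))   # True True True
```

Unlike entrywise interpolation of the matrices, which generally leaves the group, this construction is symplectic by design, which is the property both methods in the abstract guarantee.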

A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

Description: A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here that belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
Date: February 2, 2005
Creator: Fasenfest, B J; Capolino, F & Wilton, D
Partner: UNT Libraries Government Documents Department

Experiences with BoomerAMG: A Parallel Algebraic Multigrid Solver and Preconditioner for Large Linear Systems

Description: Algebraic multigrid (AMG) is an attractive choice for solving large linear systems Ax = b on unstructured grids. While AMG is applicable as a solver for a variety of problems, its robustness may be enhanced by using it as a preconditioner for Krylov solvers, such as GMRES. The sheer size of modern problems, hundreds of millions or billions of unknowns, dictates the use of massively parallel computers. AMG consists of two phases: the setup phase, in which smaller and smaller linear systems are generated by means of linear transfer operators (interpolation and restriction); and the solve phase, which employs a smoothing operator, such as Gauss-Seidel or Jacobi relaxation. Most of these components can be parallelized in a straightforward fashion; however, the coarse-grid selection, in which the grid for a smaller linear system is created on which the error can be approximated, is highly sequential, so it is important to develop parallel coarsening techniques. The authors briefly present the coarsening algorithms used in the parallel AMG code BoomerAMG and summarize some performance results for those algorithms. A detailed discussion of the algorithms and numerical results will be found elsewhere.
Date: February 22, 2000
Creator: Henson, V E & Yang, U M
Partner: UNT Libraries Government Documents Department
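
A minimal two-grid sketch of the two phases named above: a setup step that builds interpolation, restriction (taken as the transpose), and the Galerkin coarse operator, and a solve step with weighted-Jacobi smoothing. Geometric coarsening on a 1D Poisson problem stands in for AMG's algebraic coarse-grid selection, the component the abstract identifies as hard to parallelize.

```python
import numpy as np

def two_grid(A, b, x, P, nu=2, omega=2.0 / 3.0):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth.
    P: interpolation (prolongation); restriction is P.T (Galerkin)."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(nu):                        # weighted-Jacobi pre-smoothing
        x = x + omega * Dinv * (b - A @ x)
    Ac = P.T @ A @ P                           # Galerkin coarse operator (setup)
    xc = np.linalg.solve(Ac, P.T @ (b - A @ x))
    x = x + P @ xc                             # coarse-grid correction
    for _ in range(nu):                        # post-smoothing
        x = x + omega * Dinv * (b - A @ x)
    return x

# 1D Poisson test problem; linear interpolation from every other grid point.
n = 63
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
b = np.ones(n); x = np.zeros(n)
for _ in range(10):
    x = two_grid(A, b, x, P)
print(np.linalg.norm(b - A @ x))               # residual drops rapidly per cycle
```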