237 Matching Results

Stress relaxation of silicon nitride at elevated temperatures

Description: The stress relaxation behavior of SN88, SN253, and NCX-5102 silicon nitride materials was experimentally determined in tension at 1300°C using buttonhead specimens. Specimens were held at constant strain after being loaded at 10 MPa/s to an initial stress of 276 MPa (40 ksi) or 414 MPa (60 ksi). The subsequent decay in tensile stress was measured as a function of time. A non-negative least squares algorithm used in conjunction with a generalized Maxwell model proved to be an efficient means of defining characteristic relaxation modulus spectra and stress relaxation behavior. In the last part of this study, the utility of short-term stress relaxation testing for predicting long-term creep performance was examined.
Date: April 1, 1995
Creator: Wereszczak, A.A.; Ferber, M.K.; Kirkland, T.P.; Lara-Curzio, E.; Parthasarathy, V. & Gribb, T.T.
Partner: UNT Libraries Government Documents Department
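The fitting approach this abstract describes can be illustrated with a small sketch: a generalized Maxwell (Prony series) relaxation modulus is recovered from synthetic stress-decay data by non-negative least squares over a fixed grid of candidate relaxation times. The data, the time grid, and the simple coordinate-descent NNLS solver are all illustrative assumptions, not the authors' actual procedure or materials data.

```python
import numpy as np

def nnls_cd(A, b, iters=5000):
    """Non-negative least squares via cyclic coordinate descent."""
    m, n = A.shape
    w = np.zeros(n)
    col_sq = (A * A).sum(axis=0)
    for _ in range(iters):
        for j in range(n):
            # Exact minimization over coordinate j, clipped at zero.
            r = b - A @ w + A[:, j] * w[j]
            w[j] = max(0.0, A[:, j] @ r / col_sq[j])
    return w

# Synthetic relaxation data: stress decays as a sum of two exponentials.
t = np.linspace(0.0, 60.0, 100)
stress = 150.0 * np.exp(-t / 2.0) + 120.0 * np.exp(-t / 20.0)

# Fixed grid of candidate relaxation times; NNLS selects the weights,
# zeroing the spurious third mode.
taus = np.array([2.0, 20.0, 200.0])
A = np.exp(-t[:, None] / taus[None, :])
weights = nnls_cd(A, stress)
print(weights)
```

Because the synthetic data are exactly representable on the grid, the recovered spectrum places all weight on the two true relaxation times.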

Development of global medium-energy nucleon-nucleus optical model potentials

Description: The authors report on the development of new global optical model potentials for nucleon-nucleus scattering at medium energies. Using both Schroedinger and Dirac scattering formalisms, the goal is to construct a physically realistic optical potential describing nucleon-nucleus elastic scattering observables for a projectile energy range of (perhaps) 20 MeV to (perhaps) 2 GeV and a target mass range of 16 to 209, excluding regions of strong nuclear deformation. They use a phenomenological approach guided by conclusions from recent microscopic studies. The experimental database consists largely of proton-nucleus elastic differential cross sections, analyzing powers, spin-rotation functions, and total reaction cross sections, and neutron-nucleus total cross sections. They will use this database in a nonlinear least-squares adjustment of optical model parameters in both relativistic equivalent Schroedinger (including relativistic kinematics) and Dirac (second-order reduction) formalisms. Isospin will be introduced through the standard Lane model and a relativistic generalization of that model.
Date: August 1, 1997
Creator: Madland, D.G. & Sierk, A.J.
Partner: UNT Libraries Government Documents Department


Description: This paper compares the performance of recursive state estimation techniques for tracking the physical location of a radioactive source within a room based on radiation measurements obtained from a series of detectors at fixed locations. Specifically, the extended Kalman filter, algebraic observer, and nonlinear least squares techniques are investigated. The results of this study indicate that the nonlinear least squares technique significantly outperforms the other techniques due to the severe model nonlinearity.
Date: September 1, 2000
Creator: Muske, K. & Howse, J.
Partner: UNT Libraries Government Documents Department
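As a sketch of the estimation problem described above, the following fits a source position to synthetic inverse-square detector readings by Gauss-Newton nonlinear least squares with a step-halving safeguard. The room geometry, the detector model (no attenuation, no noise), and the solver details are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Four fixed detectors at the corners of an 8 m x 6 m room.
detectors = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 6.0], [8.0, 6.0]])
src_true = np.array([3.0, 2.0])
strength = 500.0

def model(pos):
    # Inverse-square intensity seen by each detector.
    d2 = ((detectors - pos) ** 2).sum(axis=1)
    return strength / d2

meas = model(src_true)              # noise-free synthetic measurements

pos = np.array([4.0, 3.0])          # initial guess near the room centre
for _ in range(100):
    r = model(pos) - meas
    # Central-difference Jacobian of the measurement model.
    J = np.empty((len(meas), 2))
    for j in range(2):
        e = np.zeros(2); e[j] = 1e-6
        J[:, j] = (model(pos + e) - model(pos - e)) / 2e-6
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    # Halve the step until the squared residual decreases.
    s = 1.0
    while np.sum((model(pos + s * step) - meas) ** 2) > np.sum(r ** 2) and s > 1e-4:
        s *= 0.5
    pos = pos + s * step
print(pos)
```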

Ideas for fast accelerator model calibration

Description: With the advent of a simple matrix inversion technique, measurement-based storage ring modeling has made rapid progress in recent years. Using fast computers with large memory, the matrix inversion procedure typically adjusts up to 10³ model variables to fit on the order of 10⁵ measurements. The results have been surprisingly accurate. Physics aside, one of the next frontiers is to simplify the process and to reduce computation time. In this paper, the authors discuss two approaches to speed up the model calibration process: recursive least-squares fitting and a piecewise fitting approach.
Date: May 1, 1997
Creator: Corbett, J. & LeBlanc, G.
Partner: UNT Libraries Government Documents Department
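The recursive least-squares fitting mentioned above can be sketched in its textbook form: parameters are updated one measurement at a time through a gain vector and a covariance downdate, avoiding any batch matrix inversion. The streaming line-fit below is a toy stand-in for the storage-ring model variables.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 200)
ys = 2.0 * xs + 1.0                  # exact responses for the sketch

theta = np.zeros(2)                  # parameters [slope, intercept]
P = 1e6 * np.eye(2)                  # large initial covariance = weak prior
for x, y in zip(xs, ys):
    a = np.array([x, 1.0])               # regressor for this measurement
    K = P @ a / (1.0 + a @ P @ a)        # gain vector
    theta = theta + K * (y - a @ theta)  # correct with the new residual
    P = P - np.outer(K, a @ P)           # covariance downdate
print(theta)
```

Each update costs O(p²) for p parameters, which is why recursive fitting is attractive when measurements arrive faster than a full refit can run.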

Neutron Exposure Parameters for the Dosimetry Capsule in the Heavy-Section Steel Irradiation Program Tenth Irradiation Series

Description: This report describes the computational methodology for the least-squares adjustment of the dosimetry data from the HSSI 10.OD dosimetry capsule with neutronics calculations. It presents exposure rates at each dosimetry location for the neutron fluence greater than 1.0 MeV, fluence greater than 0.1 MeV, and displacements per atom. Exposure parameter distributions are also described in terms of three-dimensional fitting functions. When fitting functions are used, it is suggested that an uncertainty of 6% (1σ) should be associated with the exposure rate values. The specific activity of each dosimeter at the end of irradiation is listed in the Appendix.
Date: October 1, 1998
Creator: Baldwin, C.A.; Kam, F.B.K. & Remec, I.
Partner: UNT Libraries Government Documents Department

A note on the total least squares problem for coplanar points

Description: The Total Least Squares (TLS) fit to the points (x_k, y_k), k = 1, …, n, minimizes the sum of the squares of the perpendicular distances from the points to the line. This sum is the TLS error, and minimizing its magnitude is appropriate if x_k and y_k are uncertain. A priori formulas for the TLS fit and TLS error for coplanar points were originally derived by Pearson, and they are expressed in terms of the mean, standard deviation and correlation coefficient of the data. In this note, these TLS formulas are derived in a more elementary fashion. The TLS fit is obtained via the ordinary least squares problem and the algebraic properties of complex numbers. The TLS error is formulated in terms of the triangle inequality for complex numbers.
Date: September 1, 1994
Creator: Lee, S. L.
Partner: UNT Libraries Government Documents Department
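Pearson's closed-form TLS slope, expressed through the variances and covariance of the data as the note discusses, can be checked numerically against the principal-axis (eigenvector) characterization of the same line. The synthetic data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 300)
y = 0.7 * x + rng.normal(0.0, 0.5, 300)

sxx = np.var(x)
syy = np.var(y)
sxy = np.mean((x - x.mean()) * (y - y.mean()))

# Pearson's closed form for the TLS (orthogonal-distance) slope.
slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
intercept = y.mean() - slope * x.mean()

# Cross-check: the TLS line runs along the principal eigenvector of the
# 2x2 covariance matrix of the centred data.
C = np.cov(np.vstack([x, y]))
w, V = np.linalg.eigh(C)
v = V[:, np.argmax(w)]               # direction of largest variance
slope_eig = v[1] / v[0]
print(slope, slope_eig)
```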

High-order ENO schemes for unstructured meshes based on least-squares reconstruction

Description: High-order accurate schemes for conservation laws for unstructured meshes are not nearly so well advanced as such schemes for structured meshes. Consequently, little or nothing is known about the possible practical advantages of high-order discretization on unstructured meshes. This article is part of an ongoing effort to develop high-order schemes for unstructured meshes to the point where meaningful information can be obtained about the trade-offs involved in using spatial discretizations of higher than second-order accuracy on unstructured meshes. This article describes a high-order accurate ENO reconstruction scheme, called DD-L₂-ENO, for use with vertex-centered upwind flow solution algorithms on unstructured meshes. The solution of conservation equations in this context can be broken naturally into three phases: (1) solution reconstruction, in which a polynomial approximation of the solution is obtained in each control volume; (2) flux integration around each control volume, using an appropriate flux function and a quadrature rule with accuracy commensurate with that of the reconstruction; and (3) time evolution, which may be implicit, explicit, multigrid, or some hybrid.
Date: March 1, 1997
Creator: Ollivier-Gooch, C.F.
Partner: UNT Libraries Government Documents Department
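The reconstruction phase described above rests on least-squares polynomial fitting to neighboring data. A minimal sketch (linear reconstruction only, not the DD-L₂-ENO scheme itself): recover the gradient at a vertex from scattered neighbor values, which is exact when the underlying field is linear. The point cloud and test field are illustrative.

```python
import numpy as np

# Scattered "neighbour" points around a vertex at the origin.
rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, (8, 2))

def lsq_gradient(u_center, u_neighbors, dpts):
    """Least-squares linear reconstruction: solve for (du/dx, du/dy)."""
    rhs = u_neighbors - u_center
    g, *_ = np.linalg.lstsq(dpts, rhs, rcond=None)
    return g

# Linear test field: the reconstruction should recover it exactly.
u = lambda p: 3.0 + 2.0 * p[..., 0] - 1.5 * p[..., 1]
g = lsq_gradient(u(np.zeros(2)), u(pts), pts)
print(g)
```

Higher-order reconstructions extend the same idea by adding quadratic and cubic monomials to the least-squares basis.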

Constrained minimization for monotonic reconstruction

Description: The authors present several innovations in a method for monotonic reconstruction. It is based on applying constrained minimization techniques to impose monotonicity on a reconstruction. In addition, they extend several classical TVD limiters to a genuinely multidimensional setting, expanding upon the linear least squares reconstruction method. They also clarify the data-dependent weighting techniques used with the minimization process.
Date: August 20, 1996
Creator: Rider, W.J. & Kothe, D.B.
Partner: UNT Libraries Government Documents Department

Partial least squares, conjugate gradient and the fisher discriminant

Description: The theory of multivariate regression has been extensively studied and is commonly used in many diverse scientific areas. A wide variety of techniques are currently available for solving the problem of multivariate calibration. The volume of literature on this subject is so extensive that understanding which technique to apply can often be very confusing. Iterative methods form a common class of techniques for solving linear systems and, consequently, for applications of linear systems to multivariate analysis. While common linear system solvers typically involve the factorization of the coefficient matrix A in solving the system Ax = b, this approach can be impractical if A is large and sparse. Iterative methods such as Gauss-Seidel, SOR, Chebyshev semi-iterative, and related methods also often depend upon parameters that require calibration and are sometimes hard to choose properly. An iterative method which surmounts many of these difficulties is the conjugate gradient method. Algorithms of this type find solutions iteratively by optimally calculating the next approximation from the residuals.
Date: December 1996
Creator: Faber, V.
Partner: UNT Libraries Government Documents Department
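A minimal conjugate gradient implementation makes the abstract's point concrete: the solver touches A only through matrix-vector products, so no factorization is required and no tuning parameters need calibration. The SPD test system below is synthetic.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """CG for a symmetric positive definite A; no factorization of A."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(max_iter or 5 * len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # A-conjugate update of direction
        rs = rs_new
    return x

# SPD test system: normal equations of a random least-squares problem.
rng = np.random.default_rng(3)
M = rng.normal(size=(40, 10))
A = M.T @ M + 0.1 * np.eye(10)
b = rng.normal(size=10)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system, which is the property that links it to the multivariate calibration methods the abstract surveys.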

Radiographic least-squares fitting technique accurately measures dimensions and x-ray attenuation

Description: In support of stockpile stewardship and other important nondestructive test (NDT) applications, the authors seek improved methods for rapid evaluation of materials to detect degradation, warping, and shrinkage. Typically, such tests involve manual measurements of dimensions on radiographs. The authors seek to speed the process and reduce the costs of performing NDT by analyzing radiographic data using a least-squares fitting technique for rapid evaluation of industrial parts. In 1985, Whitman, Hanson, and Mueller demonstrated a least-squares fitting technique that very accurately locates the edges of cylindrically symmetrical objects in radiographs. To test the feasibility of applying this technique to a large number of parts, the authors examine whether an automated least squares algorithm can be routinely used for measuring the dimensions and attenuations of materials in two nested cylinders. The proposed technique involves making digital radiographs of the cylinders and analyzing the images. In the authors' preliminary study, however, they use computer simulations of radiographs.
Date: October 1, 1997
Creator: Kelley, T.A. & Stupin, D.M.
Partner: UNT Libraries Government Documents Department

Positive Scattering Cross Sections using Constrained Least Squares

Description: A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
Date: September 27, 1999
Creator: Dahl, J.A.; Ganapol, B.D. & Morel, J.E.
Partner: UNT Libraries Government Documents Department

Moving Least-Squares: A Numerical Differentiation Method for Irregularly Spaced Calculation Points

Description: Numerical methods may require derivatives of functions whose values are known only on irregularly spaced calculation points. This document presents and quantifies the performance of Moving Least-Squares (MLS), a method of derivative evaluation on irregularly spaced points that has a number of inherent advantages. The user selects both the spatial dimension of the problem and the order of the highest conserved moment. The accuracy of calculations is maintained on highly irregularly spaced points. The method requires neither the creation of additional calculation points nor interpolation of the calculation points onto a regular grid. Implementation requires only a relatively small number of calculation points. The method is fast and robust, and it provides smooth results even as the order of the derivative increases.
Date: June 1, 2001
Partner: UNT Libraries Government Documents Department
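A one-dimensional sketch of the MLS idea, under illustrative assumptions (Gaussian weight, quadratic basis, synthetic sine samples): fit a weighted local polynomial centred at the evaluation point and read the derivative off the linear coefficient. This is not the document's implementation.

```python
import numpy as np

def mls_derivative(x0, xs, fs, h=0.3):
    """Estimate f'(x0) from scattered samples via a weighted quadratic fit."""
    w = np.exp(-((xs - x0) / h) ** 2)     # Gaussian weight, support ~h
    # Basis centred at x0: [1, (x-x0), (x-x0)^2]; the linear coefficient
    # of the fitted polynomial is the derivative estimate at x0.
    P = np.vstack([np.ones_like(xs), xs - x0, (xs - x0) ** 2]).T
    W = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(W[:, None] * P, W * fs, rcond=None)
    return coef[1]

rng = np.random.default_rng(4)
xs = np.sort(rng.uniform(0.0, 4.0, 60))   # irregularly spaced points
fs = np.sin(xs)
d = mls_derivative(2.0, xs, fs)
print(d, np.cos(2.0))
```

No regridding or extra points are needed; the weight function simply localizes the fit around the evaluation point.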

Approximation by hinge functions

Description: Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The goal is to find the best-fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which it diverges. Second, if one tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables but has a derivative which jumps at the data. This paper takes a different approach, an example of a method the author has developed called "Monte Carlo Regression." (A paper on the general theory is in preparation.) The author shows that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is found using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
Date: May 1, 1997
Creator: Faber, V.
Partner: UNT Libraries Government Documents Department
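The simple alternating scheme suggested by Breiman-style hinge fitting (assign each point to the currently active linear piece, refit each piece by ordinary least squares, repeat) can be sketched as below. On the well-separated toy data here it converges exactly; as the abstract notes, this kind of algorithm is not robust in general and can diverge. The data and initial pieces are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-2.0, 2.0, 300)
y = np.abs(x)                       # samples of the hinge max(-x, x)

# Two crude initial linear pieces.
a1, b1, a2, b2 = -0.5, 0.0, 0.5, 0.0
for _ in range(20):
    # Each point belongs to whichever piece currently attains the max.
    active2 = a2 * x + b2 >= a1 * x + b1
    for mask, piece in ((~active2, 1), (active2, 2)):
        if mask.sum() >= 2:
            A = np.vstack([x[mask], np.ones(mask.sum())]).T
            coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            if piece == 1:
                a1, b1 = coef
            else:
                a2, b2 = coef

fit = np.maximum(a1 * x + b1, a2 * x + b2)
print(a1, b1, a2, b2)
```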

A linear mixture analysis-based compression for hyperspectral image analysis

Description: In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly, target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. In some cases, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
Date: June 30, 2000
Creator: Chang, C. I. & Ginsberg, I. W.
Partner: UNT Libraries Government Documents Department
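The fully constrained (nonnegative, sum-to-one) least squares unmixing step can be sketched with a small active-set solver in the spirit of standard FCLS formulations. The endmember "spectra" and mixed pixel below are hypothetical, and the solver is a simplified sketch rather than the authors' implementation.

```python
import numpy as np

def fcls(E, p):
    """Abundance unmixing with a >= 0 and sum(a) = 1 (active-set sketch)."""
    n = E.shape[1]
    idx = list(range(n))
    a = np.zeros(n)
    while idx:
        Es = E[:, idx]
        Gi = np.linalg.inv(Es.T @ Es)
        als = Gi @ Es.T @ p                        # unconstrained solution
        ones = np.ones(len(idx))
        lam = (1.0 - ones @ als) / (ones @ Gi @ ones)
        asols = als + Gi @ ones * lam              # sum-to-one solution
        if np.all(asols >= -1e-12):
            a[:] = 0.0
            a[np.array(idx)] = np.clip(asols, 0.0, None)
            return a
        idx.remove(idx[int(np.argmin(asols))])     # drop most negative
    return a

# Three hypothetical endmember "spectra" (columns) and a mixed pixel.
E = np.array([[1.0, 0.2, 0.0],
              [0.5, 1.0, 0.1],
              [0.1, 0.4, 1.0],
              [0.0, 0.1, 0.6]])
truth = np.array([0.5, 0.3, 0.2])
p = E @ truth
a = fcls(E, p)
print(a)
```

The recovered abundance vector is what would be encoded in place of the raw pixel spectrum in the compression scheme the abstract describes.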

Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

Description: This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.
Date: May 1, 1998
Creator: Griffin, P.J.
Partner: UNT Libraries Government Documents Department

Results of the analysis of the blood lymphocyte proliferation test data from the National Jewish Center

Description: A new approach to the analysis of the blood beryllium lymphocyte proliferation test (LPT) was presented to the Committee to Accredit Beryllium Sensitization Testing-Beryllium Industry Scientific Advisory Committee in April 1994. Two new outlier-resistant methods were proposed for the analysis of the blood LPT and compared with the approach then in use by most labs. The National Jewish Center (NJC) agreed to provide data from a study that was underway at that time. Three groups of LPT data are considered: (1) a sample of 168 beryllium-exposed (BE) workers and 20 nonexposed (NE) persons; (2) 25 unacceptable LPTs; and (3) 32 abnormal LPTs for individuals known to have chronic beryllium disease (CBD). The LAV method described in ORNL-6818 was applied to each LPT. Graphical and numerical summaries similar to those presented for the ORISE data are given. Three methods were used to identify abnormal LPTs. All three methods correctly identified the 32 known CBD cases as abnormal.
Date: March 1, 1997
Creator: Frome, E.L.; Newman, L.S. & Mroz, M.M.
Partner: UNT Libraries Government Documents Department

Model refinement using transient response

Description: A method is presented for estimating uncertain or unknown parameters in a mathematical model using measurements of transient response. The method is based on a least squares formulation in which the differences between the model and test-based responses are minimized. An application of the method is presented for a nonlinear structural dynamic system. The method is also applied to a model of the Department of Energy armored tractor trailer. For the subject problem, the transient response was generated by driving the vehicle over a bump of prescribed shape and size. Results from the analysis and inspection of the test data revealed that a linear model of the vehicle's suspension is not adequate to accurately predict the response caused by the bump.
Date: December 1, 1997
Creator: Dohrmann, C.R. & Carne, T.G.
Partner: UNT Libraries Government Documents Department
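The least-squares formulation described above can be sketched on a toy problem: refine the natural frequency and damping ratio of a single-degree-of-freedom model so that its free-decay response matches a synthetic "test" record. The oscillator model, the Gauss-Newton iteration, and the parameter values are illustrative assumptions, not the report's vehicle model.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 400)

def response(params):
    # Free decay of a damped oscillator, unit initial displacement.
    wn, zeta = params
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return np.exp(-zeta * wn * t) * np.cos(wd * t)

measured = response(np.array([6.0, 0.05]))   # synthetic "test" data

p = np.array([5.8, 0.1])                     # initial model guess
for _ in range(100):
    r = response(p) - measured
    # Central-difference Jacobian of the response w.r.t. the parameters.
    J = np.empty((t.size, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = 1e-7
        J[:, j] = (response(p + e) - response(p - e)) / 2e-7
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    # Step halving keeps the iteration from overshooting.
    s = 1.0
    while np.sum((response(p + s * step) - measured) ** 2) > np.sum(r ** 2) and s > 1e-6:
        s *= 0.5
    p = p + s * step
print(p)
```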