Search Results

Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

Description: The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal levels (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal levels (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
Date: June 1996
Creator: Stoneking, M. R. & Den Hartog, D. J.
Partner: UNT Libraries Government Documents Department
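
The following is a minimal sketch of a Poisson maximum-likelihood fit of the kind described above, assuming a simple Gaussian-peak-plus-background model; the model form, parameter names, and simulated counts are illustrative rather than taken from the paper, and the sketch omits the parameter-uncertainty estimates the authors' algorithm also returns.

    import numpy as np
    from scipy.optimize import minimize

    def model(x, amp, center, width, bkg):
        # Assumed fit function: Gaussian peak on a flat background (expected counts).
        return bkg + amp * np.exp(-0.5 * ((x - center) / width) ** 2)

    def neg_log_likelihood(params, x, counts):
        # Poisson log-likelihood: sum_i [n_i ln(mu_i) - mu_i], dropping the ln(n_i!) constant.
        mu = model(x, *params)
        if np.any(mu <= 0):
            return np.inf
        return -np.sum(counts * np.log(mu) - mu)

    x = np.linspace(-5, 5, 50)
    rng = np.random.default_rng(0)
    counts = rng.poisson(model(x, amp=8.0, center=0.0, width=1.0, bkg=1.0))  # low-count data

    fit = minimize(neg_log_likelihood, x0=[5.0, 0.5, 1.5, 0.5],
                   args=(x, counts), method="Nelder-Mead")
    print("ML estimates (amp, center, width, bkg):", fit.x)

Minimizing the negative Poisson log-likelihood rather than a Gaussian-weighted χ² is what the abstract reports as more accurate at low signal levels (fewer than roughly 20 counts per measurement).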

Measurement of the top quark mass at D0

Description: DØ has measured the top quark mass using a sample of 32 single-lepton events selected from approximately 115 pb⁻¹ of √s = 1.8 TeV p p̄ collisions collected from 1992-1996. The result is m_t = 169 ± 8 (stat) ± 8 (syst) GeV/c². Using a sample of 3 eμ events, DØ measures m_t = 158 ± 24 (stat) ± 10 (syst) GeV/c².
Date: November 1, 1996
Creator: Varnes, E.W.
Partner: UNT Libraries Government Documents Department

A tool to identify parameter errors in finite element models

Description: A popular method for updating finite element models with modal test data utilizes optimization of the model based on design sensitivities. The attractive feature of this technique is that it allows some estimate and update of the physical parameters affecting the hardware dynamics. Two difficulties are knowing which physical parameters are important and which of those important parameters are in error. If this is known, the updating process is simply running through the mechanics of the optimization. Most models of real systems have a myriad of parameters. This paper discusses an implementation of a tool which uses the model and test data together to discover which parameters are most important and most in error. Some insight about the validity of the model form may also be obtained. Experience gained from applications to complex models will be shared.
Date: February 1, 1997
Creator: Mayes, R. L.
Partner: UNT Libraries Government Documents Department
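
A minimal sketch of one step of the sensitivity-based updating idea discussed above, assuming a known sensitivity matrix of modal frequencies with respect to candidate physical parameters; the matrices and frequencies are toy values, not taken from the paper or its tool.

    import numpy as np

    # Assumed (toy) sensitivity matrix S[i, j] = d(frequency_i) / d(parameter_j).
    S = np.array([[2.0, 0.5, 0.1],
                  [0.3, 1.5, 0.2],
                  [0.1, 0.4, 1.0]])
    f_test  = np.array([10.2, 15.1, 22.3])   # measured modal frequencies (Hz)
    f_model = np.array([10.0, 14.8, 22.9])   # frequencies predicted by the current FE model

    # Least-squares parameter correction: solve S @ dp ~ (f_test - f_model).
    dp, residual, rank, sv = np.linalg.lstsq(S, f_test - f_model, rcond=None)
    print("suggested parameter updates:", dp)
    # Parameters with large, well-determined corrections are candidates for being both
    # influential and in error; small or poorly determined ones can be left fixed.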

Measurement of the W boson mass

Description: We present a preliminary measurement of the W boson mass using data collected by the DØ experiment at the Fermilab Tevatron during the 1994-1995 collider Run 1b. We use W → eν decays to extract the W mass from the observed spectrum of transverse mass of the electron (|η| < 1.2) and the inferred neutrino. We use Z⁰ → ee decays to constrain our model of the detector response. We measure m_W/m_Z = 0.8815 ± 0.0011 (stat) ± 0.0014 (syst) and m_W = 80.38 ± 0.07 (stat) ± 0.13 (syst) GeV. Combining this result with our previous measurement from the 1992-1993 data, we obtain m_W = 80.37 ± 0.15 GeV (errors combined in quadrature).
Date: November 1, 1996
Creator: Kotwal, A.V.
Partner: UNT Libraries Government Documents Department

Interferometric SAR coherence classification utility assessment

Description: The classification utility of a dual-antenna interferometric synthetic aperture radar (IFSAR) is explored by comparison of maximum-likelihood classification results for synthetic aperture radar (SAR) intensity images and IFSAR intensity and coherence images. The addition of IFSAR coherence improves the overall classification accuracy for classes of trees, water, and fields. A threshold intensity-coherence classifier is also compared to the intensity-only classification results.
Date: March 1, 1998
Creator: Yocky, D.A.
Partner: UNT Libraries Government Documents Department
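
A minimal sketch of a per-pixel maximum-likelihood (Gaussian) classifier on intensity and coherence features, in the spirit of the comparison above; the class statistics and pixel values are placeholders, not derived from IFSAR imagery.

    import numpy as np
    from scipy.stats import multivariate_normal

    # Placeholder class statistics for (intensity, coherence) features, as would be
    # estimated from training regions of trees, water, and fields.
    class_stats = {
        "trees":  {"mean": [0.6, 0.4], "cov": [[0.02, 0.0], [0.0, 0.02]]},
        "water":  {"mean": [0.1, 0.1], "cov": [[0.01, 0.0], [0.0, 0.01]]},
        "fields": {"mean": [0.3, 0.8], "cov": [[0.02, 0.0], [0.0, 0.01]]},
    }

    def classify(pixels):
        # pixels: (N, 2) array of (intensity, coherence); returns the ML class per pixel.
        names = list(class_stats)
        loglik = np.column_stack([
            multivariate_normal(class_stats[n]["mean"], class_stats[n]["cov"]).logpdf(pixels)
            for n in names
        ])
        return [names[i] for i in np.argmax(loglik, axis=1)]

    print(classify(np.array([[0.12, 0.08], [0.55, 0.45]])))

Dropping the coherence column from the feature vector reduces this to the intensity-only classifier used as the baseline in the comparison.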

Statistical modeling of targets and clutter in single-look non-polarimetric SAR imagery

Description: This paper presents a Generalized Logistic (gLG) distribution as a unified model for log-domain synthetic aperture radar (SAR) data. This model stems from a special case of the G-distribution known as the G⁰-distribution. The G-distribution arises from a multiplicative SAR model and has the classical K-distribution as another special case. The G⁰-distribution, however, can model extremely heterogeneous clutter regions that the K-distribution cannot model. This flexibility is preserved in the unified gLG model, which is capable of modeling non-polarimetric SAR returns from clutter as well as man-made objects. Histograms of these two types of SAR returns have opposite skewness. The flexibility of the gLG model lies in its shape and shift parameters. The shape parameter describes the differing skewness between target and clutter data while the shift parameter compensates for movements in the mean as the shape parameter changes. A Maximum Likelihood (ML) estimate of the shape parameter gives an optimal measure of the skewness of the SAR data. This measure provides a basis for an optimal target detection algorithm.
Date: August 1, 1998
Creator: Salazar, J.S.; Hush, D.R.; Koch, M.W.; Fogler, R.J. & Hostetler, L.D.
Partner: UNT Libraries Government Documents Department
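
A minimal sketch of maximum-likelihood estimation of a generalized-logistic shape parameter from log-domain data, using SciPy's genlogistic distribution; note that SciPy's parameterization is not necessarily the gLG form defined in the paper, and the simulated samples merely stand in for log-domain SAR returns.

    import numpy as np
    from scipy.stats import genlogistic

    rng = np.random.default_rng(1)
    # Placeholder for log-domain SAR amplitudes from a clutter-like region.
    log_data = np.log(rng.gamma(shape=4.0, scale=1.0, size=5000))

    # ML fit of shape (c), location, and scale; the fitted shape acts as the skewness
    # measure that separates target-like from clutter-like returns in the paper's scheme.
    c_hat, loc_hat, scale_hat = genlogistic.fit(log_data)
    print("shape =", c_hat, "loc =", loc_hat, "scale =", scale_hat)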

Insights into multivariate calibration using errors-in-variables modeling

Description: A q-vector of responses, y, is related to a p-vector of explanatory variables, x, through a causal linear model. In analytical chemistry, y and x might represent the spectrum and associated set of constituent concentrations of a multicomponent sample, which are related through Beer's law. The model parameters are estimated during a calibration process in which both x and y are available for a number of observations (samples/specimens), which are collectively referred to as the calibration set. For new observations, the fitted calibration model is then used as the basis for predicting the unknown values of the new x's (concentrations) from the associated new y's (spectra) in the prediction set. This prediction procedure can be viewed as parameter estimation in an errors-in-variables (EIV) framework. In addition to providing a basis for simultaneous inference about the new x's, consideration of the EIV framework yields a number of insights relating to the design and execution of calibration studies. A particularly interesting result is that predictions of the new x's for individual samples can be improved by using seemingly unrelated information contained in the y's from the other members of the prediction set. Furthermore, motivated by this EIV analysis, this result can be extended beyond the causal modeling context to a broader range of applications of multivariate calibration which involve the use of principal components regression.
Date: September 1, 1996
Creator: Thomas, E.V.
Partner: UNT Libraries Government Documents Department
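
A minimal sketch of the calibrate-then-predict workflow described above, using an ordinary classical least-squares (Beer's-law style) model rather than the full errors-in-variables treatment; all matrices are simulated placeholders, and the EIV refinement of pooling information across the prediction set is not shown.

    import numpy as np

    rng = np.random.default_rng(2)
    # Calibration set: concentrations X (n x p) and spectra Y (n x q), with Y ~ X @ K + noise.
    K_true = rng.uniform(0.5, 2.0, size=(2, 50))         # pure-component "spectra"
    X_cal = rng.uniform(0.0, 1.0, size=(20, 2))
    Y_cal = X_cal @ K_true + 0.01 * rng.standard_normal((20, 50))

    # Calibration: estimate K by least squares from the calibration set.
    K_hat, *_ = np.linalg.lstsq(X_cal, Y_cal, rcond=None)

    # Prediction: estimate the unknown concentrations of a new sample from its spectrum.
    x_new = np.array([0.3, 0.7])
    y_new = x_new @ K_true + 0.01 * rng.standard_normal(50)
    x_pred, *_ = np.linalg.lstsq(K_hat.T, y_new, rcond=None)
    print("true:", x_new, "predicted:", x_pred)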

Time profiles and pulse structure of bright, long gamma-ray bursts using BATSE TTS data

Description: The time profiles of many gamma-ray bursts observed by BATSE consist of distinct pulses, which offer the possibility of characterizing the temporal structure of these bursts using a relatively small set of pulse-shape parameters. This pulse analysis has previously been performed on some bright, long bursts using binned data, and on some short bursts using BATSE Time-Tagged Event (TTE) data. The BATSE Time-to-Spill (TTS) burst data record the times required to accumulate a fixed number of photons, giving variable time resolution. The spill times recorded in the TTS data follow a gamma distribution. We have developed an interactive pulse-fitting program using the pulse model of Norris et al. and a maximum-likelihood fitting algorithm based on the gamma distribution of the spill times. We then used this program to analyze a number of bright, long bursts for which TTS data are available. We present statistical information on the attributes of the pulses comprising these bursts.
Date: April 1, 1996
Creator: Lee, A.; Bloom, E. & Scargle, J.
Partner: UNT Libraries Government Documents Department
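
A minimal sketch of a maximum-likelihood fit built on the gamma distribution of spill times, as in the fitting approach described above; the data are simulated at a constant rate, and the Norris et al. pulse model itself is not reproduced.

    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(3)
    # Simulated time-to-spill data: the time to accumulate N counts at a constant
    # rate r is a gamma variate with shape N and scale 1/r.
    N, r_true = 64, 250.0                       # counts per spill, true rate (counts/s)
    spill_times = rng.gamma(shape=N, scale=1.0 / r_true, size=2000)

    # Maximum-likelihood fit with the shape fixed at the spill count and location at zero.
    _, _, scale_hat = gamma.fit(spill_times, fa=N, floc=0.0)
    print("estimated count rate:", 1.0 / scale_hat, "counts/s")

In the actual pulse-fitting program the count rate presumably follows the pulse model rather than being constant, so the gamma likelihood above would be evaluated with a time-varying rate for each spill interval.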

Preliminary measurements of B⁰ and B⁺ lifetimes at SLD

Description: Hydrolysis and carbonate complexation reactions were examined for NpO₂²⁺ and NpO₂⁺ ions by a variety of techniques including potentiometric titration, UV-Vis-NIR and NMR spectroscopy. The equilibrium constant for the reaction 3 NpO₂(CO₃)₃⁴⁻ + 3 H⁺ ⇌ (NpO₂)₃(CO₃)₆⁶⁻ + 3 HCO₃⁻ was determined to be log K = 19.7 (±0.8) (I = 2.5 m). ¹⁷O NMR spectroscopy of NpO₂ⁿ⁺ ions (n = 1, 2) reveals a readily observable ¹⁷O resonance for n = 2, but not for n = 1. The first hydrolysis constant for NpO₂⁺ was studied as a function of temperature, and the functional form of the temperature-dependent equilibrium constant for the reaction written as NpO₂⁺ + H₂O ⇌ NpO₂OH + H⁺ was found to be log K = 2.28 − 3780/T, where T is in K. Finally, the temperature dependence of neptunium(V) carbonate complexation constants was studied. For the first carbonate complexation constant, the appropriate functional form was found to be log β₀₁ = 1.47 + 786/T.
Date: August 1, 1995
Creator: SLD Collaboration
Partner: UNT Libraries Government Documents Department
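
As a worked illustration of the quoted functional forms (not values reported in the abstract), evaluating them at T = 298.15 K gives, in LaTeX notation:

    \log K = 2.28 - \frac{3780}{298.15} \approx -10.4
    \log \beta_{01} = 1.47 + \frac{786}{298.15} \approx 4.1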

A Measure of the goodness of fit in unbinned likelihood fits; end of Bayesianism?

Description: Maximum likelihood fits to data can be done using binned data (histograms) or unbinned data. With binned data, one gets not only the fitted parameters but also a measure of the goodness of fit. With unbinned data, currently, the fitted parameters are obtained but no measure of goodness of fit is available. This remains, to date, an unsolved problem in statistics. Using Bayes' theorem and likelihood ratios, the author provides a method by which both the fitted quantities and a measure of the goodness of fit, as well as errors in the fitted quantities, are obtained for unbinned likelihood fits. The quantity conventionally interpreted as a Bayesian prior is seen in this scheme to be a number, not a distribution, determined from the data.
Date: March 12, 2004
Creator: Raja, Rajendran
Partner: UNT Libraries Government Documents Department
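
For context, a minimal sketch of an unbinned maximum-likelihood fit (here, an exponential lifetime): the fit returns a parameter estimate but, as the abstract notes, no goodness-of-fit measure comes out of the unbinned likelihood alone. The data and model are illustrative only and do not follow the paper's likelihood-ratio construction.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    t = rng.exponential(scale=2.0, size=500)    # unbinned event times, true lifetime 2.0

    def nll(tau):
        # Unbinned negative log-likelihood for the density (1/tau) exp(-t/tau).
        return np.sum(np.log(tau) + t / tau)

    res = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
    print("fitted lifetime:", res.x)    # a point estimate, but no goodness-of-fit measure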

Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

Description: The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method. The problem involves a probability model for underwater noise due to distant shipping.
Date: August 12, 1994
Creator: Glosup, J.G. & Axelrod, M.C.
Partner: UNT Libraries Government Documents Department
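
A minimal sketch of the model-selection comparison described above, using scikit-learn's EM-based GaussianMixture and its AIC; this illustrates the single-Gaussian versus mixture choice generically, not Middleton's Class A model or the shipping-noise data.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    # Simulated data drawn from a two-component Gaussian mixture.
    data = np.concatenate([rng.normal(0.0, 1.0, 800),
                           rng.normal(4.0, 0.5, 200)]).reshape(-1, 1)

    # Fit a single Gaussian and a two-component mixture by EM, then compare AIC values.
    for k in (1, 2):
        gm = GaussianMixture(n_components=k, random_state=0).fit(data)
        print(f"components = {k}, AIC = {gm.aic(data):.1f}")   # lower AIC is preferred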

Impact of new collider data on fits and extrapolations of cross sections and slopes

Description: The latest Collider data are compared with our earlier extrapolations. Fits that include the new data are made. Those for which σ_tot grows as log²(s/s₀) indefinitely give a significantly poorer χ² than those for which σ_tot eventually levels out. For the proposed SSC energy, the former fits predict σ_tot(√s = 40 TeV) ≈ 200 mb while the latter give σ_tot(√s = 40 TeV) ≈ 100 mb. 6 refs.
Date: August 1, 1985
Creator: Block, M.M. & Cahn, R.N.
Partner: UNT Libraries Government Documents Department
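
A minimal sketch of fitting a log²(s) growth form to total-cross-section data; the (√s, σ_tot) points below are rough placeholders, not the Collider data or the parameterizations used in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigma_tot(s, a, b):
        # Assumed form: sigma_tot = a + b * ln^2(s / s0), with s in GeV^2 and s0 fixed at 1 GeV^2.
        return a + b * np.log(s) ** 2

    # Placeholder (sqrt(s) [GeV], sigma_tot [mb]) points.
    sqrt_s = np.array([10.0, 20.0, 60.0, 540.0, 900.0])
    sigma  = np.array([38.0, 39.0, 43.0, 62.0, 67.0])

    popt, pcov = curve_fit(sigma_tot, sqrt_s ** 2, sigma, p0=[35.0, 0.2])
    print("extrapolation to sqrt(s) = 40 TeV:", sigma_tot(40000.0 ** 2, *popt), "mb")

A fit in which σ_tot instead levels off at high energy would replace the ln² term with a saturating form and, as the abstract reports, extrapolates to roughly half the cross section at SSC energy.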

The high sensitivity of the maximum likelihood estimator method of tomographic image reconstruction

Description: Positron Emission Tomography (PET) images obtained by the MLE iterative method of image reconstruction converge towards strongly deteriorated versions of the original source image. The image deterioration is caused by an excessive attempt by the algorithm to match the projection data with high counts. We can modulate this effect. We compared a source image with reconstructions by filtered backprojection and by the MLE algorithm to show that the MLE images can have noise similar to the filtered backprojection images in regions of high activity and very low noise, comparable to the source image, in regions of low activity, if the iterative procedure is stopped at an appropriate point.
Date: January 1, 1987
Creator: Llacer, J. & Veklerov, E.
Partner: UNT Libraries Government Documents Department
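
A minimal sketch of the maximum-likelihood expectation-maximization (MLEM) update commonly used for this kind of emission-tomography reconstruction, illustrating how stopping the iteration early limits the noise amplification described above; the tiny system matrix and phantom are placeholders.

    import numpy as np

    rng = np.random.default_rng(6)
    # Placeholder system matrix A (projection bins x image pixels) and true image.
    A = rng.uniform(0.0, 1.0, size=(40, 16))
    x_true = rng.uniform(0.0, 5.0, size=16)
    counts = rng.poisson(A @ x_true)           # measured projection counts

    x = np.ones(16)                            # initial image estimate
    sens = A.sum(axis=0)                       # sensitivity image (column sums of A)
    for iteration in range(50):                # stopping earlier acts as regularization
        proj = A @ x
        # MLEM update: x <- (x / sens) * A^T (counts / proj)
        x = x / sens * (A.T @ (counts / np.maximum(proj, 1e-12)))
    print("reconstructed image:", np.round(x, 2))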

Multivariate calibration applied to the quantitative analysis of infrared spectra

Description: Multivariate calibration methods are very useful for improving the precision, accuracy, and reliability of quantitative spectral analyses. Spectroscopists can more effectively use these sophisticated statistical tools if they have a qualitative understanding of the techniques involved. A qualitative picture of the factor analysis multivariate calibration methods of partial least squares (PLS) and principal component regression (PCR) is presented using infrared calibrations based upon spectra of phosphosilicate glass thin films on silicon wafers. Comparisons of the relative prediction abilities of four different multivariate calibration methods are given based on Monte Carlo simulations of spectral calibration and prediction data. The success of multivariate spectral calibrations is demonstrated for several quantitative infrared studies. The infrared absorption and emission spectra of thin-film dielectrics used in the manufacture of microelectronic devices demonstrate rapid, nondestructive at-line and in-situ analyses using PLS calibrations. Finally, the application of multivariate spectral calibrations to reagentless analysis of blood is presented. We have found that the determination of glucose in whole blood taken from diabetics can be precisely monitored from the PLS calibration of either mid- or near-infrared spectra of the blood. Progress toward the non-invasive determination of glucose levels in diabetics is an ultimate goal of this research. 13 refs., 4 figs.
Date: January 1, 1991
Creator: Haaland, D. M.
Partner: UNT Libraries Government Documents Department
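
A minimal sketch of a PLS calibration of the kind described above, using scikit-learn; the simulated spectra and concentrations are placeholders, not the phosphosilicate-glass or blood-glucose data.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(7)
    # Simulated calibration set: 30 samples, 200 spectral channels, 2 constituents.
    conc = rng.uniform(0.0, 1.0, size=(30, 2))
    pure = rng.uniform(0.0, 1.0, size=(2, 200))           # pure-component "spectra"
    spectra = conc @ pure + 0.01 * rng.standard_normal((30, 200))

    pls = PLSRegression(n_components=2).fit(spectra, conc)       # calibrate
    new_spectrum = np.array([0.4, 0.6]) @ pure + 0.01 * rng.standard_normal(200)
    print("predicted concentrations:", pls.predict(new_spectrum.reshape(1, -1)))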

Averaging σ(n,γ) in the transition region E_n = 1-150 keV

Description: The technique of determining a smooth average neutron capture cross section by least-squares adjustment of strength functions is illustrated for ¹⁷⁹Hf(n,γ) high-resolution data from ORELA. The s-, p-, and d-wave neutron strength functions and Γ_γ/⟨D_(l=0)⟩ found agree well with systematics, model calculations, and other experimental information despite their strong correlation when determined solely from the capture data.
Date: January 1, 1982
Creator: Macklin, R.L.
Partner: UNT Libraries Government Documents Department

Nuclear-data evaluation based on direct and indirect measurements with general correlations

Description: Optimum procedures for the statistical improvement, or updating, of an existing nuclear-data evaluation are reviewed and redeveloped from first principles, consistently employing a minimum-variance viewpoint. A set of equations is derived which provides improved values of the data and their covariances, taking into account information from supplementary measurements and allowing for general correlations among all measurements. The minimum-variance solutions thus obtained, which we call the method of "partitioned least squares," are found to be equivalent to a method suggested by Yu. V. Linnik and applied by a number of authors to the analysis of fission-reactor integral experiments; however, up to now, the partitioned-least-squares formulae have not found widespread use in the field of basic data evaluation. This approach is shown to give the same results as the more commonly applied normal equations, but with reduced matrix inversion requirements. Examples are provided to indicate potential areas of application. 10 refs.
Date: January 1, 1988
Creator: Muir, D.W.
Partner: UNT Libraries Government Documents Department
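
For reference, the standard minimum-variance update that such procedures reduce to, written here (in LaTeX notation) for a prior evaluation x with covariance C, supplementary measurements y with covariance V, and a sensitivity matrix A relating them; this sketch assumes y is uncorrelated with x, whereas the paper's formulae also handle general correlations among all measurements:

    x' = x + C A^{T} \left( A C A^{T} + V \right)^{-1} (y - A x)
    C' = C - C A^{T} \left( A C A^{T} + V \right)^{-1} A C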

A technique for code validation for criticality safety calculations

Description: There are probably as many techniques to validate computer codes for criticality safety purposes as there are computer codes and code validators. One method used at Martin Marietta Energy Systems, Inc., to validate the KENO code and associated cross sections consists of determining a single-sided, uniform-width, closed-interval, lower tolerance band for k_eff of critical systems. For application, this lower tolerance band becomes the upper safety limit (USL) acceptance criterion for subcriticality based upon the KENO calculations. A system is considered acceptably subcritical if a calculated k_eff plus 2 standard deviations lies below this upper safety limit (i.e., k_eff + 2σ < USL). 5 refs., 1 fig.
Date: January 1, 1991
Creator: Dyer, H.R.; Jordan, W.C. (Oak Ridge National Lab., TN (United States)) & Cain, V.R. (Oak Ridge Y-12 Plant, TN (United States))
Partner: UNT Libraries Government Documents Department
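
A minimal sketch of computing a single-sided lower tolerance limit from calculated k_eff values for known-critical benchmarks, using the standard normal-theory tolerance factor from the noncentral t distribution; the k_eff values and the 95%/95% choice are illustrative, and an actual USL would also account for bias trends and administrative margin.

    import numpy as np
    from scipy.stats import nct, norm

    # Illustrative KENO k_eff results for known-critical benchmark experiments.
    keff = np.array([0.9958, 0.9971, 0.9942, 0.9980, 0.9965, 0.9951, 0.9937, 0.9969])
    n, mean, s = len(keff), keff.mean(), keff.std(ddof=1)

    # One-sided tolerance factor covering 95% of the population with 95% confidence.
    p, confidence = 0.95, 0.95
    k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)

    usl = mean - k * s        # lower tolerance band, used as the upper safety limit
    print("USL =", round(usl, 4))
    # A configuration is then judged acceptably subcritical if keff + 2*sigma < USL.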

Spectrum unfolding by the least-squares methods

Description: The method of least squares is briefly reviewed, and the conditions under which it may be used are stated. From this analysis, a least-squares approach to the solution of the dosimetry neutron spectrum unfolding problem is introduced. The mathematical solution to this least-squares problem is derived from the general solution. The existence of this solution is analyzed in some detail. A χ²-test is derived for the consistency of the input data which does not require the solution to be obtained first. The fact that the problem is technically nonlinear, but should be treated in general as a linear one, is argued. Therefore, the solution should not be obtained by iteration. Two interpretations are made for the solution of the code STAY'SL, which solves this least-squares problem. The relationship of the solution to this least-squares problem to those obtained currently by other methods of solving the dosimetry neutron spectrum unfolding problem is extensively discussed. It is shown that the least-squares method does not require more input information than would be needed by current methods in order to estimate the uncertainties in their solutions. From this discussion it is concluded that the proposed least-squares method does provide the best complete solution, with uncertainties, to the problem as it is understood now. Finally, some implications of this method are mentioned regarding future work required in order to exploit its potential fully.
Date: January 1, 1977
Creator: Perey, F.G.
Partner: UNT Libraries Government Documents Department
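
A minimal sketch of the least-squares adjustment and the pre-solution chi-square consistency test described above, in the STAY'SL spirit; the response matrix, prior spectrum, and covariances are toy placeholders, and the real problem is solved on full energy-group structures.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(8)
    # Toy response matrix R (activities = R @ spectrum), prior spectrum and covariances.
    R = rng.uniform(0.0, 1.0, size=(3, 6))
    phi0 = np.full(6, 1.0)                             # prior (input) spectrum
    C_phi = np.diag((0.2 * phi0) ** 2)                 # 20% prior uncertainties
    a_meas = R @ (phi0 * rng.uniform(0.8, 1.2, 6))     # measured dosimeter activities
    C_a = np.diag((0.05 * a_meas) ** 2)                # 5% measurement uncertainties

    # Chi-square consistency test of the input data (no unfolded solution required).
    d = a_meas - R @ phi0
    W = np.linalg.inv(R @ C_phi @ R.T + C_a)
    chisq = d @ W @ d
    print("chi2 =", chisq, "  p-value =", chi2.sf(chisq, df=len(d)))

    # Linear least-squares (minimum-variance) adjusted spectrum and covariance.
    G = C_phi @ R.T @ W
    phi_adj = phi0 + G @ d
    C_adj = C_phi - G @ R @ C_phi
    print("adjusted spectrum:", np.round(phi_adj, 3))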

Uncertainty analysis of dosimetry spectrum unfolding

Description: The propagation of uncertainties in the input data is analyzed for the usual dosimetry unfolding solution. A new formulation of the dosimetry unfolding problem is proposed in which the most likely value of the spectrum is obtained. The relationship of this solution to the usual one is discussed.
Date: January 1, 1977
Creator: Perey, F.G.
Partner: UNT Libraries Government Documents Department

Precise measurement and analysis of neutron transmission through ²³²Th [6.0 MeV to 0.1 MeV]

Description: Three sets of transmission time spectra through up to eight samples of ²³²Th have been measured for neutron energies from 6.0 MeV to 0.1 MeV by use of a flight-time technique over 22- and 40-m path lengths, the ORELA pulsed neutron source, and a 1-mm-thick lithium glass detector. The resulting total cross section from 0.1 to 20.0 eV seems to be smaller than that contained in the ENDF/B-V evaluation. Least-squares analysis of the transmissions from 9 to 440 eV using a multilevel Breit-Wigner formalism results in neutron widths consistent with those previously reported. An average radiation width of 25.2 meV is obtained for 19 low-energy s-wave resonances. 3 figures, 5 tables.
Date: January 1, 1980
Creator: Olsen, D.K.; Ingle, R.W. & Portney, J.L.
Partner: UNT Libraries Government Documents Department

Weighted fit of parametric functions to distributions: The new interface of HBOOK with MINUIT

Description: The fitting routines of the HBOOK package allow weighted fit of parametric functions to the contents of a one, two or N-dimensional distribution, and analysis of the function in the neighborhood of its minimum, through an interface with the MINUIT package. These routines have been rewritten so as to interface the new version of MINUIT and to allow for smooth transitions to future versions of both packages. We discuss the interface and its capabilities: it is more stable than the previous version and presents a more accurate error analysis. The fitting algorithm is based on the Fletcher method, known for its reliability. Exponential, Gaussian and polynomial fitting are provided, as well as arbitrary user-defined fitting, to one, two and N-dimensional distributions. For the latter, the user is required to provide a smooth parametric function and is given the ability to guide the algorithm in finding the desired minimum. Examples are given. 6 refs., 1 fig.
Date: August 1, 1989
Creator: Lessner, E.S.
Partner: UNT Libraries Government Documents Department

Subpixel measurement of image features based on paraboloid surface fit

Description: A digital image processing inspection system is under development at Oak Ridge National Laboratory that will locate image features on printed material and measure distances between them to accuracies of 0.001 in. An algorithm has been developed for this system that can locate unique image features to subpixel accuracies. It is based on a least-squares fit of a paraboloid function to the surface generated by correlating a reference image feature against a test image search area. Normalizing the correlation surface makes the algorithm robust in the presence of illumination variations and local flaws. Subpixel accuracies better than 1/16 of a pixel have been achieved using a variety of different reference image features. 5 refs., 6 figs.
Date: January 1, 1990
Creator: Gleason, S.S.; Hunt, M.A. & Jatko, W.B.
Partner: UNT Libraries Government Documents Department
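
A minimal sketch of the subpixel step described above: fit a quadratic (paraboloid) surface to a 3x3 neighborhood of the correlation peak by least squares and take its vertex as the subpixel location. The correlation values below are synthetic placeholders, and the normalization and flaw handling of the actual system are omitted.

    import numpy as np

    # Synthetic normalized-correlation values on a 3x3 neighborhood around the integer peak.
    corr = np.array([[0.80, 0.88, 0.82],
                     [0.86, 0.97, 0.90],
                     [0.81, 0.89, 0.84]])

    # Least-squares fit of c = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2.
    y, x = np.mgrid[-1:2, -1:2]
    X = np.column_stack([np.ones(9), x.ravel(), y.ravel(),
                         x.ravel() ** 2, x.ravel() * y.ravel(), y.ravel() ** 2])
    a, *_ = np.linalg.lstsq(X, corr.ravel(), rcond=None)

    # Vertex of the paraboloid: set both partial derivatives to zero and solve.
    H = np.array([[2 * a[3], a[4]], [a[4], 2 * a[5]]])
    dx, dy = np.linalg.solve(H, [-a[1], -a[2]])
    print("subpixel peak offset: dx =", round(dx, 3), ", dy =", round(dy, 3))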

Statistical estimation of process holdup

Description: Estimates of potential process holdup and their random and systematic error variances are derived to improve the inventory difference (ID) estimate and its associated measure of uncertainty for a new process at the Savannah River Plant. Since the process is in a start-up phase, data have not yet accumulated for statistical modelling. The material produced in the facility will be a very pure, highly enriched ²³⁵U with very small isotopic variability. Therefore, data published in LANL's unclassified report on Estimation Methods for Process Holdup of Special Nuclear Materials were used as a starting point for the modelling process. LANL's data were gathered through a series of designed measurements of special nuclear material (SNM) holdup at two of their materials-processing facilities. Also, they had taken steps to improve the quality of data through controlled, larger-scale experiments outside of LANL at highly enriched uranium processing facilities. The data they have accumulated are on an equipment-component basis. Our modelling has been restricted to the wet chemistry area. We have developed predictive models for each of our process components based on the LANL data. 43 figs.
Date: January 1, 1988
Creator: Harris, S P
Partner: UNT Libraries Government Documents Department

PC-based calculation of activation energy using linear regression

Description: During a severe accident, various plant locations will be subjected to harsh environments: high temperature, high humidity, high radiation, etc. Equipment required for accident mitigation located in these areas must be capable of withstanding these conditions, i.e., environmentally qualified. Qualification is normally accomplished by type-testing. The equipment undergoes accelerated aging to achieve a condition equivalent to the end-of-life condition. This aging consists of accelerated thermal aging and radiation aging. The aged equipment is then subjected to simulated seismic vibration and other vibration. After a radiation exposure simulating some maximum accident exposure, the equipment is then mounted in a test chamber and operated during a simulated design basis event (DBE) environment (high temperature, pressure, humidity, and possible submergence) and post-accident conditions. This paper describes two PC-based methods of applying the linear regression method to the thermal aging data to obtain an activation energy. 7 refs., 2 figs.
Date: January 1, 1989
Creator: Bornt, F W
Partner: UNT Libraries Government Documents Department
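
A minimal sketch of the Arrhenius-style linear regression typically used to extract an activation energy from accelerated thermal aging data of this kind: regress ln(time to a chosen degradation endpoint) on 1/T and convert the slope. The temperatures and aging times below are placeholders, not data from the paper.

    import numpy as np

    K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

    # Placeholder accelerated-aging results: oven temperature and time to reach
    # the chosen degradation endpoint.
    temp_c = np.array([150.0, 130.0, 110.0, 90.0])      # deg C
    life_h = np.array([120.0, 400.0, 1500.0, 6500.0])   # hours

    inv_T = 1.0 / (temp_c + 273.15)                     # 1/K
    slope, intercept = np.polyfit(inv_T, np.log(life_h), 1)

    # Arrhenius model ln(t) = ln(A) + Ea/(k*T), so the slope is Ea/k.
    activation_energy = slope * K_BOLTZMANN_EV
    print("activation energy ~", round(activation_energy, 2), "eV")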