227 Matching Results


A Measure of the goodness of fit in unbinned likelihood fits; end of Bayesianism?

Description: Maximum likelihood fits to data can be done using binned data (histograms) or unbinned data. With binned data, one obtains not only the fitted parameters but also a measure of the goodness of fit. With unbinned data, currently, the fitted parameters are obtained but no measure of goodness of fit is available; this remains, to date, an unsolved problem in statistics. Using Bayes' theorem and likelihood ratios, the author provides a method by which both the fitted quantities and a measure of the goodness of fit are obtained for unbinned likelihood fits, as well as errors in the fitted quantities. The quantity conventionally interpreted as a Bayesian prior is seen in this scheme to be a number, not a distribution, that is determined from the data.
Date: March 12, 2004
Creator: Raja, Rajendran
Partner: UNT Libraries Government Documents Department
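The binned/unbinned distinction drawn in this abstract can be illustrated with a toy unbinned maximum-likelihood fit. The data, lifetime value, and exponential model below are all hypothetical, not from the paper; the sketch only shows how fitted parameters emerge from an unbinned fit (the goodness-of-fit question the paper addresses is not tackled here).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical decay-time sample with an unknown lifetime tau
rng = np.random.default_rng(0)
t = rng.exponential(2.0, size=5000)

def nll(tau):
    """Unbinned negative log-likelihood for f(t) = exp(-t/tau)/tau,
    summed over individual events rather than histogram bins."""
    return len(t) * np.log(tau) + t.sum() / tau

tau_hat = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
```

For this particular density the minimizer recovers the analytic MLE, the sample mean, which makes the sketch easy to check.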

A Search for the Rare Decay $B\rightarrow\gamma\gamma$

Description: We report the result of a search for the rare decay B{sup 0} {yields} {gamma}{gamma} in 426 fb{sup -1} of data, corresponding to 226 million B{sup 0}{bar B}{sup 0} pairs, collected on the {Upsilon}(4S) resonance at the PEP-II asymmetric-energy e{sup +}e{sup -} collider using the BABAR detector. We use a maximum likelihood fit to extract the signal yield and observe 21{sub -12}{sup +13} signal events with a statistical significance of 1.9 {sigma}. This corresponds to a branching fraction {Beta}(B{sup 0} {yields} {gamma}{gamma}) = (1.7 {+-} 1.1(stat.) {+-} 0.2(syst.)) x 10{sup -7}. Based on this result, we set a 90% confidence level upper limit of {Beta}(B{sup 0} {yields} {gamma}{gamma}) < 3.2 x 10{sup -7}.
Date: June 2, 2011
Creator: del Amo Sanchez, P.; Lees, J.P.; Poireau, V.; Prencipe, E.; Tisserand, V.; /Annecy, LAPP et al.
Partner: UNT Libraries Government Documents Department

An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

Description: Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
Date: December 31, 1996
Creator: Hogden, J.
Partner: UNT Libraries Government Documents Department

Statistical Validation of Engineering and Scientific Models: A Maximum Likelihood Based Metric

Description: Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. The advantages of this approach are that it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001); that it is based on optimization, for which software packages are readily available; and that it can more easily be extended to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric. We show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) of the impact of multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
Date: January 1, 2002
Creator: HILLS, RICHARD GUY & TRUCANO, TIMOTHY G.
Partner: UNT Libraries Government Documents Department
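As a rough illustration of a likelihood-based validation metric, the sketch below scores two sets of model predictions against measurements with a known measurement uncertainty. This is a simplified Gaussian stand-in, not the actual Hills-Trucano formulation, and all numbers are invented.

```python
import numpy as np

def log_likelihood(pred, meas, sigma):
    """Gaussian log-likelihood of measurements given model predictions
    and a (scalar) measurement standard deviation sigma. A higher value
    means the model is more consistent with the validation data."""
    r = (meas - pred) / sigma
    return -0.5 * np.sum(r**2 + np.log(2.0 * np.pi * sigma**2))

meas = np.array([1.02, 1.98, 3.05, 4.10])   # hypothetical measurements
sigma = 0.1                                  # assumed measurement sigma
good = log_likelihood(np.array([1.0, 2.0, 3.0, 4.0]), meas, sigma)
bad = log_likelihood(np.array([1.5, 2.5, 3.5, 4.5]), meas, sigma)
```

A model whose predictions sit within the measurement uncertainty scores much higher than one biased by several sigma, which is the basic behavior a validation metric must exhibit.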

Modeling patterns in count data using loglinear and related models

Description: This report explains the use of loglinear and logit models, for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods.
Date: December 1, 1995
Creator: Atwood, C.L.
Partner: UNT Libraries Government Documents Department
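Loglinear (Poisson regression) models of the kind this report describes are conventionally fit by iteratively reweighted least squares (Fisher scoring). The report uses SAS; the sketch below shows the same computation in plain Python with synthetic counts and invented coefficients.

```python
import numpy as np

def poisson_loglinear(X, y, iters=50):
    """Fit log(mu) = X @ beta for Poisson counts y by iteratively
    reweighted least squares, the standard GLM fitting algorithm."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                          # Poisson working weights
        z = X @ beta + (y - mu) / mu    # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic counts with a known loglinear structure (illustrative only)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))
beta = poisson_loglinear(X, y)
```

At convergence the fit satisfies the Poisson score equation for the intercept, so the fitted means sum exactly to the observed counts, a handy sanity check on any loglinear fit.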

Measurement of the top quark mass at D0

Description: D{null} has measured the top quark mass using a sample of 32 single-lepton events selected from approximately 115 pb{sup -1} of {radical}s = 1.8 TeV {ital p}{ital {anti p}} collisions collected from 1992-1996. The result is {ital m}{sub t} = 169 {+-} 8({ital stat}) {+-} 8({ital syst}) GeV/c{sup 2}. Using a sample of 3 {ital e{mu}} events, D{null} measures {ital m}{sub t} = 158 {+-} 24({ital stat}) {+-} 10({ital syst}) GeV/c{sup 2}.
Date: November 1, 1996
Creator: Varnes, E.W.
Partner: UNT Libraries Government Documents Department

Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

Description: The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Date: November 5, 1996
Creator: Hogden, J.
Partner: UNT Libraries Government Documents Department

Maximum likelihood continuity mapping for fraud detection

Description: The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Date: May 1, 1997
Creator: Hogden, J.
Partner: UNT Libraries Government Documents Department
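MALCOM's continuity-map algorithm is not reproduced here, but the underlying idea, scoring the likelihood of a categorical sequence against a model trained on typical sequences, can be sketched with a much simpler smoothed bigram model. The "procedure codes" and training data below are invented toy examples, not medical data.

```python
import math
from collections import Counter

def train_bigram(seqs, vocab, alpha=1.0):
    """Laplace-smoothed bigram sequence model: a deliberately simple
    stand-in for MALCOM's sequence-likelihood idea, not the actual
    continuity-map algorithm."""
    counts, ctx = Counter(), Counter()
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[(a, b)] += 1
            ctx[a] += 1
    V = len(vocab)

    def logprob(seq):
        """Log-likelihood of a sequence under the trained model;
        anomalous orderings receive low scores."""
        return sum(
            math.log((counts[(a, b)] + alpha) / (ctx[a] + alpha * V))
            for a, b in zip(seq, seq[1:])
        )

    return logprob

# Toy "medical histories": sequences of invented procedure codes
typical = [["exam", "xray", "cast"], ["exam", "xray", "cast"], ["exam", "rx"]]
score = train_bigram(typical, vocab={"exam", "xray", "cast", "rx"})
```

A history matching the typical ordering scores higher than the same procedures in an implausible order, which is the anomaly-detection behavior described in the abstract.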

Statistical modeling of targets and clutter in single-look non-polarimetric SAR imagery

Description: This paper presents a Generalized Logistic (gLG) distribution as a unified model for log-domain synthetic aperture radar (SAR) data. This model stems from a special case of the G-distribution known as the G{sup 0}-distribution. The G-distribution arises from a multiplicative SAR model and has the classical K-distribution as another special case. The G{sup 0}-distribution, however, can model extremely heterogeneous clutter regions that the K-distribution cannot model. This flexibility is preserved in the unified gLG model, which is capable of modeling non-polarimetric SAR returns from clutter as well as man-made objects. Histograms of these two types of SAR returns have opposite skewness. The flexibility of the gLG model lies in its shape and shift parameters. The shape parameter describes the differing skewness between target and clutter data, while the shift parameter compensates for movements in the mean as the shape parameter changes. A Maximum Likelihood (ML) estimate of the shape parameter gives an optimal measure of the skewness of the SAR data. This measure provides a basis for an optimal target detection algorithm.
Date: August 1, 1998
Creator: Salazar, J.S.; Hush, D.R.; Koch, M.W.; Fogler, R.J. & Hostetler, L.D.
Partner: UNT Libraries Government Documents Department

MALCOM X: Combining maximum likelihood continuity mapping with Gaussian mixture models

Description: GMMs are among the best speaker recognition algorithms currently available. However, a GMM's estimate of the probability of the speech signal does not change if the authors randomly shuffle the temporal order of the feature vectors, even though the actual probability of observing the shuffled signal would be dramatically different, probably near zero. A potential way to improve the performance of GMMs is to incorporate temporal information into the estimate of the probability of the data. Doing so could improve speech recognition, speaker recognition, and potentially aid in detecting lies (abnormalities) in speech data. As described in other documents (Hogden, 1996), MALCOM is an algorithm that can be used to estimate the probability of a sequence of categorical data. MALCOM can also be applied to speech (and other real-valued sequences) if windows of the speech are first categorized using a technique such as vector quantization (Gray, 1984). However, by quantizing the windows of speech, MALCOM ignores information about the within-category differences of the speech windows. Thus, MALCOM and GMMs complement each other: MALCOM is good at using sequence information, whereas GMMs capture within-category differences better than the vector quantization typically used by MALCOM. An extension of MALCOM (MALCOM X) that can be used for estimating the probability of a speech sequence is described here.
Date: November 1, 1998
Creator: Hogden, J. & Scovel, J.C.
Partner: UNT Libraries Government Documents Department

Time profiles and pulse structure of bright, long gamma-ray bursts using BATSE TTS data

Description: The time profiles of many gamma-ray bursts observed by BATSE consist of distinct pulses, which offer the possibility of characterizing the temporal structure of these bursts using a relatively small set of pulse-shape parameters. This pulse analysis has previously been performed on some bright, long bursts using binned data, and on some short bursts using BATSE Time-Tagged Event (TTE) data. The BATSE Time-to-Spill (TTS) burst data record the times required to accumulate a fixed number of photons, giving variable time resolution. The spill times recorded in the TTS data follow a gamma distribution. We have developed an interactive pulse-fitting program using the pulse model of Norris et al. and a maximum-likelihood fitting algorithm based on the gamma distribution of the spill times. We then used this program to analyze a number of bright, long bursts for which TTS data are available. We present statistical information on the attributes of the pulses comprising these bursts.
Date: April 1, 1996
Creator: Lee, A.; Bloom, E. & Scargle, J.
Partner: UNT Libraries Government Documents Department
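The gamma-distribution behavior of time-to-spill data can be illustrated with a maximum-likelihood gamma fit. The photon count, rate, and sample below are invented, and scipy's generic distribution fitter stands in for the authors' custom pulse-fitting program.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated times-to-spill: the waiting time to accumulate k photons
# from a constant-rate process is gamma-distributed with shape k
# (a toy stand-in for TTS data; k and rate are invented)
k, rate = 16, 50.0
spill_times = rng.gamma(shape=k, scale=1.0 / rate, size=4000)

# Maximum-likelihood fit of a gamma distribution, location fixed at 0
shape_hat, loc, scale_hat = stats.gamma.fit(spill_times, floc=0)
```

The recovered shape parameter is close to the number of photons per spill, which is the property that lets a likelihood fit exploit the variable time resolution of TTS data.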

Statistical Validation of Engineering and Scientific Models: Validation Experiments to Application

Description: Several major issues associated with model validation are addressed here. First, we extend the application-based, model validation metric presented in Hills and Trucano (2001) to the Maximum Likelihood approach introduced in Hills and Trucano (2002). This method allows us to use the target application of the code to weigh the measurements made from a validation experiment so that those measurements that are most important for the application are more heavily weighted. Secondly, we further develop the linkage between suites of validation experiments and the target application so that we can (1) provide some measure of coverage of the target application and, (2) evaluate the effect of uncertainty in the measurements and model parameters on application level validation. We provide several examples of this approach based on steady and transient heat conduction, and shock physics applications.
Date: March 1, 2003
Creator: HILLS, RICHARD G. & LESLIE, IAN H.
Partner: UNT Libraries Government Documents Department

The Measurement of CP Asymmetries And Branching Fractions in Neutral B Meson Decays to Charged Rhos And Pions (Kaons) With the BaBar Detector

Description: The authors present measurements of branching ratios and CP-violating asymmetries for neutral B decays into quasi-two-body final states dominated by the modes {rho}{sup {+-}}{pi}{sup {-+}} and {rho}{sup {+-}}K{sup {-+}}. The data set used for these measurements was recorded during the 1999-2002 period, and corresponds to a total integrated luminosity of 81.9 fb{sup -1} taken on the {Upsilon}(4S) peak, and 9.5 fb{sup -1} taken 40 MeV off-peak. From a time-dependent maximum likelihood fit they find for the branching fractions {Beta}({rho}{sup {+-}}{pi}{sup {-+}}) = (22.6 {+-} 1.8(stat) {+-} 2.2(syst)) x 10{sup -6}, {Beta}({rho}{sup {+-}}K{sup {-+}}) = (7.3{sub -1.2}{sup +1.3}(stat) {+-} 1.3(syst)) x 10{sup -6}. For the CP violation parameters, they measure: {Alpha}{sub CP}{sup {rho}K} = 0.28 {+-} 0.17(stat) {+-} 0.080(syst), {Alpha}{sub CP}{sup {rho}{pi}} = -0.18 {+-} 0.08(stat) {+-} 0.029(syst), C{sub {rho}{pi}} = 0.36 {+-} 0.18(stat) {+-} 0.041(syst), S{sub {rho}{pi}} = 0.19 {+-} 0.24(stat) {+-} 0.031(syst), and for the remaining parameters, required to fully describe the time dependence of the B{sup 0}({bar B}{sup 0}) {yields} {rho}{sup {+-}}{pi}{sup {-+}} decays, they obtain {Delta}C{sub {rho}{pi}} = 0.28{sub -0.19}{sup +0.18}(stat) {+-} 0.043(syst), {Delta}S{sub {rho}{pi}} = 0.15 {+-} 0.25(stat) {+-} 0.025(syst).
Date: September 23, 2005
Creator: Liu, Ran
Partner: UNT Libraries Government Documents Department

Measurement of CP Violation in B0 to Phi K0, and of Branching Fraction and CP Violation in B0 to F0(980) K0(S)

Description: The authors measure the time-dependent CP asymmetry parameters in B{sup 0} {yields} K{sup +}K{sup -}K{sup 0} based on a data sample of approximately 277 million B-meson pairs recorded at the {Upsilon}(4S) resonance with the BABAR detector at the PEP-II B-meson Factory at SLAC. They reconstruct two-body B{sup 0} decays to {phi}(1020)K{sub s}{sup 0} and {phi}(1020)K{sub L}{sup 0}. Using a time-dependent maximum-likelihood fit, they measure sin2{beta}{sub eff}({phi}K{sup 0}) = 0.48 {+-} 0.28 {+-} 0.10, and C({phi}K{sup 0}) = 0.16 {+-} 0.25 {+-} 0.09, where the first error is statistical, and the second is systematic. They also present measurements of the CP-violating asymmetries in the decay B{sup 0} {yields} f{sub 0}({yields} {pi}{sup +}{pi}{sup -})K{sub s}{sup 0}. The results are obtained from a data sample of 209 x 10{sup 6} {Upsilon}(4S) {yields} B{bar B} decays, also collected with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC. From a time-dependent maximum-likelihood fit they measure the mixing-induced CP violation parameter S(f{sub 0}(980)K{sub S}{sup 0}) = - sin 2{beta}{sub eff}f{sub 0}(980)K{sub S}{sup 0} = -0.95{sub -0.23}{sup +0.32} {+-} 0.10 and the direct CP violation parameter C(f{sub 0}(980)K{sub S}{sup 0}) = - 0.24 {+-} 0.31 {+-} 0.15, where the first errors are statistical and the second systematic. Finally, they present a measurement of the branching fraction of the decay B{sup 0} {yields} f{sub 0}({yields} {pi}{sup +}{pi}{sup -})K{sub S}{sup 0}. From a time-dependent maximum likelihood fit to a data sample of 123 x 10{sup 6} {Upsilon}(4S) {yields} B{bar B} decays they find 93.6 {+-} 13.6 {+-} 6.4 signal events corresponding to a branching fraction of {Beta}(B{sup 0} {yields} f{sub 0}(980)({yields} {pi}{sup +}{pi}{sup -})K{sup 0}) = (6.0 {+-} 0.9 {+-} 0.6 {+-} 1.2) x 10{sup -6}, where the first error is statistical, the second systematic, and the third due to model uncertainties.
Date: March 10, 2008
Creator: Kutter, Paul E.
Partner: UNT Libraries Government Documents Department

Improved Measurements of Branching Fractions for B0 -> pi+pi-, K+pi-, and Search for B0 -> K+K-

Description: We present preliminary measurements of branching fractions for the charmless two-body decays B{sup 0} {yields} {pi}{sup +}{pi}{sup -} and K{sup +}{pi}{sup -}, and a search for B{sup 0} {yields} K{sup +}K{sup -}, using a data sample of approximately 227 million B{bar B} decays. Signal yields are extracted with a multi-dimensional maximum likelihood fit, and the efficiency is corrected for the effects of final-state radiation. We find the charge-averaged branching fractions (in units of 10{sup -6}): {Beta}(B{sup 0} {yields} {pi}{sup +}{pi}{sup -}) = 5.5 {+-} 0.4 {+-} 0.3; {Beta}(B{sup 0} {yields} K{sup +}{pi}{sup -}) = 19.2 {+-} 0.6 {+-} 0.6; and {Beta}(B{sup 0} {yields} K{sup +}K{sup -}) < 0.40. The errors are statistical followed by systematic, and the upper limit on K{sup +}K{sup -} represents a confidence level of 90%.
Date: September 28, 2005
Creator: Aubert, B.; Barate, R.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P. et al.
Partner: UNT Libraries Government Documents Department

Improved Measurements of Neutral B Decay Branching Fractions to K0s pi+ pi- and the Charge Asymmetry of B0 -> K*+ pi-

Description: The authors analyze the decay B{sup 0} {yields} K{sub S}{sup 0}{pi}{sup +}{pi}{sup -} using a sample of 232 million {Upsilon}(4S) {yields} B{bar B} decays collected with the BABAR detector at the SLAC PEP-II asymmetric-energy B factory. A maximum likelihood fit finds the following branching fractions: {Beta}(B{sup 0} {yields} K{sup 0}{pi}{sup +}{pi}{sup -}) = (43.0 {+-} 2.3 {+-} 2.3) x 10{sup -6}, {Beta}(B{sup 0} {yields} f{sub 0}({yields} {pi}{sup +}{pi}{sup -})K{sup 0}) = (5.5 {+-} 0.7 {+-} 0.5 {+-} 0.3) x 10{sup -6} and {Beta}(B{sup 0} {yields} K*{sup +}{pi}{sup -}) = (11.0 {+-} 1.5 {+-} 0.5 {+-} 0.5) x 10{sup -6}. For these results, the first uncertainty is statistical, the second is systematic, and the third (if present) is due to the effect of interference from other resonances. They also measure the CP-violating charge asymmetry in the decay B{sup 0} {yields} K*{sup +}{pi}{sup -}, {Alpha}{sub K*{pi}} = -0.11 {+-} 0.14 {+-} 0.05.
Date: August 26, 2005
Creator: Aubert, B.; Barate, R.; Boutigny, D.; Couderc, F.; Karyotakis, Y.; Lees, J. P. et al.
Partner: UNT Libraries Government Documents Department

Measurement of CP Asymmetry in B0 to Ks pi0 pi0 Decays

Description: We present a measurement of the time-dependent CP asymmetry for the neutral B-meson decay into the CP = +1 final state K{sub S}{sup 0}{pi}{sup 0}{pi}{sup 0}, with K{sub S}{sup 0} {yields} {pi}{sup +}{pi}{sup -}. We use a sample of approximately 227 million B-meson pairs recorded at the {Upsilon}(4S) resonance with the BABAR detector at the PEP-II B-Factory at SLAC. From an unbinned maximum likelihood fit we extract the mixing-induced CP-violation parameter S = 0.72 {+-} 0.71 {+-} 0.08 and the direct CP-violation parameter C = 0.23 {+-} 0.52 {+-} 0.13, where the first uncertainty is statistical and the second systematic.
Date: February 6, 2007
Creator: Aubert, B.
Partner: UNT Libraries Government Documents Department

Measurement of Time-dependent CP Asymmetries inB0->D(*)pi and B0->Drho Decays

Description: We present updated results on time-dependent CP asymmetries in fully reconstructed B{sup 0} {yields} D{sup (*){+-}}{pi}{sup {-+}} and B{sup 0} {yields} D{sup {+-}}{rho}{sup {-+}} decays in approximately 232 million {Upsilon}(4S) {yields} B{bar B} events collected with the BABAR detector at the PEP-II asymmetric-energy B factory at SLAC. From a time-dependent maximum likelihood fit we obtain for the parameters related to the CP violation angle 2{beta} + {gamma}: a{sup D{pi}} = -0.010 {+-} 0.023 {+-} 0.007, c{sub lep}{sup D{pi}} = -0.033 {+-} 0.042 {+-} 0.012, a{sup D*{pi}} = -0.040 {+-} 0.023 {+-} 0.010, c{sub lep}{sup D*{pi}} = 0.049 {+-} 0.042 {+-} 0.015, a{sup D{rho}} = -0.024 {+-} 0.031 {+-} 0.009, c{sub lep}{sup D{rho}} = -0.098 {+-} 0.055 {+-} 0.018, where the first error is statistical and the second is systematic. Using other measurements and theoretical assumptions, we interpret the results in terms of the angles of the Cabibbo-Kobayashi-Maskawa unitarity triangle and find |sin(2{beta}+{gamma})| > 0.64 (0.40) at 68% (90%) confidence level.
Date: March 3, 2006
Creator: Aubert, B.
Partner: UNT Libraries Government Documents Department

Measurement of the Neutral B Meson-B Bar Meson Oscillation Frequency Using Dilepton Events at BABAR

Description: This dissertation describes the measurement of the B{sup 0}{bar B}{sup 0} oscillation frequency {Delta}m{sub d} with a sample of 122 x 10{sup 6} B{bar B} pairs collected with the BABAR detector at the PEP-II asymmetric B Factory at the Stanford Linear Accelerator Center. A fully inclusive approach is used to select dilepton events in which each B meson decays semileptonically, and the leptons' charges are used to identify the flavor of each B meson. The oscillation frequency {Delta}m{sub d} is extracted from the time evolution of the dilepton events. A maximum likelihood fit to the same-sign and opposite-sign events simultaneously gives {Delta}m{sub d} = (0.485 {+-} 0.009(stat.) {+-} 0.010(syst.)) ps{sup -1}, where the first uncertainty is statistical and the second is systematic. This is one of the most precise single measurements of the B{sup 0}{bar B}{sup 0} oscillation frequency to date.
Date: June 6, 2006
Creator: Chao, Ming & /UC, Irvine
Partner: UNT Libraries Government Documents Department
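The extraction of {Delta}m{sub d} from the time evolution of same-sign versus opposite-sign dilepton events can be caricatured with a toy likelihood fit. Detector resolution, tagging dilution, and backgrounds, all central to the real analysis, are ignored here, and the lifetime, frequency, and sample size are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
tau, dmd_true = 1.52, 0.49  # ps and ps^-1, illustrative values

# Simulate decay times and mixed (same-sign) vs unmixed (opposite-sign)
# tags: the mixing probability oscillates as (1 - cos(dmd * t)) / 2
t = rng.exponential(tau, size=20000)
mixed = rng.random(20000) < 0.5 * (1.0 - np.cos(dmd_true * t))

def nll(dmd):
    """Negative log-likelihood of the mixed/unmixed tags vs decay time."""
    p = np.clip(0.5 * (1.0 - np.cos(dmd * t)), 1e-12, 1 - 1e-12)
    return -np.sum(np.where(mixed, np.log(p), np.log(1.0 - p)))

dmd_hat = minimize_scalar(nll, bounds=(0.1, 1.0), method="bounded").x
```

Even this crude fit localizes the oscillation frequency, which is the essence of extracting {Delta}m{sub d} from dilepton time evolution.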

Measurement of the CP Asymmetry and BranchingFraction of B^0 to \rho^{0}K^0

Description: The authors present a measurement of the branching fraction and time-dependent CP asymmetry of B{sup 0} {yields} {rho}{sup 0}K{sup 0}. The results are obtained from a data sample of 227 x 10{sup 6} {Upsilon}(4S) {yields} B{bar B} decays collected with the BABAR detector at the PEP-II asymmetric-energy B Factory at SLAC. From a time-dependent maximum likelihood fit yielding 111 {+-} 19 signal events they find {Beta}(B{sup 0} {yields} {rho}{sup 0}K{sup 0}) = (4.9 {+-} 0.8 {+-} 0.9) x 10{sup -6}, where the first error is statistical and the second systematic. They report the measurement of the CP parameters S{sub {rho}{sup 0}K{sub S}{sup 0}} = 0.20 {+-} 0.52 {+-} 0.24 and C{sub {rho}{sup 0}K{sub S}{sup 0}} = 0.64 {+-} 0.41 {+-} 0.20.
Date: August 23, 2006
Creator: Aubert, B.
Partner: UNT Libraries Government Documents Department

Application of robust/resistant techniques to crystal structure refinement

Description: The least squares technique commonly used for refining crystal structures is neither robust (minimum variance in the presence of errors in the experimental data) nor resistant (not highly dependent on any small subset of the data). Other methods of fitting are being developed which are more robust and resistant, and this paper endeavors to apply these methods to crystal structure refinement. The Tukey biweight W-function succeeds in minimizing the influence of poor data.
Date: August 1, 1976
Creator: Nicholson, W. L. & Prince, E.
Partner: UNT Libraries Government Documents Department
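A common concrete realization of a robust/resistant fit with the Tukey biweight W-function is iteratively reweighted least squares. The sketch below uses synthetic straight-line data and the conventional tuning constant c = 4.685; it illustrates the general technique, not the authors' refinement code.

```python
import numpy as np

def tukey_irls(X, y, c=4.685, iters=50):
    """Iteratively reweighted least squares with the Tukey biweight
    W-function: residuals beyond c robust-scale units get zero weight,
    making the fit resistant to gross outliers."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS start
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = r / (c * s)
        w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)
        sw = np.sqrt(w)                               # weighted LS step
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Straight-line data contaminated with gross outliers (synthetic)
rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 100)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, 100)
y[::10] += 20.0                                      # every 10th point is bad
beta = tukey_irls(X, y)
```

Unlike ordinary least squares, which is pulled strongly by the contaminated points, the biweight fit recovers the underlying line because the outliers end up with zero weight.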

XRAYL: a program for producing idealized powder diffraction line profiles from overlapped powder patterns

Description: The X-ray diffraction patterns of samples of polycrystalline materials are used to identify and characterize phases. Very often the total (or composite) profile consists of a series of overlapping profiles. In many applications it is necessary to separate the component profiles from the total profile. (In this document the terms {ital profile}, {ital line}, and {ital peak} are used interchangeably to represent these features of X-ray or neutron diffraction patterns.) A computer program, XRAYL, first developed in the 1980s and subsequently enlarged and improved, allows the fitting of analytical functions to powder diffraction lines. The fitting process produces parameters of chosen profile functions, diffraction line by diffraction line. The resulting function parameters may then be used to generate "idealized" powder diffraction lines as counts at steps in 2{theta}. The generated lines are effectively free of statistical noise and contributions from overlapping lines. Each separated line extends to background on both sides of the generated profile. XRAYL may, therefore, be used in X-ray powder diffraction profile analysis as a preprocessor program, separating peaks and feeding the "resolved" data to subsequent analysis programs. This self-contained document includes: (1) a description of the fitting functions coded into XRAYL, (2) an outline of the least-squares algorithm used in fitting the profile function, (3) the file formats and contents utilized by the computer code, (4) the user options and their presentation requirements for execution of the program, (5) an example of input and output for a test case, and (6) source code listings on a diskette.
Date: September 1, 1996
Creator: Hubbard, C.R.; Morosin, B. & Stewart, J.M.
Partner: UNT Libraries Government Documents Department
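The profile-separation step XRAYL performs can be illustrated by least-squares fitting of two overlapping lines. The Gaussian profile shape, peak positions, and counts below are invented, and scipy's curve_fit stands in for XRAYL's own least-squares algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, c1, w1, a2, c2, w2, b):
    """Composite profile: two Gaussian lines plus a flat background.
    (XRAYL supports several profile functions; Gaussian is shown here
    purely for illustration.)"""
    g1 = a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2)
    return g1 + g2 + b

# Synthetic overlapped pattern on a 2-theta grid with Poisson noise
rng = np.random.default_rng(5)
two_theta = np.linspace(28.0, 32.0, 400)
truth = (500.0, 29.8, 0.15, 300.0, 30.2, 0.15, 20.0)
counts = rng.poisson(two_gaussians(two_theta, *truth)).astype(float)

p0 = (400.0, 29.7, 0.2, 200.0, 30.3, 0.2, 10.0)  # rough starting guesses
popt, _ = curve_fit(two_gaussians, two_theta, counts, p0=p0)
```

Once the composite fit converges, each component's parameters can be used to regenerate a noise-free, fully separated line, which is exactly the "idealized profile" output the abstract describes.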

A tool to identify parameter errors in finite element models

Description: A popular method for updating finite element models with modal test data utilizes optimization of the model based on design sensitivities. The attractive feature of this technique is that it allows some estimate and update of the physical parameters affecting the hardware dynamics. Two difficulties are knowing which physical parameters are important and which of those important parameters are in error. If this is known, the updating process simply amounts to running through the mechanics of the optimization. Most models of real systems have a myriad of parameters. This paper discusses an implementation of a tool that uses the model and test data together to discover which parameters are most important and most in error. Some insight about the validity of the model form may also be obtained. Experience gained from applications to complex models will be shared.
Date: February 1, 1997
Creator: Mayes, R. L.
Partner: UNT Libraries Government Documents Department

Measurement of the W boson mass

Description: We present a preliminary measurement of the {ital W} boson mass using data collected by the D{null} experiment at the Fermilab Tevatron during the 1994-1995 collider run 1b. We use {ital W} {r_arrow} {ital e}{nu} decays to extract the {ital W} mass from the observed spectrum of the transverse mass of the electron ({vert_bar}{eta}{vert_bar} {lt} 1.2) and the inferred neutrino. We use {ital Z}{sup 0} {r_arrow} {ital ee} decays to constrain our model of the detector response. We measure {ital m}{sub W}/{ital m}{sub Z} = 0.8815 {+-} 0.0011({ital stat}) {+-} 0.0014({ital syst}) and {ital m}{sub W} = 80.38 {+-} 0.07({ital stat}) {+-} 0.13({ital syst}) GeV. Combining this result with our previous measurement from the 1992-1993 data, we obtain {ital m}{sub W} = 80.37 {+-} 0.15 GeV (errors combined in quadrature).
Date: November 1, 1996
Creator: Kotwal, A.V.
Partner: UNT Libraries Government Documents Department
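The transverse-mass observable used in measurements like this has a standard massless-limit form, m{sub T} = {radical}(2 p{sub T}{sup e} p{sub T}{sup {nu}}(1 - cos {Delta}{phi})). A minimal sketch with illustrative kinematics (not D{null} data):

```python
import numpy as np

def transverse_mass(pt_e, pt_nu, dphi):
    """W transverse mass from the electron pT, the inferred neutrino pT,
    and the azimuthal angle between them (massless approximation)."""
    return np.sqrt(2.0 * pt_e * pt_nu * (1.0 - np.cos(dphi)))

# A back-to-back 40 GeV electron and neutrino sit at the Jacobian edge
# of the m_T spectrum, near m_W ~ 80 GeV
mt = transverse_mass(40.0, 40.0, np.pi)
```

The sharp upper (Jacobian) edge of the m{sub T} distribution near m{sub W} is what makes the transverse-mass spectrum sensitive to the W mass.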