108 Matching Results

Search Results


Assessing Measurement Equivalence of the English and Spanish Versions of an Employee Attitude Survey Using Multigroup Analysis in Structural Equation Modeling.

Description: The study utilized a covariance structure comparison methodology, multigroup analysis in structural equation modeling, to evaluate the measurement equivalence of English and Spanish versions of an employee opinion survey. The concept of measurement equivalence was defined as consisting of four components: sample equivalence, semantic equivalence, conceptual equivalence, and scalar equivalence. The results revealed that the two language versions of the survey exhibited acceptable measurement equivalence across five survey dimensions: Communications, Supervision, Leadership, Job Content & Satisfaction, and Company Image & Commitment. Contrary to the study's second hypothesis, there was no meaningful difference in opinion scores between English-speaking and Spanish-speaking respondents on the latent construct of Job Content & Satisfaction.
Date: August 2003
Creator: Koulikov, Mikhail
Partner: UNT Libraries

Bias and Precision of the Squared Canonical Correlation Coefficient under Nonnormal Data Conditions

Description: This dissertation: (a) investigated the degree to which the squared canonical correlation coefficient is biased in multivariate nonnormal distributions and (b) identified formulae that adjust the squared canonical correlation coefficient (Rc2) such that it most closely approximates the true population effect under normal and nonnormal data conditions. Five conditions were manipulated in a fully-crossed design to determine the degree of bias associated with Rc2: distribution shape, variable sets, sample size to variable ratios, and within- and between-set correlations. Very few of the condition combinations produced acceptable amounts of bias in Rc2, but those that did were all found with first function results. The sample size to variable ratio (n:v) was determined to have the greatest impact on the bias associated with Rc2 for the first, second, and third functions. The variable set condition also affected the accuracy of Rc2, but for the second and third functions only. The kurtosis levels of the marginal distributions (b2), and the between- and within-set correlations, demonstrated little or no impact on the bias associated with Rc2. Therefore, it is recommended that researchers use n:v ratios of at least 10:1 in canonical analyses, although greater n:v ratios have the potential to produce even less bias. Furthermore, because it was determined that b2 did not impact the accuracy of Rc2, one can be somewhat confident that, with marginal distributions possessing homogeneous kurtosis levels ranging anywhere from -1 to 8, Rc2 will likely be as accurate as that resulting from a normal distribution. Because the majority of Rc2 estimates were extremely biased, it is recommended that all Rc2 effects, regardless of the function from which they result, be adjusted using an appropriate adjustment formula. If no rationale exists for the use of another formula, the Rozeboom-2 would likely be a safe choice given that it produced the greatest ... (An illustrative code sketch follows this record.)
Date: August 2006
Creator: Leach, Lesley Ann Freeny
Partner: UNT Libraries
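
The abstract above centers on the squared canonical correlation coefficient (Rc2). As a minimal illustration of the quantity being studied, the sketch below computes the first-function Rc2 for two simulated variable sets via a QR/SVD route; the data shapes, seed, and the 10:1 n:v ratio are arbitrary illustrative choices, and the Rozeboom-2 adjustment recommended in the study is not reproduced here.

```python
import numpy as np

def first_squared_canonical_corr(X, Y):
    """First squared canonical correlation (Rc2) between variable sets X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)          # orthonormal basis for the X column space
    Qy, _ = np.linalg.qr(Yc)          # orthonormal basis for the Y column space
    # Singular values of Qx'Qy are the canonical correlations (cosines of the
    # principal angles between the two column spaces)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0] ** 2

rng = np.random.default_rng(0)
n = 100                                # a 10:1 n:v ratio for v = 10 variables total
X = rng.normal(size=(n, 5))
Y = 0.5 * X[:, :3] @ rng.normal(size=(3, 5)) + rng.normal(size=(n, 5))
print(f"Rc2, first function: {first_squared_canonical_corr(X, Y):.3f}")
```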

A Comparison of Multivariate Normal and Elliptical Estimation Methods in Structural Equation Models

Description: In the present study, parameter estimates, standard errors, and chi-square statistics were compared using normal and elliptical estimation methods given three research conditions: population data contamination (10%, 20%, and 30%), sample size (100, 400, and 1000), and kurtosis (kappa = 1, 10, 20).
Date: August 1999
Creator: Cheevatanarak, Suchittra
Partner: UNT Libraries

The Generalization of the Logistic Discriminant Function Analysis and Mantel Score Test Procedures to Detection of Differential Testlet Functioning

Description: Two procedures for detection of differential item functioning (DIF) for polytomous items were generalized to detection of differential testlet functioning (DTLF). The methods compared were the logistic discriminant function analysis procedure for uniform and non-uniform DTLF (LDFA-U and LDFA-N) and the Mantel score test procedure. Further analysis included comparison of results of DTLF analysis using the Mantel procedure with DIF analysis of individual testlet items using the Mantel-Haenszel (MH) procedure. Over 600 chi-squares were analyzed and compared for rejection of null hypotheses. Samples of 500, 1,000, and 2,000 were drawn by gender subgroups from the NELS:88 data set, which contains demographic and test data from over 25,000 eighth graders. Three types of testlets (totalling 29) from the NELS:88 test were analyzed for DTLF. The first type, the common passage testlet, followed the conventional testlet definition: items grouped together by a common reading passage, figure, or graph. The other two types were based upon common content and common process, as outlined in the NELS test specification. (An illustrative code sketch follows this record.)
Date: August 1994
Creator: Kinard, Mary E.
Partner: UNT Libraries
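
As a rough illustration of the logistic discriminant function analysis idea described above, the sketch below fits nested logistic models predicting subgroup membership from a matching score and a testlet score, then tests uniform and non-uniform differential functioning with likelihood-ratio chi-squares. The simulated data, effect sizes, and exact model forms are assumptions for illustration, not the NELS:88 analysis.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                 # e.g., gender subgroup (0/1)
total = rng.normal(50, 10, n)                 # matching (total test) score
testlet = 0.2 * total + 2.0 * group + rng.normal(0, 3, n)  # injected uniform DTLF

def loglik(cols, y):
    """Log-likelihood of a logistic model predicting y from the given columns."""
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(y, X).fit(disp=0).llf

# Nested models: group ~ total; + testlet (uniform); + total:testlet (non-uniform)
ll_base = loglik([total], group)
ll_unif = loglik([total, testlet], group)
ll_nonu = loglik([total, testlet, total * testlet], group)

print("uniform DTLF p =", stats.chi2.sf(2 * (ll_unif - ll_base), df=1))
print("non-uniform DTLF p =", stats.chi2.sf(2 * (ll_nonu - ll_unif), df=1))
```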

Adaptable Multivariate Calibration Models for Spectral Applications

Description: Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
Date: December 20, 1999
Creator: Thomas, Edward V.
Partner: UNT Libraries Government Documents Department

Multi-Window Classical Least Squares Multivariate Calibration Methods for Quantitative ICP-AES Analyses

Description: The advent of inductively coupled plasma-atomic emission spectrometers (ICP-AES) equipped with charge-coupled-device (CCD) detector arrays allows the application of multivariate calibration methods to the quantitative analysis of spectral data. We have applied classical least squares (CLS) methods to the analysis of a variety of samples containing up to 12 elements plus an internal standard. The elements included in the calibration models were Ag, Al, As, Au, Cd, Cr, Cu, Fe, Ni, Pb, Pd, and Se. By performing the CLS analysis separately in each of 46 spectral windows and by pooling the CLS concentration results for each element in all windows in a statistically efficient manner, we have been able to significantly improve the accuracy and precision of the ICP-AES analyses relative to the univariate and single-window multivariate methods supplied with the spectrometer. This new multi-window CLS (MWCLS) approach simplifies the analyses by providing a single concentration determination for each element from all spectral windows. Thus, the analyst does not have to perform the tedious task of reviewing the results from each window in an attempt to decide the correct value among discrepant analyses in one or more windows for each element. Furthermore, it is not necessary to construct a spectral correction model for each window prior to calibration and analysis: when one or more interfering elements were present, the new MWCLS method was able to reduce prediction errors for a selected analyte by more than 2 orders of magnitude compared to the worst-case single-window multivariate and univariate predictions. The MWCLS detection limits in the presence of multiple interferences are 15 ng/g (i.e., 15 ppb) or better for each element. In addition, errors with the new method are only slightly inflated when only a single target element is included in the calibration (i.e., knowledge of all other elements is excluded ... (An illustrative code sketch follows this record.)
Date: October 1999
Creator: Chambers, William B.; Haaland, David M.; Keenan, Michael R. & Melgaard, David K.
Partner: UNT Libraries Government Documents Department
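
A minimal sketch of the multi-window CLS idea, assuming known pure-component response matrices and inverse-variance weighting as the pooling rule (the abstract says only "statistically efficient"; the actual MWCLS pooling may differ). All sizes, noise levels, and element counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_windows, n_channels, n_elements = 46, 30, 3
c_true = np.array([2.0, 0.5, 1.0])            # "true" concentrations

est, var = [], []
for _ in range(n_windows):
    # Pure-component response matrix for this window (assumed known from standards)
    Kw = rng.uniform(0.1, 1.0, size=(n_channels, n_elements))
    y = Kw @ c_true + rng.normal(0, 0.05, n_channels)   # measured window spectrum
    # Per-window CLS estimate: least squares on y = Kw c
    c_hat, res, *_ = np.linalg.lstsq(Kw, y, rcond=None)
    sigma2 = res[0] / (n_channels - n_elements)         # residual variance
    cov = sigma2 * np.linalg.inv(Kw.T @ Kw)             # covariance of c_hat
    est.append(c_hat)
    var.append(np.diag(cov))

est, var = np.array(est), np.array(var)
w = 1.0 / var                                  # pool windows by inverse variance
c_pooled = (w * est).sum(axis=0) / w.sum(axis=0)
print("pooled concentrations:", np.round(c_pooled, 3), "true:", c_true)
```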

Search for the top quark at D0 using multivariate methods

Description: We report on the search for the top quark in p pbar collisions at the Fermilab Tevatron (sqrt(s) = 1.8 TeV) in the di-lepton and lepton+jets channels using multivariate methods. An H-matrix analysis of the e-mu data, corresponding to an integrated luminosity of 13.5 ± 1.6 pb⁻¹, yields one event whose likelihood to be a top quark event, assuming m_top = 180 GeV/c², is ten times greater than that of WW and eighteen times greater than that of Z → tau tau. A neural network analysis of the e+jets channel, using a data sample corresponding to an integrated luminosity of 47.9 ± 5.7 pb⁻¹, shows an excess of events in the signal region and yields a cross-section for t tbar production of 6.7 ± 2.3 (stat.) pb, assuming a top mass of 200 GeV/c². An analysis of the e+jets data using the probability density estimation method yields a cross-section consistent with the above result. (An illustrative code sketch follows this record.)
Date: July 1, 1995
Creator: Bhat, P.C.
Partner: UNT Libraries Government Documents Department
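
The likelihood-ratio comparisons quoted above can be illustrated, very loosely, by modeling each event class as a multivariate Gaussian over kinematic features and evaluating the density of one candidate event under each class. Everything here (the two features, class means, covariances, and the candidate) is invented for illustration; this is not the D0 H-matrix analysis itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Each event class is summarized by the mean and covariance of two invented
# kinematic features, estimated from simulated events
classes = {}
for name, mean in [("top", [60.0, 70.0]), ("WW", [40.0, 45.0]),
                   ("Z->tautau", [25.0, 30.0])]:
    sims = rng.multivariate_normal(mean, [[150.0, 40.0], [40.0, 150.0]], 2000)
    classes[name] = (sims.mean(axis=0), np.cov(sims, rowvar=False))

candidate = np.array([62.0, 68.0])             # one observed candidate event
lik = {name: stats.multivariate_normal.pdf(candidate, mu, S)
       for name, (mu, S) in classes.items()}
for name in ("WW", "Z->tautau"):
    print(f"L(top)/L({name}) = {lik['top'] / lik[name]:.1f}")
```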

The role of amenities and other factors in influencing the location of nonmanufacturing industry in the United States

Description: Consumer and producer services, the latter in particular, are expected to become an important means of diversification and employment growth to the economy of Nevada. It has been suggested that the siting of the nuclear waste repository at Yucca Mountain, Nevada, will lead to a significant reduction in the amenity value of the state and, consequently, the ability of the state to attract these nonmanufacturing industries. This report reviews the literature dealing with factors important to the location of services, with an emphasis on producer services, to determine whether amenities, which have been shown to be an important locational consideration for some manufacturing firms, similarly affect the location of services. The report finds little substantive evidence to link amenities with the location of service firms, although the process by which these firms' locations are chosen is not well understood. Research in this area is comparatively recent, and although a number of theories of service location have been developed, the majority of research is exploratory in scope.
Date: July 1990
Creator: Allison, T. & Calzonetti, F. J.
Partner: UNT Libraries Government Documents Department

Steam generator mock-up for assessment of inservice inspection technology.

Description: A steam generator mock-up has been assembled for round-robin studies of the effectiveness of currently practiced inservice inspection (ISI) technology for detection of current-day flaws. The mock-up will also be used to evaluate emerging inspection technologies. The 3.66-m (12-ft)-tall mock-up contains 400 tube openings, each consisting of 9 test sections that can be used to simulate current-day field-induced flaws and artifacts. Included in the mock-up are simulations of tube support plate (TSP) intersections and the tube sheet (TS). Cracks are present at the TSP, at the TS, and in the free-span sections of the mock-up. For initial evaluation of the round-robin results, various eddy current methods, as well as multivariate models for data analysis, are being used to estimate the depth and length of defects in the mock-up. To ensure that the round-robin is carried out with procedures as close as possible to those implemented in the field, input was obtained from industry experts on the protocol and procedures to be used for the exercise. One initial assembly of the mock-up with a limited number of flaws and artifacts has been completed and tested. A second completed configuration with additional flaw and artifact simulations will be used for the round-robin.
Date: September 11, 1999
Creator: Bakhtiari, S.; Kupperman, D. S. & Muscara, J.
Partner: UNT Libraries Government Documents Department

Development of new VOC exposure metrics and their relationship to "Sick Building Syndrome" symptoms

Description: Volatile organic compounds (VOCs) are suspected to contribute significantly to "Sick Building Syndrome" (SBS), a complex of subchronic symptoms that occur during occupancy of the building in question and, in general, decrease away from it. A new approach takes into account individual VOC potencies, as well as the highly correlated nature of the complex VOC mixtures found indoors. The new VOC metrics are statistically significant predictors of symptom outcomes from the California Healthy Buildings Study data. Multivariate logistic regression analyses were used to test the hypothesis that a summary measure of the VOC mixture, other risk factors, and covariates for each worker will lead to better prediction of symptom outcome. VOC metrics based on animal irritancy measures and principal component analysis had the most influence in the prediction of eye, dermal, and nasal symptoms. After adjustment, a water-based paints and solvents source was found to be associated with dermal and eye irritation. The more typical VOC exposure metrics used in prior analyses (total VOC (TVOC), or the sum of individually identified VOCs (Sigma VOC_i)) were not useful in symptom prediction in the adjusted model. Also not useful were three other VOC metrics that took into account potency but did not adjust for the highly correlated nature of the data set or for the presence of VOCs that were not measured. High TVOC values (2-7 mg m⁻³) due to the presence of liquid-process photocopiers observed in several study spaces significantly influenced symptoms. Analyses without the high TVOC values reduced, but did not eliminate, the ability of the VOC exposure metric based on irritancy and principal component analysis to explain symptom outcome. (An illustrative code sketch follows this record.)
Date: August 1, 1995
Creator: Ten Brinke, J.
Partner: UNT Libraries Government Documents Department
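
A sketch, on simulated data, of the kind of metric the abstract describes: potency-weighted VOC concentrations summarized by principal components and fed to a logistic regression of symptom outcome. The irritancy weights, mixture structure, and outcome model below are all hypothetical, not the California Healthy Buildings Study analysis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_workers, n_vocs = 500, 20
voc = rng.lognormal(sigma=1.0, size=(n_workers, n_vocs))   # indoor concentrations
irritancy = rng.uniform(0.1, 1.0, n_vocs)     # hypothetical per-compound potencies
exposure = voc * irritancy                    # potency-weighted exposure metric

# Simulated symptom outcome loosely driven by the overall weighted exposure
risk = np.log(exposure.sum(axis=1))
symptom = (risk + rng.logistic(size=n_workers) > np.median(risk)).astype(int)

# PCA summarizes the highly correlated mixture; logistic regression predicts symptoms
model = make_pipeline(StandardScaler(), PCA(n_components=3), LogisticRegression())
model.fit(exposure, symptom)
print(f"in-sample accuracy: {model.score(exposure, symptom):.3f}")
```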

Using the bootstrap in a multivariate data problem: An example

Description: The use of the bootstrap in the multivariate version of the paired t-test is considered and demonstrated through an example. The problem of interest involves comparing two different techniques for measuring the chemical constituents of a sample item. The bootstrap is used to form an empirical significance level for Hotelling's one-sample T-squared statistic. The bootstrap was selected to determine empirical significance levels because the implicit assumption of multivariate normality in the classic Hotelling's one-sample test might not hold. The results of both the classic and bootstrap tests are presented and contrasted. (An illustrative code sketch follows this record.)
Date: August 1, 1995
Creator: Glosup, J.G. & Axelrod, M.C.
Partner: UNT Libraries Government Documents Department
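
A minimal sketch of the procedure described above: Hotelling's one-sample T-squared on paired differences, with an empirical significance level obtained by resampling the mean-centered differences (so resampling occurs under the null), compared against the classical F-based p-value. The sample sizes and the heavy-tailed data generator are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def hotelling_t2(d):
    """One-sample Hotelling's T-squared for testing mean of d equal to zero."""
    n, _ = d.shape
    dbar = d.mean(axis=0)
    S = np.cov(d, rowvar=False)
    return n * dbar @ np.linalg.solve(S, dbar)

rng = np.random.default_rng(5)
n, p = 25, 4
# Paired differences between two measurement techniques on the same items;
# heavy-tailed to mimic a failure of multivariate normality
d = rng.standard_t(df=5, size=(n, p)) * 0.3 + 0.15

t2_obs = hotelling_t2(d)

# Bootstrap under H0: recenter the differences to mean zero, then resample rows
d0 = d - d.mean(axis=0)
B = 5000
t2_boot = np.array([hotelling_t2(d0[rng.integers(0, n, n)]) for _ in range(B)])
p_boot = np.mean(t2_boot >= t2_obs)

# Classical p-value via the F transformation, for contrast
f_obs = t2_obs * (n - p) / (p * (n - 1))
p_classic = stats.f.sf(f_obs, p, n - p)
print(f"bootstrap p = {p_boot:.4f}, classical p = {p_classic:.4f}")
```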

A method for detecting changes in long time series

Description: Modern scientific activities, both physical and computational, can result in time series of many thousands or even millions of data values. Here the authors describe a statistically motivated algorithm for quick screening of very long time series data for the presence of potentially interesting but arbitrary changes. The basic data model is a stationary Gaussian stochastic process, and the approach to detecting a change is the comparison of two predictions of the series at a time point or contiguous collection of time points. One prediction is a "forecast", i.e., based on data from earlier times, while the other is a "backcast", i.e., based on data from later times. The statistic is the absolute value of the log-likelihood ratio for these two predictions, evaluated at the observed data. A conservative procedure is suggested for specifying critical values for the statistic under the null hypothesis of "no change". (An illustrative code sketch follows this record.)
Date: September 1, 1995
Creator: Downing, D.J.; Lawkins, W.F.; Morris, M.D. & Ostrouchov, G.
Partner: UNT Libraries Government Documents Department
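
A simplified sketch of the forecast/backcast comparison, assuming local AR(p) models fit by least squares in windows before and after each candidate time point, Gaussian one-step predictive densities, and the absolute log-likelihood ratio as the screening statistic. The window width, AR order, and simulated mean shift are illustrative choices, and the report's conservative critical-value procedure is not reproduced.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit; returns coefficients and residual std. dev."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, (y - X @ coef).std(ddof=p)

def gauss_loglik(value, mean, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - (value - mean)**2 / (2 * sd**2)

def change_statistic(x, t, p=3, w=50):
    """|log-likelihood ratio| of forecast vs. backcast predictions of x[t]."""
    past, future = x[t - w:t], x[t + 1:t + 1 + w]
    cf, sf = ar_fit(past, p)                   # model fit to earlier data
    cb, sb = ar_fit(future[::-1], p)           # model fit to reversed later data
    forecast = cf @ past[-1:-p - 1:-1]         # predict x[t] from the past
    backcast = cb @ future[:p]                 # predict x[t] from the future
    return abs(gauss_loglik(x[t], forecast, sf) - gauss_loglik(x[t], backcast, sb))

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])  # shift at t=500
scores = [change_statistic(x, t) for t in range(100, 900)]
print("largest statistic near t =", 100 + int(np.argmax(scores)))
```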

Visual cluster analysis and pattern recognition template and methods

Description: This invention comprises a method of clustering that uses a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method apply to, and improve, pattern recognition techniques.
Date: December 31, 1993
Creator: Osbourn, G.C. & Martinez, R.F.
Partner: UNT Libraries Government Documents Department

Measurement issues in assessing employee performance: A generalizability theory approach

Description: Increasingly, organizations are assessing employee performance through the use of rating instruments employed in the context of varied data collection strategies. For example, the focus may be on obtaining multiple perspectives regarding employee performance (360° evaluation). From the standpoint of evaluating managers, upward assessments and "peer to peer" evaluations are perhaps two of the more common examples of such a multiple perspective approach. Unfortunately, it is probably fair to say that the increased interest in and use of such data collection strategies has not been accompanied by a corresponding interest in addressing the validity and reliability concerns that have traditionally been associated with other forms of employee assessment (e.g., testing, assessment centers, structured interviews). As a consequence, many organizations may be basing decisions upon information collected under less than ideal measurement conditions. To the extent that such conditions produce unreliable measurements, the process may be dysfunctional to the organization and/or unfair to the individual(s) being evaluated. Conversely, the establishment of reliable and valid measurement processes may in itself support the utilization of results in pursuit of organizational goals and enhance the credibility of the measurement process (see McEvoy (1990), who found the acceptance of subordinate ratings to be related to perceived accuracy and fairness of the measurement process). The present paper discusses a recent "peer to peer" evaluation conducted in our organization. The intent is to focus on the design of the study and present a Generalizability Theory (GT) approach to assessing the overall quality of the data collection strategy, along with suggestions for improving future designs. 9 refs., 3 tabs. (An illustrative code sketch follows this record.)
Date: August 1, 1996
Creator: Stephenson, B.O.
Partner: UNT Libraries Government Documents Department
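
A compact sketch of a one-facet G-study for a fully crossed persons x raters "peer to peer" design: variance components are estimated from the expected mean squares and combined into a generalizability coefficient for a mean over n_r raters. The simulated rating data and variance magnitudes are assumptions; the paper's actual design may involve additional facets.

```python
import numpy as np

rng = np.random.default_rng(7)
n_p, n_r = 30, 5                                  # persons (ratees) x peer raters
person = rng.normal(0, 1.0, n_p)                  # true performance differences
rater = rng.normal(0, 0.5, n_r)                   # rater leniency/severity
score = person[:, None] + rater[None, :] + rng.normal(0, 0.8, (n_p, n_r))

# Mean squares for a fully crossed p x r design (one observation per cell)
grand = score.mean()
ms_p = n_r * ((score.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((score.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
resid = (score - score.mean(axis=1, keepdims=True)
         - score.mean(axis=0, keepdims=True) + grand)
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                                    # person x rater interaction + error
var_p = max((ms_p - ms_pr) / n_r, 0.0)            # person (universe-score) variance
var_r = max((ms_r - ms_pr) / n_p, 0.0)            # rater variance

# Relative G coefficient for decisions based on the mean over n_r raters
g_coef = var_p / (var_p + var_pr / n_r)
print(f"var_p={var_p:.3f}, var_r={var_r:.3f}, var_pr,e={var_pr:.3f}, G={g_coef:.3f}")
```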

An analysis of the impacts of economic incentive programs on commercial nuclear power plant operations and maintenance costs

Description: Operations and maintenance (O&M) expenditures by nuclear power plant owner/operators have a logical and vital link to considerations of plant safety and reliability. Since the determinants of O&M outlays are numerous and varied, the potential linkages to plant safety, both direct and indirect, can likewise be substantial. One significant issue before the US Nuclear Regulatory Commission is the impact, if any, on O&M spending from state programs that attempt to improve plant operating performance, and how and to what extent these programs may affect plant safety and pose public health risks. The purpose of this study is to examine the role and degree of impact of state-promulgated economic incentive programs (EIPs) on plant O&M spending. A multivariate regression framework is specified, and the model is estimated on industry data over a five-year period, 1986-1990. Explanatory variables for the O&M spending model include plant characteristics, regulatory effects, financial strength factors, replacement power costs, and the performance incentive programs. EIPs are found to have statistically significant effects on plant O&M outlays, albeit small in relation to other factors. Moreover, the results indicate that the relatively financially weaker firms are more sensitive in their O&M spending to the presence of such programs. Formulations for linking spending behavior and EIPs with plant safety performance remain for future analysis. (An illustrative code sketch follows this record.)
Date: February 1, 1996
Creator: Kavanaugh, D.C.; Monroe, W.H. & Wood, R.S.
Partner: UNT Libraries Government Documents Department
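
A toy version of the regression framework described above: O&M spending regressed on plant characteristics, an EIP indicator, and an EIP x financial-weakness interaction (mirroring the reported finding that financially weaker firms are more sensitive to the programs). The variable names, coefficients, and data are simulated stand-ins, not the 1986-1990 industry data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500                                       # hypothetical plant-year observations
capacity = rng.uniform(500, 1300, n)          # plant characteristic (MWe)
age = rng.uniform(1, 25, n)                   # plant characteristic (years)
fin_weak = rng.integers(0, 2, n)              # financially weaker owner indicator
eip = rng.integers(0, 2, n)                   # state economic incentive program

# Simulated O&M spending ($M), with a small EIP effect amplified for weak firms
om = (20 + 0.03 * capacity + 0.8 * age + 2.0 * eip
      + 3.0 * eip * fin_weak + rng.normal(0, 5, n))

X = sm.add_constant(np.column_stack([capacity, age, fin_weak, eip, eip * fin_weak]))
fit = sm.OLS(om, X).fit()
print(fit.summary(xname=["const", "capacity", "age", "fin_weak", "EIP", "EIPxWeak"]))
```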

Statistical classification methods applied to seismic discrimination

Description: To verify compliance with a Comprehensive Test Ban Treaty (CTBT), low energy seismic activity must be detected and discriminated. Monitoring small-scale activity will require regional (within approximately 2,000 km) monitoring capabilities. This report provides background information on various statistical classification methods and discusses the relevance of each method in the CTBT seismic discrimination setting. Criteria for classification method selection are explained, and examples are given to illustrate several key issues. This report describes in more detail the issues and analyses that were initially outlined in a poster presentation at a recent American Geophysical Union (AGU) meeting. Section 2 of this report describes both the CTBT seismic discrimination setting and the general statistical classification approach to this setting. Seismic data examples illustrate the importance of synergistically using multivariate data as well as the difficulties due to missing observations. Classification method selection criteria are presented and discussed in Section 3. These criteria are grouped into the broad classes of simplicity, robustness, applicability, and performance. Section 4 follows with a description of several statistical classification methods: linear discriminant analysis, quadratic discriminant analysis, variably regularized discriminant analysis, flexible discriminant analysis, logistic discriminant analysis, K-th nearest neighbor discrimination, kernel discrimination, and classification and regression tree discrimination. The advantages and disadvantages of these methods are summarized in Section 5. (An illustrative code sketch follows this record.)
Date: June 11, 1996
Creator: Ryan, F.M.; Anderson, D.N.; Anderson, K.K.; Hagedorn, D.N.; Higbee, K.T.; Miller, N.E. et al.
Partner: UNT Libraries Government Documents Department
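
Several of the classifier families surveyed in the report can be compared on simulated two-class discrimination data in a few lines. The two features and class distributions below are invented placeholders for real regional discriminants, and the variably regularized, flexible, and kernel variants are omitted for brevity.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
n = 300
# Simulated earthquake vs. explosion populations over two generic features
eq = rng.multivariate_normal([4.0, 4.5], [[0.3, 0.2], [0.2, 0.4]], n)
ex = rng.multivariate_normal([4.0, 3.5], [[0.3, 0.1], [0.1, 0.2]], n)
X = np.vstack([eq, ex])
y = np.repeat([0, 1], n)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "logistic": LogisticRegression(),
    "5-NN": KNeighborsClassifier(n_neighbors=5),
    "CART": DecisionTreeClassifier(max_depth=4),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=10).mean()   # cross-validated accuracy
    print(f"{name:>8}: {acc:.3f}")
```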

Insights into multivariate calibration using errors-in-variables modeling

Description: A q-vector of responses, y, is related to a p-vector of explanatory variables, x, through a causal linear model. In analytical chemistry, y and x might represent the spectrum and associated set of constituent concentrations of a multicomponent sample, which are related through Beer's law. The model parameters are estimated during a calibration process in which both x and y are available for a number of observations (samples/specimens) which are collectively referred to as the calibration set. For new observations, the fitted calibration model is then used as the basis for predicting the unknown values of the new x's (concentrations) from the associated new y's (spectra) in the prediction set. This prediction procedure can be viewed as parameter estimation in an errors-in-variables (EIV) framework. In addition to providing a basis for simultaneous inference about the new x's, consideration of the EIV framework yields a number of insights relating to the design and execution of calibration studies. A particularly interesting result is that predictions of the new x's for individual samples can be improved by using seemingly unrelated information contained in the y's from the other members of the prediction set. Furthermore, motivated by this EIV analysis, this result can be extended beyond the causal modeling context to a broader range of applications of multivariate calibration which involve the use of principal components regression. (An illustrative code sketch follows this record.)
Date: September 1, 1996
Creator: Thomas, E.V.
Partner: UNT Libraries Government Documents Department
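
The closing point above concerns principal components regression (PCR) in multivariate calibration. Below is a minimal PCR sketch on simulated Beer's-law spectra: an inverse calibration predicting concentrations from spectra, with PCA handling the collinear channels. The full EIV machinery (borrowing strength across prediction-set members) is not implemented, and the shapes and noise levels are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(10)
n, q, p = 40, 50, 2                 # calibration samples, channels, constituents
pure = rng.uniform(0, 1, (p, q))    # pure-component spectra (Beer's-law mixing)
C = rng.uniform(0, 5, (n, p))       # known concentrations, calibration set
Y = C @ pure + rng.normal(0, 0.01, (n, q))   # measured spectra

# Inverse calibration via PCR: predict x (concentrations) from y (spectrum);
# PCA guards against the strongly collinear spectral channels
pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(Y, C)

c_new = np.array([[1.5, 3.0]])
y_new = c_new @ pure + rng.normal(0, 0.01, (1, q))
print("predicted:", pcr.predict(y_new).round(2), "true:", c_new)
```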

Comparing Candidate Hospital Report Cards

Description: We present graphical and analytical methods that focus on multivariate outlier detection applied to the hospital report cards data. No two methods agree on which hospitals are unusually good or bad, so we also present ways to compare the agreement between two methods. We identify factors that have a significant impact on the scoring. (An illustrative code sketch follows this record.)
Date: December 31, 1997
Creator: Burr, T.L.; Rivenburgh, R.D.; Scovel, J.C. & White, J.M.
Partner: UNT Libraries Government Documents Department
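
A small sketch of the paper's theme that different outlier-detection methods disagree: two candidate rules, a Mahalanobis-distance cutoff and a per-indicator robust z-score rule, are applied to simulated hospital indicator data and their agreement is summarized with a Jaccard index. The data, cutoffs, and indicators are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, p = 200, 4
scores = rng.normal(size=(n, p))          # hospital performance indicators
scores[:5] += rng.normal(3, 1, (5, p))    # a few genuinely unusual hospitals

# Method 1: Mahalanobis distance from the multivariate mean
mu, S = scores.mean(axis=0), np.cov(scores, rowvar=False)
d2 = np.einsum("ij,jk,ik->i", scores - mu, np.linalg.inv(S), scores - mu)
out1 = d2 > stats.chi2.ppf(0.99, df=p)

# Method 2: univariate flagging, any indicator beyond a robust z of 3
med = np.median(scores, axis=0)
mad = stats.median_abs_deviation(scores, axis=0, scale="normal")
out2 = (np.abs(scores - med) / mad > 3).any(axis=1)

# Agreement between the two candidate "report card" methods
both, either = (out1 & out2).sum(), (out1 | out2).sum()
print(f"method 1: {out1.sum()}, method 2: {out2.sum()}, "
      f"Jaccard agreement: {both / either:.2f}")
```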

Chemometrics applied to vibrational spectroscopy: overview, challenges and pitfalls

Description: Chemometric multivariate calibration methods are rapidly impacting quantitative infrared spectroscopy in many positive ways. The combination of vibrational spectroscopy and chemometrics has been used by industry for quality control and process monitoring. The growth of these methods has been phenomenal in the past decade. Yet, as with any new technology, there are growing pains. The methods are so powerful at finding correlations in the data that, when used without great care, they can readily yield results that are not valid for the analysis of future unknown samples. In this paper, the power of the multivariate calibration methods is discussed while pointing out common pitfalls and some remaining challenges that may slow the implementation of chemometrics in research and industry. (An illustrative code sketch follows this record.)
Date: October 1, 1996
Creator: Haaland, D.M.
Partner: UNT Libraries Government Documents Department
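
The central pitfall above, powerful multivariate methods finding correlations that do not hold for future samples, is easy to demonstrate. On pure-noise "spectra" with no real analyte signal, apparent fit improves steadily with more latent variables while cross-validated fit does not; partial least squares is used here as one representative chemometric method, and the sizes and component counts are arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n, q = 30, 200                         # few samples, many spectral channels
Y = rng.normal(size=(n, q))            # "spectra" containing no analyte signal
c = rng.normal(size=n)                 # concentrations unrelated to the spectra

for k in (1, 5, 10, 15):
    pls = PLSRegression(n_components=k)
    fit_r2 = pls.fit(Y, c).score(Y, c)                 # apparent (training) fit
    cv_r2 = cross_val_score(pls, Y, c, cv=5).mean()    # honest cross-validated fit
    print(f"{k:>2} components: apparent R2 = {fit_r2:.2f}, CV R2 = {cv_r2:.2f}")
```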

State analysis of nonlinear systems using local canonical variate analysis

Description: There are many instances in which time series measurements are used to derive an empirical model of a dynamical system. State space reconstruction from time series measurements has applications in many scientific and engineering disciplines, including structural engineering, biology, chemistry, climatology, control theory, and physics. Prediction of future time series values from empirical models was attempted as early as 1927 by Yule, who applied linear prediction methods to the sunspot values. More recently, efforts in this area have centered on two related aspects of time series analysis, namely prediction and modeling. In prediction, future time series values are estimated from past values; in modeling, fundamental characteristics of the state model underlying the measurements are estimated, such as dimension and eigenvalues. In either approach a measured time series, {y(t_i)}, i = 1, ..., N, is assumed to derive from the action of a smooth dynamical system, s(t + tau) = a(s(t)), where y and s are potentially multivariate. The time series is assumed to derive from the state evolution via a measurement function c: y(t) = c(s(t)). In general the states s(t), the state evolution function a, and the measurement function c are unknown and must be inferred from the time series measurements. We approach this problem from the standpoint of time series analysis. We review the principles of state space reconstruction. The specific model formulation used in the local canonical variate analysis algorithm and a detailed description of the state space reconstruction algorithm are included. The application of the algorithm to a single-degree-of-freedom Duffing-like oscillator and the difficulties involved in reconstruction of an unmeasured degree of freedom in a four-degree-of-freedom nonlinear oscillator are presented. The advantages and current limitations of state space reconstruction are summarized. (An illustrative code sketch follows this record.)
Date: January 1, 1997
Creator: Hunter, N.F.
Partner: UNT Libraries Government Documents Department
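
A minimal sketch of the state space reconstruction step: a scalar measurement series from a Duffing-like oscillator is embedded with time delays to recover a multidimensional state. The crude Euler integration and the delay and dimension choices are illustrative, and the local canonical variate analysis algorithm itself is not reproduced.

```python
import numpy as np

def delay_embed(y, dim, tau):
    """Time-delay embedding: row i is [y(i+(dim-1)*tau), ..., y(i+tau), y(i)]."""
    n = len(y) - (dim - 1) * tau
    return np.column_stack([y[(dim - 1 - k) * tau:(dim - 1 - k) * tau + n]
                            for k in range(dim)])

# Duffing-like oscillator, integrated crudely with fixed-step Euler for brevity
dt, steps = 0.01, 60000
x, v = 1.0, 0.0
ys = np.empty(steps)
for i in range(steps):
    a = -0.2 * v + x - x**3 + 0.3 * np.cos(1.2 * i * dt)  # damping, stiffness, drive
    v += a * dt
    x += v * dt
    ys[i] = x                                  # measure displacement only

states = delay_embed(ys[::10], dim=3, tau=5)   # downsample, then embed
print("reconstructed state matrix:", states.shape)
```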

Partial least squares, conjugate gradient and the Fisher discriminant

Description: The theory of multivariate regression has been extensively studied and is commonly used in many diverse scientific areas. A wide variety of techniques are currently available for solving the problem of multivariate calibration. The volume of literature on this subject is so extensive that deciding which technique to apply can often be confusing. A common class of techniques for solving linear systems, and consequently applications of linear systems to multivariate analysis, are iterative methods. While common linear system solvers typically involve the factorization of the coefficient matrix A in solving the system Ax = b, this method can be impractical if A is large and sparse. Iterative methods such as Gauss-Seidel, SOR, Chebyshev semi-iterative, and related methods also often depend upon parameters that require calibration and which are sometimes hard to choose properly. An iterative method which surmounts many of these difficulties is the method of conjugate gradient. Algorithms of this type find solutions iteratively, by optimally calculating the next approximation from the residuals. (An illustrative code sketch follows this record.)
Date: December 1996
Creator: Faber, V.
Partner: UNT Libraries Government Documents Department
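
A textbook conjugate gradient implementation matching the description above: the solution is built iteratively from residuals, requiring only matrix-vector products rather than a factorization of A. The small dense test matrix is a stand-in for the large sparse systems the text has in mind.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive-definite A without factorizing A."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate search direction
        rs = rs_new
    return x

rng = np.random.default_rng(13)
M = rng.normal(size=(50, 50))
A = M.T @ M + 50 * np.eye(50)      # symmetric positive definite test matrix
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print("max residual:", np.abs(A @ x - b).max())
```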

Adaptive scanning probe microscopies

Description: This work comprises two major sections. In the first section the authors develop multivariate image classification techniques to distinguish and identify surface electronic species directly from multiple-bias scanning tunneling microscope (STM) images. Multiple measurements at each site are used to distinguish and categorize inequivalent electronic or atomic species on the surface via a computerized classification algorithm. Then, comparison with theory or other suitably chosen experimental data enables the identification of each class. They demonstrate the technique by analyzing dual-polarity constant-current topographs of the Ge(111) surface. Just two measurements, negative- and positive-bias topography height, permit pixels to be separated into seven different classes. Labeling four of the classes as adatoms, first-layer atoms, and two inequivalent rest-atom sites, they find excellent agreement with the c(2 x 8) structure. The remaining classes are associated with structural defects and contaminants. This work represents a first step toward developing a general electronic/chemical classification and identification tool for multivariate scanning probe microscopy imagery. In the second section they report measurements of the diffusion of Si dimers on the Si(001) surface at temperatures between room temperature and 128 C using a novel atom-tracking technique that can resolve every diffusion event. The atom tracker employs lateral-positioning feedback to lock the STM probe tip into position above selected atoms with sub-angstrom precision. Once locked, the STM tracks the position of the atoms as they migrate over the crystal surface. By tracking individual atoms directly, the ability of the instrument to measure dynamic events is increased by a factor of approximately 1,000 over conventional STM imaging techniques. (An illustrative code sketch follows this record.)
Date: February 1, 1997
Creator: Swartzentruber, B.S.; Bouchard, A.M. & Osbourn, G.C.
Partner: UNT Libraries Government Documents Department
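
A loose sketch of the multivariate image classification idea in the first section: each pixel's measurements across multiple-bias images form a feature vector, and an unsupervised clusterer separates pixels into classes for later identification. The dual-bias images here are synthetic, and k-means with seven clusters is just one plausible stand-in for the authors' classification algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(14)
# Hypothetical dual-bias STM topographs: height at negative and positive sample bias
neg_bias = rng.normal(size=(256, 256))
pos_bias = 0.6 * neg_bias + rng.normal(scale=0.8, size=(256, 256))

# Each pixel becomes a 2-vector of measurements; clustering separates pixels into
# classes that can then be identified (adatoms, rest atoms, defects, ...) by
# comparison with theory or other experiments
features = StandardScaler().fit_transform(
    np.column_stack([neg_bias.ravel(), pos_bias.ravel()]))
labels = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(features)
class_map = labels.reshape(256, 256)
print("pixels per class:", np.bincount(labels))
```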

Nuclear transparency, B physics, and double beta decay. Annual report, February 1, 1996 - January 31, 1997

Description: This report describes the publication of results of a search for neutrinoless double-beta decay of molybdenum-100 and preparation of a paper on the statistical analysis techniques used, developments related to purification techniques for the molybdenum, and other related work; progress in the redesign, rebuilding, and installation of the Brookhaven EVA detector's superconducting magnet and cryogenic system; and the testing of detector components for SLAC's BaBar experiment. 3 refs.
Date: July 1, 1996
Creator: Nicholson, H.W.
Partner: UNT Libraries Government Documents Department

Nuclear transparency and double beta decay of molybdenum 100. Annual report, February 1, 1995 - January 31, 1996

Description: This report describes progress in data analysis for a search for neutrinoless double-beta decay of molybdenum-100 and related work, Brookhaven National Laboratory's Experiment 850 on color transparency, and work on Brookhaven's EVA detector and the Stanford Linear Accelerator Center's B factory experiment. 6 refs.
Date: July 1, 1995
Creator: Nicholson, H.W.
Partner: UNT Libraries Government Documents Department