Search Results (202 matching records)

An Analysis of Salinity in Streams of the Green River Basin

Description: Dissolved-solids concentrations and loads can be estimated from streamflow records using a regression model derived from chemical analyses of monthly samples. The model takes seasonal effects into account by the inclusion of simple-harmonic time functions. Monthly mean dissolved-solids loads simulated for a 6-year period at U.S. Geological Survey water-quality stations in the Green River Basin of Wyoming agree closely with corresponding loads estimated from daily specific-conductance records. In a demonstration of uses of the model, an average gain of 114,000 tons of dissolved solids per year was estimated for the 6-year period in a 70-mile reach of the Green River from Fontenelle Reservoir to the town of Green River, including the lower 30-mile reach of the Big Sandy River.
Date: October 1977
Creator: DeLong, Lewis L.
Partner: UNT Libraries Government Documents Department
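
The harmonic-regression approach named in the DeLong abstract above can be sketched in a few lines; the log-linear form, the single annual harmonic, and the variable names below are assumptions made for illustration, not the report's actual equations.

# Illustrative sketch only: a log-linear dissolved-solids regression with
# simple-harmonic seasonal terms, fitted by ordinary least squares.
import numpy as np

def fit_harmonic_model(t_years, discharge_cfs, conc_mgL):
    """Fit ln(C) = b0 + b1*ln(Q) + b2*sin(2*pi*t) + b3*cos(2*pi*t) by OLS."""
    t = np.asarray(t_years, dtype=float)          # decimal years
    q = np.asarray(discharge_cfs, dtype=float)    # streamflow, cfs
    c = np.asarray(conc_mgL, dtype=float)         # dissolved solids, mg/L
    X = np.column_stack([np.ones_like(t), np.log(q),
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, np.log(c), rcond=None)
    return beta

def predict_load_tons_per_day(beta, t_years, discharge_cfs):
    """Predicted daily load; 0.0027 converts (mg/L x cfs) to short tons per day."""
    t = np.asarray(t_years, dtype=float)
    q = np.asarray(discharge_cfs, dtype=float)
    ln_c = (beta[0] + beta[1] * np.log(q)
            + beta[2] * np.sin(2 * np.pi * t) + beta[3] * np.cos(2 * np.pi * t))
    return 0.0027 * np.exp(ln_c) * q

Monthly or annual loads then follow by summing the daily predictions over the period of interest.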

Structural damage detection using the Holder exponent

Description: This paper implements a damage detection strategy that identifies damage-sensitive features associated with nonlinearities. Some non-linearities result from discontinuities introduced into the data by certain types of damage. These discontinuities may also result from noise in the measured dynamic response data or can be caused by random excitation of the system. The Holder Exponent, which is a measure of the degree to which a signal is differentiable, is used to detect the discontinuities. By studying the Holder Exponent as a function of time, a statistical model is developed that classifies changes in the Holder Exponent that are associated with damage-induced discontinuities. The results show that for certain cases, the Holder Exponent is an effective technique to detect damage.
Date: January 1, 2002
Creator: Farrar, C. R. (Charles R.); Do, N. B. (Nguyen B.); Green, S. R. (Scott R.) & Schwartz, T. A. (Timothy A.)
Partner: UNT Libraries Government Documents Department
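
The Holder Exponent idea summarized above can be illustrated with a simple increment-scaling estimator; the paper's own estimator is wavelet-based, so the windowing and scales used here are assumptions chosen only to show the principle that low differentiability gives a small exponent.

# Illustrative sketch: estimate a pointwise Holder exponent from the scaling of
# local signal increments, |x(t+h) - x(t)| ~ h**alpha. A discontinuity gives a
# small alpha; a smooth region gives a value near or above 1.
import numpy as np

def holder_exponent(x, index, scales=(1, 2, 4, 8, 16)):
    """Estimate the Holder exponent of the 1-D signal x at a sample index."""
    x = np.asarray(x, dtype=float)
    log_h, log_osc = [], []
    for h in scales:
        lo, hi = max(index - h, 0), min(index + h, len(x) - 1)
        osc = np.max(np.abs(x[lo:hi + 1] - x[index])) + 1e-12  # local oscillation
        log_h.append(np.log(h))
        log_osc.append(np.log(osc))
    slope, _ = np.polyfit(log_h, log_osc, 1)   # slope of the log-log fit ~ alpha
    return slope

Tracking this estimate over time, and flagging statistically unusual drops, mirrors the classification step described in the abstract.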

Extracting a Whisper from the Din: A Bayesian-Inductive Approach to Learning an Anticipatory Model of Cavitation

Description: For several reasons, Bayesian parameter estimation is superior to other methods for inductively learning a model for an anticipatory system. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be removed from the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian methods approach this ideal limit of performance more closely than other methods. These capabilities provide a strategy for addressing a major unsolved problem in pump operation: the identification of precursors of cavitation. Cavitation causes immediate degradation of pump performance and ultimate destruction of the pump. However, the most efficient point to operate a pump is just below the threshold of cavitation. It might be hoped that a straightforward method to minimize pump cavitation damage would be to simply adjust the operating point until the inception of cavitation is detected and then to slightly readjust the operating point to let the cavitation vanish. However, due to the continuously evolving state of the fluid moving through the pump, the threshold of cavitation tends to wander. What is needed is to anticipate cavitation, and this requires the detection and identification of precursor features that occur just before cavitation starts.
Date: November 7, 1999
Creator: Kercel, S.W.
Partner: UNT Libraries Government Documents Department
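
The square-root-of-N claim in the Kercel abstract above is easy to verify for the simplest case; the Gaussian signal-plus-noise setup below is an assumption chosen purely for illustration and is not the cavitation model itself.

# With a flat prior and known Gaussian noise, the posterior standard deviation
# of a constant signal level is sigma/sqrt(N), so the posterior signal-to-noise
# ratio grows as sqrt(N). Illustration only; not the pump/cavitation model.
import numpy as np

rng = np.random.default_rng(0)
true_signal, noise_sigma = 1.0, 5.0

for n in (10, 100, 1000, 10000):
    data = true_signal + noise_sigma * rng.standard_normal(n)
    posterior_mean = data.mean()               # flat prior => sample mean
    posterior_std = noise_sigma / np.sqrt(n)   # exact for known noise sigma
    print(f"N={n:6d}  estimate={posterior_mean:+.3f}  "
          f"posterior SNR={posterior_mean / posterior_std:6.1f}")
# Each tenfold increase in N raises the printed SNR by roughly sqrt(10).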

UNCERTAINTY IN PHASE ARRIVAL TIME PICKS FOR REGIONAL SEISMIC EVENTS: AN EXPERIMENTAL DESIGN

Description: The detection and timing of seismic arrivals play a critical role in the ability to locate seismic events, especially at low magnitude. Errors can occur with the determination of the timing of the arrivals, whether these errors are made by automated processing or by an analyst. One of the major obstacles encountered in properly estimating travel-time picking error is the lack of a clear and comprehensive discussion of all of the factors that influence phase picks. This report discusses possible factors that need to be modeled to properly study phase arrival time picking errors. We have developed a multivariate statistical model, experimental design, and analysis strategy that can be used in this study. We have embedded a general form of the International Data Center (IDC)/U.S. National Data Center (USNDC) phase pick measurement error model into our statistical model. We can use this statistical model to optimally calibrate a picking error model to regional data. A follow-on report will present the results of this analysis plan applied to an implementation of an experiment/data-gathering task.
Date: February 1, 2001
Creator: VELASCO, A. et al.
Partner: UNT Libraries Government Documents Department

A new global hydrogen equation of state model

Description: Simple statistical mechanics models have been assembled into a wide-range equation of state for the hydrogen isotopes. The solid is represented by an Einstein-Grüneisen model delimited by a Lindemann melting curve. The fluid is represented by an ideal gas plus a soft-sphere fluid configurational term. Dissociation and ionization are approximated by modifying the ideal gas chemical-equilibrium formulation. The T = 0 isotherm and dissociation models have been fitted to new diamond-anvil isotherm and laser-generated shock data. The main limitation of the model is in ionization at high compression.
Date: June 25, 1999
Creator: Young, D
Partner: UNT Libraries Government Documents Department
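
For orientation, the ingredients named in the Young abstract above fit together roughly as follows; the expressions are conventional textbook forms written here as assumptions, not the report's fitted functions (zero-point and electronic terms omitted).

% Schematic free-energy decomposition, in LaTeX notation:
% solid branch: cold curve plus an Einstein lattice term with a Grueneisen gamma(V)
F_{\mathrm{solid}}(V,T) \approx E_{0}(V) + 3RT\,\ln\!\left(1 - e^{-\theta_{E}(V)/T}\right),
\qquad \gamma(V) = -\frac{d\ln\theta_{E}}{d\ln V}
% fluid branch: ideal-gas term plus a soft-sphere configurational term
F_{\mathrm{fluid}}(V,T) \approx F_{\mathrm{ideal}}(V,T) + F_{\mathrm{ss}}(V,T)
% The two branches are joined along a Lindemann melting curve T_m(V), and
% dissociation/ionization enter through the chemical-equilibrium composition.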

Population dynamics of minimally cognitive individuals. Part I: Introducing knowledge into the dynamics

Description: The author presents a new approach for modeling the dynamics of collections of objects with internal structure. Based on the fact that the behavior of an individual in a population is modified by its knowledge of other individuals, a procedure for accounting for knowledge in a population of interacting objects is presented. It is assumed that each object has partial (or complete) knowledge of some (or all) other objects in the population. The dynamical equations for the objects are then modified to include the effects of this pairwise knowledge. This procedure has the effect of projecting out what the population will do from the much larger space of what it could do, i.e., filtering or smoothing the dynamics by replacing the complex detailed physical model with an effective model that produces the behavior of interest. The procedure therefore provides a minimalist approach for obtaining emergent collective behavior. The use of knowledge as a dynamical quantity, and its relationship to statistical mechanics, thermodynamics, information theory, and cognition microstructure are discussed.
Date: July 1, 1995
Creator: Schmieder, R.W.
Partner: UNT Libraries Government Documents Department
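
As one purely illustrative reading of the "pairwise knowledge" idea in the Schmieder abstract above, the sketch below weights each pairwise interaction by a knowledge matrix K; both the interaction form and K are assumptions, not the paper's equations.

# Purely illustrative: N objects whose pairwise coupling is weighted by a
# "knowledge" matrix K, with K[i, j] in [0, 1] meaning how much object i knows
# about object j. The interaction form and K are assumptions.
import numpy as np

def step(positions, velocities, K, dt=0.01, coupling=0.5):
    """One explicit-Euler step of knowledge-weighted attraction dynamics."""
    n = len(positions)
    accel = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Object i responds only to objects it knows about.
                accel[i] += coupling * K[i, j] * (positions[j] - positions[i])
    velocities = velocities + dt * accel
    positions = positions + dt * velocities
    return positions, velocities

With K = 0 the objects move independently; as K fills in, collective behavior emerges from the much larger space of possible motions, which is the "projecting out" described in the abstract.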

AutomaDeD: Automata-Based Debugging for Dissimilar Parallel Tasks

Description: Today's largest systems have over 100,000 cores, with million-core systems expected over the next few years. This growing scale makes debugging the applications that run on them a daunting challenge. Few debugging tools perform well at this scale and most provide an overload of information about the entire job. Developers need tools that quickly direct them to the root cause of the problem. This paper presents AutomaDeD, a tool that identifies which tasks of a large-scale application first manifest a bug at a specific code region at a specific point during program execution. AutomaDeD creates a statistical model of the application's control-flow and timing behavior that organizes tasks into groups and identifies deviations from normal execution, thus significantly reducing debugging effort. In addition to a case study in which AutomaDeD locates a bug that occurred during development of MVAPICH, we evaluate AutomaDeD on a range of bugs injected into the NAS parallel benchmarks. Our results demonstrate that AutomaDeD detects the time period when a bug first manifested itself with 90% accuracy for stalls and hangs and 70% accuracy for interference faults. It identifies the subset of processes first affected by the fault with 80% and 70% accuracy, respectively, and the code region where the fault first manifested with 90% and 50% accuracy, respectively.
Date: March 23, 2010
Creator: Bronevetsky, G; Laguna, I; Bagchi, S; de Supinski, B R; Ahn, D & Schulz, M
Partner: UNT Libraries Government Documents Department

Characterizing the Response of Commercial and Industrial Facilities to Dynamic Pricing Signals from the Utility

Description: We describe a method to generate statistical models of electricity demand from Commercial and Industrial (C&I) facilities including their response to dynamic pricing signals. Models are built with historical electricity demand data. A facility model is the sum of a baseline demand model and a residual demand model; the latter quantifies deviations from the baseline model due to dynamic pricing signals from the utility. Three regression-based baseline computation methods were developed and analyzed. All methods performed similarly. To understand the diversity of facility responses to dynamic pricing signals, we have characterized the response of 44 C&I facilities participating in a Demand Response (DR) program using dynamic pricing in California (Pacific Gas and Electric's Critical Peak Pricing Program). In most cases, facilities shed load during DR events but there is significant heterogeneity in facility responses. Modeling facility response to dynamic price signals is beneficial to the Independent System Operator for scheduling supply to meet demand, to the utility for improving dynamic pricing programs, and to the customer for minimizing energy costs.
Date: July 1, 2010
Creator: Mathieu, Johanna L.; Gadgil, Ashok J.; Callaway, Duncan S.; Price, Phillip N. & Kiliccote, Sila
Partner: UNT Libraries Government Documents Department
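
The facility model described above (baseline demand plus a residual that captures the response to pricing signals) can be sketched briefly; the regressor choice here, time-of-week indicators plus outdoor temperature, is a common baseline construction and an assumption, not necessarily the authors' exact specification.

# Hedged sketch of a baseline + residual demand model. Regressors are assumed.
import numpy as np

def design_matrix(hour_of_week, temp_c):
    """168 time-of-week indicator columns plus one outdoor-temperature column."""
    n = len(hour_of_week)
    X = np.zeros((n, 169))
    X[np.arange(n), np.asarray(hour_of_week, dtype=int)] = 1.0
    X[:, -1] = temp_c
    return X

def fit_baseline(demand_kw, hour_of_week, temp_c):
    """OLS baseline fitted to historical, non-event demand data."""
    X = design_matrix(hour_of_week, temp_c)
    beta, *_ = np.linalg.lstsq(X, np.asarray(demand_kw, dtype=float), rcond=None)
    return beta

def dr_residual(demand_kw, hour_of_week, temp_c, beta):
    """Observed minus baseline; negative values during an event indicate shed load."""
    return np.asarray(demand_kw, dtype=float) - design_matrix(hour_of_week, temp_c) @ beta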

Correction of closed orbit distortions in the horizontal direction

Description: Many computer programs with a variety of algorithms exist for controlling the closed orbit in synchrotrons. The scope of this note is rather modest in comparison. Based on a simple model, a study has been made to find out statistically how much kick angle is needed by each steering element and how much residual closed orbit deviation should be expected when the closed orbit is steered to go through the center of seven position monitors (M_2 through M_8) in each cell. Seven independent kicks are supplied by two trim dipoles B_U and B_D, and six steering elements (H_1 through H_6), with H_3 and H_4 assumed to have the same kick angle. If it is necessary to remove H_3 to make a space there for a correction skew quadrupole (in every other cell), the kick angle of H_4 would have to be doubled.
Date: February 1, 1988
Creator: Ohnuma, S.
Partner: UNT Libraries Government Documents Department

Heuristic estimates of weighted binomial statistics for use in detecting rare point source transients

Description: The ALEXIS (Array of Low Energy X-ray Imaging Sensors) satellite scans nearly half the sky every fifty seconds, and downlinks time-tagged photon data twice a day. The standard science quicklook processing produces over a dozen sky maps at each downlink, and these maps are automatically searched for potential transient point sources. We are interested only in highly significant point source detections, and based on earlier Monte-Carlo studies, only consider p < 10^-7, which is about 5.2 sigma. Our algorithms are therefore required to operate on the far tail of the distribution, where many traditional approximations break down. Although an exact solution is available for the case of unweighted counts, the problem is more difficult in the case of weighted counts. We have found that a heuristic modification of a formula derived by Li and Ma (1983) provides reasonably accurate estimates of p-values for point source detections even for very low p-value detections.
Date: December 1996
Creator: Theiler, J. & Bloch, J.
Partner: UNT Libraries Government Documents Department
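
For reference, the unweighted-counts significance formula of Li and Ma (1983, equation 17) that the abstract above starts from is reproduced below in LaTeX notation; the authors' heuristic modification for weighted counts is not reproduced here.

% Significance of an on/off counting measurement with on-source counts N_on,
% off-source counts N_off, and on/off exposure ratio alpha:
S = \sqrt{2}\left\{
      N_{\mathrm{on}}\,\ln\!\left[\frac{1+\alpha}{\alpha}
        \left(\frac{N_{\mathrm{on}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right)\right]
    + N_{\mathrm{off}}\,\ln\!\left[(1+\alpha)
        \left(\frac{N_{\mathrm{off}}}{N_{\mathrm{on}}+N_{\mathrm{off}}}\right)\right]
    \right\}^{1/2}

A p-value then follows from the Gaussian tail probability of S, which is why accuracy far out on the tail (p < 10^-7, about 5.2 sigma) is the concern.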

Adaptive measurement control for calorimetric assay

Description: The performance of a calorimeter is usually evaluated by constructing a Shewhart control chart of its measurement errors for a collection of reference standards. However, Shewhart control charts were developed in a manufacturing setting where observations occur in batches. Additionally, the Shewhart control chart expects the variance of the charted variable to be known or at least well estimated from previous experimentation. For calorimetric assay, observations are collected singly in a time sequence with a (possibly) changing mean, and extensive experimentation to calculate the variance of the measurement errors is seldom feasible. These facts pose problems in constructing a control chart. In this paper, the authors propose using the mean squared successive difference to estimate the variance of measurement errors based solely on prior observations. This procedure reduces or eliminates estimation bias due to a changing mean. However, the use of this estimator requires an adjustment to the definition of the alarm and warning limits for the Shewhart control chart. The authors propose adjusted limits based on an approximate Student's t-distribution for the measurement errors and discuss the limitations of this approximation. Suggestions for the practical implementation of this method are also provided.
Date: October 1, 1994
Creator: Glosup, J.G. & Axelrod, M.C.
Partner: UNT Libraries Government Documents Department
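
The mean squared successive difference estimator named above is standard and is sketched here; the authors' adjusted alarm and warning limits based on an approximate Student's t-distribution are not reproduced.

# Mean squared successive difference (MSSD) variance estimate:
#   s^2 = sum_i (x[i+1] - x[i])^2 / (2*(n - 1)).
# Differencing neighboring observations largely cancels a slowly drifting mean.
import numpy as np

def mssd_variance(x):
    """Variance estimate of a 1-D measurement-error series from successive differences."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    return np.sum(d * d) / (2.0 * (len(x) - 1))

# Example: a slow drift inflates the ordinary sample variance (about 3 here)
# while the MSSD estimate stays near the true noise variance of 1.
rng = np.random.default_rng(1)
x = 0.01 * np.arange(500) + rng.standard_normal(500)
print(np.var(x, ddof=1), mssd_variance(x))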

Joint probabilities of noncommuting observables and the Einstein-Podolsky-Rosen question in Wiener-Siegel quantum theory

Description: Ordinary quantum theory is a statistical theory without an underlying probability space. The Wiener-Siegel theory provides a probability space, defined in terms of the usual wave function and its "stochastic coordinates"; i.e., projections of its components onto differentials of complex Wiener processes. The usual probabilities of quantum theory emerge as measures of subspaces defined by inequalities on stochastic coordinates. Since each point α of the probability space is assigned values (or arbitrarily small intervals) of all observables, the theory gives a pseudo-classical or "hidden-variable" view in which normally forbidden concepts are allowed. Joint probabilities for values of noncommuting variables are well-defined. This paper gives a brief description of the theory, including a new generalization to incorporate spin, and reports the first concrete calculation of a joint probability for noncommuting components of spin of a single particle. Bohm's form of the Einstein-Podolsky-Rosen Gedankenexperiment is discussed along the lines of Carlen's paper at this Congress. It would seem that the "EPR Paradox" is avoided, since to each α the theory assigns opposite values for spin components of two particles in a singlet state, along any axis. In accordance with Bell's ideas, the price to pay for this attempt at greater theoretical detail is a disagreement with usual quantum predictions. The disagreement is computed and found to be large.
Date: February 1, 1996
Creator: Warnock, R.L.
Partner: UNT Libraries Government Documents Department

A Bayesian analysis of the solar neutrino problem

Description: We illustrate how the Bayesian approach can be used to provide a simple but powerful way to analyze data from solar neutrino experiments. The data are analyzed assuming that the neutrinos are unaltered during their passage from the Sun to the Earth. We derive quantitative and easily understood information pertaining to the solar neutrino problem.
Date: September 1, 1996
Creator: Bhat, C.M.; Bhat, P.C.; Paterno, M. & Prosper, H.B.
Partner: UNT Libraries Government Documents Department

PCR+ In Diesel Fuels and Emissions Research

Description: In past work for the U.S. Department of Energy (DOE) and Oak Ridge National Laboratory (ORNL), PCR+ was developed as an alternative methodology for building statistical models. PCR+ is an extension of Principal Components Regression (PCR), in which the eigenvectors resulting from Principal Components Analysis (PCA) are used as predictor variables in regression analysis. The work was motivated by the observation that most heavy-duty diesel (HDD) engine research was conducted with test fuels that had been "concocted" in the laboratory to vary selected fuel properties in isolation from each other. This approach departs markedly from the real world, where the reformulation of diesel fuels for almost any purpose leads to changes in a number of interrelated properties. In this work, we present new information regarding the problems encountered in the conventional approach to model-building and how the PCR+ method can be used to improve research on the relationship between fuel characteristics and engine emissions. We also discuss how PCR+ can be applied to a variety of other research problems related to diesel fuels.
Date: April 15, 2002
Creator: McAdams, H.T.
Partner: UNT Libraries Government Documents Department
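
Ordinary Principal Components Regression, the starting point that PCR+ extends as described above, can be sketched as follows; the component count and centering conventions are assumptions, and the PCR+ extension itself is not reproduced here.

# Sketch of ordinary Principal Components Regression (PCR).
import numpy as np

def pcr_fit(X, y, n_components):
    """Regress y on the leading principal components of the centered predictor matrix X."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    V = Vt[:n_components].T                 # loadings of the retained components
    scores = Xc @ V                         # principal-component scores
    gamma, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    beta = V @ gamma                        # back-transform to the original variables
    return beta, x_mean, y_mean

def pcr_predict(X, beta, x_mean, y_mean):
    return y_mean + (np.asarray(X, dtype=float) - x_mean) @ beta

Because correlated fuel properties collapse onto a few components, the regression is performed in a space where the predictors are orthogonal, which is what motivates the approach for interrelated diesel-fuel properties.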

Distribution-free discriminant analysis

Description: This report describes our experience in implementing a non-parametric (distribution-free) discriminant analysis module for use in a wide range of pattern recognition problems. Issues discussed include performance results on both real and simulated data sets, comparisons to other methods, and the computational environment. In some cases, this module performs better than other existing methods. Nearly all cases can benefit from the application of multiple methods.
Date: May 1, 1997
Creator: Burr, T. & Doak, J.
Partner: UNT Libraries Government Documents Department
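
One common distribution-free discriminant rule, a Gaussian-kernel density classifier, is sketched below purely as an illustration of the general approach; it is not claimed to be the specific module described in the report, and the bandwidth is an assumed tuning parameter.

# Illustration: classify each row of x to the class with the larger kernel
# density estimate. Inputs are 2-D arrays (rows = observations, columns = features).
import numpy as np

def kde_log_density(x, train, bandwidth):
    """Gaussian-kernel log-density of the rows of x under the training sample."""
    d = train.shape[1]
    diff = x[:, None, :] - train[None, :, :]                 # (n_x, n_train, d)
    sq = np.sum(diff * diff, axis=2) / (2.0 * bandwidth ** 2)
    log_kernel = -sq - 0.5 * d * np.log(2 * np.pi * bandwidth ** 2)
    m = log_kernel.max(axis=1, keepdims=True)                # log-mean-exp
    return m.squeeze(1) + np.log(np.mean(np.exp(log_kernel - m), axis=1))

def classify(x, class0_train, class1_train, bandwidth=0.5):
    """Return 0/1 labels by comparing the two class density estimates."""
    return (kde_log_density(x, class1_train, bandwidth)
            > kde_log_density(x, class0_train, bandwidth)).astype(int)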

Development of a Mathematical Air-Leakage Model from Measured Data

Description: A statistical model was developed to relate residential building shell leakage to building characteristics such as building height, floor area, floor leakage, duct leakage, and the year built or age of the house. Statistical regression techniques were used to determine which of the potential building characteristics best described the data. Seven preliminary regressions were performed to investigate the influence of each variable. The results of the eighth and final multivariable linear regression form the predictive model. The major factors that influence the tightness of a residential building are participation in an energy efficiency program (40% tighter than ordinary homes), having low-income occupants (145% leakier than ordinary homes), and the age of the house (a 1% increase in Normalized Leakage per year). The predictive model should be applied only within the range of the data used to develop it.
Date: May 1, 2006
Creator: McWilliams, Jennifer & Jung, Melanie
Partner: UNT Libraries Government Documents Department
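
The percentage effects quoted above are consistent with a multiplicative (log-linear) regression for Normalized Leakage; the form below is an assumption used only to show how such percentages map onto regression coefficients.

% Assumed log-linear form, in LaTeX notation:
\ln(\mathrm{NL}) = \beta_0 + \beta_{\mathrm{age}}\,\mathrm{Age}
                 + \beta_{\mathrm{EE}}\,I_{\mathrm{EE\;program}}
                 + \beta_{\mathrm{LI}}\,I_{\mathrm{low\;income}} + \cdots
% Each coefficient then maps to a percentage change through exp(beta) - 1,
% e.g. exp(beta_EE) - 1 = -0.40 (40% tighter) and exp(beta_LI) - 1 = +1.45
% (145% leakier), with exp(beta_age) - 1 = 0.01 per year of age.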

Prospects of measuring CP violation in B decays at CDF

Description: We summarize the prospects of measuring the CP asymmetry in B⁰ → J/ψ K⁰_s and B⁰ → π⁺π⁻ with the CDF detector in Run II in 1999. We also explore the feasibility of determining sin 2γ and discuss Run I results relevant to the measurement of CP violation at CDF.
Date: September 1, 1996
Creator: Paulini, M.
Partner: UNT Libraries Government Documents Department

Statistical validation of physical system models

Description: It is common practice in applied mechanics to develop mathematical models for mechanical system behavior. Frequently, the actual physical system being modeled is also available for testing, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of physical systems when data taken during operation of the physical system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a physical system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the physical system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. A numerical example is presented to demonstrate the application of the technique.
Date: October 1, 1996
Creator: Paez, T. L.; Barney, P.; Hunter, N. F.; Ferregut, C. & Perez, L.E.
Partner: UNT Libraries Government Documents Department
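
The bootstrap step described above can be sketched in minimal form; the specific statistics of interest, the joint confidence regions, and the formal hypothesis test used in the paper are not reproduced, and the percentile interval below is only one common construction.

# Minimal percentile-bootstrap confidence interval for a statistic of interest
# computed from experimental data.
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap interval for statistic(data) from 1-D data."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    n = len(data)
    boot_stats = np.array([
        statistic(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)
    ])
    return tuple(np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2]))

Usage idea: estimate the interval for, say, an RMS response level from test data, then check whether the mathematical model's prediction of the same statistic falls inside it.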

An Evaluation of Parallel Job Scheduling for ASCI Blue-Pacific

Description: In this paper we analyze the behavior of a gang-scheduling strategy that we are developing for the ASCI Blue-Pacific machines. Using actual job logs for one of the ASCI machines, we generate a statistical model of the current workload with hyper-Erlang distributions. We then vary the parameters of those distributions to generate various workloads, representative of different operating points of the machine. Through simulation we obtain performance parameters for three different scheduling strategies: (i) first-come first-serve, (ii) gang-scheduling, and (iii) backfilling. Our results show that backfilling can be very effective for the common operating points in the 60-70% utilization range. However, for higher utilization rates, time-sharing techniques such as gang-scheduling offer much better performance.
Date: November 9, 1999
Creator: Franke, H.; Jann, J.; Moreira, J.; Pattnaik, P. & Jette, M.
Partner: UNT Libraries Government Documents Department
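
A hyper-Erlang workload generator of the kind described above (fitted to job logs and then perturbed to explore different operating points) can be sketched as follows; the branch probabilities, shapes, and rates shown are placeholders, not the fitted ASCI values.

# Sketch of sampling from a hyper-Erlang mixture: pick a branch, then draw an
# Erlang(k, rate) variate, i.e. a Gamma(k, scale=1/rate) variate with integer k.
import numpy as np

def sample_hyper_erlang(n, probs, shapes, rates, seed=0):
    """Draw n samples (e.g. job runtimes in seconds) from a hyper-Erlang mixture."""
    rng = np.random.default_rng(seed)
    branch = rng.choice(len(probs), size=n, p=probs)
    k = np.asarray(shapes)[branch]
    lam = np.asarray(rates)[branch]
    return rng.gamma(shape=k, scale=1.0 / lam)

runtimes = sample_hyper_erlang(10000, probs=[0.7, 0.3], shapes=[2, 6],
                               rates=[1 / 300.0, 1 / 3600.0])
# Varying probs, shapes, and rates shifts the simulated load, mimicking the
# different operating points of the machine studied in the paper.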

Advanced Model for SBS of a Randomized Laser Beam and Application to Polarization Smoothing Experiments with Preformed Underdense Plasmas

Description: An advanced statistical model is presented, which describes the SBS of a randomized laser beam interacting with an underdense, expanding plasma. The model accounts for the self-focusing of speckles and for its influence on the speckles' SBS reflectivity in the regime where the effect of plasma heating is important. Plasma heating has an important effect on speckle self-focusing; it decreases the SBS threshold and also decreases the SBS reflectivity. The model exhibits good agreement with the measured SBS levels at the LULI multi-beam facility for a broad range of laser and plasma parameters and both types of beam smoothing (RPP and PS). Both the model and the experiments confirm that the PS technique allows the SBS level to be controlled more effectively than RPP.
Date: June 30, 2000
Creator: Labaune, C.; Depierreux, S.; Baldis, H. A.; Huller, S; Myatt, J. & Pesme, D.
Partner: UNT Libraries Government Documents Department

A Validation of Bayesian Finite Element Model Updating for Linear Dynamics

Description: This work addresses the issue of statistical model updating and correlation. The updating procedure is formulated to improve the predictive quality of a structural model by minimizing out-of-balance modal forces. It is shown how measurement and modeling uncertainties can be taken into account to provide not only the correlated model but also the associated confidence levels. Hence, a Bayesian parameter estimation technique is derived and its numerical implementation is discussed. Two demonstration examples that involve test-analysis correlation with real test data are presented. First, the validation of an engine cradle model used in the automotive industry shows how the design's uncertainties can be reduced via model updating. The second example consists of employing test-analysis correlation for identifying the degree of nonlinearity of the LANL 8-DOF testbed.
Date: February 8, 1999
Creator: Hemez, F.M. & Doebling, S.W.
Partner: UNT Libraries Government Documents Department

Systematic approach to establishing criticality biases

Description: A systematic approach has been developed to determine benchmark biases and apply those biases to code results to meet the requirements of DOE Order 5480.24 regarding documenting criticality safety margins. Previously, validation of the code against experimental benchmarks to prove reasonable agreement was sufficient. However, DOE Order 5480.24 requires contractors to adhere to the requirements of ANSI/ANS-8.1 and establish subcritical margins. A method was developed to incorporate biases and uncertainties from benchmark calculations into a k_eff value with quantifiable uncertainty. The method produces a 95% confidence level in both the k_eff value of the scenario modeled and the distribution of the k_eff values calculated by the Monte Carlo code. Application of the method to a group of benchmarks modeled using the KENO-Va code and the SCALE 27-group cross sections is also presented.
Date: September 1, 1995
Creator: Larson, S.L.
Partner: UNT Libraries Government Documents Department

The components of geostatistical simulation

Description: There are many approaches to geostatistical simulation that can be used to generate realizations of random fields. These approaches differ fundamentally in a number of ways. First, each approach is inherently different and will produce fields with different statistical and geostatistical properties. Second, the approaches differ with respect to the choice of the features of the region that are to be modeled, and how closely the generated realizations reproduce these features. Some fluctuation in the statistical and geostatistical properties of different realizations of the same random field is natural and desirable, but the proper amount of deviation is an open question. Finally, the approaches differ in how the conditioning information is incorporated. Depending on the source of randomness and the uncertainty in the given data, direct conditioning of realizations is not always desirable. In this paper, we discuss and illustrate these differences in order to emphasize the importance of these components in geostatistical simulation.
Date: March 1, 1996
Creator: Gotway, C. A. & Rutherford, B. M.
Partner: UNT Libraries Government Documents Department

A Statistical Model and Computer Program for Preliminary Calculations Related to the Scaling of Sensor Arrays

Description: Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; however, in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates that can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
Date: April 1, 2001
Creator: Morris, Max
Partner: UNT Libraries Government Documents Department
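
The generic array model described above, with error common to all sensors plus error independent from sensor to sensor and a two-class classification task, can be mimicked with a short simulation; every distribution and parameter below is an assumption chosen for illustration, not the report's program.

# Readings = class signal + error shared by all sensors + independent per-sensor
# error; a nearest-class-mean rule trained on a small sample is scored on fresh data.
import numpy as np

def simulate_array(n_samples, n_sensors, class_shift, sigma_common, sigma_indep, rng):
    common = sigma_common * rng.standard_normal((n_samples, 1))        # shared error
    indep = sigma_indep * rng.standard_normal((n_samples, n_sensors))  # per-sensor error
    return class_shift + common + indep

def misclassification_rate(n_sensors=16, n_train=20, n_test=5000, seed=0):
    rng = np.random.default_rng(seed)
    tr0 = simulate_array(n_train, n_sensors, 0.0, 1.0, 1.0, rng)
    tr1 = simulate_array(n_train, n_sensors, 0.5, 1.0, 1.0, rng)
    m0, m1 = tr0.mean(axis=0), tr1.mean(axis=0)   # nearest-class-mean classifier
    te0 = simulate_array(n_test, n_sensors, 0.0, 1.0, 1.0, rng)
    te1 = simulate_array(n_test, n_sensors, 0.5, 1.0, 1.0, rng)
    def predict_is_class1(x):
        return np.linalg.norm(x - m1, axis=1) < np.linalg.norm(x - m0, axis=1)
    errors = predict_is_class1(te0).sum() + (~predict_is_class1(te1)).sum()
    return errors / (2 * n_test)

for sensors in (4, 16, 64, 256):
    print(sensors, misclassification_rate(n_sensors=sensors))
# Because part of the error is common to every sensor, adding sensors yields
# diminishing returns, which is the scaling question the report addresses.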