73 Matching Results

Stopping power, its meaning, and its general characteristics

Description: This essay presents remarks on the meaning of stopping power and on its magnitude. More precisely, the first set of remarks concerns the connection of stopping power with elements of particle-transport theory, which describes particle transport and its consequences in full detail, including its stochastic aspects. The second set of remarks concerns the magnitude of the stopping power of a material and its relation to the material's electronic structure and other properties.
Date: June 1, 1995
Creator: Inokuti, Mitio
Partner: UNT Libraries Government Documents Department

First passage failure: Analysis alternatives

Description: Most mechanical and structural failures can be formulated as first passage problems. The traditional approach to first passage analysis models barrier crossings as Poisson events. The crossing rate is established and used in the Poisson framework to approximate the no-crossing probability. While this approach is accurate in a number of situations, it is desirable to develop analysis alternatives for those situations where traditional analysis is less accurate and situations where it is difficult to estimate parameters of the traditional approach. This paper develops an efficient simulation approach to first passage failure analysis. It is based on simulation of segments of complex random processes with the Karhunen-Loeve expansion, use of these simulations to estimate the parameters of a Markov chain, and use of the Markov chain to estimate the probability of first passage failure. Some numerical examples are presented.
Date: April 17, 2000
Creator: Paez, Thomas L.; Nguyen, H.P. & Wirsching, Paul H.
Partner: UNT Libraries Government Documents Department
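The Markov-chain step of the approach in this abstract can be illustrated with a minimal sketch: discretize the response process into states, make the barrier-crossing state absorbing, and iterate the transition matrix to approximate the no-crossing probability. The states and probabilities below are hypothetical; in practice the transition matrix would be estimated from Karhunen-Loeve simulations of the process, as the paper describes.

```python
import numpy as np

# Hypothetical 3-state discretization of a response process: states 0 and 1
# are below the barrier; state 2 (barrier crossed) is made absorbing.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.85, 0.05],
    [0.00, 0.00, 1.00],  # absorbing failure state
])

def no_crossing_probability(P, p0, n_steps):
    """Probability the chain has not reached the absorbing state after n_steps."""
    p = p0.copy()
    for _ in range(n_steps):
        p = p @ P
    return 1.0 - p[-1]

p0 = np.array([1.0, 0.0, 0.0])  # start safely below the barrier
print(no_crossing_probability(P, p0, 10))
```

Because probability mass only flows into the absorbing state, the no-crossing probability decreases monotonically with the number of steps, which is the quantity the first-passage analysis tracks.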

Performance issues, downtime recovery and tuning in the Next Linear Collider (NLC)

Description: The Next Linear Collider (NLC) consists of several large subsystems, each of which must be operational and tuned in order to deliver luminosity. Considering specific examples, we study how the different subsystems respond to various perturbations such as ground motion, temperature changes, drifts of beam-position monitors etc., and we estimate the overall time requirements for tuning and downtime recovery of each subsystem. The succession of subsystem failures and recoveries as well as other performance degradations can be modeled as a Markov process, where each subsystem is characterized, e.g., by its failure rate and recovery time. Such a model allows the prediction of the overall NLC availability. Our mathematical description of a linear collider is benchmarked against the known performance of the Stanford Linear Collider (SLC).
Date: May 1, 1997
Creator: Zimmermann, F.; Adolphsen, C. & Assmann, R.
Partner: UNT Libraries Government Documents Department

Mathematical and geological approaches to minimizing the data requirements for statistical analysis of heterogeneity: summary technical progress report

Description: This relates to hydraulic conductivity distributions and to aquifer characterization. The following was completed: air permeameter calibration, "architectural element" mapping, lithofacies mapping, air permeability measurements at the "architectural element" scale, depositional environment characterization of the Bosque site, quantification of "architectural element" scale geometries, and Markovian simulation techniques.
Date: December 31, 1991
Partner: UNT Libraries Government Documents Department

Integration of geologic interpretation into geostatistical simulation

Description: Embedded Markov chain analysis has been used to quantify geologic interpretation of juxtapositional tendencies of geologic facies. Such interpretations can also be translated into continuous-lag Markov chain models of spatial variability for use in geostatistical simulation of facies architecture.
Date: June 1, 1997
Creator: Carle, S.F.
Partner: UNT Libraries Government Documents Department
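The embedded Markov chain analysis mentioned here can be sketched as a simple counting procedure: tabulate upward transitions between facies in a logged sequence, ignoring self-transitions (the "embedded" chain records only juxtapositions), and normalize each row. The three-facies log below is hypothetical.

```python
import numpy as np

def embedded_transition_matrix(sequence, n_states):
    """Upward-transition probabilities between facies, ignoring self-transitions
    (the embedded Markov chain of juxtapositional tendencies)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        if a != b:  # embedded chain: only count changes of facies
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Hypothetical upward log through three facies (0=channel, 1=levee, 2=floodplain)
log = [0, 0, 1, 2, 2, 0, 1, 1, 2, 0, 2]
T = embedded_transition_matrix(log, 3)
print(T)
```

Matrices of this form quantify which facies tend to overlie which, and can then be translated into the continuous-lag Markov chain models of spatial variability the abstract refers to.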

Bayesian Inference for Neural Electromagnetic Source Localization: Analysis of MEG Visual Evoked Activity

Description: We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e., consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented.
Date: February 1, 1999
Creator: George, J.S.; Schmidt, D.M. & Wood, C.C.
Partner: UNT Libraries Government Documents Department

Recent results on analytical plasma turbulence theory: Realizability, intermittency, submarginal turbulence, and self-organized criticality

Description: Recent results and future challenges in the systematic analytical description of plasma turbulence are described. First, the importance of statistical realizability is stressed, and the development and successes of the Realizable Markovian Closure are briefly reviewed. Next, submarginal turbulence (linearly stable but nonlinearly self-sustained fluctuations) is considered and the relevance of nonlinear instability in neutral-fluid shear flows to submarginal turbulence in magnetized plasmas is discussed. For the Hasegawa-Wakatani equations, a self-consistency loop that leads to steady-state vortex regeneration in the presence of dissipation is demonstrated and a partial unification of recent work of Drake (for plasmas) and of Waleffe (for neutral fluids) is given. Brief remarks are made on the difficulties facing a quantitatively accurate statistical description of submarginal turbulence. Finally, possible connections between intermittency, submarginal turbulence, and self-organized criticality (SOC) are considered and outstanding questions are identified.
Date: January 18, 2000
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

Renormalized dissipation in the nonconservatively forced Burgers equation

Description: A previous calculation of the renormalized dissipation in the nonconservatively forced one-dimensional Burgers equation, which encountered a catastrophic long-wavelength divergence {approximately} k{sub min}{sup -3}, is reconsidered. In the absence of velocity shear, analysis of the eddy-damped quasi-normal Markovian closure predicts only a benign logarithmic dependence on k{sub min}. The original divergence is traced to an inconsistent resonance-broadening type of diffusive approximation, which fails in the present problem. Ballistic scaling of renormalized pulses is retained, but such scaling does not, by itself, imply a paradigm of self-organized criticality. An improved scaling formula for a model with velocity shear is also given.
Date: January 19, 2000
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

On some additional recollections, and the absence thereof, about the early history of computer simulations in statistical mechanics

Description: This lecture is an extension and correction of a previous lecture given by the author ten years ago at "Corso 97" in Varenna. Here again he emphasizes that his early work was exclusively with applications of the Metropolis Monte Carlo method. His only connection with the early work on the molecular dynamics method was in collaboration with Alder and Wainwright in their joint effort to reconcile the results of the Monte Carlo and molecular dynamics methods for hard spheres. Here he amplifies a point suggested by a question asked by Professor Ciccotti: Namely, when was it discovered that the Metropolis method consists in the generation of a realization of a Markov chain, for which there was a large body of mathematical theory that made the justification of the method quite a simple matter?
Date: September 1, 1995
Creator: Wood, W.W.
Partner: UNT Libraries Government Documents Department

Experimental Results on Statistical Approaches to Page Replacement Policies

Description: This paper investigates the question of what statistical information about a memory request sequence is useful to have in making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences by the Markov model has been recently put into doubt, we find that two previously suggested algorithms (Maximum Hitting Time and Dominating Distribution) which are based on the Markov model work well on the trace data used in this study. Interestingly, both of these algorithms perform equally well despite the fact that the theoretical results for these two algorithms differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to have a purely online algorithm.
Date: December 8, 2000
Creator: Leung, Vitus J. & Irani, Sandy
Partner: UNT Libraries Government Documents Department
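A Markov Request Model of the kind this paper starts from can be fit with a simple frequency count. This sketch, with a made-up page trace, estimates first-order transition probabilities from a request sequence; it illustrates only the model, not the Maximum Hitting Time or Dominating Distribution algorithms themselves.

```python
from collections import Counter, defaultdict

def fit_markov_request_model(trace):
    """Estimate first-order transition probabilities P(next page | current page)
    from a page request trace."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace[:-1], trace[1:]):
        counts[cur][nxt] += 1
    model = {}
    for page, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        model[page] = {nxt: c / total for nxt, c in nxt_counts.items()}
    return model

# Hypothetical trace with a strong A -> B tendency
trace = list("ABABCABABA")
model = fit_markov_request_model(trace)
print(model["A"])
```

A replacement policy built on such a model would evict the page whose estimated return is furthest away under these transition probabilities.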

Transport in statistical media. Final report, May 1, 1988--May 1, 1990

Description: The technical content of these five papers is summarized in this report: benchmark results for particle transport in a binary Markov statistical medium; statistics, renewal theory, and particle transport; asymptotic limits of a statistical transport description; renormalized equations for linear transport in stochastic media; and solution methods for discrete state Markovian initial value problems.
Date: July 1, 1990
Creator: Pomraning, G.C.
Partner: UNT Libraries Government Documents Department

Optimal interdiction of unreactive Markovian evaders

Description: The interdiction problem arises in a variety of areas including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would increase the cost to an evader of traveling on a path through the network. Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power; e.g., when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape. We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes on a network and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find a node, or a set of nodes of size at most B, that maximizes the probability of capturing the evaders. We prove that, similar to the classical formulation, this interdiction problem is NP-hard. But unlike the classical problem, our interdiction problem is submodular and the optimal solution can be approximated within 1-1/e using a greedy algorithm. Additionally, we exploit submodularity to introduce a priority evaluation strategy that speeds up the greedy algorithm by orders of magnitude. Taken together, the results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.
Date: January 1, 2009
Creator: Hagberg, Aric; Pan, Feng & Gutfraind, Alex
Partner: UNT Libraries Government Documents Department
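The greedy algorithm with the 1-1/e guarantee can be sketched as follows, using a handful of hypothetical evader trajectories in place of the paper's Markov evader model. The objective, the probability that a trajectory visits an interdicted node, is submodular in the chosen node set, which is what justifies the greedy approximation.

```python
# Hypothetical sample evader trajectories (node sequences), standing in for
# draws from a Markov model of evader motion; capture occurs if any visited
# node is interdicted.
paths = [
    [0, 1, 3],
    [0, 2, 3],
    [0, 1, 4],
    [0, 2, 4],
    [0, 1, 3],
]

def capture_probability(interdicted, paths):
    hit = sum(1 for p in paths if any(v in interdicted for v in p))
    return hit / len(paths)

def greedy_interdiction(candidates, paths, budget):
    """Greedy node selection; submodularity of the capture probability is what
    gives the (1 - 1/e) approximation guarantee noted in the abstract."""
    chosen = set()
    for _ in range(budget):
        best = max(candidates - chosen,
                   key=lambda v: capture_probability(chosen | {v}, paths))
        chosen.add(best)
    return chosen

chosen = greedy_interdiction({1, 2, 3, 4}, paths, budget=2)
print(sorted(chosen), capture_probability(chosen, paths))
```

With these five trajectories a budget of two nodes already suffices to intercept every path, whichever of the tied first picks the greedy step takes.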

Microstructural dependence of cavitation damage in polycrystalline materials. Final report, 1 November 1992--31 October 1994

Description: Microstructure of a sample of Inconel X-750 damaged by ISCC (intergranular stress corrosion cracking) was examined after fatigue precracking in a high-temperature environment of deaerated water. Orientation imaging microscopy was used to reveal the microstructure adjacent to the crack path. General high-angle boundaries were found to be most susceptible to cracking. An ordering of the susceptibilities to ISCC damage was proposed; all boundaries have been classified into one of 12 categories. A model is proposed to predict the crack path for ISCC based on ex situ record of damage probabilities. The cracking is modeled as a Markov chain on a regular hexagonal array of grain boundaries representing the connectivity of the network.
Date: February 5, 1996
Creator: Adams, B.L.
Partner: UNT Libraries Government Documents Department

Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

Description: The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Date: November 5, 1996
Creator: Hogden, J.
Partner: UNT Libraries Government Documents Department

Non-Gaussian statistics, classical field theory, and realizable Langevin models

Description: The direct-interaction approximation (DIA) to the fourth-order statistic Z {approximately} {l_angle}({lambda}{psi}{sup 2}){sup 2}{r_angle}, where {lambda} is a specified operator and {psi} is a random field, is discussed from several points of view distinct from that of Chen et al. [Phys. Fluids A 1, 1844 (1989)]. It is shown that the formula for Z{sub DIA} already appeared in the seminal work of Martin, Siggia, and Rose [Phys. Rev. A 8, 423 (1973)] on the functional approach to classical statistical dynamics. It does not follow from the original generalized Langevin equation (GLE) of Leith [J. Atmos. Sci. 28, 145 (1971)] and Kraichnan [J. Fluid Mech. 41, 189 (1970)] (frequently described as an amplitude representation for the DIA), in which the random forcing is realized by a particular superposition of products of random variables. The relationship of that GLE to renormalized field theories with non-Gaussian corrections ("spurious vertices") is described. It is shown how to derive an improved representation that realizes cumulants through O({psi}{sup 4}) by adding to the GLE a particular non-Gaussian correction. A Markovian approximation Z{sub DIA}{sup M} to Z{sub DIA} is derived. Both Z{sub DIA} and Z{sub DIA}{sup M} incorrectly predict a Gaussian kurtosis for the steady state of a solvable three-mode example.
Date: November 1, 1995
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

A Markov Model for Assessing the Reliability of a Digital Feedwater Control System

Description: A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether the system fails or not, is determined by the status of the components, which can be characterized by component failure modes. Starting from the system state that has no component failure, the possible transitions out of it are the failure modes of all components in the system. Each additional component failure mode defines a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences), because different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach and the development of the supporting automated FMEA tool, are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.
Date: February 11, 2009
Creator: Chu, T.L.; Yue, M.; Martinez-Guridi, G. & Lehner, J.
Partner: UNT Libraries Government Documents Department
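A much-reduced analogue of such a Markov reliability model: a hypothetical repairable system with illustrative failure and repair rates, whose steady-state unavailability follows from the generator matrix. The actual DFWCS model tracks ordered failure sequences over many component failure modes; this sketch shows only the solution mechanics.

```python
import numpy as np

# Hypothetical states: 0 = both components up, 1 = one failed, 2 = system
# failed. Rates are illustrative assumptions, not DFWCS data.
lam, mu = 0.01, 0.5  # per-hour failure and repair rates

# Generator matrix Q (rows sum to zero) for a simple repairable system
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,          mu,   -mu],
])

# Steady-state distribution: solve pi Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state unavailability:", pi[2])
```

The unavailability pi[2] is the long-run fraction of time spent in the system-failure state; a full model would distinguish which component failure sequences lead there.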

The effects of internal fluctuations on a class of nonequilibrium statistical field theories

Description: A class of models with applications to swarm behavior as well as many other types of spatially extended complex biological and physical systems is studied. Internal fluctuations can play an active role in the organization of the phase structure of such systems. In particular, for the class of models studied here the effect of internal fluctuations due to finite size is a renormalized decrease in the temperature near the point of spontaneous symmetry breaking.
Date: July 1, 1993
Creator: Millonas, M. M.
Partner: UNT Libraries Government Documents Department

Uncertainty estimation in reconstructed deformable models

Description: One of the hallmarks of the Bayesian approach to modeling is the posterior probability, which summarizes all uncertainties regarding the analysis. Using a Markov Chain Monte Carlo (MCMC) technique, it is possible to generate a sequence of objects that represent random samples drawn from the posterior distribution. We demonstrate this technique for reconstructions of two-dimensional objects from noisy projections taken from two directions. The reconstructed object is modeled in terms of a deformable geometrically-defined boundary with a constant interior density yielding a nonlinear reconstruction problem. We show how an MCMC sequence can be used to estimate uncertainties in the location of the edge of the reconstructed object.
Date: December 31, 1996
Creator: Hanson, K.M.; Cunningham, G.S. & McKee, R.
Partner: UNT Libraries Government Documents Department

Mathematical and geological approaches to minimizing the data requirements for statistical analysis of hydraulic conductivity. Technical completion report

Description: Field-scale heterogeneity has been recognized as a dominant control on solute dispersion in groundwater. Numerous random field models exist for quantifying heterogeneity and its influence on solute transport. Minimizing data requirements in model selection and subsequent parameterization will be necessary for efficient application of quantitative models in contaminated subsurface environments. In this study, a detailed quantitative sedimentological study is performed to address the issue of incorporating geologic information into the geostatistical characterization process. A field air-minipermeameter is developed for rapid in-situ measurements. The field study was conducted on an outcrop of fluvial/interfluvial deposits of the Pliocene-Pleistocene Sierra Ladrones Formation in the Albuquerque Basin of central New Mexico. Architectural element analysis is adopted for mapping and analysis of the depositional environment. Geostatistical analysis is performed at two scales. At the architectural element scale, geostatistical analysis of assigned mean log-permeabilities of a 0.16 km{sup 2} peninsular region indicates that the directions of maximum and minimum correlation correspond to the directions of the large-scale depositional processes. At the facies scale, permeability is found to be adequately represented as a log-normal process. Log-permeability within individual lithofacies appears uncorrelated. The overall correlation structure at the facies scale is found to be a function of the mean log-permeability and spatial distribution of the individual lithofacies. Based on field observations of abrupt spatial changes in lithology and hydrologic properties, an algorithm for simulating multi-dimensional discrete Markov random fields is developed. Finally, a conceptual model is constructed relating the information inferred from depositional environment analysis to the various random fields of heterogeneity.
Date: December 1, 1992
Creator: Phillips, F.M.; Wilson, J.L.; Gutjahr, A.L.; Love, D.W.; Davis, J.M.; Lohmann, R.C. et al.
Partner: UNT Libraries Government Documents Department

An investigation of time efficiency in wavelet-based Markov parameter extraction methods

Description: This paper investigates the time efficiency of using a wavelet transform-based method to extract the impulse response characteristics of a structural dynamic system. Traditional time domain procedures utilize the measured disturbances and response histories of a system to develop a set of auto and cross correlation functions. Through deconvolution of these functions, or matrix inversion, the Markov parameters of the system may be found. By transforming these functions into a wavelet basis, the size of the problem to be solved can be reduced as well as the computation time decreased. Fourier transforms are also used in this capacity as they may increase the time efficiency even more, but at the cost of accuracy. This paper will therefore compare the time requirements associated with a time, wavelet, and Fourier-based method of Markov parameter extraction, as well as their relative accuracy in modeling the system.
Date: July 1, 1998
Creator: Robertson, A.N. & Park, K.C.
Partner: UNT Libraries Government Documents Department
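The traditional time-domain step the paper describes, recovering Markov parameters (impulse-response samples) from measured disturbance and response histories, can be sketched in the noiseless FIR case via least squares. The system and signal lengths below are hypothetical, and the wavelet and Fourier variants the paper compares are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true impulse response (Markov parameters) of a short FIR system
h_true = np.array([1.0, 0.5, 0.25, 0.125])

u = rng.standard_normal(500)          # measured disturbance history
y = np.convolve(u, h_true)[:len(u)]   # measured response history (noiseless)

# Build the shifted-input regression matrix and solve for the Markov parameters
m = len(h_true)
U = np.column_stack(
    [np.concatenate([np.zeros(k), u[:len(u) - k]]) for k in range(m)]
)
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)
print(h_est)
```

Transforming U and y into a wavelet basis, as the paper proposes, shrinks this linear system and so reduces the cost of the inversion.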

System identification for robust control design

Description: System identification for the purpose of robust control design involves estimating a nominal model of a physical system and the uncertainty bounds of that nominal model via the use of experimentally measured input/output data. Although many algorithms have been developed to identify nominal models, little effort has been directed towards identifying uncertainty bounds. Therefore, in this document, a discussion of both nominal model identification and bounded output multiplicative uncertainty identification will be presented. This document is divided into several sections. Background information relevant to system identification and control design will be presented. A derivation of eigensystem realization type algorithms will be presented. An algorithm will be developed for calculating the maximum singular value of output multiplicative uncertainty from measured data. An application will be given involving the identification of a complex system with aliased dynamics, feedback control, and exogenous noise disturbances. And, finally, a short discussion of results will be presented.
Date: April 1, 1995
Creator: Dohner, J.L.
Partner: UNT Libraries Government Documents Department

The applicability of certain Monte Carlo methods to the analysis of interacting polymers

Description: The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
Date: May 1, 1998
Creator: Krapp, D.M. Jr.
Partner: UNT Libraries Government Documents Department

Adaptive importance sampling of random walks on continuous state spaces

Description: The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.
Date: November 1, 1998
Creator: Baggerly, K.; Cox, D. & Picard, R.
Partner: UNT Libraries Government Documents Department
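The reweighting idea behind importance sampling of scored walks can be shown in a one-dimensional sketch: estimate a small tail probability by drawing from a shifted proposal and weighting each draw by the likelihood ratio. This static example is only a stand-in; the paper's scheme adapts the sampling distribution toward the zero-variance solution rather than fixing it in advance.

```python
import math
import random

random.seed(1)

def is_estimate(n, shift=4.0):
    """Importance-sampling estimate of P(X > 4) for X ~ N(0, 1), drawing from
    the shifted proposal N(shift, 1) and reweighting by the likelihood ratio."""
    total = 0.0
    for _ in range(n):
        y = random.gauss(shift, 1.0)
        if y > 4.0:
            # weight = phi(y) / phi(y - shift) for standard normal density phi
            total += math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
    return total / n

est = is_estimate(100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(est, exact)
```

Naive sampling from N(0, 1) would see an exceedance only about 3 times in 100,000 draws; the shifted proposal makes the rare event common and the weights correct for the bias.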

Posterior sampling with improved efficiency

Description: The Markov Chain Monte Carlo (MCMC) technique provides a means to generate a random sequence of model realizations that sample the posterior probability distribution of a Bayesian analysis. That sequence may be used to make inferences about the model uncertainties that derive from measurement uncertainties. This paper presents an approach to improving the efficiency of the Metropolis approach to MCMC by incorporating an approximation to the covariance matrix of the posterior distribution. The covariance matrix is approximated using the update formula from the BFGS quasi-Newton optimization algorithm. Examples are given for uncorrelated and correlated multidimensional Gaussian posterior distributions.
Date: December 1, 1998
Creator: Hanson, K.M. & Cunningham, G.S.
Partner: UNT Libraries Government Documents Department
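A sketch of the Metropolis step with a covariance-shaped proposal: here the posterior covariance is assumed known and used directly, standing in for the BFGS-based approximation the paper develops, and the target, dimensions, and scaling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: a correlated 2-D Gaussian posterior
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
cov_inv = np.linalg.inv(cov)

def log_post(x):
    return -0.5 * x @ cov_inv @ x

def metropolis(log_post, prop_cov, n, x0):
    """Metropolis sampler whose proposal is shaped by an approximate posterior
    covariance (assumed known here; the paper builds it via BFGS updates)."""
    L = np.linalg.cholesky(prop_cov)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n):
        y = x + L @ rng.standard_normal(x.size)
        lp_y = log_post(y)
        if np.log(rng.random()) < lp_y - lp:  # accept/reject step
            x, lp = y, lp_y
        samples.append(x.copy())
    return np.array(samples)

# 2.4^2 / d is a common random-walk scaling heuristic for d dimensions
samples = metropolis(log_post, 2.4**2 / 2 * cov, 20000, [0.0, 0.0])
print(np.cov(samples.T))
```

Matching the proposal shape to the posterior covariance is what buys the efficiency: an isotropic proposal on this correlated target would either take tiny steps or be rejected along the narrow direction.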