72 Matching Results


Stopping power, its meaning, and its general characteristics

Description: This essay presents remarks on the meaning of stopping power and on its magnitude. More precisely, the first set of remarks concerns the connection of stopping power with elements of particle-transport theory, which describes particle transport and its consequences in full detail, including its stochastic aspects. The second set of remarks concerns the magnitude of the stopping power of a material and its relation to the material's electronic structure and other properties.
Date: June 1, 1995
Creator: Inokuti, Mitio
Partner: UNT Libraries Government Documents Department

First passage failure: Analysis alternatives

Description: Most mechanical and structural failures can be formulated as first passage problems. The traditional approach to first passage analysis models barrier crossings as Poisson events. The crossing rate is established and used in the Poisson framework to approximate the no-crossing probability. While this approach is accurate in a number of situations, it is desirable to develop analysis alternatives for those situations where traditional analysis is less accurate and situations where it is difficult to estimate parameters of the traditional approach. This paper develops an efficient simulation approach to first passage failure analysis. It is based on simulation of segments of complex random processes with the Karhunen-Loeve expansion, use of these simulations to estimate the parameters of a Markov chain, and use of the Markov chain to estimate the probability of first passage failure. Some numerical examples are presented.
Date: April 17, 2000
Partner: UNT Libraries Government Documents Department
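The final stage of the approach described above, using a fitted Markov chain to estimate the probability of first passage, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the three-state chain, its transition probabilities, and the choice of absorbing "barrier" state are all hypothetical.

```python
import numpy as np

def first_passage_prob(P, absorbing, start, n_steps):
    """Probability of reaching an absorbing (failure) state within n_steps,
    for a discrete-time Markov chain with transition matrix P."""
    p = np.zeros(P.shape[0])
    p[start] = 1.0
    for _ in range(n_steps):
        p = p @ P  # propagate the state-occupancy distribution one step
    return p[absorbing].sum()

# Hypothetical 3-state chain: state 2 is the barrier (absorbing failure state).
P = np.array([[0.8, 0.2, 0.0],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])
prob_fail = first_passage_prob(P, [2], start=0, n_steps=50)
```

Because the failure state is absorbing, the occupancy of that state after n steps is exactly the probability of first passage within n steps.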

Optimal interdiction of unreactive Markovian evaders

Description: The interdiction problem arises in a variety of areas including military logistics, infectious disease control, and counter-terrorism. In the typical formulation of network interdiction, the task of the interdictor is to find a set of edges in a weighted network such that the removal of those edges would increase the cost to an evader of traveling on a path through the network. Our work is motivated by cases in which the evader has incomplete information about the network or lacks planning time or computational power, e.g. when authorities set up roadblocks to catch bank robbers, the criminals do not know all the roadblock locations or the best path to use for their escape. We introduce a model of network interdiction in which the motion of one or more evaders is described by Markov processes on a network and the evaders are assumed not to react to interdiction decisions. The interdiction objective is to find a node or set of nodes, of size at most B, that maximizes the probability of capturing the evaders. We prove that, similar to the classical formulation, this interdiction problem is NP-hard. But unlike the classical problem, our interdiction problem is submodular and the optimal solution can be approximated within a factor of 1-1/e using a greedy algorithm. Additionally, we exploit submodularity to introduce a priority evaluation strategy that speeds up the greedy algorithm by orders of magnitude. Taken together, the results bring closer the goal of finding realistic solutions to the interdiction problem on global-scale networks.
Date: January 1, 2009
Creator: Hagberg, Aric; Pan, Feng & Gutfraind, Alex
Partner: UNT Libraries Government Documents Department
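The greedy (1-1/e)-approximation for a submodular capture objective can be sketched as follows. Everything concrete here is hypothetical: a 6-node ring network, evader trajectories drawn as simple random walks, and a capture probability estimated as the fraction of trajectories that touch an interdicted node. The paper's priority-evaluation speedup is not shown.

```python
import random

def capture_prob(node_set, trajectories):
    """Fraction of simulated evader trajectories that pass through
    any node in the interdicted set (a monotone submodular objective)."""
    hits = sum(1 for t in trajectories if node_set & set(t))
    return hits / len(trajectories)

def greedy_interdiction(candidates, trajectories, budget):
    """Greedy selection: repeatedly add the node with the largest
    marginal gain; for submodular objectives this is a (1 - 1/e)-approximation."""
    chosen = set()
    for _ in range(budget):
        best = max(candidates - chosen,
                   key=lambda v: capture_prob(chosen | {v}, trajectories))
        chosen.add(best)
    return chosen

random.seed(0)

def walk(start, steps=5):
    """Hypothetical Markovian evader: a random walk on a 6-node ring."""
    path, v = [start], start
    for _ in range(steps):
        v = (v + random.choice([-1, 1])) % 6
        path.append(v)
    return path

trajs = [walk(random.randrange(6)) for _ in range(500)]
S = greedy_interdiction(set(range(6)), trajs, budget=2)
```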

Performance issues, downtime recovery and tuning in the Next Linear Collider (NLC)

Description: The Next Linear Collider (NLC) consists of several large subsystems, each of which must be operational and tuned in order to deliver luminosity. Considering specific examples, we study how the different subsystems respond to various perturbations such as ground motion, temperature changes, drifts of beam-position monitors etc., and we estimate the overall time requirements for tuning and downtime recovery of each subsystem. The succession of subsystem failures and recoveries as well as other performance degradations can be modeled as a Markov process, where each subsystem is characterized, e.g., by its failure rate and recovery time. Such a model allows the prediction of the overall NLC availability. Our mathematical description of a linear collider is benchmarked against the known performance of the Stanford Linear Collider (SLC).
Date: May 1, 1997
Creator: Zimmermann, F.; Adolphsen, C. & Assmann, R.
Partner: UNT Libraries Government Documents Department

Mathematical and geological approaches to minimizing the data requirements for statistical analysis of heterogeneity: summary technical progress report

Description: This relates to hydraulic conductivity distributions and to aquifer characterization. The following was completed: air permeameter calibration, "architectural element" mapping, lithofacies mapping, air permeability measurements at "architectural element" scale, depositional environment characterization of Bosque site, quantification of "architectural element" scale geometries, and Markovian simulation techniques.
Date: December 31, 1991
Partner: UNT Libraries Government Documents Department

Integration of geologic interpretation into geostatistical simulation

Description: Embedded Markov chain analysis has been used to quantify geologic interpretation of juxtapositional tendencies of geologic facies. Such interpretations can also be translated into continuous-lag Markov chain models of spatial variability for use in geostatistical simulation of facies architecture.
Date: June 1, 1997
Creator: Carle, S.F.
Partner: UNT Libraries Government Documents Department
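The embedded Markov chain analysis mentioned above quantifies which facies tend to follow which: self-transitions within a facies are ignored, and the remaining upward transitions are counted and row-normalized. A minimal sketch, with a hypothetical vertical facies log (the facies names and sequence are illustrative, not from the report):

```python
import numpy as np

def embedded_transition_matrix(sequence, states):
    """Embedded Markov chain of facies juxtapositions: count upward
    transitions between distinct facies, then row-normalize."""
    idx = {s: i for i, s in enumerate(states)}
    C = np.zeros((len(states), len(states)))
    for below, above in zip(sequence, sequence[1:]):
        if below != above:  # embedded chain skips self-transitions
            C[idx[below], idx[above]] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# Hypothetical vertical facies log, listed bottom to top.
log = ["sand", "silt", "clay", "silt", "sand", "silt", "clay"]
T = embedded_transition_matrix(log, ["sand", "silt", "clay"])
```

Each row of T gives the juxtapositional tendency of one facies: here, for example, sand is always overlain by silt in the toy log.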

Bayesian Inference for Neural Electromagnetic Source Localization: Analysis of MEG Visual Evoked Activity

Description: We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface, within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented.
Date: February 1, 1999
Creator: George, J.S.; Schmidt, D.M. & Wood, C.C.
Partner: UNT Libraries Government Documents Department

Recent results on analytical plasma turbulence theory: Realizability, intermittency, submarginal turbulence, and self-organized criticality

Description: Recent results and future challenges in the systematic analytical description of plasma turbulence are described. First, the importance of statistical realizability is stressed, and the development and successes of the Realizable Markovian Closure are briefly reviewed. Next, submarginal turbulence (linearly stable but nonlinearly self-sustained fluctuations) is considered and the relevance of nonlinear instability in neutral-fluid shear flows to submarginal turbulence in magnetized plasmas is discussed. For the Hasegawa-Wakatani equations, a self-consistency loop that leads to steady-state vortex regeneration in the presence of dissipation is demonstrated and a partial unification of recent work of Drake (for plasmas) and of Waleffe (for neutral fluids) is given. Brief remarks are made on the difficulties facing a quantitatively accurate statistical description of submarginal turbulence. Finally, possible connections between intermittency, submarginal turbulence, and self-organized criticality (SOC) are considered and outstanding questions are identified.
Date: January 18, 2000
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

Renormalized dissipation in the nonconservatively forced Burgers equation

Description: A previous calculation of the renormalized dissipation in the nonconservatively forced one-dimensional Burgers equation, which encountered a catastrophic long-wavelength divergence {approximately} k{sub min}{sup -3}, is reconsidered. In the absence of velocity shear, analysis of the eddy-damped quasi-normal Markovian closure predicts only a benign logarithmic dependence on k{sub min}. The original divergence is traced to an inconsistent resonance-broadening type of diffusive approximation, which fails in the present problem. Ballistic scaling of renormalized pulses is retained, but such scaling does not, by itself, imply a paradigm of self-organized criticality. An improved scaling formula for a model with velocity shear is also given.
Date: January 19, 2000
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

On some additional recollections, and the absence thereof, about the early history of computer simulations in statistical mechanics

Description: This lecture is an extension and correction of a previous lecture given by the author ten years ago at "Corso 97" in Varenna. Here again he emphasizes that his early work was exclusively with applications of the Metropolis Monte Carlo method. His only connection with the early work on the molecular dynamics method was in collaboration with Alder and Wainwright in their joint effort to reconcile the results of the Monte Carlo and molecular dynamics methods for hard spheres. Here he amplifies a point suggested by a question asked by Professor Ciccotti: namely, when was it discovered that the Metropolis method consists in the generation of a realization of a Markov chain, for which there was a large body of mathematical theory that made the justification of the method quite a simple matter?
Date: September 1, 1995
Creator: Wood, W.W.
Partner: UNT Libraries Government Documents Department
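The observation above, that the Metropolis method generates a realization of a Markov chain, is easy to see in code. A minimal sketch with a hypothetical one-dimensional harmonic energy (the original work concerned hard spheres, which are not reproduced here): each step proposes a move and accepts it with the Metropolis probability, so the stationary distribution of the chain is the Boltzmann distribution.

```python
import math
import random

def metropolis(energy, x0, n_steps, beta=1.0, step=0.5, seed=1):
    """Metropolis algorithm: a Markov chain whose stationary distribution
    is proportional to exp(-beta * energy(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        dE = energy(trial) - energy(x)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = trial  # accept the move; otherwise the chain stays put
        samples.append(x)
    return samples

# Harmonic well: with beta = 1, samples should settle near x = 0
# with unit variance once the chain has forgotten its starting point.
samples = metropolis(lambda x: 0.5 * x * x, x0=3.0, n_steps=20000)
```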

Experimental Results on Statistical Approaches to Page Replacement Policies

Description: This paper investigates the question of what statistical information about a memory request sequence is useful to have in making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences by the Markov model has been recently put into doubt, we find that two previously suggested algorithms (Maximum Hitting Time and Dominating Distribution) which are based on the Markov model work well on the trace data used in this study. Interestingly, both of these algorithms perform equally well despite the fact that the theoretical results for these two algorithms differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to have a purely online algorithm.
Date: December 8, 2000
Partner: UNT Libraries Government Documents Department
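The Markov Request Model described above can be sketched directly: pages are states and the next request depends only on the current page. For brevity this sketch evaluates plain LRU on the generated sequence rather than the paper's Maximum Hitting Time or Dominating Distribution algorithms; the 4-page transition matrix and its locality structure are hypothetical.

```python
import random
from collections import OrderedDict

def markov_requests(P, n, seed=0):
    """Generate a page-request sequence from a Markov Request Model
    with transition matrix P (rows index the current page)."""
    rng = random.Random(seed)
    page, seq = 0, []
    for _ in range(n):
        page = rng.choices(range(len(P)), weights=P[page])[0]
        seq.append(page)
    return seq

def lru_hit_rate(requests, cache_size):
    """Hit rate of LRU page replacement on the given request sequence."""
    cache, hits = OrderedDict(), 0
    for p in requests:
        if p in cache:
            hits += 1
            cache.move_to_end(p)       # mark as most recently used
        else:
            cache[p] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(requests)

# Hypothetical strongly localized chain: requests mostly stay
# within the pair {0,1} or the pair {2,3}.
P = [[0.70, 0.20, 0.05, 0.05],
     [0.20, 0.70, 0.05, 0.05],
     [0.05, 0.05, 0.70, 0.20],
     [0.05, 0.05, 0.20, 0.70]]
rate = lru_hit_rate(markov_requests(P, 5000), cache_size=2)
```

With this locality structure, a 2-page cache captures most requests even though there are 4 distinct pages.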

Transport in statistical media. Final report, May 1, 1988--May 1, 1990

Description: The technical content of these five papers is summarized in this report: Benchmark results for particle transport in a binary Markov statistical medium; Statistics, renewal theory, and particle transport; Asymptotic limits of a statistical transport description; Renormalized equations for linear transport in stochastic media; and Solution methods for discrete state Markovian initial value problems.
Date: July 1, 1990
Creator: Pomraning, G.C.
Partner: UNT Libraries Government Documents Department

A Markov Model for Assessing the Reliability of a Digital Feedwater Control System

Description: A Markov approach has been selected to represent and quantify the reliability model of a digital feedwater control system (DFWCS). The system state, i.e., whether a system fails or not, is determined by the status of the components, which can be characterized by component failure modes. Starting from the system state that has no component failure, possible transitions out of it are all failure modes of all components in the system. Each additional component failure mode produces a different system state that may or may not be a system failure state. The Markov transition diagram is developed by strictly following the sequences of component failures (i.e., failure sequences) because different orders of the same set of failures may affect the system in completely different ways. The formulation and quantification of the Markov model, together with the proposed FMEA (Failure Modes and Effects Analysis) approach, and the development of the supporting automated FMEA tool are considered the three major elements of a generic conceptual framework under which the reliability of digital systems can be assessed.
Date: February 11, 2009
Creator: Chu,T.L.; Yue, M.; Martinez-Guridi, G. & Lehner, J.
Partner: UNT Libraries Government Documents Department
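The state-transition idea above can be illustrated with a deliberately tiny model. This is not the DFWCS model: the sketch assumes a hypothetical two-channel controller, collapses failure ordering (which the paper tracks explicitly), and uses illustrative per-step failure probabilities rather than plant data.

```python
import numpy as np

# States of a hypothetical two-channel controller:
# 0 = both channels healthy, 1 = one channel failed, 2 = system failed (absorbing).
p_fail = 1e-3  # illustrative per-channel failure probability per time step
P = np.array([
    [(1 - p_fail) ** 2, 2 * p_fail * (1 - p_fail), p_fail ** 2],
    [0.0,               1 - p_fail,                p_fail],
    [0.0,               0.0,                       1.0],
])

def unreliability(P, steps):
    """Probability the system has entered the absorbing failure
    state (state 2) by the given step."""
    state = np.array([1.0, 0.0, 0.0])      # start with no component failed
    state = state @ np.linalg.matrix_power(P, steps)
    return state[2]

u = unreliability(P, 1000)
```

Because the failure state is absorbing, unreliability is monotone in the mission time, which provides a quick sanity check on the matrix.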

Microstructural dependence of cavitation damage in polycrystalline materials. Final report, 1 November 1992--31 October 1994

Description: Microstructure of a sample of Inconel X-750 damaged by ISCC (intergranular stress corrosion cracking) was examined after fatigue precracking in a high-temperature environment of deaerated water. Orientation imaging microscopy was used to reveal the microstructure adjacent to the crack path. General high-angle boundaries were found to be most susceptible to cracking. An ordering of the susceptibilities to ISCC damage was proposed; all boundaries have been classified into one of 12 categories. A model is proposed to predict the crack path for ISCC based on an ex situ record of damage probabilities. The cracking is modeled as a Markov chain on a regular hexagonal array of grain boundaries representing the connectivity of the network.
Date: February 5, 1996
Creator: Adams, B.L.
Partner: UNT Libraries Government Documents Department

Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

Description: The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Date: November 5, 1996
Creator: Hogden, J.
Partner: UNT Libraries Government Documents Department

Non-Gaussian statistics, classical field theory, and realizable Langevin models

Description: The direct-interaction approximation (DIA) to the fourth-order statistic Z {approximately} {l_angle}({lambda}{psi}{sup 2}){sup 2}{r_angle}, where {lambda} is a specified operator and {psi} is a random field, is discussed from several points of view distinct from that of Chen et al. [Phys. Fluids A 1, 1844 (1989)]. It is shown that the formula for Z{sub DIA} already appeared in the seminal work of Martin, Siggia, and Rose [Phys. Rev. A 8, 423 (1973)] on the functional approach to classical statistical dynamics. It does not follow from the original generalized Langevin equation (GLE) of Leith [J. Atmos. Sci. 28, 145 (1971)] and Kraichnan [J. Fluid Mech. 41, 189 (1970)] (frequently described as an amplitude representation for the DIA), in which the random forcing is realized by a particular superposition of products of random variables. The relationship of that GLE to renormalized field theories with non-Gaussian corrections ("spurious vertices") is described. It is shown how to derive an improved representation, that realizes cumulants through O({psi}{sup 4}), by adding to the GLE a particular non-Gaussian correction. A Markovian approximation Z{sub DIA}{sup M} to Z{sub DIA} is derived. Both Z{sub DIA} and Z{sub DIA}{sup M} incorrectly predict a Gaussian kurtosis for the steady state of a solvable three-mode example.
Date: November 1, 1995
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department

The effects of internal fluctuations on a class of nonequilibrium statistical field theories

Description: A class of models with applications to swarm behavior as well as many other types of spatially extended complex biological and physical systems is studied. Internal fluctuations can play an active role in the organization of the phase structure of such systems. In particular, for the class of models studied here the effect of internal fluctuations due to finite size is a renormalized decrease in the temperature near the point of spontaneous symmetry breaking.
Date: July 1, 1993
Creator: Millonas, M. M.
Partner: UNT Libraries Government Documents Department

Adaptive Dynamic Bayesian Networks

Description: A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.
Date: October 26, 2007
Creator: Ng, B M
Partner: UNT Libraries Government Documents Department
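A minimal sketch of the fixed-structure baseline the paper generalizes: a two-slice dynamic Bayesian network with one hidden chain X and one observed child Y, each governed by a fixed conditional probability table. All CPT values are illustrative; an adaptive DBN in the paper's sense would additionally let these tables and edges change over time.

```python
import random

# P(X_{t+1} | X_t) and P(Y_t | X_t) as conditional probability tables.
# All probabilities here are illustrative, not from the paper.
trans = {0: [0.9, 0.1], 1: [0.2, 0.8]}
emit  = {0: [0.8, 0.2], 1: [0.3, 0.7]}

def sample_dbn(T, seed=0):
    """Ancestral sampling from the two-slice DBN: roll the hidden state
    forward, then sample the observation conditioned on it."""
    rng = random.Random(seed)
    x, xs, ys = 0, [], []
    for _ in range(T):
        x = rng.choices([0, 1], weights=trans[x])[0]  # hidden transition
        y = rng.choices([0, 1], weights=emit[x])[0]   # conditional emission
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = sample_dbn(1000)
```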

Assessing confidence in phylogenetic trees : bootstrap versus Markov chain Monte Carlo

Description: Recent implementations of Bayesian approaches are one of the largest advances in phylogenetic tree estimation in the last 10 years. Markov chain Monte Carlo (MCMC) is used in these new approaches to estimate the Bayesian posterior probability for each tree topology of interest. Our goal is to assess the confidence in the estimated tree (particularly in whether prespecified groups are monophyletic) using MCMC and to compare the Bayesian estimate of confidence to a bootstrap-based estimate of confidence. We compare the Bayesian posterior probability to the bootstrap probability for specified groups in two real sets of influenza sequences and two sets of simulated sequences. We conclude that the bootstrap estimate is adequate compared to the MCMC estimate except perhaps if the number of DNA sites is small.
Date: January 1, 2002
Creator: Burr, Tom; Doak, J. E. (Justin E.); Gattiker, J. R. (James R.) & Stanbro, W. D. (William D.)
Partner: UNT Libraries Government Documents Department
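The bootstrap side of the comparison can be sketched very simply. This is an illustrative toy, not the paper's analysis: instead of rebuilding full trees, each bootstrap replicate resamples alignment columns and checks whether two named sequences are each other's nearest neighbors under Hamming distance, a crude proxy for support of their grouping. The four-sequence alignment is invented.

```python
import random

def hamming(a, b, cols):
    """Number of resampled columns at which sequences a and b differ."""
    return sum(a[i] != b[i] for i in cols)

def bootstrap_support(seqs, pair, n_boot=200, seed=0):
    """Fraction of bootstrap replicates (columns resampled with replacement)
    in which the paired sequences are mutual nearest neighbors."""
    rng = random.Random(seed)
    names = list(seqs)
    L = len(next(iter(seqs.values())))
    a, b = pair
    support = 0
    for _ in range(n_boot):
        cols = [rng.randrange(L) for _ in range(L)]
        da = {n: hamming(seqs[a], seqs[n], cols) for n in names if n != a}
        db = {n: hamming(seqs[b], seqs[n], cols) for n in names if n != b}
        if min(da, key=da.get) == b and min(db, key=db.get) == a:
            support += 1
    return support / n_boot

# Toy alignment: A and B differ at one site; C and D are distant from both.
seqs = {"A": "ACGTACGTAC", "B": "ACGTACGTAT",
        "C": "TTTTGGGGCC", "D": "TTTTGGGGCA"}
s = bootstrap_support(seqs, ("A", "B"))
```

With A and B nearly identical, nearly every replicate supports their grouping, so s sits close to 1.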

Markov sequential pattern recognition : dependency and the unknown class.

Description: The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
Date: October 1, 2004
Creator: Malone, Kevin Thomas; Haschke, Greg Benjamin & Koch, Mark William
Partner: UNT Libraries Government Documents Department
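A minimal sketch of the plain (independent-observation) SPRT the abstract builds on, here for a Bernoulli rate; the Markov and GOF extensions in the paper are not shown, and the hypothesized rates and error levels are arbitrary.

```python
import math
import random

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate,
    H0: p = p0 versus H1: p = p1.  Returns (decision, observations used)."""
    A = math.log((1 - beta) / alpha)   # cross above: accept H1
    B = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(observations, 1):
        # accumulate the log-likelihood ratio one observation at a time
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", len(observations)

rng = random.Random(3)
data = [rng.random() < 0.8 for _ in range(500)]  # stream with true rate 0.8
decision, n = sprt(data, p0=0.2, p1=0.8)
```

The appeal captured by the abstract's first sentence is visible here: with well-separated hypotheses the test typically stops after only a handful of observations.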

Markov models and the ensemble Kalman filter for estimation of sorption rates.

Description: Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
Date: September 1, 2007
Creator: Vugrin, Eric D.; McKenna, Sean Andrew (Sandia National Laboratories, Albuquerque, NM) & Vugrin, Kay White
Partner: UNT Libraries Government Documents Department
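The EnKF analysis step at the heart of the second approach can be sketched for a single scalar rate. This is a strongly simplified illustration, not the paper's model: the observation operator is the identity, the "true" sorption rate (0.5) and the biased initial ensemble are invented, and the Markov mass-transfer forward model is omitted.

```python
import numpy as np

def enkf_update(ensemble, predicted_obs, obs, obs_var, rng):
    """EnKF analysis step: nudge each ensemble member toward a perturbed
    copy of the observation, using the sample covariance as the gain."""
    cov_xy = np.cov(ensemble, predicted_obs)[0, 1]
    var_y = np.var(predicted_obs, ddof=1) + obs_var
    K = cov_xy / var_y                       # sample Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=len(ensemble))
    return ensemble + K * (perturbed - predicted_obs)

rng = np.random.default_rng(0)
true_rate = 0.5                          # hypothetical "true" sorption rate
ens = rng.normal(1.5, 0.5, size=100)     # biased initial ensemble of rates
for _ in range(10):                      # assimilate repeated observations
    pred = ens                           # identity observation operator
    y = true_rate + rng.normal(0.0, 0.05)
    ens = enkf_update(ens, pred, y, obs_var=0.05 ** 2, rng=rng)
```

Even with a strongly biased starting ensemble, a few assimilation cycles pull the ensemble mean toward the true rate while shrinking its spread, which mirrors the bias-correction behavior the abstract reports.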

Uncertainty estimation in reconstructed deformable models

Description: One of the hallmarks of the Bayesian approach to modeling is the posterior probability, which summarizes all uncertainties regarding the analysis. Using a Markov Chain Monte Carlo (MCMC) technique, it is possible to generate a sequence of objects that represent random samples drawn from the posterior distribution. We demonstrate this technique for reconstructions of two-dimensional objects from noisy projections taken from two directions. The reconstructed object is modeled in terms of a deformable geometrically-defined boundary with a constant interior density yielding a nonlinear reconstruction problem. We show how an MCMC sequence can be used to estimate uncertainties in the location of the edge of the reconstructed object.
Date: December 31, 1996
Creator: Hanson, K.M.; Cunningham, G.S. & McKee, R.
Partner: UNT Libraries Government Documents Department

Mathematical and geological approaches to minimizing the data requirements for statistical analysis of hydraulic conductivity. Technical completion report

Description: Field-scale heterogeneity has been recognized as a dominant control on solute dispersion in groundwater. Numerous random field models exist for quantifying heterogeneity and its influence on solute transport. Minimizing data requirements in model selection and subsequent parameterization will be necessary for efficient application of quantitative models in contaminated subsurface environments. In this study, a detailed quantitative sedimentological study is performed to address the issue of incorporating geologic information into the geostatistical characterization process. A field air-minipermeameter is developed for rapid in-situ measurements. The field study was conducted on an outcrop of fluvial/interfluvial deposits of the Pliocene-Pleistocene Sierra Ladrones Formation in the Albuquerque Basin of central New Mexico. Architectural element analysis is adopted for mapping and analysis of depositional environment. Geostatistical analysis is performed at two scales. At the architectural element scale, geostatistical analysis of assigned mean log-permeabilities of a 0.16 km{sup 2} peninsular region indicates that the directions of maximum and minimum correlation correspond to the directions of the large-scale depositional processes. At the facies scale, permeability is found to be adequately represented as a log-normal process. Log-permeability within individual lithofacies appears uncorrelated. The overall correlation structure at the facies scale is found to be a function of the mean log-permeability and spatial distribution of the individual lithofacies. Based on field observations of abrupt spatial changes in lithology and hydrologic properties, an algorithm for simulating multi-dimensional discrete Markov random fields is developed. Finally, a conceptual model is constructed relating the information inferred from depositional environment analysis to the various random fields of heterogeneity.
Date: December 1, 1992
Creator: Phillips, F.M.; Wilson, J.L.; Gutjahr, A.L.; Love, D.W.; Davis, J.M.; Lohmann, R.C. et al.
Partner: UNT Libraries Government Documents Department