Search Results

A probabilistic tornado wind hazard model for the continental United States

Description: A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to follow a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically from the observed historical events within the CONUS. The hazard model is an areal probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
Date: April 19, 1999
Creator: Hossain, Q; Kimball, J; Mensing, R & Savy, J
Partner: UNT Libraries Government Documents Department
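
The Poisson occurrence assumption in the record above lends itself to a compact illustration. The following sketch (plain Python; the rate, area, and exposure values are hypothetical, not taken from the report) computes the probability of at least one tornado strike on a facility:

    # Illustration of a Poisson strike-probability calculation; all numbers are made up.
    import math

    def strike_probability(rate_per_km2_per_yr, effective_area_km2, years):
        """P(at least one strike) when touchdowns follow a Poisson process in time."""
        expected_hits = rate_per_km2_per_yr * effective_area_km2 * years
        return 1.0 - math.exp(-expected_hits)

    # Hypothetical values: regional touchdown rate, effective target area, exposure period.
    print(strike_probability(rate_per_km2_per_yr=1e-4, effective_area_km2=0.5, years=50))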

A comparison of approximate reasoning results using information uncertainty

Description: An Approximate Reasoning (AR) model is a useful alternative to a probabilistic model when there is a need to draw conclusions from information that is qualitative. For certain systems, much of the information available is elicited from subject matter experts (SMEs). One such example is the risk of attack on a particular facility by a pernicious adversary. In this example there are several avenues of attack, i.e., scenarios, and AR can be used to model the risk of attack associated with each scenario. The qualitative information available and provided by the SMEs consists of linguistic values, which are well suited for an AR model but meager for other modeling approaches. AR models can produce many competing results. Associated with each competing AR result is a vector of linguistic values and a respective degree of membership in each value. A suitable means to compare and segregate AR results would be an invaluable tool for analysts and decision makers. A viable method would be to quantify the information uncertainty present in each AR result and then use the measured quantity comparatively. One issue with measuring the information uncertainty of fuzzy quantities is that previously proposed approaches focus on the information uncertainty within the entire fuzzy set. This paper proposes extending measures of information uncertainty to AR results, which involve only one degree of membership for each fuzzy set included in the AR result. An approach to quantify the information uncertainty in the AR result is presented.
Date: January 1, 2009
Creator: Chavez, Gregory; Key, Brian; Zerkle, David & Shevitz, Daniel
Partner: UNT Libraries Government Documents Department
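
To make the comparison idea above concrete, the sketch below scores competing AR results with a generic entropy-like measure over their membership degrees; this is a stand-in for illustration only, not the measure proposed in the paper, and the linguistic values and numbers are invented:

    # Hypothetical stand-in for an information-uncertainty score on AR results.
    import math

    def uncertainty_score(result):
        """result: dict mapping linguistic value -> degree of membership.
        Returns a Shannon-entropy-like score over the normalized memberships."""
        total = sum(result.values())
        probs = [m / total for m in result.values() if m > 0]
        return -sum(p * math.log2(p) for p in probs)

    # Two competing AR results for one attack scenario (invented values):
    result_a = {"low": 0.1, "medium": 0.7, "high": 0.2}
    result_b = {"low": 0.4, "medium": 0.4, "high": 0.2}
    print("a:", uncertainty_score(result_a), " b:", uncertainty_score(result_b))

The result with the lower score is the less ambiguous of the two, which is the kind of ranking an analyst could use to segregate competing results.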

Function estimation by feedforward sigmoidal networks with bounded weights

Description: The authors address the problem of PAC (probably approximately correct) learning of functions $f: [0,1]^d \to [-K, K]$, based on an i.i.d. (independent and identically distributed) sample generated according to an unknown distribution, by using feedforward sigmoidal networks. They use two basic properties of neural networks with bounded weights, namely: (a) they form a Euclidean class, and (b) for hidden units of the form $\tanh(\gamma z)$ they are Lipschitz functions. Either property yields sample sizes for PAC function learning under any Lipschitz cost function. The sample size based on the first property is tighter than the known bounds based on VC-dimension. The second estimate yields a sample size that can be conveniently adjusted by a single parameter, $\gamma$, related to the hidden nodes.
Date: May 1, 1996
Creator: Rao, N.S.V.; Protopopescu, V. & Qiao, H.
Partner: UNT Libraries Government Documents Department
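
For reference, the generic PAC criterion for function learning takes the following form (this is the standard formulation, not the paper's specific sample-size bounds, which rest on the Euclidean-class and Lipschitz properties): a sample of size $m$ suffices if

    \[
    \Pr\Bigl\{\, \sup_{f \in \mathcal{F}} \bigl| \hat{I}_m(f) - I(f) \bigr| > \varepsilon \Bigr\} < \delta ,
    \]

where $I(f)$ is the expected cost of $f$ under the unknown distribution, $\hat{I}_m(f)$ is its empirical cost on the $m$ samples, and $(\varepsilon, \delta)$ are the accuracy and confidence parameters.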

Bayesian stratified sampling to assess corpus utility

Description: This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
Date: December 1998
Creator: Hochberg, J.; Scovel, C.; Thomas, T. & Hall, S.
Partner: UNT Libraries Government Documents Department
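
A minimal, unstratified version of the estimate described above can be sketched with a Beta-Binomial model; the counts below are invented, and the paper's stratification step, which provides the reported reduction in uncertainty, is omitted:

    # Unstratified Beta-Binomial sketch; sample counts are hypothetical.
    from scipy import stats

    n_sampled, n_real = 200, 150                # hypothetical evaluation of 200 documents
    corpus_size = 45820
    posterior = stats.beta(1 + n_real, 1 + n_sampled - n_real)   # uniform Beta(1, 1) prior

    lo, hi = posterior.ppf([0.025, 0.975])
    print(f"Estimated real documents: {corpus_size * posterior.mean():.0f} "
          f"(95% credible interval {corpus_size * lo:.0f} to {corpus_size * hi:.0f})")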

Behavior of the finite-sized, three-dimensional, Ising model near the critical point

Description: Recent work showing the validity of hyperscaling involved results for finite size systems very near the critical point. The authors study this problem in more detail, and give estimators related to the Binder cumulant ratio which seem to approach the critical temperature from above and below. Based on these results, they estimate that the renormalized coupling constant, computed for the temperature fixed at the critical temperature and then taking the large system-size limit, is about $4.9 \pm 0.1$, and give a likely lower bound for it of 4.5. These estimates are argued to suffice to show the validity of hyperscaling.
Date: May 1, 1996
Creator: Baker, G.A. Jr. & Gupta, R.
Partner: UNT Libraries Government Documents Department
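
For context, the Binder cumulant ratio mentioned above is conventionally defined (standard definition, not quoted from the paper) as

    \[
    U_L = 1 - \frac{\langle m^4 \rangle_L}{3\,\langle m^2 \rangle_L^{2}},
    \]

where $m$ is the magnetization of a system of linear size $L$; curves of $U_L$ for different $L$ cross near the critical temperature, and a closely related quantity is the renormalized coupling constant estimated above as $4.9 \pm 0.1$.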

Radiation protection in space

Description: The challenge for planning radiation protection in space is to estimate the risk of events of low probability after low levels of irradiation. This work has revealed many gaps in the present state of knowledge that require further study. Despite investigations of several irradiated populations, the atomic-bomb survivors remain the primary basis for estimating the risk of ionizing radiation. Two new independent evaluations of the available information indicate a risk of stochastic effects of radiation (cancer and genetic effects) for radiation workers that is about a factor of three greater than previous estimates. This paper presents a brief historical perspective of the international effort to assure radiation protection in space.
Date: February 1, 1995
Creator: Blakely, E. A. & Fry, R. J. M.
Partner: UNT Libraries Government Documents Department

Lifetime difference of B{sub s} mesons and its implications

Description: The authors discuss the calculation of the width difference $\Delta\Gamma_{B_s}$ between the $B_s$ mass eigenstates to next-to-leading order in the heavy quark expansion. $1/m_b$ corrections are estimated to reduce the leading-order result by typically 30%. The error of the present estimate, $(\Delta\Gamma/\Gamma)_{B_s} = 0.16^{+0.11}_{-0.09}$, could be substantially improved by pinning down the value of $\langle \bar{B}_s | (\bar{b}_i s_i)_{S-P} (\bar{b}_j s_j)_{S-P} | B_s \rangle$, and an accuracy of 10% in $(\Delta\Gamma/\Gamma)_{B_s}$ should eventually be reached. They briefly mention strategies to measure $(\Delta\Gamma/\Gamma)_{B_s}$, and its implications for constraints on $\Delta M_{B_s}$, CKM parameters, and the observation of CP violation in untagged $B_s$ samples.
Date: August 1, 1996
Creator: Beneke, M.
Partner: UNT Libraries Government Documents Department

Automated detection and location of structural degradation

Description: The investigation of a diagnostic method for detecting and locating the source of structural degradation in mechanical systems is described in this paper. The diagnostic method uses a mathematical model of the mechanical system to define relationships between system parameters, such as spring rates and damping rates, and measurable spectral features, such as natural frequencies and mode shapes. These model-defined relationships are incorporated into a neural network, which is used to relate measured spectral features to system parameters. The diagnosis of the system's condition is performed by presenting the neural network with measured spectral features and comparing the system parameters estimated by the neural network to previously estimated values. Changes in the estimated system parameters indicate the location and severity of degradation in the mechanical system. The investigation applied the method to computer-simulated data and to data collected from a bench-top mechanical system. The effects of neural network training set size and composition on the accuracy of the model parameter estimates were investigated using computer-simulated data. The results show that the diagnostic method can successfully locate and estimate the magnitude of structural changes in a mechanical system. The average error in the estimated spring rate values of the bench-top mechanical system was less than 10%. This degree of accuracy is sufficient to permit the use of this method for detecting and locating structural degradation in mechanical systems.
Date: March 1997
Creator: Damiano, B.; Blakeman, E. D. & Phillips, L. D.
Partner: UNT Libraries Government Documents Department
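
A much simplified stand-in for the procedure described above is sketched below: a small network is trained on model-generated (natural frequency, spring rate) pairs for a one-degree-of-freedom system, and drift in the estimated spring rate flags degradation. The mass, spring rates, and network settings are invented:

    # Simplified sketch: learn the model-defined map from natural frequency to spring rate,
    # then compare estimates against a baseline. All numbers are hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    m = 2.0                                          # assumed mass, kg
    k_train = np.linspace(1e4, 5e4, 200)             # candidate spring rates, N/m
    f_train = np.sqrt(k_train / m) / (2 * np.pi)     # 1-DOF natural frequencies, Hz

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(f_train.reshape(-1, 1), k_train / 1e4)   # scaled target eases training

    def estimate_k(frequency_hz):
        return 1e4 * net.predict([[frequency_hz]])[0]

    k_baseline = estimate_k(np.sqrt(3.0e4 / m) / (2 * np.pi))   # healthy spring
    k_degraded = estimate_k(np.sqrt(2.7e4 / m) / (2 * np.pi))   # softened spring
    print(f"Estimated spring-rate change: {100 * (k_degraded - k_baseline) / k_baseline:.1f}%")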

Probabilistic model, analysis and computer code for take-off and landing related aircraft crashes into a structure

Description: A methodology is presented that allows the calculation of the probability that any of a particular collection of structures will be hit by an aircraft in a take-off or landing related accident during a specified window of time with a velocity exceeding a given critical value. A probabilistic model is developed that incorporates the location of each structure relative to airport runways in the vicinity; the size of the structure; the sizes, types, and frequency of use of commercial, military, and general aviation aircraft which take off and land at these runways; the relative frequency of take-off and landing related accidents by aircraft type; the stochastic properties of off-runway crashes, namely impact location, impact angle, impact velocity, and the heading, deceleration, and skid distance after impact; and the stochastic properties of runway overruns and runoffs, namely the position at which the aircraft exits the runway, its exit velocity, and the heading and deceleration after exiting. Relevant probability distributions are fitted from extensive commercial, military, and general aviation accident report databases. The computer source code for implementation of the calculation is provided.
Date: February 6, 1996
Creator: Glaser, R.
Partner: UNT Libraries Government Documents Department
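
The flavor of the calculation can be conveyed by a toy Monte Carlo version (not the report's model or its fitted distributions; every distribution and number below is invented): sample off-runway impact points and velocities, and count those that land on a rectangular facility with velocity above a critical value.

    # Toy Monte Carlo sketch; distributions and parameter values are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    annual_crash_rate = 1e-4                        # hypothetical take-off/landing crashes per year

    x = rng.exponential(scale=1.0, size=n)          # impact distance along extended centerline, km
    y = rng.normal(loc=0.0, scale=0.3, size=n)      # lateral offset, km
    v = rng.lognormal(mean=4.0, sigma=0.5, size=n)  # impact velocity, m/s

    # Rectangular facility footprint and critical velocity (invented values):
    hit = (x > 1.5) & (x < 1.6) & (np.abs(y) < 0.05) & (v > 50.0)
    p_hit = hit.mean()
    print(f"P(hit with v > 50 m/s | crash) ~ {p_hit:.2e}; "
          f"annual probability ~ {annual_crash_rate * p_hit:.2e}")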

Modeling single molecule detection probabilities in microdroplets. Final report

Description: Optimization of molecular detection efficiencies is important for analytical applications of single molecule detection methods. In microdroplets some experimental limitations can be reduced, primarily because the molecule cannot diffuse away from the excitation and collection volume. Digital molecular detection using a stream of microdroplets has been proposed as a method of reducing concentration detection limits by several orders of magnitude relative to conventional measurements. However, the bending and reflection of light at the microdroplet's liquid-air interface cause the illumination intensity and the collected fluorescence intensity to depend strongly on the position of the molecule within the droplet. The goal is to model the detection of single molecules in microdroplets so that one can better understand and optimize detection efficiencies. In the first year of this modeling effort the authors studied the collection of fluorescence from unit-amplitude dipoles inside of spheres. In this second year they modified their analysis to accurately model the effects of excitation inhomogeneities, including effects of molecular saturation, motion of the droplet, and phase variations between the two counter-propagating waves that illuminate the droplet. They showed that counter-propagating plane wave illumination can decrease the variations in the intensity that excites the molecules. Also in this second year they simulated (using a Monte Carlo method) the detection of fluorescence from many droplets, each of which may contain zero or one (or, at higher concentrations, a few) fluorescent molecules.
Date: May 22, 1997
Creator: Hill, S.C.
Partner: UNT Libraries Government Documents Department
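
The droplet-stream Monte Carlo idea mentioned at the end of the record above can be sketched in a few lines; droplet occupancy is Poisson at low concentration, and detection is reduced here to a single position-averaged probability, with all values invented:

    # Minimal droplet-occupancy Monte Carlo; parameter values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    n_droplets = 100_000
    mean_occupancy = 0.1      # average molecules per droplet at low concentration
    p_detect = 0.8            # per-molecule detection probability (position-averaged)

    molecules = rng.poisson(mean_occupancy, size=n_droplets)
    detected = rng.binomial(molecules, p_detect)
    print("Fraction of droplets with at least one detected molecule:", (detected > 0).mean())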

Algorithms for fusion of multiple sensors having unknown error distributions

Description: The authors present recent results on a general sensor fusion problem in which the underlying sensor error distributions are not known, but a sample is available. They present a general method for obtaining a fusion rule based on the scale-sensitive dimension of the function class. Two computationally viable methods are reviewed, based on the Nadaraya-Watson estimator and on finite-dimensional vector spaces. Several computational issues of fusion rule estimation remain open problems. It would be interesting to obtain necessary and sufficient conditions under which polynomial-time algorithms can be used to solve the fusion rule estimation problem under this criterion. Also, conditions under which the composite system is significantly better than the best sensor would be extremely useful. Finally, lower bound estimates for various sample sizes will be very important in judging the optimality of sample size estimates.
Date: June 1, 1997
Creator: Rao, N. S. V.
Partner: UNT Libraries Government Documents Department
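
One of the two computational routes mentioned above, the Nadaraya-Watson estimator, can be sketched directly; the Gaussian kernel, bandwidth, and synthetic two-sensor data are illustrative choices, not the paper's:

    # Nadaraya-Watson style fused estimator trained on (sensor readings, ground truth) pairs.
    import numpy as np

    def nw_fuser(sensor_samples, truth_samples, bandwidth=0.5):
        """sensor_samples: (n, d) past readings from d sensors; truth_samples: (n,)."""
        def fuse(reading):
            d2 = np.sum((sensor_samples - reading) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))      # Gaussian kernel weights
            return np.sum(w * truth_samples) / np.sum(w)
        return fuse

    rng = np.random.default_rng(2)
    truth = rng.uniform(0, 10, size=500)
    readings = truth[:, None] + rng.normal(0.0, [0.5, 2.0], size=(500, 2))  # two noisy sensors
    fuse = nw_fuser(readings, truth)
    print(fuse(np.array([4.2, 5.1])))    # fused estimate for a new pair of readings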

Simultaneous Monte Carlo zero-variance estimates of several correlated means

Description: Zero-variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero-variance solutions for a single tally. One often wants to get low-variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero-variance biasing for both tallies in the same Monte Carlo run, instead of two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated, whereas particles with similar tallies will stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems.
Date: August 1, 1997
Creator: Booth, T.E.
Partner: UNT Libraries Government Documents Department
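
As background for the record above (a textbook single-tally identity, not the correlated-tally theory developed in the paper): for a non-negative score $f$ with expectation $I = \int f(x)\,p(x)\,dx$, sampling from the biased density

    \[
    q^{*}(x) = \frac{f(x)\,p(x)}{I}
    \quad\text{gives}\quad
    \frac{f(x)\,p(x)}{q^{*}(x)} = I \ \text{for every sampled } x,
    \]

so every history scores exactly $I$ and the estimator has zero variance; the catch is that constructing $q^{*}$ requires the answer $I$ itself, which is why practical schemes can only approximate it.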

Structural model uncertainty in stochastic simulation

Description: Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.
Date: September 1, 1997
Creator: McKay, M.D. & Morrison, J.D.
Partner: UNT Libraries Government Documents Department

Plausible inference and the interpretation of quantitative data

Description: The analysis of quantitative data is central to scientific investigation. Probability theory, which is founded on two rules, the sum and product rules, provides the unique, logically consistent method for drawing valid inferences from quantitative data. This primer on the use of probability theory is meant to fulfill a pedagogical purpose. The discussion begins at the foundation of scientific inference by showing how the sum and product rules of probability theory follow from some very basic considerations of logical consistency. The authors then develop general methods of probability theory that are essential to the analysis and interpretation of data. They discuss how to assign probability distributions using the principle of maximum entropy, how to estimate parameters from data, how to handle nuisance parameters whose values are of little interest, and how to determine which of a set of models is most justified by a data set. All these methods are used together in most realistic data analyses. Examples are given throughout to illustrate the basic points.
Date: February 1, 1998
Creator: Nakhleh, C.W.
Partner: UNT Libraries Government Documents Department
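
For reference, the two rules on which the primer above builds, and the Bayes' theorem that follows from them, are

    \[
    p(A \mid I) + p(\bar{A} \mid I) = 1, \qquad
    p(A, B \mid I) = p(A \mid B, I)\, p(B \mid I),
    \]
    \[
    \text{hence}\quad p(A \mid B, I) = \frac{p(B \mid A, I)\, p(A \mid I)}{p(B \mid I)},
    \]

where $I$ denotes the background information on which all probabilities are conditioned.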

Optimization of stability index versus first strike cost

Description: This note studies the impact of maximizing the stability index rather than minimizing the first strike cost in choosing offensive missile allocations. It does so in the context of a model in which exchanges between vulnerable missile forces are modeled probabilistically, converted into first and second strike costs through approximations to the value target sets at risk, and the stability index is taken to be their ratio. The allocations that minimize the first strike cost and that maximize the stability index are derived analytically for both attack preferences. The former recovers results derived earlier; the latter leads to an optimum at unity allocation, for which the stability index is determined analytically. For values of the attack preference greater than about unity, maximizing the stability index increases the cost of striking first by 10-15%. For smaller values of the attack preference, maximizing the index increases the second strike cost by a similar amount. Both effects are stabilizing, so if both sides could be trusted to target missiles in order to minimize damage to value and maximize stability, the stability index for vulnerable missiles could be increased by about 15%. However, that would increase the cost to the first striker by about 15%, and it is unclear why, having decided to strike, an attacker would do so in a way that increases damage to himself.
Date: May 1, 1997
Creator: Canavan, G. H.
Partner: UNT Libraries Government Documents Department

Probabilistic model for pressure vessel reliability incorporating fracture mechanics and nondestructive examination

Description: A probabilistic model has been developed for predicting the reliability of structures based on fracture mechanics and the results of nondestructive examination (NDE). The distinctive feature of this model is the way in which inspection results and the probability of detection (POD) curve are used to calculate a probability density function (PDF) for the number of flaws and the distribution of those flaws among the various size ranges. In combination with a probabilistic fracture mechanics model, this density function is used to estimate the probability of failure (POF) of a structure in which flaws have been detected by NDE. The model is useful for parametric studies of inspection techniques and material characteristics.
Date: March 1, 1998
Creator: Tow, D.M. & Reuter, W.G.
Partner: UNT Libraries Government Documents Department
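
A hedged sketch of how inspection results and a POD curve can feed a failure estimate is given below; it is not the paper's formulation, and it assumes Poisson flaw counts per size bin, independent detections, and an externally supplied per-flaw failure probability (all numbers invented):

    # Sketch: expected undetected flaws per size bin and the resulting failure probability.
    # Assumes Poisson flaw counts and independent detection; all values are hypothetical.
    import math

    expected_flaws = [3.0, 1.0, 0.3, 0.05]        # assumed prior mean flaw count per size bin
    pod = [0.30, 0.70, 0.95, 0.999]               # assumed probability of detection per bin
    p_fail_per_flaw = [1e-6, 1e-4, 1e-2, 0.3]     # from an assumed fracture-mechanics model

    p_survive = 1.0
    for lam, d, pf in zip(expected_flaws, pod, p_fail_per_flaw):
        lam_missed = lam * (1.0 - d)              # undetected flaws are still Poisson
        p_survive *= math.exp(-lam_missed * pf)   # P(no failure from this bin)

    print(f"Estimated probability of failure from undetected flaws: {1.0 - p_survive:.2e}")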

Shock certification of replacement subsystems and components in the presence of uncertainty

Description: In this paper, a methodology for analytically estimating the response of replacement components in a system subjected to worst-case hostile shocks is presented. This methodology does not require system testing; instead it uses previously compiled shock data and inverse dynamic analysis to estimate component shock response. In the past, component shock responses were determined from numerous system tests; however, with limitations on system testing, an alternate methodology for determining component response is required. Such a methodology is discussed. This methodology is mathematically complex in that two inverse problems, and a forward problem, must be solved for a permutation of models representing variability in the dynamics. Two conclusions were drawn from this work. First, the present methodology produces overly conservative results. Second, the specification of system variability is critical to the prediction of component response.
Date: May 8, 2000
Creator: Dohner, Jeffrey L. & Lauffer, James P.
Partner: UNT Libraries Government Documents Department

New methods for predicting lifetimes. Part 2 -- The Wear-out approach for predicting the remaining lifetime of materials

Description: The so-called Palmgren-Miner concept that degradation is cumulative, so that failure is the direct result of the accumulation of damage with time, has been known for decades. Cumulative damage models based on this concept have been derived and used mainly for fatigue life predictions for metals and composite materials. The authors review the principles underlying such models and suggest ways in which they may best be applied to polymeric materials in thermal environments. The authors first consider cases where polymer degradation data can be rigorously time-temperature superposed over a given temperature range. For a step change in temperature after damage has occurred at an initial temperature in this range, they show that the remaining lifetime at the second temperature should be linearly related to the aging time prior to the step. This predicted linearity implies that it may be possible to estimate the remaining lifetime of polymeric materials aging under application ambient conditions by completing the aging at an accelerated temperature. They refer to this generic temperature-step method as the Wear-out approach. They then outline the expectations for Wear-out experiments when time-temperature superposition is invalid, specifically describing the two cases where so-called interaction effects are absent and where they are present. Finally, they present some preliminary results outlining the application of the Wear-out approach to polymers. In analyzing the experimental Wear-out results, they introduce a procedure that they refer to as time-damage superposition. This procedure not only utilizes all of the experimental data instead of a single point from each data set, but also allows them to determine the importance of any interaction effects.
Date: April 20, 2000
Creator: Gillen, Kenneth T. & Celina, Mathias C.
Partner: UNT Libraries Government Documents Department
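
The linearity claim in the record above follows from the standard Palmgren-Miner form of cumulative damage (a standard relation, not quoted from the paper): if aging for time $t_1$ at temperature $T_1$ is followed by aging to failure at $T_2$, then

    \[
    \frac{t_1}{\tau(T_1)} + \frac{t_2}{\tau(T_2)} = 1
    \quad\Longrightarrow\quad
    t_2 = \tau(T_2)\left(1 - \frac{t_1}{\tau(T_1)}\right),
    \]

where $\tau(T)$ is the lifetime at temperature $T$; the remaining lifetime $t_2$ is linear in the prior aging time $t_1$, which is the relation the Wear-out approach exploits.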

Representation of Random Shock via the Karhunen Loeve Expansion

Description: Shock excitations are normally random process realizations, and most of our efforts to represent them either directly or indirectly reflect this fact. The most common indirect representation of shock sources is the shock response spectrum. It seeks to establish the damage-causing potential of random shocks in terms of responses excited in linear, single-degree-of-freedom systems. This paper shows that shock sources can be represented directly by developing the probabilistic and statistical structure that underlies the random shock source. Confidence bounds on process statistics and probabilities of specific excitation levels can be established from the model. Some numerical examples are presented.
Date: December 8, 2000
Creator: Paez, Thomas L. & Hunter, Norman F.
Partner: UNT Libraries Government Documents Department
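
A discrete Karhunen-Loeve representation of the kind described above can be sketched by eigen-decomposing a sample covariance matrix; the synthetic shock ensemble below (random decaying sinusoids) and the number of retained terms are illustrative choices:

    # Discrete KL expansion of a synthetic shock ensemble; the ensemble is invented.
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 256)
    ensemble = np.array([
        a * np.exp(-5 * t) * np.sin(2 * np.pi * f * t) + 0.05 * rng.normal(size=t.size)
        for a, f in zip(rng.uniform(0.5, 1.5, 200), rng.uniform(8, 12, 200))
    ])

    mean = ensemble.mean(axis=0)
    cov = np.cov(ensemble - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]     # reorder to descending

    k = 10                                                 # retain the dominant terms
    coeffs = (ensemble - mean) @ eigvecs[:, :k]            # KL coefficients per realization
    reconstructed = mean + coeffs @ eigvecs[:, :k].T
    rms_err = np.sqrt(np.mean((reconstructed - ensemble) ** 2))
    print(f"Variance captured by {k} terms: {eigvals[:k].sum() / eigvals.sum():.3f}, "
          f"RMS reconstruction error: {rms_err:.4f}")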