Search Results

A comparison of approximate reasoning results using information uncertainty

Description: An Approximate Reasoning (AR) model is a useful alternative to a probabilistic model when there is a need to draw conclusions from information that is qualitative. For certain systems, much of the information available is elicited from subject matter experts (SMEs). One such example is the risk of attack on a particular facility by a pernicious adversary. In this example there are several avenues of attack, i.e. scenarios, and AR can be used to model the risk of attack associated with each scenario. The qualitative information available and provided by the SMEs consists of linguistic values, which are well suited to an AR model but meager for other modeling approaches. AR models can produce many competing results. Associated with each competing AR result is a vector of linguistic values and a respective degree of membership in each value. A suitable means to compare and segregate AR results would be an invaluable tool to analysts and decision makers. A viable method would be to quantify the information uncertainty present in each AR result and then use the measured quantity comparatively. One issue of concern for measuring the information uncertainty involved with fuzzy uncertainty is that previously proposed approaches focus on the information uncertainty involved within the entire fuzzy set. This paper proposes extending measures of information uncertainty to AR results, which involve only one degree of membership for each fuzzy set included in the AR result. An approach to quantify the information uncertainty in the AR result is presented.
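
A minimal illustrative sketch, not the measure proposed in the paper: score a competing AR result, given only its vector of membership degrees, with a normalized entropy so that results can be ranked by information uncertainty. The linguistic values and membership degrees below are invented.

```python
# Illustrative comparator for AR results (assumed measure, not the paper's):
# lower score = more informative (crisper) result.
import math

def uncertainty_score(memberships):
    """memberships: dict mapping linguistic value -> degree of membership in [0, 1]."""
    if len(memberships) <= 1:
        return 0.0
    total = sum(memberships.values())
    if total == 0:
        return 0.0
    probs = [m / total for m in memberships.values() if m > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(memberships))  # 0 = crisp, 1 = maximally ambiguous

# Two competing AR results for the risk of one attack scenario (invented)
result_a = {"low": 0.1, "medium": 0.8, "high": 0.1}
result_b = {"low": 0.4, "medium": 0.3, "high": 0.3}
print(uncertainty_score(result_a))  # smaller -> less information uncertainty
print(uncertainty_score(result_b))
```
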
Date: January 1, 2009
Creator: Chavez, Gregory; Key, Brian; Zerkle, David & Shevitz, Daniel
Partner: UNT Libraries Government Documents Department

A probabilistic tornado wind hazard model for the continental United States

Description: A probabilistic tornado wind hazard model for the continental United States (CONUS) is described. The model incorporates both aleatory (random) and epistemic uncertainties associated with quantifying the tornado wind hazard parameters. The temporal occurrence of tornadoes within the CONUS is assumed to be a Poisson process. A spatial distribution of tornado touchdown locations is developed empirically based on the observed historical events within the CONUS. The hazard model is an aerial probability model that takes into consideration the size and orientation of the facility, the length and width of the tornado damage area (idealized as a rectangle and dependent on the tornado intensity scale), wind speed variation within the damage area, tornado intensity classification errors (i.e., errors in assigning a Fujita intensity scale based on surveyed damage), and the tornado path direction. Epistemic uncertainties in describing the distributions of the aleatory variables are accounted for by using more than one distribution model to describe aleatory variations. The epistemic uncertainties are based on inputs from a panel of experts. A computer program, TORNADO, has been developed incorporating this model; features of this program are also presented.
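
As a rough illustration of the aleatory core of such a model (the rate, areas, and exposure period below are assumed, not the report's values), a Poisson arrival rate combined with a simple area-ratio strike model gives the probability of at least one damaging tornado strike on a facility.

```python
# Hedged sketch: Poisson tornado arrivals plus an area-ratio strike model.
import math

tornado_rate = 2.0e-3      # tornadoes per year per km^2 in the surrounding region (assumed)
damage_area = 1.5          # mean damage-path length x width, km^2 (assumed)
facility_area = 0.02       # facility footprint, km^2 (assumed)
years = 50.0               # exposure period (assumed)

# Under uniformly distributed touchdowns, the annual rate of a damage area
# overlapping the facility footprint:
strike_rate = tornado_rate * (damage_area + facility_area)     # per year
p_at_least_one = 1.0 - math.exp(-strike_rate * years)
print(f"P(at least one damaging strike in {years:.0f} yr) = {p_at_least_one:.2e}")
```
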
Date: April 19, 1999
Creator: Hossain, Q; Kimball, J; Mensing, R & Savy, J
Partner: UNT Libraries Government Documents Department

Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility

Description: The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario. Therefore, dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which will be discussed in the full paper.
Date: March 1, 2011
Creator: Elicson, Tom; Harwood, Bentley; Yorg, Richard; Lucek, Heather; Bouchard, Jim; Jukkola, Ray et al.
Partner: UNT Libraries Government Documents Department

SRS BEDROCK PROBABILISTIC SEISMIC HAZARD ANALYSIS (PSHA) DESIGN BASIS JUSTIFICATION (U)

Description: This report assesses the available Savannah River Site (SRS) hard-rock probabilistic seismic hazard assessments (PSHAs), including PSHAs recently completed, for incorporation in the SRS seismic hazard update. The prior assessment of the SRS seismic design basis (WSRC, 1997) incorporated the results from two PSHAs that were published in 1988 and 1993. Because of the vintage of these studies, an assessment is necessary to establish the value of these PSHAs considering more recently collected data affecting seismic hazards and the availability of more recent PSHAs. This task is consistent with the Department of Energy (DOE) order DOE O 420.1B and DOE guidance document DOE G 420.1-2. Following DOE guidance, the National Map hazard was reviewed and incorporated in this assessment. In addition to the National Map hazard, alternative ground motion attenuation models (GMAMs) are used with the National Map source model to produce alternate hazard assessments for the SRS. These hazard assessments are the basis for the updated hard-rock hazard recommendation made in this report. The development and comparison of hazard based on the National Map models and on PSHAs completed using alternate GMAMs provide increased confidence in this hazard recommendation. The alternate GMAMs are EPRI (2004), USGS (2002), and a region-specific model (Silva et al., 2004). Weights of 0.6, 0.3, and 0.1 are recommended for EPRI (2004), USGS (2002), and Silva et al. (2004), respectively. This weighting gives cluster weights of 0.39, 0.29, 0.15, and 0.17 for the 1-corner, 2-corner, hybrid, and Green's-function models, respectively. This assessment is judged to be conservative as compared to WSRC (1997) and incorporates the range of prevailing expert opinion pertinent to the development of seismic hazard at the SRS. The corresponding SRS hard-rock uniform hazard spectra are greater than the design spectra developed in WSRC (1997), which were based on the LLNL ...
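
A minimal sketch of the logic-tree weighting described above, using the recommended GMAM weights of 0.6, 0.3, and 0.1; the ground-motion grid and annual exceedance frequencies are placeholders, not SRS results.

```python
# Weighted combination of hazard curves from the three GMAMs (illustrative values).
ground_motions = [0.1, 0.2, 0.4, 0.8]   # peak ground acceleration, g (assumed grid)
hazard = {                               # annual frequency of exceedance per GMAM (invented)
    "EPRI_2004":  [1.0e-3, 3.0e-4, 6.0e-5, 8.0e-6],
    "USGS_2002":  [1.2e-3, 4.0e-4, 8.0e-5, 1.0e-5],
    "Silva_2004": [9.0e-4, 2.5e-4, 5.0e-5, 7.0e-6],
}
weights = {"EPRI_2004": 0.6, "USGS_2002": 0.3, "Silva_2004": 0.1}

weighted = [
    sum(weights[m] * hazard[m][i] for m in hazard)
    for i in range(len(ground_motions))
]
for g, h in zip(ground_motions, weighted):
    print(f"PGA {g:.1f} g : {h:.2e} /yr")
```
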
Date: December 14, 2005
Creator: Lee, R. C. & McHood, M. D.
Partner: UNT Libraries Government Documents Department

Risk assessment framework for geologic carbon sequestration sites

Description: We have developed a simple and transparent approach for assessing CO{sub 2} and brine leakage risk associated with CO{sub 2} injection at geologic carbon sequestration (GCS) sites. The approach, called the Certification Framework (CF), is based on the concept of effective trapping, which takes into account both the probability of leakage from the storage formation and the impacts of leakage. The effective trapping concept acknowledges that GCS can be safe and effective even if some CO{sub 2} and brine were to escape from the storage formation, provided the impact of such leakage is below agreed-upon limits. The CF uses deterministic process models to calculate expected well- and fault-related leakage fluxes and concentrations. These in turn quantify the impacts under a given leakage scenario to so-called 'compartments,' which comprise collections of vulnerable entities. The probabilistic part of the calculated risk comes from the likelihood of (1) intersections of the injected CO{sub 2} and related pressure perturbations with well or fault leakage pathways, and (2) intersections of leakage pathways with compartments. Two innovative approaches for predicting leakage likelihood, namely (1) fault statistics and (2) fuzzy rules for fault and fracture intersection probability, are highlighted here.
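
A hedged sketch of the CF risk logic as described (every probability, impact, and limit below is invented for illustration): expected impact to a compartment is the product of the intersection likelihoods and the consequence given leakage.

```python
# Illustrative CF-style risk calculation for one compartment (assumed numbers).
p_plume_hits_pathway = 0.05    # plume/pressure perturbation intersects a well or fault (assumed)
p_pathway_hits_compartment = 0.20   # leakage pathway intersects the compartment (assumed)
impact_given_leak = 3.0        # consequence in agreed-upon units, from process models (assumed)
acceptable_limit = 0.1         # agreed-upon limit for this compartment (assumed)

expected_impact = p_plume_hits_pathway * p_pathway_hits_compartment * impact_given_leak
verdict = "within limit" if expected_impact <= acceptable_limit else "exceeds limit"
print(f"expected impact = {expected_impact:.3f} ({verdict})")
```
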
Date: February 1, 2010
Creator: Oldenburg, C.; Jordan, P.; Zhang, Y.; Nicot, J.-P. & Bryant, S.L.
Partner: UNT Libraries Government Documents Department

Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

Description: The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine whether the occurrence of an initiating event or a condition adversely impacts safety. GEM uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than are normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and its developers welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM codes along with the other SAPHIRE codes, as GEM relies on the same shared database structure.
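
A sketch of the ASP-style comparison that CCDP calculations support, not of GEM's internals: the conditional core damage probability computed with the observed failures or condition imposed is compared against the nominal core damage probability. The numbers below are assumed.

```python
# Illustrative precursor significance check (assumed values, not GEM output).
def event_importance(ccdp, nominal_cdp):
    """Importance of an operational occurrence: excess of CCDP over the nominal CDP."""
    return ccdp - nominal_cdp

ccdp = 4.0e-5          # CCDP with the degraded/failed equipment set to failed (assumed)
nominal_cdp = 2.0e-6   # baseline CDP over the same exposure period (assumed)
print(f"importance = {event_importance(ccdp, nominal_cdp):.2e}")
```
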
Date: October 1, 1995
Creator: Russell, K. D.; Kvarfordt, K. J. & Hoffman, C. L.
Partner: UNT Libraries Government Documents Department

Function estimation by feedforward sigmoidal networks with bounded weights

Description: The authors address the problem of PAC (probably approximately correct) learning of functions f : [0, 1]{sup d} {r_arrow} [{minus}K, K] based on an iid (independently and identically distributed) sample generated according to an unknown distribution, by using feedforward sigmoidal networks. They use two basic properties of neural networks with bounded weights, namely: (a) they form a Euclidean class, and (b) for hidden units of the form tanh ({gamma}z) they are Lipschitz functions. Either property yields sample sizes for PAC function learning under any Lipschitz cost function. The sample size based on the first property is tighter than the known bounds based on VC-dimension. The second estimate yields a sample size that can be conveniently adjusted by a single parameter, {gamma}, related to the hidden nodes.
Date: May 1, 1996
Creator: Rao, N.S.V.; Protopopescu, V. & Qiao, H.
Partner: UNT Libraries Government Documents Department

Bayesian stratified sampling to assess corpus utility

Description: This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
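
A sketch of the estimator under stated assumptions (a uniform Beta prior per stratum and invented stratum sizes and counts, not the paper's data): each stratum's proportion of "real" documents gets a Beta posterior, and the population estimate is the size-weighted combination.

```python
# Illustrative Bayesian stratified estimate of a corpus-wide proportion.
import numpy as np

rng = np.random.default_rng(0)
strata = [  # (documents in stratum, sampled, judged "real") -- invented counts
    (30_000, 120, 95),
    (10_000, 50, 20),
    (5_820, 30, 12),
]
total_docs = sum(n for n, _, _ in strata)

draws = np.zeros(10_000)
for n_docs, n_sampled, n_real in strata:
    # Beta(1,1) prior -> Beta(1 + successes, 1 + failures) posterior per stratum
    post = rng.beta(1 + n_real, 1 + n_sampled - n_real, size=draws.size)
    draws += (n_docs / total_docs) * post

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"fraction of real documents: {draws.mean():.3f} (95% interval {lo:.3f}-{hi:.3f})")
```
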
Date: December 1998
Creator: Hochberg, J.; Scovel, C.; Thomas, T. & Hall, S.
Partner: UNT Libraries Government Documents Department

Behavior of the finite-sized, three-dimensional, Ising model near the critical point

Description: Recent work showing the validity of hyperscaling involved results for finite-size systems very near the critical point. The authors study this problem in more detail and give estimators related to the Binder cumulant ratio that seem to approach the critical temperature from above and below. Based on these results, they estimate that the renormalized coupling constant, computed for the temperature fixed at the critical temperature and then taking the large-system-size limit, is about 4.9 {+-} 0.1, and give a likely lower bound for it of 4.5. These estimates are argued to suffice to show the validity of hyperscaling.
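
For orientation, the Binder cumulant ratio mentioned above can be computed directly from magnetization samples; the snippet below uses synthetic Gaussian data (not Ising configurations), for which the ratio is zero by construction.

```python
# Binder cumulant U = 1 - <m^4> / (3 <m^2>^2) from magnetization samples.
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(0.0, 0.3, size=100_000)   # stand-in for per-configuration magnetization
U = 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)
print(f"Binder cumulant U = {U:.3f}")     # ~0 for purely Gaussian fluctuations
```
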
Date: May 1, 1996
Creator: Baker, G.A. Jr. & Gupta, R.
Partner: UNT Libraries Government Documents Department

Radiation protection in space

Description: The challenge for planning radiation protection in space is to estimate the risk of events of low probability after low levels of irradiation. This work has revealed many gaps in the present state of knowledge that require further study. Despite investigations of several irradiated populations, the atomic-bomb survivors remain the primary basis for estimating the risk of ionizing radiation. Compared to previous estimates, two new independent evaluations of available information indicate a significantly greater risk of stochastic effects of radiation (cancer and genetic effects) by about a factor of three for radiation workers. This paper presents a brief historical perspective of the international effort to assure radiation protection in space.
Date: February 1, 1995
Creator: Blakely, E. A. & Fry, R. J. M.
Partner: UNT Libraries Government Documents Department

Lifetime difference of B{sub s} mesons and its implications

Description: The authors discuss the calculation of the width difference {Delta}{Gamma}{sub B} between the B{sub s} mass eigenstates to next-to-leading order in the heavy quark expansion. 1/m{sub b} corrections are estimated to reduce the leading-order result by typically 30%. The error of the present estimate ({Delta}{Gamma}/{Gamma}){sub B{sub s}} = 0.16{sub {minus}0.09}{sup +0.11} could be substantially improved by pinning down the value of <{anti B}{sub s}{vert_bar}({anti b}{sub i}s{sub i}){sub S{minus}P}({anti b}{sub j}s{sub j}){sub S{minus}P}{vert_bar}B{sub s}>, and an accuracy of 10% in ({Delta}{Gamma}/{Gamma}){sub B{sub s}} should eventually be reached. They briefly mention strategies to measure ({Delta}{Gamma}/{Gamma}){sub B{sub s}} and its implications for constraints on {Delta}M{sub B{sub s}}, CKM parameters, and the observation of CP violation in untagged B{sub s} samples.
Date: August 1, 1996
Creator: Beneke, M.
Partner: UNT Libraries Government Documents Department

Automated detection and location of structural degradation

Description: The investigation of a diagnostic method for detecting and locating the source of structural degradation in mechanical systems is described in this paper. The diagnostic method uses a mathematical model of the mechanical system to define relationships between system parameters, such as spring rates and damping rates, and measurable spectral features, such as natural frequencies and mode shapes. These model-defined relationships are incorporated into a neural network, which is used to relate measured spectral features to system parameters. The diagnosis of the system's condition is performed by presenting the neural network with measured spectral features and comparing the system parameters estimated by the neural network to previously estimated values. Changes in the estimated system parameters indicate the location and severity of degradation in the mechanical system. The investigation applied the method by using computer-simulated data and data collected from a bench-top mechanical system. The effects of neural network training set size and composition on the accuracy of the model parameter estimates were investigated by using computer-simulated data. The results show that the diagnostic method can be applied to successfully locate and estimate the magnitude of structural changes in a mechanical system. The average error in the estimated spring rate values of the bench-top mechanical system was less than 10%. This degree of accuracy is sufficient to permit the use of this method for detecting and locating structural degradation in mechanical systems.
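
A hedged sketch of the diagnostic idea on synthetic data (not the paper's model or bench-top data): scikit-learn's MLPRegressor stands in for the network, learning the map from natural frequencies to a spring rate, and a drop in the estimated rate flags degradation.

```python
# Toy version: spectral features -> spring rate, then compare estimates over time.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
k_true = rng.uniform(0.5, 2.0, size=500)                       # spring rates (arbitrary units)
freqs = np.sqrt(k_true)[:, None] * np.array([1.0, 2.8, 4.6])   # toy modal frequencies
freqs += rng.normal(0.0, 0.01, size=freqs.shape)               # measurement noise

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(freqs, k_true)

baseline = np.sqrt(1.0) * np.array([[1.0, 2.8, 4.6]])   # healthy system
degraded = np.sqrt(0.8) * np.array([[1.0, 2.8, 4.6]])   # 20% softer spring
print("estimated k (healthy, degraded):", net.predict(baseline), net.predict(degraded))
```
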
Date: March 1997
Creator: Damiano, B.; Blakeman, E. D. & Phillips, L. D.
Partner: UNT Libraries Government Documents Department

Probabilistic model, analysis and computer code for take-off and landing related aircraft crashes into a structure

Description: A methodology is presented that allows the calculation of the probability that any of a particular collection of structures will be hit by an aircraft in a take-off or landing related accident during a specified window of time with a velocity exceeding a given critical value. A probabilistic model is developed that incorporates the location of each structure relative to airport runways in the vicinity; the size of the structure; the sizes, types, and frequency of use of commercial, military, and general aviation aircraft which take-off and land at these runways; the relative frequency of take-off and landing related accidents by aircraft type; the stochastic properties of off-runway crashes, namely impact location, impact angle, impact velocity, and the heading, deceleration, and skid distance after impact; and the stochastic properties of runway overruns and runoffs, namely the position at which the aircraft exits the runway, its exit velocity, and the heading and deceleration after exiting. Relevant probability distributions are fitted from extensive commercial, military, and general aviation accident report data bases. The computer source code for implementation of the calculation is provided.
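
A toy Monte Carlo in the spirit of the model (every rate and distribution below is invented, not fitted from accident data): estimate the annual frequency of an impact on a facility footprint with velocity above a critical value.

```python
# Illustrative crash-into-structure frequency estimate (assumed distributions).
import numpy as np

rng = np.random.default_rng(3)
crash_rate = 1.0e-5        # take-off/landing accidents per operation (assumed)
operations = 5.0e4         # relevant operations per year at nearby runways (assumed)
n = 200_000                # simulated accidents

# Impact location relative to the runway threshold (km) and impact speed (m/s)
r = rng.exponential(1.5, n)
heading = rng.normal(0.0, 0.3, n)
x, y = r * np.cos(heading), r * np.sin(heading)
speed = rng.lognormal(mean=4.0, sigma=0.4, size=n)

# Facility modeled as a 200 m x 200 m box centered 2 km out and 0.3 km off axis
hit = (np.abs(x - 2.0) < 0.1) & (np.abs(y - 0.3) < 0.1) & (speed > 60.0)
p_hit_given_accident = hit.mean()
print(f"annual hit frequency ~ {crash_rate * operations * p_hit_given_accident:.2e} /yr")
```
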
Date: February 6, 1996
Creator: Glaser, R.
Partner: UNT Libraries Government Documents Department

Modeling single molecule detection probabilities in microdroplets. Final report

Description: Optimization of molecular detection efficiencies is important for analytical applications of single molecule detection methods. In microdroplets some experimental limitations can be reduced, primarily because the molecule cannot diffuse away from the excitation and collection volume. Digital molecular detection using a stream of microdroplets has been proposed as a method of reducing concentration detection limits by several orders of magnitude relative to conventional measurements. However, the bending and reflection of light at the microdroplet's liquid-air interface cause the illumination intensity and the collected fluorescence intensity to be strongly dependent on the position of the molecule within the droplet. The goal is to model the detection of single molecules in microdroplets so that one can better understand and optimize detection efficiencies. In the first year of this modeling effort, the authors studied the collection of fluorescence from unit-amplitude dipoles inside of spheres. In the second year, they modified their analysis to accurately model the effects of excitation inhomogeneities, including effects of molecular saturation, motion of the droplet, and phase variations between the two counter-propagating waves that illuminate the droplet. They showed that counter-propagating plane wave illumination can decrease the variations in the intensity which excites the molecules. Also in the second year, they simulated (using a Monte Carlo method) the detection of fluorescence from many droplets, each of which may contain zero, one, or (at higher concentrations) a few fluorescent molecules.
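
A simplified Monte Carlo in the spirit of the droplet simulation (no optics are modeled here): Poisson-distributed occupancy per droplet and an assumed position-dependent detection probability stand in for the droplet's internal intensity map.

```python
# Toy digital detection over a stream of droplets (assumed occupancy and efficiency).
import numpy as np

rng = np.random.default_rng(4)
mean_occupancy = 0.1                      # average molecules per droplet (assumed)
counts = rng.poisson(mean_occupancy, 100_000)

detected = 0
for n_mol in counts:
    if n_mol == 0:
        continue
    r = rng.uniform(0.0, 1.0, n_mol)      # radial position of each molecule (toy model)
    p_detect = 0.9 - 0.5 * r**2           # weaker collection near the surface (assumed)
    if (rng.random(n_mol) < p_detect).any():
        detected += 1

print(f"droplets with a detected molecule: {detected / counts.size:.4f}")
print(f"droplets actually occupied:        {(counts > 0).mean():.4f}")
```
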
Date: May 22, 1997
Creator: Hill, S.C.
Partner: UNT Libraries Government Documents Department

Algorithms for fusion of multiple sensors having unknown error distributions

Description: The authors present recent results on a general sensor fusion problem in which the underlying sensor error distributions are not known, but a sample is available. They present a general method for obtaining a fusion rule based on the scale-sensitive dimension of the function class. Two computationally viable methods are reviewed, based on the Nadaraya-Watson estimator and on finite-dimensional vector spaces. Several computational issues of fusion rule estimation remain open problems. It would be interesting to obtain necessary and sufficient conditions under which polynomial-time algorithms can be used to solve the fusion rule estimation problem under this criterion. Also, conditions under which the composite system is significantly better than the best sensor would be extremely useful. Finally, lower-bound estimates for various sample sizes will be very important in judging the optimality of sample size estimates.
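
A sketch of a Nadaraya-Watson fusion rule learned from a sample, on synthetic two-sensor data with a Gaussian kernel; the error distributions and the bandwidth are assumptions chosen for illustration.

```python
# Kernel-regression fusion of two sensors from training pairs (z_i, x_i).
import numpy as np

rng = np.random.default_rng(5)
x_train = rng.uniform(0.0, 1.0, 300)
# Two sensors with (here, simulated) error distributions unknown to the fuser
z_train = np.column_stack([
    x_train + rng.normal(0.0, 0.05, x_train.size),
    x_train + rng.laplace(0.0, 0.10, x_train.size),
])

def fuse(z, bandwidth=0.05):
    """Nadaraya-Watson estimate of x from a sensor-reading vector z."""
    d2 = np.sum((z_train - z) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))
    return np.sum(w * x_train) / np.sum(w)

print(fuse(np.array([0.42, 0.45])))   # fused estimate near the underlying value
```
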
Date: June 1, 1997
Creator: Rao, N. S. V.
Partner: UNT Libraries Government Documents Department

Simultaneous Monte Carlo zero-variance estimates of several correlated means

Description: Zero-variance procedures have been in existence since the dawn of Monte Carlo. Previous works all treat the problem of zero-variance solutions for a single tally. One often wants to get low-variance solutions to more than one tally. When the sets of random walks needed for two tallies are similar, it is more efficient to do zero-variance biasing for both tallies in the same Monte Carlo run than in two separate runs. The theory presented here correlates the random walks of particles by the similarity of their tallies. Particles with dissimilar tallies rapidly become uncorrelated, whereas particles with similar tallies stay correlated through most of their random walk. The theory herein should allow practitioners to make efficient use of zero-variance biasing procedures in practical problems.
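
A single-tally illustration of the zero-variance idea (not the correlated-tally scheme of the paper): when samples are drawn from a density proportional to the score times the analog density, every weighted score equals the true answer and the variance vanishes.

```python
# Zero-variance importance sampling for one expectation, for intuition only.
import numpy as np

rng = np.random.default_rng(6)
# Estimate I = E_p[f(X)] with p = Exp(1) and f(x) = exp(-x); analytically I = 1/2.
# Since f(x)p(x) = exp(-2x), the zero-variance biased density is q = Exp(2).
n = 10_000
x_analog = rng.exponential(1.0, n)                 # analog sampling from p
analog_scores = np.exp(-x_analog)

x_biased = rng.exponential(0.5, n)                 # sampling from q = Exp(2)
weighted_scores = np.exp(-x_biased) * np.exp(-x_biased) / (2.0 * np.exp(-2.0 * x_biased))
print("analog: mean %.4f, std %.4f" % (analog_scores.mean(), analog_scores.std()))
print("biased: mean %.4f, std %.4f" % (weighted_scores.mean(), weighted_scores.std()))  # std ~ 0
```
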
Date: August 1, 1997
Creator: Booth, T.E.
Partner: UNT Libraries Government Documents Department

Structural model uncertainty in stochastic simulation

Description: Prediction uncertainty in stochastic simulation models can be described by a hierarchy of components: stochastic variability at the lowest level, input and parameter uncertainty at a higher level, and structural model uncertainty at the top. It is argued that a usual paradigm for analysis of input uncertainty is not suitable for application to structural model uncertainty. An approach more likely to produce an acceptable methodology for analyzing structural model uncertainty is one that uses characteristics specific to the particular family of models.
Date: September 1, 1997
Creator: McKay, M.D. & Morrison, J.D.
Partner: UNT Libraries Government Documents Department

Plausible inference and the interpretation of quantitative data

Description: The analysis of quantitative data is central to scientific investigation. Probability theory, which is founded on two rules, the sum and product rules, provides the unique, logically consistent method for drawing valid inferences from quantitative data. This primer on the use of probability theory is meant to fulfill a pedagogical purpose. The discussion begins at the foundation of scientific inference by showing how the sum and product rules of probability theory follow from some very basic considerations of logical consistency. The authors then develop general methods of probability theory that are essential to the analysis and interpretation of data. They discuss how to assign probability distributions using the principle of maximum entropy, how to estimate parameters from data, how to handle nuisance parameters whose values are of little interest, and how to determine which of a set of models is most justified by a data set. All these methods are used together in most realistic data analyses. Examples are given throughout to illustrate the basic points.
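
A worked example of two of the steps described (parameter estimation and handling a nuisance parameter), with invented data: the mean of Gaussian measurements is estimated while the unknown standard deviation is marginalized over a grid.

```python
# Grid-based posterior for the mean, marginalizing the nuisance parameter sigma.
import numpy as np

data = np.array([9.8, 10.4, 10.1, 9.6, 10.3, 10.0])   # invented measurements
mu_grid = np.linspace(9.0, 11.0, 401)
sigma_grid = np.linspace(0.05, 2.0, 400)
MU, SIGMA = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# Log likelihood plus a Jeffreys 1/sigma prior, evaluated on the grid
loglike = -data.size * np.log(SIGMA) - ((data[:, None, None] - MU) ** 2).sum(axis=0) / (2 * SIGMA**2)
logpost = loglike - np.log(SIGMA)
post = np.exp(logpost - logpost.max())

post_mu = post.sum(axis=1)            # marginalize out sigma
post_mu /= post_mu.sum()
print(f"posterior mean of mu = {(mu_grid * post_mu).sum():.3f}")
```
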
Date: February 1, 1998
Creator: Nakhleh, C.W.
Partner: UNT Libraries Government Documents Department

Optimization of stability index versus first strike cost

Description: This note studies the impact of maximizing the stability index rather than minimizing the first strike cost in choosing offensive missile allocations. It does so in the context of a model in which exchanges between vulnerable missile forces are modeled probabilistically, converted into first and second strike costs through approximations to the value target sets at risk, and the stability index is taken to be their ratio. The values of the allocation that minimizes the first strike cost and of the allocation that maximizes the stability index are derived analytically for both attack preferences. The former recovers results derived earlier. The latter leads to an optimum at unity allocation, for which the stability index is determined analytically. For values of the attack preference greater than about unity, maximizing the stability index increases the cost of striking first by 10-15%. For smaller values of the attack preference, maximizing the index increases the second strike cost by a similar amount. Both are stabilizing, so if both sides could be trusted to target missiles in order to minimize damage to value and maximize stability, the stability index for vulnerable missiles could be increased by about 15%. However, that would increase the cost to the first striker by about 15%. It is unclear why--having decided to strike--he would do so in a way that would increase damage to himself.
Date: May 1, 1997
Creator: Canavan, G. H.
Partner: UNT Libraries Government Documents Department

Probabilistic model for pressure vessel reliability incorporating fracture mechanics and nondestructive examination

Description: A probabilistic model has been developed for predicting the reliability of structures based on fracture mechanics and the results of nondestructive examination (NDE). The distinctive feature of this model is the way in which inspection results and the probability of detection (POD) curve are used to calculate a probability density function (PDF) for the number of flaws and the distribution of those flaws among the various size ranges. In combination with a probabilistic fracture mechanics model, this density function is used to estimate the probability of failure (POF) of a structure in which flaws have been detected by NDE. The model is useful for parametric studies of inspection techniques and material characteristics.
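
A sketch of the flaw-count reasoning under stated assumptions (a Poisson prior on the true number of flaws and a constant probability of detection per flaw, neither taken from the paper): Bayes' rule then gives the distribution of the total number of flaws given the number detected by NDE.

```python
# Posterior over the true flaw count given NDE results (illustrative model).
import math

def flaw_count_posterior(n_detected, pod, prior_mean, n_max=30):
    """P(N = k | n_detected) for k = 0..n_max under a Poisson(prior_mean) prior."""
    post = []
    for k in range(n_max + 1):
        if k < n_detected:
            post.append(0.0)
            continue
        prior = math.exp(-prior_mean) * prior_mean**k / math.factorial(k)
        likelihood = math.comb(k, n_detected) * pod**n_detected * (1 - pod) ** (k - n_detected)
        post.append(prior * likelihood)
    total = sum(post)
    return [p / total for p in post]

post = flaw_count_posterior(n_detected=2, pod=0.8, prior_mean=3.0)
expected_missed = sum(k * p for k, p in enumerate(post)) - 2
print(f"expected number of undetected flaws: {expected_missed:.2f}")
```
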
Date: March 1, 1998
Creator: Tow, D.M. & Reuter, W.G.
Partner: UNT Libraries Government Documents Department

Shock certification of replacement subsystems and components in the presence of uncertainty

Description: In this paper a methodology for analytically estimating the response of replacement components in a system subjected to worst-case hostile shocks is presented. This methodology does not require system testing but uses previously compiled shock data and inverse dynamic analysis to estimate component shock response. In the past, component shock responses were determined from numerous system tests; however, with limitations on system testing, an alternate methodology for determining component response is required. Such a methodology is discussed. This methodology is mathematically complex in that two inverse problems and a forward problem must be solved for a permutation of models representing variabilities in dynamics. Two conclusions were deduced as a result of this work. First, the present methodology produces overly conservative results. Second, the specification of system variability is critical to the prediction of component response.
Date: May 8, 2000
Creator: Dohner, Jeffrey L. & Lauffer, James P.
Partner: UNT Libraries Government Documents Department

New methods for predicting lifetimes. Part 2 -- The Wear-out approach for predicting the remaining lifetime of materials

Description: The so-called Palmgren-Miner concept that degradation is cumulative, and that failure is therefore considered to be the direct result of the accumulation of damage with time, has been known for decades. Cumulative damage models based on this concept have been derived and used mainly for fatigue life predictions for metals and composite materials. The authors review the principles underlying such models and suggest ways in which they may be best applied to polymeric materials in temperature environments. The authors first consider cases where polymer degradation data can be rigorously time-temperature superposed over a given temperature range. For a step change in temperature after damage has occurred at an initial temperature in this range, they show that the remaining lifetime at the second temperature should be linearly related to the aging time prior to the step. This predicted linearity implies that it may be possible to estimate the remaining lifetime of polymeric materials aging under application ambient conditions by completing the aging at an accelerated temperature. They refer to this generic temperature-step method as the Wear-out approach. They then outline the expectations for Wear-out experiments when time-temperature superposition is invalid, specifically describing the two cases where so-called interaction effects are absent and are present. Finally, they present some preliminary results outlining the application of the Wear-out approach to polymers. In analyzing the experimental Wear-out results, they introduce a procedure that they refer to as time-damage superposition. This procedure not only utilizes all of the experimental data instead of a single point from each data set, but also allows them to determine the importance of any interaction effects.
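
A numerical sketch of the linear wear-out prediction, with invented lifetimes: the fraction of life consumed at the service temperature carries over to the accelerated temperature, so the remaining accelerated lifetime falls linearly with prior aging time.

```python
# Palmgren-Miner style remaining-lifetime prediction (illustrative lifetimes).
service_lifetime = 20.0    # years to failure at the application temperature (assumed)
accel_lifetime = 0.5       # years to failure at the accelerated aging temperature (assumed)

for t_service in (0.0, 5.0, 10.0, 15.0):
    damage = t_service / service_lifetime               # cumulative damage fraction
    t_remaining_accel = accel_lifetime * (1.0 - damage)  # linear wear-out prediction
    print(f"aged {t_service:4.1f} yr -> remaining accelerated lifetime {t_remaining_accel:.2f} yr")
```
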
Date: April 20, 2000
Creator: Gillen, Kenneth T. & Celina, Mathias C.
Partner: UNT Libraries Government Documents Department