13 Matching Results

Search Results

Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

Description: In September 2008, a large-scale testing operation (referred to as the INL-2 test) was performed within a two-story building (PBF-632) at the Idaho National Laboratory (INL). The report “Operational Observations on the INL-2 Experiment” defines the seven objectives for this test and discusses the results and conclusions; this is further discussed in the introduction of the present report. The INL-2 test consisted of five tests (events) in which a floor (level) of the building was contaminated with the harmless biological warfare agent simulant Bg and samples were taken in most, if not all, of the rooms on the contaminated floor. After the sampling, the building was decontaminated and the next test was performed. Judgmental samples and probabilistic samples were determined and taken during each test. Vacuum, wipe, and swab samples were taken within each room. The purpose of this report is to study four additional topics that were not within the scope of the original report. These topics are: 1) assess the quantitative assumptions about the data being normally or log-normally distributed; 2) evaluate differences and quantify the sample-to-sample variability within a room and across rooms; 3) perform geostatistical analyses to study spatial correlations; and 4) quantify the differences observed between surface types and sampling methods for each scenario and study the consistency across scenarios. The following four paragraphs summarize the results of each of the four additional analyses. All samples taken after decontamination came back negative; because of this, it was not appropriate to determine whether these clearance samples were normally distributed. As Table 1 shows, the characterization data consist of values between and inclusive of 0 and 100 CFU/cm² (100 was the value assigned when colonies were too numerous to count). The 100 values are generally much bigger than the rest of the ...
Date: December 17, 2009
Creator: Amidan, Brett G.; Pulsipher, Brent A. & Matzke, Brett D.
Partner: UNT Libraries Government Documents Department
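
As a hedged illustration of the distributional check named in topic 1 above (not taken from the report), the short Python sketch below applies a Shapiro-Wilk test to hypothetical concentration values on both the raw and log scales; all data values in it are invented.

```python
# Minimal sketch (not from the report): checking whether sample
# concentrations look normal or log-normal with a Shapiro-Wilk test.
# The concentration values below are made up for illustration only.
import numpy as np
from scipy import stats

concentrations = np.array([0.4, 1.2, 3.5, 0.8, 7.9, 2.1, 15.0, 5.6])  # hypothetical CFU/cm^2

# Test the raw values (normal assumption) and the log-transformed values
# (log-normal assumption); larger p-values mean less evidence against
# the assumed distribution.
w_raw, p_raw = stats.shapiro(concentrations)
w_log, p_log = stats.shapiro(np.log(concentrations))

print(f"normal assumption:     W = {w_raw:.3f}, p = {p_raw:.3f}")
print(f"log-normal assumption: W = {w_log:.3f}, p = {p_log:.3f}")
```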

Extension of latin hypercube samples with correlated variables.

Description: A procedure for extending the size of a Latin hypercube sample (LHS) with rank correlated variables is described and illustrated. The extension procedure starts with an LHS of size m and associated rank correlation matrix C and constructs a new LHS of size 2m that contains the elements of the original LHS and has a rank correlation matrix that is close to the original rank correlation matrix C. The procedure is intended for use in conjunction with uncertainty and sensitivity analysis of computationally demanding models in which it is important to make efficient use of a necessarily limited number of model evaluations.
Date: November 1, 2006
Creator: Hora, Stephen Curtis; Helton, Jon Craig & Sallaberry, Cedric J.
Partner: UNT Libraries Government Documents Department
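
The sketch below is a simplified, hypothetical illustration of the extension idea described above: starting from a Latin hypercube sample of size m on the unit hypercube, it fills the unoccupied finer strata to reach size 2m. It deliberately omits the rank-correlation-preserving step that is central to the cited procedure, so it should not be read as the authors' algorithm.

```python
# Simplified illustration (not the cited procedure) of extending a
# Latin hypercube sample from size m to size 2m on [0, 1]^d.
# The rank-correlation-preserving step of the report is omitted.
import numpy as np

rng = np.random.default_rng(0)

def lhs(m, d):
    """Basic Latin hypercube sample of size m in d dimensions."""
    cells = np.column_stack([rng.permutation(m) for _ in range(d)])
    return (cells + rng.random((m, d))) / m

def extend_lhs(sample):
    """Add m new points so that all 2m points again form a Latin hypercube."""
    m, d = sample.shape
    new_cols = []
    for j in range(d):
        occupied = np.floor(sample[:, j] * 2 * m).astype(int)  # fine cells already used
        empty = np.setdiff1d(np.arange(2 * m), occupied)        # one free fine cell per coarse cell
        vals = (empty + rng.random(empty.size)) / (2 * m)       # one new point per free cell
        new_cols.append(rng.permutation(vals))                  # random pairing across dimensions
    return np.vstack([sample, np.column_stack(new_cols)])

original = lhs(5, 2)              # LHS of size m = 5
extended = extend_lhs(original)   # 10 points that include the original 5
print(extended.round(2))
```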

Accuracy and Interpretability Testing of Text Mining Methods

Description: Extracting meaningful information from large collections of text data is problematic because of the sheer size of the database. However, automated analytic methods capable of processing such data have emerged. These methods, collectively called text mining, first began to appear in 1988. A number of additional text mining methods quickly developed in independent research silos, each based on unique mathematical algorithms. How good each of these methods is at analyzing text is unclear. Method development typically evolves from some silo-centric research requirement, with the success of the method measured by a custom requirement-based metric. Results of the new method are then compared to another method that was similarly developed. The proposed research introduces an experimentally designed testing method for text mining that eliminates research-silo bias and simultaneously evaluates methods from all of the major context-region text mining method families. The proposed research method follows a randomized block factorial design with two treatments consisting of three and five levels (RBF-35) with repeated measures. The contribution of the research is threefold. First, the users perceived a difference in the effectiveness of the various methods. Second, while still not clear, there are characteristics within the text collection that affect an algorithm's ability to extract meaningful results. Third, this research develops an experimental design process for testing the algorithms that is adaptable to other areas of software development and algorithm testing. This design eliminates the bias-based practices historically employed by algorithm developers.
Date: August 2013
Creator: Ashton, Triss A.
Partner: UNT Libraries
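
As a hypothetical illustration of an RBF-35 layout like the one mentioned above (the block names, factor names, and levels are invented, not taken from the dissertation), the following Python sketch enumerates and randomizes the 3 × 5 treatment combinations within each block.

```python
# Illustrative sketch (not the dissertation's code) of laying out a
# randomized block factorial design with two treatment factors at
# 3 and 5 levels (RBF-35).  Blocks, factors, and levels are hypothetical.
import itertools
import random

random.seed(1)

blocks = ["corpus_A", "corpus_B", "corpus_C"]        # hypothetical blocking units
factor_1 = ["method_1", "method_2", "method_3"]      # 3-level treatment
factor_2 = ["k=5", "k=10", "k=20", "k=40", "k=80"]   # 5-level treatment

design = []
for block in blocks:
    cells = list(itertools.product(factor_1, factor_2))  # all 15 treatment combinations
    random.shuffle(cells)                                 # randomize run order within block
    design.extend((block, f1, f2) for f1, f2 in cells)

for row in design[:5]:
    print(row)
```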

Verification test problems for the calculation of probability of loss of assured safety in temperature-dependent systems with multiple weak and strong links.

Description: Four verification test problems are presented for checking the conceptual development and computational implementation of calculations to determine the probability of loss of assured safety (PLOAS) in temperature-dependent systems with multiple weak links (WLs) and strong links (SLs). The problems are designed to test results obtained with the following definitions of loss of assured safety: (1) Failure of all SLs before failure of any WL, (2) Failure of any SL before failure of any WL, (3) Failure of all SLs before failure of all WLs, and (4) Failure of any SL before failure of all WLs. The test problems are based on assuming the same failure properties for all links, which results in problems that have the desirable properties of fully exercising the numerical integration procedures required in the evaluation of PLOAS and also possessing simple algebraic representations for PLOAS that can be used for verification of the analysis.
Date: June 1, 2006
Creator: Johnson, Jay Dean (ProStat, Mesa, AZ); Oberkampf, William Louis & Helton, Jon Craig (Arizona State University, Tempe, AZ)
Partner: UNT Libraries Government Documents Department
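
Under the simplifying assumption that all link failure temperatures are independent draws from one continuous distribution, definition (1) has a simple value by symmetry: 1/C(nSL + nWL, nSL). The Monte Carlo sketch below (not from the report; the failure-temperature distribution is an arbitrary stand-in) checks that algebraic result.

```python
# Minimal Monte Carlo check (not from the report) of PLOAS definition (1):
# failure of all strong links (SLs) before failure of any weak link (WL),
# assuming every link has the same continuous failure-temperature
# distribution.  Under that assumption the exact answer is
# 1 / C(nSL + nWL, nSL).
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n_sl, n_wl, n_trials = 2, 3, 200_000

sl_fail = rng.normal(500.0, 50.0, size=(n_trials, n_sl))  # hypothetical failure temperatures
wl_fail = rng.normal(500.0, 50.0, size=(n_trials, n_wl))

ploas_mc = np.mean(sl_fail.max(axis=1) < wl_fail.min(axis=1))
ploas_exact = 1.0 / comb(n_sl + n_wl, n_sl)

print(f"Monte Carlo: {ploas_mc:.4f}   exact: {ploas_exact:.4f}")
```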

Comparing outcome measures derived from four research designs incorporating the retrospective pretest.

Description: Over the last five decades, the retrospective pretest has been used in behavioral science research to battle key threats to the internal validity of posttest-only control-group and pretest-posttest-only designs. The purpose of this study was to compare outcome measures resulting from four research design implementations incorporating the retrospective pretest: (a) pre-post-then, (b) pre-post/then, (c) post-then, and (d) post/then. The study analyzed the interaction effect of pretest sensitization and post-intervention survey order on two subjective measures: (a) a control measure not related to the intervention and (b) an experimental measure consistent with the intervention. Validity of the subjective measurement outcomes was assessed by correlating them with objective performance measurement outcomes. A Situational Leadership® II (SLII) training workshop served as the intervention. The Work Involvement Scale of the self version of the Survey of Management Practices served as the subjective control measure. The Clarification of Goals and Objectives Scale of the self version of the Survey of Management Practices served as the subjective experimental measure. The Effectiveness Scale of the self version of the Leader Behavior Analysis II® served as the objective performance measure. This study detected differences in measurement outcomes from SLII participant responses to an experimental and a control measure. In the case of the experimental measure, differences were found in the magnitude and direction of the validity coefficients. In the case of the control measure, differences were found in the magnitude of the treatment effect between groups. These differences indicate that, for this study, the pre-post-then design produced the most valid results for the experimental measure. For the control measure in this study, the pre-post/then design produced the most valid results. Across both measures, the post/then design produced the least valid results.
Date: August 2007
Creator: Nimon, Kim F.
Partner: UNT Libraries

Fertilizer Response Curves for Commercial Southern Forest Species Defined with an Un-Replicated Experimental Design.

Description: There has been recent interest in the use of non-replicated regression experimental designs in forestry, as the need for replication in experimental design is burdensome on limited research budgets. We wanted to determine the interacting effects of soil moisture and nutrient availability on the production of various southeastern forest trees (two clones of Populus deltoides, open-pollinated Platanus occidentalis, Liquidambar styraciflua, and Pinus taeda). Additionally, we required an understanding of the fertilizer response curve. To accomplish both objectives, we developed a composite design that includes a core ANOVA approach to consider treatment interactions, with the addition of non-replicated regression plots receiving a range of fertilizer levels for the primary irrigation treatment.
Date: November 1, 2005
Creator: Coleman, Mark; Aubrey, Doug; Coyle, David R. & Daniels, Richard F.
Partner: UNT Libraries Government Documents Department
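
As a hypothetical illustration of the regression side of such a composite design (not the authors' analysis; the fertilizer rates and responses are invented), the sketch below fits a quadratic response curve to non-replicated plots, one observation per fertilizer level.

```python
# Hedged sketch (not the authors' analysis) of fitting a fertilizer
# response curve to non-replicated regression plots.  Fertilizer rates
# and growth responses below are invented for illustration.
import numpy as np

fert_rate = np.array([0, 25, 50, 100, 150, 200])   # hypothetical fertilizer rates per plot
growth = np.array([2.1, 3.4, 4.6, 5.8, 6.1, 5.9])  # hypothetical biomass response

# Quadratic response curve; one observation per fertilizer level, so the
# fitted curve (rather than replication) carries the inference.
coeffs = np.polyfit(fert_rate, growth, deg=2)
curve = np.poly1d(coeffs)

peak_rate = -coeffs[1] / (2 * coeffs[0])            # rate at maximum predicted response
print(f"fitted curve:\n{curve}")
print(f"estimated optimum fertilizer rate: {peak_rate:.0f}")
```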

Time Series Data Analysis of Single Subject Experimental Designs Using Bayesian Estimation

Description: This study presents a set of data analysis approaches for single subject designs (SSDs). The primary purpose is to establish a series of statistical models to supplement visual analysis in single subject research using Bayesian estimation. A linear modeling approach has been used to study level and trend changes. I propose an alternate approach that treats the phase change-point between the baseline and intervention conditions as an unknown parameter. Similar to some existing approaches, the models take into account changes in slopes and intercepts in the presence of serial dependency. The Bayesian procedure used to estimate the parameters and analyze the data is described. Researchers use a variety of statistical analysis methods to analyze different single subject research designs. This dissertation presents a series of statistical models for data from various conditions: the baseline phase, A-B design, A-B-A-B design, multiple baseline design, alternating treatments design, and changing criterion design. The change-point evaluation method can provide additional confirmation of the causal effect of the treatment on the target behavior. Software code is provided as supplemental material in the appendices. The applicability of the analyses is demonstrated using five examples from the SSD literature.
Date: August 2015
Creator: Aerts, Xing Qin
Partner: UNT Libraries
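
The sketch below is a minimal, hypothetical illustration of the change-point idea described above for an A-B design: a discrete prior over candidate change-points and a Gaussian likelihood with plug-in phase means. It ignores trend and serial dependency, so it is far simpler than the dissertation's models, and the session data are invented.

```python
# Minimal illustration (not the dissertation's models) of treating the
# A-B phase change-point as an unknown parameter.  A flat prior is placed
# over candidate change-points; the likelihood is Gaussian with a known
# sd and a piecewise-constant level.  Trend and serial dependency are
# ignored here for brevity.
import numpy as np
from scipy import stats

y = np.array([3, 4, 3, 5, 4, 4, 8, 9, 7, 8, 9, 8], dtype=float)  # hypothetical behavior counts
sigma = 1.0
candidates = np.arange(2, len(y) - 1)  # possible first session of the intervention phase

log_post = []
for cp in candidates:
    baseline, treatment = y[:cp], y[cp:]
    ll = (stats.norm.logpdf(baseline, baseline.mean(), sigma).sum()
          + stats.norm.logpdf(treatment, treatment.mean(), sigma).sum())
    log_post.append(ll)                # flat prior, so posterior is proportional to likelihood

log_post = np.array(log_post)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mode at session", candidates[np.argmax(post)])
```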

Measures of agreement between computation and experiment: validation metrics.

Description: With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
Date: August 1, 2005
Creator: Barone, Matthew Franklin & Oberkampf, William Louis
Partner: UNT Libraries Government Documents Department
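
The following sketch is a hedged illustration, not the paper's metric: it summarizes model-minus-experiment differences at matched input conditions with a t-based confidence interval, using invented numbers.

```python
# Hedged sketch (not the paper's metric) of a confidence-interval-based
# comparison between model predictions and experimental measurements at
# matching input conditions.  All numbers are invented for illustration.
import numpy as np
from scipy import stats

experiment = np.array([10.2, 11.8, 13.1, 15.0, 16.4])  # hypothetical measurements
model = np.array([10.0, 12.3, 13.9, 14.6, 17.2])        # model predictions at the same inputs

diff = model - experiment
n = diff.size
mean_err = diff.mean()
half_width = stats.t.ppf(0.95, df=n - 1) * diff.std(ddof=1) / np.sqrt(n)

# Report the estimated model error together with a 90% confidence interval.
print(f"estimated error: {mean_err:+.2f}  "
      f"(90% CI: {mean_err - half_width:+.2f} to {mean_err + half_width:+.2f})")
```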

Experimental Design for a Sponge-Wipe Study to Relate the Recovery Efficiency and False Negative Rate to the Concentration of a Bacillus anthracis Surrogate for Six Surface Materials

Description: Two concerns were raised by the Government Accountability Office following the 2001 building contaminations via letters containing Bacillus anthracis (BA). These included: 1) the lack of validated sampling methods, and 2) the need to use statistical sampling to quantify the confidence of no contamination when all samples have negative results. Critical to addressing these concerns is quantifying the false negative rate (FNR). The FNR may depend on the: 1) method of contaminant deposition, 2) surface concentration of the contaminant, 3) surface material being sampled, 4) sample collection method, 5) sample storage/transportation conditions, 6) sample processing method, and 7) sample analytical method. A review of the literature found 17 laboratory studies that focused on swab, wipe, or vacuum samples collected from a variety of surface materials contaminated by BA or a surrogate, and used culture methods to determine the surface contaminant concentration. These studies quantified performance of the sampling and analysis methods in terms of recovery efficiency (RE) and not FNR (which left a major gap in available information). Quantifying the FNR under a variety of conditions is a key aspect of validating sample and analysis methods, and of calculating the confidence in characterization or clearance decisions based on a statistical sampling plan. A laboratory study was planned to partially fill the gap in FNR results. This report documents the experimental design developed by Pacific Northwest National Laboratory and Sandia National Laboratories (SNL) for a sponge-wipe method. The testing was performed by SNL and is now completed. The study investigated the effects on key response variables from six surface materials contaminated with eight surface concentrations of a BA surrogate (Bacillus atrophaeus). The key response variables include measures of the contamination on test coupons of surface materials tested, contamination recovered from coupons by sponge-wipe samples, RE, and FNR. The experimental design involves ...
Date: May 1, 2011
Creator: Piepel, Gregory F.; Amidan, Brett G.; Krauter, Paula & Einfeld, Wayne
Partner: UNT Libraries Government Documents Department
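
As a minimal, hypothetical illustration of the two response quantities named above (values invented, not from the study), the sketch below computes per-coupon recovery efficiency and the false negative rate from deposited and recovered counts.

```python
# Minimal sketch (not from the study) of computing recovery efficiency (RE)
# and false negative rate (FNR) from sponge-wipe results on test coupons.
# The counts below are invented for illustration.
import numpy as np

deposited = np.array([100, 100, 500, 500, 1000, 1000], dtype=float)  # CFU applied per coupon
recovered = np.array([ 40,   0, 260, 310,  620,  700], dtype=float)  # CFU recovered per coupon

re = recovered / deposited        # per-coupon recovery efficiency
fnr = np.mean(recovered == 0)     # fraction of contaminated coupons that read as negative

print(f"mean RE: {re.mean():.2f}   FNR: {fnr:.2f}")
```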

Experimental Design for a Sponge-Wipe Study to Relate the Recovery Efficiency and False Negative Rate to the Concentration of a Bacillus anthracis Surrogate for Six Surface Materials

Description: Two concerns were raised by the Government Accountability Office following the 2001 building contaminations via letters containing Bacillus anthracis (BA). These included: 1) the lack of validated sampling methods, and 2) the need to use statistical sampling to quantify the confidence of no contamination when all samples have negative results. Critical to addressing these concerns is quantifying the probability of correct detection (PCD) (or equivalently the false negative rate FNR = 1 − PCD). The PCD/FNR may depend on the: 1) method of contaminant deposition, 2) surface concentration of the contaminant, 3) surface material being sampled, 4) sample collection method, 5) sample storage/transportation conditions, 6) sample processing method, and 7) sample analytical method. A review of the literature found 17 laboratory studies that focused on swab, wipe, or vacuum samples collected from a variety of surface materials contaminated by BA or a surrogate, and used culture methods to determine the surface contaminant concentration. These studies quantified performance of the sampling and analysis methods in terms of recovery efficiency (RE) and not PCD/FNR (which left a major gap in available information). Quantifying the PCD/FNR under a variety of conditions is a key aspect of validating sample and analysis methods, and of calculating the confidence in characterization or clearance decisions based on a statistical sampling plan. A laboratory study was planned to partially fill the gap in PCD/FNR results. This report documents the experimental design developed by Pacific Northwest National Laboratory and Sandia National Laboratories (SNL) for a sponge-wipe method. The study will investigate the effects on key response variables from six surface materials contaminated with eight surface concentrations of a BA surrogate (Bacillus atrophaeus). The key response variables include measures of the contamination on test coupons of surface materials tested, contamination recovered from coupons by sponge-wipe samples, RE, and PCD/FNR. The ...
Date: December 16, 2010
Creator: Piepel, Gregory F.; Amidan, Brett G.; Krauter, Paula & Einfeld, Wayne
Partner: UNT Libraries Government Documents Department

A methodology for selecting an optimal experimental design for the computer analysis of a complex system

Description: Investigation and evaluation of a complex system is often accomplished through the use of performance measures based on system response models. The response models are constructed using computer-generated responses supported where possible by physical test results. The general problem considered is one where resources and system complexity together restrict the number of simulations that can be performed. The levels of input variables used for defining environmental scenarios, for setting initial and boundary conditions, and for setting system parameters must be selected in an efficient way. This report describes an algorithmic approach for performing this selection.
Date: February 3, 2000
Creator: Rutherford, Brian M.
Partner: UNT Libraries Government Documents Department
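
The report's algorithm is not reproduced here; as a generic illustration of algorithmically selecting a limited number of simulation input points, the sketch below applies a greedy maximin (space-filling) heuristic to a hypothetical candidate set of scaled input-variable levels.

```python
# Illustrative sketch (not the report's algorithm) of selecting a small
# set of simulation input points from a larger candidate set by greedy
# maximin distance, a common space-filling heuristic.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.random((200, 3))  # hypothetical scaled input-variable levels
budget = 10                        # number of simulations we can afford

chosen = [0]                       # start from an arbitrary candidate
for _ in range(budget - 1):
    d = np.linalg.norm(candidates[:, None, :] - candidates[chosen][None, :, :], axis=2)
    nearest = d.min(axis=1)                   # distance from each candidate to nearest chosen point
    chosen.append(int(np.argmax(nearest)))    # pick the candidate farthest from the current design

design = candidates[chosen]
print(design.round(2))
```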

A Resampling Based Approach to Optimal Experimental Design for Computer Analysis of a Complex System

Description: The investigation of a complex system is often performed using computer-generated response data supplemented by system and component test results where possible. Analysts rely on an efficient use of limited experimental resources to test the physical system, evaluate the models, and assure (to the extent possible) that the models accurately simulate the system under investigation. The general problem considered here is one where only a restricted number of system simulations (or physical tests) can be performed to provide the additional data necessary to accomplish the project objectives. The levels of variables used for defining input scenarios, for setting system parameters, and for initializing other experimental options must be selected in an efficient way. The use of computer algorithms to support experimental design in complex problems has been a topic of recent research in the areas of statistics and engineering. This paper describes a resampling-based approach to formulating this design. An example is provided illustrating in two dimensions how the algorithm works and indicating its potential on larger problems. The results show that the proposed approach has the characteristics desirable of an algorithmic approach on these simple examples. Further experimentation is needed to evaluate its performance on larger problems.
Date: August 4, 1999
Creator: Rutherford, Brian
Partner: UNT Libraries Government Documents Department
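
As a hedged two-dimensional illustration of how resampling can guide such a design (not the paper's method; the response function, surrogate, and candidate points are invented), the sketch below bootstraps a simple least-squares surrogate fit and proposes the candidate input point whose prediction varies most across the bootstrap replicates.

```python
# Hedged two-dimensional illustration (not the paper's method) of using
# resampling to guide experimental design: bootstrap a simple surrogate
# fit and add the candidate point whose prediction varies most across
# the bootstrap replicates.
import numpy as np

rng = np.random.default_rng(1)

def response(x):                    # stand-in for an expensive simulation
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

x_run = rng.random((8, 2))          # points already simulated
y_run = response(x_run)
candidates = rng.random((100, 2))   # points we could run next

def fit_predict(x, y, x_new):
    """Linear surrogate in the inputs (ordinary least squares)."""
    A = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.column_stack([np.ones(len(x_new)), x_new]) @ beta

# Bootstrap the existing runs and record surrogate predictions at the candidates.
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x_run), len(x_run))
    preds.append(fit_predict(x_run[idx], y_run[idx], candidates))

next_point = candidates[np.argmax(np.std(preds, axis=0))]
print("next design point:", next_point.round(2))
```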