Search Results

A design for a V&V and UQ discovery process.

Description: There is currently sparse literature on how to implement systematic and comprehensive processes for modern V&V/UQ (VU) within large computational simulation projects. Important design requirements have been identified in order to construct a viable 'system' of processes. Significant processes that are needed include discovery, accumulation, and assessment. A preliminary design is presented for a VU Discovery process that accounts for an important subset of the requirements. The design uses a hierarchical approach to set context and a series of place-holders that identify the evidence and artifacts that need to be created in order to tell the VU story and to perform assessments. The hierarchy incorporates VU elements from a Predictive Capability Maturity Model and uses questionnaires to define critical issues in VU. The place-holders organize VU data within a central repository that serves as the official VU record of the project. A review process ensures that those who will contribute to the record have agreed to provide the evidence identified by the Discovery process. VU expertise is an essential part of this process and ensures that the roadmap provided by the Discovery process is adequate. Both the requirements and the design were developed to support the Nuclear Energy Advanced Modeling and Simulation Waste project, which is developing a set of advanced codes for simulating the performance of nuclear waste storage sites. The Waste project served as an example to keep the design of the VU Discovery process grounded in practicalities. However, the system is represented abstractly so that it can be applied to other M&S projects.
Date: September 1, 2011
Creator: Knupp, Patrick Michael & Urbina, Angel
Partner: UNT Libraries Government Documents Department
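
The abstract above describes organizing VU evidence as a hierarchy of elements with place-holders stored in a central repository. The sketch below illustrates one possible way such a structure could be represented in code; the class names, fields, and example entries are hypothetical and are not taken from the report.

```python
# Hypothetical sketch of the "place-holder" idea described above: a hierarchy of
# V&V/UQ (VU) elements, each holding named slots for the evidence artifacts that
# must be produced and reviewed. Class and field names are illustrative only;
# they are not taken from the report.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidencePlaceholder:
    """A slot in the VU record for one piece of evidence to be supplied."""
    description: str           # what evidence is needed (e.g., from a questionnaire)
    owner: str = "unassigned"  # who agreed to provide it during review
    artifact_uri: str = ""     # location in the central repository once delivered

    def is_fulfilled(self) -> bool:
        return bool(self.artifact_uri)

@dataclass
class VUElement:
    """One node in the hierarchy (e.g., a PCMM element such as validation)."""
    name: str
    placeholders: List[EvidencePlaceholder] = field(default_factory=list)
    children: List["VUElement"] = field(default_factory=list)

    def open_items(self) -> List[EvidencePlaceholder]:
        """Collect unfulfilled placeholders across the whole subtree."""
        items = [p for p in self.placeholders if not p.is_fulfilled()]
        for child in self.children:
            items.extend(child.open_items())
        return items

# Example: a two-level hierarchy with one outstanding evidence item.
root = VUElement("Waste-form degradation model", children=[
    VUElement("Code verification",
              placeholders=[EvidencePlaceholder("Convergence study on a manufactured solution")]),
])
print(len(root.open_items()))  # -> 1
```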

A comparison of methods for representing sparsely sampled random quantities.

Description: This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples, it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative, i.e., that it over-estimate the desired percentile range of the actual PDF as little as possible. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
Date: September 1, 2013
Creator: Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel & Mullins, Joshua
Partner: UNT Libraries Government Documents Department
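
One of the better-performing techniques named in the abstract above is the statistical tolerance interval. The following is a minimal sketch of a two-sided normal-theory tolerance interval using Howe's approximation for the tolerance factor; the coverage and confidence values, and the assumption of normality, are illustrative and do not reproduce the report's specific procedures.

```python
# Minimal sketch (not the report's code) of one technique named above: a two-sided
# normal-theory statistical tolerance interval used to conservatively bound the
# 0.025-0.975 percentile range of an unknown PDF from sparse samples. Uses Howe's
# approximation for the tolerance factor; coverage/confidence values are assumptions.
import numpy as np
from scipy import stats

def normal_tolerance_interval(x, coverage=0.95, confidence=0.90):
    """Interval claimed to contain `coverage` of the population with probability
    `confidence`, assuming normally distributed data."""
    x = np.asarray(x, dtype=float)
    n = x.size
    nu = n - 1
    z = stats.norm.ppf(0.5 * (1.0 + coverage))        # central 95% of a normal
    chi2 = stats.chi2.ppf(1.0 - confidence, nu)       # lower-tail chi-square quantile
    k = z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2)      # Howe's tolerance factor
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

# Example with only 8 samples: the interval is wide (conservative) by design.
rng = np.random.default_rng(0)
samples = rng.normal(loc=10.0, scale=2.0, size=8)
print(normal_tolerance_interval(samples))
```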

Modeling of lead-acid battery capacity loss in a photovoltaic application

Description: The authors have developed a model for the probabilistic behavior of a rechargeable battery acting as the energy storage component in a photovoltaic power supply system. Stochastic and deterministic models are created to simulate the behavior of the system components. The components are the solar resource, the photovoltaic power supply system, the rechargeable battery, and a load. One focus of this research is to model battery state of charge and battery capacity as a function of time. The capacity damage effect that occurs during deep discharge is introduced via a non-positive function of duration and depth of deep discharge events. Because the form of this function is unknown and varies with battery type, the authors model it with an artificial neural network (ANN) whose parameters are to be trained with experimental data. The battery capacity loss model will be described and a numerical example will be presented showing the predicted battery life under different PV system use scenarios.
Date: April 12, 2000
Creator: Jungst, Rudolph G.; Urbina, Angel & Paez, Thomas L.
Partner: UNT Libraries Government Documents Department
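
The abstract above describes modeling the capacity-damage function with an artificial neural network trained on experimental data. The sketch below shows the general idea with a small network fit to synthetic (not experimental) data; the damage law used to generate the training targets, the units, and the network architecture are placeholder assumptions.

```python
# Illustrative sketch of the idea described above: fit a small neural network that
# maps the depth and duration of a deep-discharge event to a non-positive capacity
# change. The network, training data, and units here are synthetic placeholders,
# not the trained model or data from the report.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic training events: depth of discharge (fraction) and duration (hours).
depth = rng.uniform(0.5, 1.0, size=500)
duration = rng.uniform(0.5, 24.0, size=500)
X = np.column_stack([depth, duration])

# Assumed "true" damage law for generating training targets (loss magnitude >= 0).
loss_magnitude = 0.002 * depth**2 * np.log1p(duration) + rng.normal(0, 1e-4, 500)
loss_magnitude = np.clip(loss_magnitude, 0.0, None)

# Fit the ANN to the loss magnitude; negate the prediction so the modeled
# capacity change is guaranteed non-positive, as the abstract requires.
ann = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
ann.fit(X, loss_magnitude)

def capacity_change(d, t):
    """Predicted (non-positive) change in capacity for one deep-discharge event."""
    return -max(ann.predict([[d, t]])[0], 0.0)

print(capacity_change(0.9, 12.0))  # e.g., a small negative fractional loss
```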

Development of a fourth generation predictive capability maturity model.

Description: The Predictive Capability Maturity Model (PCMM) is an expert elicitation tool designed to characterize and communicate the completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. The primary application of this tool at Sandia National Laboratories (SNL) has been physics-based computational simulations in support of nuclear weapons applications. The two main goals of a PCMM evaluation are (1) the accurate and transparent communication of computational simulation capability and (2) the development of input for effective planning. As a result of the increasing importance of computational simulation to SNL's mission, the PCMM has evolved through multiple generations with the goal of providing more clarity, rigor, and completeness in its application. This report describes the approach used to develop the fourth generation of the PCMM.
Date: September 1, 2013
Creator: Hills, Richard Guy; Witkowski, Walter R.; Urbina, Angel; Rider, William J. & Trucano, Timothy Guy
Partner: UNT Libraries Government Documents Department
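
As context for how a PCMM evaluation might be recorded, the sketch below represents an element-by-element maturity assessment compared against the maturity required for the intended application. The element names follow earlier published PCMM descriptions; the 0-3 scale and the gap-reporting helper are illustrative conveniences, not the fourth-generation methodology itself.

```python
# A minimal, hypothetical sketch of how a PCMM scoresheet might be represented for
# expert elicitation: each element is scored on a 0-3 maturity scale and compared
# against the level required by the intended application. Element names follow
# earlier published PCMM descriptions; the scoring helper itself is illustrative.
from dataclasses import dataclass
from typing import Dict

MATURITY_LEVELS = (0, 1, 2, 3)  # 0 = low/informal ... 3 = high rigor

@dataclass
class PCMMAssessment:
    required: Dict[str, int]   # maturity required for the intended application
    assessed: Dict[str, int]   # maturity judged by the expert panel

    def gaps(self) -> Dict[str, int]:
        """Elements whose assessed maturity falls short of the requirement."""
        return {e: self.required[e] - self.assessed.get(e, 0)
                for e in self.required
                if self.assessed.get(e, 0) < self.required[e]}

assessment = PCMMAssessment(
    required={"Code Verification": 2, "Solution Verification": 2,
              "Model Validation": 3, "Uncertainty Quantification": 2},
    assessed={"Code Verification": 2, "Solution Verification": 1,
              "Model Validation": 2, "Uncertainty Quantification": 2},
)
print(assessment.gaps())  # -> {'Solution Verification': 1, 'Model Validation': 1}
```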

Status and Integrated Road-Map for Joints Modeling Research

Description: The constitutive behavior of mechanical joints is largely responsible for the energy dissipation and vibration damping in weapons systems. For reasons arising from the dramatically different length scales associated with those dissipative mechanisms and the length scales characteristic of the overall structure, this physics cannot be captured adequately through direct simulation of the contact mechanics within a structural dynamics analysis. The only practical method for accommodating the nonlinear nature of joint mechanisms within structural dynamic analysis is through constitutive models employing degrees of freedom natural to the scale of structural dynamics. This document discusses a road-map for developing such constitutive models.
Date: March 1, 2003
Creator: SEGALMAN, DANIEL J.; SMALLWOOD, DAVID ORA; SUMALI, HARTONO; PAEZ, THOMAS L. & URBINA, ANGEL
Partner: UNT Libraries Government Documents Department
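
As an illustration of what a whole-joint constitutive model with structural-dynamics-scale degrees of freedom can look like, the sketch below implements a single Jenkins element (a linear spring in series with a Coulomb friction slider), one of the simplest hysteretic joint models. It is offered only as an example of the class of models the road-map concerns; the element choice and parameter values are assumptions, not content from the report.

```python
# Hedged illustration (not the road-map's model): a single Jenkins element, i.e. a
# linear spring in series with a Coulomb friction slider, which is one of the
# simplest whole-joint constitutive models posed at the scale of structural
# dynamics. Parameter values are arbitrary placeholders.
import numpy as np

def jenkins_force_history(u, k=1.0e6, f_slip=100.0):
    """Joint force history for an imposed displacement history u
    (elastic-perfectly-plastic hysteresis). k is the stick stiffness,
    f_slip the slip force."""
    f = np.zeros_like(u)
    for i in range(1, len(u)):
        trial = f[i - 1] + k * (u[i] - u[i - 1])   # stick (elastic) prediction
        f[i] = np.clip(trial, -f_slip, f_slip)     # slide if the slider yields
    return f

# One displacement cycle large enough to cause slipping at both extremes.
t = np.linspace(0.0, 2.0 * np.pi, 2001)
u = 2.0e-4 * np.sin(t)
f = jenkins_force_history(u)

# Energy dissipated per cycle = area enclosed by the force-displacement loop.
dissipated = np.trapz(f, u)
print(f"energy dissipated per cycle: {dissipated:.3e} J")
```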

Advanced Signal Processing for Thermal Flaw Detection

Description: Dynamic thermography is a promising technology for inspecting metallic and composite structures used in high-consequence industries. However, the reliability and inspection sensitivity of this technology have historically been limited by the need for extensive operator experience and by the use of human judgment and visual acuity to detect flaws in the large volume of infrared image data collected. To overcome these limitations, new automated data-analysis algorithms and software are needed. The primary objectives of this research effort were to develop a data processing methodology that is tied to the underlying physics, that reduces or removes the data interpretation requirements, and that eliminates the need to examine large numbers of data frames to determine whether a flaw is present. Considering the strengths and weaknesses of previous research efforts, this research elected to couple the temporal and spatial attributes of the surface temperature. Of the algorithms investigated, the best performing was a radiance-weighted root-mean-square Laplacian metric that included a multiplicative surface-effect correction factor and a novel spatio-temporal parametric model for data smoothing. This metric demonstrated the potential for detecting flaws smaller than 0.075 inch in inspection areas on the order of one square foot. Included in this report are the development of a thermal imaging model, a weighted least-squares thermal data smoothing algorithm, simulation and experimental flaw detection results, and an overview of the ATAC (Automated Thermal Analysis Code) software that was developed to analyze thermal inspection data.
Date: September 1, 2001
Creator: VALLEY, MICHAEL T.; HANSCHE, BRUCE D.; PAEZ, THOMAS L.; URBINA, ANGEL & ASHBAUGH, DENNIS M.
Partner: UNT Libraries Government Documents Department
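
The abstract above names a radiance-weighted root-mean-square Laplacian metric as the best-performing flaw indicator. The sketch below gives a rough, hypothetical version of that kind of metric: smooth each frame, take its spatial Laplacian, weight by the local radiance, and reduce over time with an RMS. The report's surface-effect correction factor and spatio-temporal parametric smoother are not reproduced; the Gaussian smoothing, synthetic data, and all parameter values are assumptions.

```python
# Rough, hypothetical sketch of the kind of metric described above. The report's
# actual metric includes a surface-effect correction and a spatio-temporal
# parametric smoother that are not reproduced here; values are illustrative.
import numpy as np
from scipy import ndimage

def rms_laplacian_map(frames, sigma=1.5):
    """frames: array (n_frames, ny, nx) of surface radiance/temperature.
    Returns a per-pixel flaw-indication map."""
    frames = np.asarray(frames, dtype=float)
    metric = np.zeros(frames.shape[1:])
    for frame in frames:
        smoothed = ndimage.gaussian_filter(frame, sigma=sigma)  # suppress noise
        lap = ndimage.laplace(smoothed)                         # spatial curvature
        metric += (frame * lap) ** 2                            # radiance-weighted
    return np.sqrt(metric / len(frames))

# Synthetic example: a cooling plate with one small subsurface "flaw".
ny, nx, n_frames = 64, 64, 20
flaw = np.zeros((ny, nx))
flaw[30:33, 40:43] = 1.0
frames = [np.exp(-0.1 * k) * (1.0 + 0.2 * flaw) + 0.01 * np.random.randn(ny, nx)
          for k in range(n_frames)]
indication = rms_laplacian_map(np.array(frames))
print(np.unravel_index(indication.argmax(), indication.shape))  # should land near the flaw
```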

Description of the Sandia Validation Metrics Project

Description: This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.
Date: August 1, 2001
Creator: Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel; Romero, Vicente J. et al.
Partner: UNT Libraries Government Documents Department

Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) verification and validation plan. Version 1.

Description: The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. To meet this objective, NEAMS Waste IPSC M&S capabilities will be applied to challenging spatial domains, temporal domains, multiphysics couplings, and multiscale couplings. A strategic verification and validation (V&V) goal is to establish evidence-based metrics for the level of confidence in M&S codes and capabilities. Because it is economically impractical to apply the maximum V&V rigor to each and every M&S capability, M&S capabilities will be ranked for their impact on the performance assessments of various components of the repository systems. Those M&S capabilities with greater impact will require a greater level of confidence and a correspondingly greater investment in V&V. This report includes five major components: (1) a background summary of the NEAMS Waste IPSC to emphasize M&S challenges; (2) the conceptual foundation for verification, validation, and confidence assessment of NEAMS Waste IPSC M&S capabilities; (3) specifications for the planned verification, validation, and confidence-assessment practices; (4) specifications for the planned evidence information management system; and (5) a path forward for the incremental implementation of this V&V plan.
Date: January 1, 2011
Creator: Bartlett, Roscoe Ainsworth; Arguello, Jose Guadalupe, Jr.; Urbina, Angel; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A. et al.
Partner: UNT Libraries Government Documents Department
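
The abstract above describes ranking M&S capabilities by their impact on the performance assessment and investing V&V effort accordingly. The sketch below shows a toy version of that graded approach; the capability names, impact scores, and rigor tiers are invented for illustration and are not part of the plan.

```python
# Hypothetical sketch of the graded approach described above: rank each M&S
# capability by its impact on the repository performance assessment and let that
# ranking set the required level of V&V rigor (and thus the V&V investment).
# The capability names, impact scores, and rigor tiers are illustrative only.
def required_rigor(impact_score):
    """Map a 0-1 impact score to a rigor tier (higher tier = more V&V evidence)."""
    if impact_score >= 0.8:
        return 3   # full code/solution verification, formal validation, UQ
    if impact_score >= 0.5:
        return 2   # targeted verification and validation evidence
    return 1       # basic software quality and sanity checks

capabilities = {
    "Waste-form degradation": 0.9,
    "Near-field flow and transport": 0.7,
    "Visualization utilities": 0.2,
}
for name, impact in sorted(capabilities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: impact={impact:.1f} -> rigor tier {required_rigor(impact)}")
```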