48 Matching Results

Redundant computing for exascale systems.

Description: Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience. (An illustrative checkpoint-overhead sketch follows this entry.)
Date: December 1, 2010
Creator: Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A. et al.
Partner: UNT Libraries Government Documents Department
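
The checkpoint/restart overhead claimed above can be approximated with the standard Young/Daly checkpointing model. The sketch below is not taken from this paper; the node count, per-node MTBF, and dump times are illustrative assumptions only.

```python
import math

def wasted_fraction(nodes, node_mtbf_h, ckpt_min, restart_min):
    """First-order Young/Daly estimate of run time lost to checkpoints,
    restarts, and redone work, assuming random (exponential) failures."""
    mtbf_h = node_mtbf_h / nodes            # system MTBF shrinks with node count
    ckpt_h = ckpt_min / 60.0
    restart_h = restart_min / 60.0
    tau = math.sqrt(2.0 * ckpt_h * mtbf_h)  # optimal interval between checkpoints
    waste = ckpt_h / tau + (tau / 2.0 + restart_h) / mtbf_h
    return min(1.0, waste), tau

# Hypothetical inputs: 50,000 nodes, 10-year per-node MTBF, 10-minute dumps.
waste, tau = wasted_fraction(50_000, 10 * 8760, 10, 10)
print(f"checkpoint every ~{tau * 60:.0f} min, wasted time ~{waste:.0%}")
```

With these assumed numbers the estimate already exceeds 50 percent wasted time, consistent with the abstract's claim for machines of this scale.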

Within-Channel Redundancy Versus Between-Channel Redundancy in Instructional Material and Its Association with Amount Learned

Description: The problem of this study is whether between-channel redundancy in an instructional audio-visual message enhances immediate recall of information more than within-channel redundancy. A secondary purpose was to compare three forms of between-channel redundancy (audio-video, audio-video-caption, and audio-caption) with one form of within-channel redundancy (video-caption). These comparisons were designed to demonstrate which form of redundancy had a higher association with recall of information. The subjects were administered the Kentucky Comprehensive Listening Inventory to measure listening skills, and the Receiver Apprehension Inventory to identify subjects who experienced significantly high apprehension as receivers of information. The subjects were then randomly divided into four treatment groups and shown an eight-minute newscast. All four groups were presented the same instructional message, but the mode of presentation differed depending upon the treatment group. After viewing the instructional program, each member of each group was given a forty-item multiple-choice retention inventory based on the information presented in the newscast. The data were presented in terms of correct responses on the Kentucky Comprehensive Listening Inventory and the forty-item retention inventory. Discriminant analysis was used to determine which items from the multiple-choice retention inventory accounted for the most variance; thirteen items were found to account for the greatest amount of variance. Reliability estimates were calculated for all four story categories and for the forty items collectively. All reliability estimates were acceptable.
Date: May 1985
Creator: Evans, Sharon A. (Sharon Ann), 1954-
Partner: UNT Libraries

Implications of turbulence interactions: A path toward addressing very high Reynolds number flows

Description: The classical 'turbulence problem' is narrowed down and redefined for scientific and engineering applications. From an application perspective, accurate computation of the large-scale transport of turbulent flows is needed. In this paper, a scaling analysis that allows the large scales of very high Reynolds number turbulent flows to be handled by available supercomputers is proposed. Current understanding of turbulence interactions of incompressible turbulence, which forms the foundation of our argument, is reviewed. Furthermore, the data redundancy in the inertial range is demonstrated. Two distinctive interactions, namely the distant and near-grid interactions, are inspected for large-scale simulations. The distant interactions in the subgrid scales in an inertial range can be effectively modelled by an eddy damping. The near-grid interactions must be carefully incorporated. (A resolution-scaling sketch follows this entry.)
Date: May 15, 2006
Creator: Zhou, Y
Partner: UNT Libraries Government Documents Department
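
The motivation for resolving only the large scales can be seen from the standard Kolmogorov estimate that fully resolving all scales of a turbulent flow requires roughly Re^(9/4) grid points. This is a textbook back-of-the-envelope figure, not the paper's scaling analysis:

```python
def dns_grid_points(reynolds):
    """Classical Kolmogorov estimate: direct numerical simulation of all scales
    needs on the order of Re**(9/4) grid points."""
    return reynolds ** 2.25

for re in (1e4, 1e6, 1e8):
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.1e} grid points to resolve all scales")
```

At Re ~ 10^8 the count reaches roughly 10^18 points, far beyond any foreseeable machine, which is why modelling the distant (subgrid) interactions and simulating only the large scales is attractive.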

ROSE::FTTransform - A Source-to-Source Translation Framework for Exascale Fault-Tolerance Research

Description: Exascale computing systems will require sufficient resilience to tolerate numerous types of hardware faults while still assuring correct program execution. Such extreme-scale machines are expected to be dominated by processors driven at lower voltages (near the minimum 0.5 volts for current transistors). At these voltage levels, the rate of transient errors increases dramatically due to the sensitivity to transient and geographically localized voltage drops on parts of the processor chip. To achieve power efficiency, these processors are likely to be streamlined and minimal, and thus they cannot be expected to handle transient errors entirely in hardware. Here we present an open, compiler-based framework to automate the armoring of High Performance Computing (HPC) software to protect it from these types of transient processor errors. We develop an open infrastructure to support research work in this area, and we define tools that, in the future, may provide more complete automated and/or semi-automated solutions to support software resiliency on future exascale architectures. Results demonstrate that our approach is feasible, pragmatic in how it can be separated from the software development process, and reasonably efficient (0% to 30% overhead for the Jacobi iteration on common hardware; and 20%, 40%, 26%, and 2% overhead for a randomly selected subset of benchmarks from the Livermore Loops [1]). (A small duplicate-and-check sketch follows this entry.)
Date: March 26, 2012
Creator: Lidman, J; Quinlan, D; Liao, C & McKee, S
Partner: UNT Libraries Government Documents Department
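
A tool in this space typically armors code by duplicating computations and checking that the copies agree. The following is a minimal, hedged sketch of that duplicate-and-check idea in Python; it is not the ROSE::FTTransform API, and the names used here are hypothetical.

```python
class TransientFault(RuntimeError):
    """Raised when redundant executions of a computation disagree."""

def armored(compute, *args, copies=2):
    """Run a pure computation several times and compare the results,
    mimicking the redundancy a source-to-source translator might insert."""
    results = [compute(*args) for _ in range(copies)]
    if any(r != results[0] for r in results[1:]):
        raise TransientFault(f"redundant executions disagree: {results}")
    return results[0]

# Usage sketch: one Jacobi-style stencil update, executed twice and checked.
value = armored(lambda n, s, e, w: 0.25 * (n + s + e + w), 1.0, 2.0, 3.0, 4.0)
print(value)  # 2.5
```

In a real transformation the duplication happens at the statement level in the generated source, which is where overheads like those quoted above come from.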

A logic model approach to conceptual design of a scientific/industrial complex.

Description: The development of a large-scale scientific/industrial complex is a difficult undertaking. At the conceptual design stage a number of issues must be carefully resolved for the project to be successful. A few of these questions include: What is the purpose of the complex? What capabilities are needed to express this purpose? How should the facilities that contain these capabilities be arranged and in what order should they be constructed? Which capabilities are most important in fulfilling the complex's mission, and therefore what redundancy is needed? Answers to these questions are needed whether the complex under consideration is planned or already exists and is undergoing a major restructuring. In the latter case additional constraints may exist, such as the need to maintain production, to accommodate new missions, and possibly to continue to use certain facilities. The sheer size of the complexes under consideration here means that there will be many shareholders with a stake in the decision process and that the potential consequences of a poor decision could be severe. Different design options will exist in terms of scope and possibly in terms of schedule. Shareholders will value these options differently. Therefore a mechanism is needed to take into account the varying perspectives of the shareholders and to prioritize the options.
Date: January 1, 2001
Creator: Bott, T. F. (Terrence F.) & Eisenhawer, S. W. (Stephen W.)
Partner: UNT Libraries Government Documents Department

Mitotic Exit Control as an Evolved Complex System

Description: The exit from mitosis is the last critical decision a cell has to make during a division cycle. A complex regulatory system has evolved to evaluate the success of mitotic events and control this decision. Whereas outstanding genetic work in yeast has led to rapid discovery of a large number of interacting genes involved in the control of mitotic exit, it has also become increasingly difficult to comprehend the logic and mechanistic features embedded in the complex molecular network. Our view is that this difficulty stems in part from the attempt to explain mitotic exit control using concepts from traditional top-down engineering design, and that exciting new results from evolutionary engineering design applied to networks and electronic circuits may lend better insights. We focus on four particularly intriguing features of the mitotic exit control system: the two-stepped release of Cdc14; the self-activating nature of Tem1 GTPase; the spatial sensor associated with the spindle pole body; and the extensive redundancy in the mitotic exit network. We attempt to examine these design features from the perspective of evolutionary design and complex system engineering.
Date: April 25, 2005
Creator: Bosl, W & Li, R
Partner: UNT Libraries Government Documents Department

Communication Optimizations for Fine-Grained UPC Applications

Description: Global address space languages like UPC exhibit high performance and portability on a broad class of shared and distributed memory parallel architectures. The most scalable applications use bulk memory copies rather than individual reads and writes to the shared space, but finer-grained sharing can be useful for scenarios such as dynamic load balancing, event signaling, and distributed hash tables. In this paper we present three optimization techniques for global address space programs with fine-grained communication: redundancy elimination, use of split-phase communication, and communication coalescing. Parallel UPC programs are analyzed using static single assignment form and a data flow graph, which are extended to handle the various shared and private pointer types that are available in UPC. The optimizations also take advantage of UPC's relaxed memory consistency model, which reduces the need for cross thread analysis. We demonstrate the effectiveness of the analysis and optimizations using several benchmarks, which were chosen to reflect the kinds of fine-grained, communication-intensive phases that exist in some larger applications. The optimizations show speedups of up to 70 percent on three parallel systems, which represent three different types of cluster network technologies. (A small communication-coalescing sketch follows this entry.)
Date: July 8, 2005
Creator: Chen, Wei-Yu; Iancu, Costin & Yelick, Katherine
Partner: UNT Libraries Government Documents Department
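
Communication coalescing, one of the three optimizations named above, replaces many fine-grained remote reads with a single bulk transfer. The sketch below illustrates the idea in Python against a mock remote array; it is not UPC code, and remote_get/remote_get_bulk are hypothetical stand-ins for one-sided get operations.

```python
# Mock remote data and one-sided get operations (hypothetical stand-ins).
REMOTE = list(range(1_000))

def remote_get(index):
    """Fine-grained access: one network round trip per element."""
    return REMOTE[index]

def remote_get_bulk(lo, hi):
    """Coalesced access: one round trip for a whole contiguous block."""
    return REMOTE[lo:hi]

def total_naive(indices):
    return sum(remote_get(i) for i in indices)       # len(indices) round trips

def total_coalesced(indices):
    lo, hi = min(indices), max(indices) + 1
    block = remote_get_bulk(lo, hi)                   # a single round trip
    return sum(block[i - lo] for i in indices)

idx = range(100, 200)
assert total_naive(idx) == total_coalesced(idx)
print(total_coalesced(idx))
```

The other two optimizations, split-phase communication and redundancy elimination, likewise hide or remove round trips rather than shrinking computation.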

Advanced Benchmarking for Complex Building Types: Laboratories as an Exemplar

Description: Complex buildings such as laboratories, data centers and cleanrooms present particular challenges for energy benchmarking because it is difficult to normalize special requirements such as health and safety in laboratories and reliability (i.e., system redundancy to maintain uptime) in data centers, which significantly impact energy use. For example, air change requirements vary widely based on the type of work being performed in each laboratory space. We present methods and tools for energy benchmarking in laboratories, as an exemplar of a complex building type. First, we address whole-building energy metrics and normalization parameters. We present empirical methods based on simple data filtering as well as multivariate regression analysis on the Labs21 database. The regression analysis showed lab type, lab-area ratio and occupancy hours to be significant variables. Yet the dataset did not allow analysis of factors such as plug loads and air change rates, both of which are critical to lab energy use. The simulation-based method uses an EnergyPlus model to generate a benchmark energy intensity normalized for a wider range of parameters. We suggest that both these methods have complementary strengths and limitations. Second, we present "action-oriented" benchmarking, which extends whole-building benchmarking by utilizing system-level features and metrics such as airflow W/cfm to quickly identify a list of potential efficiency actions which can then be used as the basis for a more detailed audit. While action-oriented benchmarking is not an "audit in a box" and is not intended to provide the same degree of accuracy afforded by an energy audit, we demonstrate how it can be used to focus and prioritize audit activity and track performance at the system level. We conclude with key principles that are more broadly applicable to other complex building types. (A small regression sketch follows this entry.)
Date: August 1, 2010
Creator: Mathew, Paul A.; Clear, Robert; Kircher, Kevin; Webster, Tom; Lee, Kwang Ho & Hoyt, Tyler
Partner: UNT Libraries Government Documents Department
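
The whole-building regression described above normalizes energy use against drivers such as lab-area ratio and occupancy hours. Below is a minimal sketch of fitting such a normalization model by ordinary least squares; the toy records and the resulting coefficients are invented for illustration and are not Labs21 results.

```python
import numpy as np

# Toy records: [lab-area ratio, weekly occupancy hours] -> energy use intensity
features = np.array([[0.4, 60], [0.6, 80], [0.7, 100], [0.5, 168], [0.8, 168]])
eui = np.array([220.0, 310.0, 390.0, 330.0, 480.0])       # kBtu/sf-yr (invented)

X = np.column_stack([np.ones(len(features)), features])   # add intercept column
coef, *_ = np.linalg.lstsq(X, eui, rcond=None)             # ordinary least squares

def benchmark_eui(lab_area_ratio, occupancy_hours):
    """Predicted ('benchmark') EUI for a lab with the given characteristics."""
    return float(coef @ np.array([1.0, lab_area_ratio, occupancy_hours]))

print(f"benchmark EUI ~ {benchmark_eui(0.65, 120):.0f} kBtu/sf-yr")
```

A measured building can then be compared against its own predicted benchmark rather than against a single population average.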

Constructing a resilience index for the Enhanced Critical Infrastructure Protection program.

Description: Following recommendations made in Homeland Security Presidential Directive 7, which established a national policy for the identification and increased protection of critical infrastructure and key resources (CIKR) by Federal departments and agencies, the U.S. Department of Homeland Security (DHS) in 2006 developed the Enhanced Critical Infrastructure Protection (ECIP) program. The ECIP program aimed to provide a closer partnership with state, regional, territorial, local, and tribal authorities in fulfilling the national objective to improve CIKR protection. The program was specifically designed to identify protective measures currently in place in CIKR and to inform facility owners/operators of the benefits of new protective measures. The ECIP program also sought to enhance existing relationships between DHS and owners/operators of CIKR and to build relationships where none existed (DHS 2008; DHS 2009). In 2009, DHS and its protective security advisors (PSAs) began assessing CIKR assets using the ECIP program and ultimately produced individual protective measure and vulnerability values through the protective measure and vulnerability indices (PMI/VI). The PMI/VI assess the protective measures posture of individual facilities at their 'weakest link,' allowing for a detailed analysis of the most vulnerable aspects of the facilities (Schneier 2003), while maintaining the ability to produce an overall protective measures picture. The PMI has six main components (physical security, security management, security force, information sharing, protective measures assessments, and dependencies) and focuses on actions taken by a facility to prevent or deter the occurrence of an incident (Argonne National Laboratory 2009). As CIKR continue to be assessed using the PMI/VI and owners/operators better understand how they can prevent or deter incidents, academic research, practitioner emphasis, and public policy formation have increasingly focused on resilience as a necessary component of the risk management framework and infrastructure protection. This shift in focus toward resilience complements the analysis of protective measures by taking ...
Date: October 14, 2010
Creator: Fisher, R. E.; Bassett, G. W.; Buehring, W. A.; Collins, M. J.; Dickinson, D. C.; Eaton, L. K. et al.
Partner: UNT Libraries Government Documents Department

Using the Choquet integral for screening geological CO2 storage sites

Description: For geological CO2 storage site selection, it is desirable to reduce the number of candidate sites through a screening process before detailed site characterization is performed. Screening generally involves defining a number of criteria which then need to be evaluated for each site. The importance of each criterion to the final evaluation will generally be different. Weights reflecting the relative importance of these criteria can be provided by experts. To evaluate a site, each criterion must be evaluated and scored, and then aggregated, taking into account the importance of the criteria. We propose the use of the Choquet integral for aggregating the scores. The Choquet integral considers the interactions among criteria, i.e., whether they are independent, complementary to each other, or partially repetitive. We also evaluate the Shapley index, which demonstrates how the importance of a given piece of information may change if it is considered by itself or together with other available information. An illustrative example demonstrates how the Choquet integral properly accounts for the presence of redundancy in two site-evaluation criteria, making the screening process more defensible than the standard weighted-average approach. (A small Choquet-integral sketch follows this entry.)
Date: March 1, 2011
Creator: Zhang, Y.
Partner: UNT Libraries Government Documents Department
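
The discrete Choquet integral aggregates criterion scores against a non-additive measure (capacity), so partially overlapping criteria are not simply double-counted the way a weighted average counts them. A minimal sketch for two screening criteria follows; the criterion names and capacity values are invented for illustration and are not from this paper.

```python
def choquet(scores, mu):
    """Discrete Choquet integral of criterion scores with respect to a
    capacity mu: a dict mapping frozensets of criteria to weights, with
    mu(empty set) = 0 and mu(all criteria) = 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending by score
    names = [name for name, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, x) in enumerate(items):
        coalition = frozenset(names[i:])                   # criteria scoring >= x
        total += (x - prev) * mu[coalition]
        prev = x
    return total

# Hypothetical example: two partially redundant screening criteria.
scores = {"caprock_seal": 0.9, "storage_depth": 0.6}
mu = {frozenset(): 0.0,
      frozenset({"caprock_seal"}): 0.7,
      frozenset({"storage_depth"}): 0.7,
      frozenset({"caprock_seal", "storage_depth"}): 1.0}   # 0.7 + 0.7 > 1.0: overlap

print(f"Choquet score  = {choquet(scores, mu):.2f}")       # 0.81
print(f"simple average = {sum(scores.values()) / len(scores):.2f}")   # 0.75
```

Because the capacity of the pair is less than the sum of the singleton capacities, the two criteria are treated as partly redundant rather than as independent sources of evidence.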

The First Pan-WCRP Workshop on Monsoon Climate Systems: Toward Better Prediction of the Monsoons

Description: In 2004 the Joint Scientific Committee (JSC) that provides scientific guidance to the World Climate Research Programme (WCRP) requested an assessment of (1) WCRP monsoon-related activities and (2) the range of available observations and analyses in monsoon regions. The purpose of the assessment was to (a) define the essential elements of a pan-WCRP monsoon modeling strategy, (b) identify the procedures for producing this strategy, and (c) promote improvements in monsoon observations and analyses with a view toward their adequacy, and addressing any undue redundancy or duplication. As such, the WCRP sponsored the '1st Pan-WCRP Workshop on Monsoon Climate Systems: Toward Better Prediction of the Monsoons' at the University of California, Irvine, CA, USA from 15-17 June 2005. Experts from the two WCRP programs directly relevant to monsoon studies, the Climate Variability and Predictability Programme (CLIVAR) and the Global Energy and Water Cycle Experiment (GEWEX), gathered to assess the current understanding of the fundamental physical processes governing monsoon variability and to highlight outstanding problems in simulating the monsoon that can be tackled through enhanced cooperation between CLIVAR and GEWEX. The agenda with links to the presentations can be found at: http://www.clivar.org/organization/aamon/WCRPmonsoonWS/agenda.htm. Scientific motivation for a joint CLIVAR-GEWEX approach to investigating monsoons includes the potential for improved medium-range to seasonal prediction through better simulation of intraseasonal (30-60 day) oscillations (ISOs). ISOs are important for the onset of monsoons, as well as the development of active and break periods of rainfall during the monsoon season. Foreknowledge of the active and break phases of the monsoon is important for crop selection, the determination of planting times, and mitigation of potential flooding and short-term drought. With a few exceptions, simulations of ISOs are typically poor in all classes of modeling. Observational and modeling studies indicate that the diurnal cycle of radiative heating and surface ...
Date: July 27, 2005
Creator: Sperber, K R & Yasunari, T
Partner: UNT Libraries Government Documents Department

Module Safety Issues (Presentation)

Description: Description of how to make PV modules so that they are less likely to turn into safety hazards. Making modules inherently safer with minimum additional cost is the preferred approach for PV. Safety starts with module design to ensure redundancy within the electrical circuitry to minimize open circuits, and proper mounting instructions to prevent installation-related ground faults. Module manufacturers must control the raw materials and processes to ensure that every module is built like those qualified through the safety tests. This is the reason behind the QA task force effort to develop a 'Guideline for PV Module Manufacturing QA'. Periodic accelerated stress testing of production products is critical to validate the safety of the product. Combining safer PV modules with better system designs is the ultimate goal, especially for PV arrays on buildings. Options include the use of lower-voltage DC circuits (AC modules, DC-DC converters) and the use of arc detectors and interrupters that detect arcs and open the circuit to extinguish them.
Date: February 1, 2012
Creator: Wohlgemuth, J.
Partner: UNT Libraries Government Documents Department

DESIGN CONSIDERATIONS OF FAST KICKER SYSTEMS FOR HIGH INTENSITY PROTON ACCELERATORS

Description: In this paper, we discuss the specific issues related to the design of fast kicker systems for high intensity proton accelerators. Addressing these issues in the preliminary design stage can be critical, since the fast kicker systems affect the machine lattice structure and overall design parameters. Main topics include system architecture, design strategy, beam current coupling, grounding, end-user cost vs. system cost, reliability, redundancy and flexibility. Operating experience with the Alternating Gradient Synchrotron injection and extraction kicker systems at Brookhaven National Laboratory and their future upgrade is presented. Additionally, new conceptual designs of the extraction kicker for the Spallation Neutron Source at Oak Ridge and the Advanced Hydrotest Facility at Los Alamos are discussed.
Date: January 1, 2001
Creator: Parsons, W. M. (William Mark); Walstrom, P. L. (Peter L.); Murray, M. M. (Matthew M.); Zhang, Wu & Sandberg, J. (Jon)
Partner: UNT Libraries Government Documents Department

Measurement and correction of linear optics and coupling at the Tevatron complex

Description: Optics measurements have played an important role in improving the performance of the Tevatron collider. Until recently, most of them were based on differential orbit measurements with data analysis that neglects measurement inaccuracies such as differences in the differential responses of beam position monitors, their rolls, etc. To address these complications we have used a method based on the analysis of many differential orbits. This creates redundancy in the data, allowing a more detailed understanding of the machine. In this article we discuss the progress with Tevatron optics correction, its present status, and future improvements.
Date: January 1, 2006
Creator: Lebedev, V.; Nagaslaev, V.; Valishev, A.; /Fermilab; Sajaev, V. & /Argonne
Partner: UNT Libraries Government Documents Department

Supervisory control of a pilot-scale cooling loop

Description: We combine a previously developed strategy for Fault Detection and Identification (FDI) with a supervisory controller in closed loop. The combined method is applied to a model of a pilot-scale cooling loop of a nuclear plant, which includes Kalman filters and a model-based predictive controller as part of normal operation. The system has two valves available for flow control, meaning that some redundancy is available. The FDI method is based on likelihood ratios for different fault scenarios, which in turn are derived from the application of the Kalman filter. A previously introduced extension of the FDI method is used here to enable detection and identification of non-linear faults like stuck-valve problems and proper accounting of the time of fault introduction. The supervisory control system is designed to take different kinds of actions depending on the status of the fault diagnosis task and on the type of identified fault once diagnosis is complete. Some faults, like sensor bias and drift, are parametric in nature and can be adjusted without need for reconfiguration of the regulatory control system. Other faults, like a stuck-valve problem, require reconfiguration of the regulatory control system. The whole strategy is demonstrated for several scenarios. (A small likelihood-ratio sketch follows this entry.)
Date: August 1, 2011
Creator: Villez, Kris; Venkatasubramanian, Venkat & Garcia, Humberto
Partner: UNT Libraries Government Documents Department
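
Likelihood-ratio fault detection of the kind described above is commonly driven by Kalman-filter innovations (residuals). The sketch below shows that core idea for a single sensor-bias hypothesis on a scalar measurement; the noise level, bias size, and threshold are invented for illustration and this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.05        # assumed standard deviation of the Kalman-filter innovations
bias = 0.12         # hypothesized sensor bias under the fault scenario

# Simulated innovations: healthy for 50 steps, then a constant bias fault.
innov = rng.normal(0.0, sigma, 100)
innov[50:] += bias

def cusum_llr(residuals, threshold=10.0):
    """One-sided CUSUM of the log-likelihood ratio 'bias fault' vs. 'no fault'
    for Gaussian innovations; it resets at zero so old healthy data is forgotten."""
    stat, alarms = 0.0, []
    for k, r in enumerate(residuals):
        step = (r * bias - 0.5 * bias ** 2) / sigma ** 2   # per-sample LLR increment
        stat = max(0.0, stat + step)
        if stat > threshold:
            alarms.append(k)
    return alarms

alarms = cusum_llr(innov)
print("fault declared at step", alarms[0] if alarms else "never")  # shortly after step 50
```

Once a fault hypothesis wins, the supervisor can either re-estimate the biased parameter or reconfigure the loop, for example by switching to the redundant valve.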

The theory of diversity and redundancy in information system security: LDRD final report.

Description: The goal of this research was to explore first principles associated with mixing of diverse implementations in a redundant fashion to increase the security and/or reliability of information systems. Inspired by basic results in computer science on the undecidable behavior of programs and by previous work on fault tolerance in hardware and software, we have investigated the problem and solution space for addressing potentially unknown and unknowable vulnerabilities via ensembles of implementations. We have obtained theoretical results on the degree of security and reliability benefits from particular diverse system designs, and mapped promising approaches for generating and measuring diversity. We have also empirically studied some vulnerabilities in common implementations of the Linux operating system and demonstrated the potential for diversity to mitigate these vulnerabilities. Our results provide foundational insights for further research on diversity and redundancy approaches for information systems. (A small N-version voting sketch follows this entry.)
Date: October 1, 2010
Creator: Mayo, Jackson R. (Sandia National Laboratories, Livermore, CA); Torgerson, Mark Dolan; Walker, Andrea Mae; Armstrong, Robert C. (Sandia National Laboratories, Livermore, CA); Allan, Benjamin A. (Sandia National Laboratories, Livermore, CA) & Pierson, Lyndon George
Partner: UNT Libraries Government Documents Department
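
One simple way to combine diverse implementations redundantly, in the spirit of the ensembles studied above, is N-version majority voting: the same request is handled by independently developed implementations and the answer most of them agree on is returned. This is a generic illustration, not a result from the report, and the three versions below are hypothetical.

```python
from collections import Counter

def majority_vote(implementations, *args):
    """Run diverse implementations of the same function and return the answer
    the majority agrees on; a bug or exploit in a single version is outvoted."""
    answers = [impl(*args) for impl in implementations]
    value, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        raise RuntimeError(f"no majority among diverse versions: {answers}")
    return value

# Three hypothetical diverse versions of an input-length check; one is faulty.
versions = [
    lambda s: len(s) <= 8,        # correct implementation
    lambda s: len(s) < 9,         # correct, written differently
    lambda s: True,               # compromised or buggy version
]
print(majority_vote(versions, "0123456789"))   # False: the faulty version is outvoted
```

How much such an ensemble actually buys depends on how independent the versions' vulnerabilities are, which is the kind of question the report's theoretical analysis addresses.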

INCOSE Systems Engineering Handbook v3.2: Improving the Process for SE Practitioners

Description: The INCOSE Systems Engineering Handbook is the official INCOSE reference document for understanding systems engineering (SE) methods and conducting SE activities. Over the years, the Handbook has evolved to accommodate advances in the SE discipline and now serves as the basis for the Certified Systems Engineering Professional (CSEP) exam. Due to its evolution, the Handbook had become somewhat disjointed in its treatment and presentation of SE topics and was not aligned with the latest version of International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 15288:2008, Systems and Software Engineering. As a result, numerous inconsistencies were identified that could confuse practitioners and directly impact the probability of success in passing the CSEP exam. Further, INCOSE leadership had previously submitted v3.1 of the Handbook to ISO/IEC for consideration as a Technical Report, but was told that the Handbook would have to be updated to conform with the terminology and structure of the new ISO/IEC 15288:2008, Systems and software engineering, prior to being considered. The revised INCOSE Systems Engineering Handbook v3.2 aligns with the structure and principles of ISO/IEC 15288:2008 and presents the generic SE life-cycle process steps in their entirety, without duplication or redundancy, in a single location within the text. As such, the revised Handbook v3.2 serves as a comprehensive instructional and reference manual for effectively understanding SE processes and conducting SE, and better serves certification candidates preparing for the CSEP exam.
Date: July 1, 2010
Creator: Hamelin, R. Douglas; Walden, David D. & Krueger, Michael E.
Partner: UNT Libraries Government Documents Department

Fault Tolerance and Scaling in e-Science Cloud Applications: Observations from the Continuing Development of MODISAzure

Description: It can be natural to believe that many of the traditional issues of scale have been eliminated or at least greatly reduced via cloud computing. That is, if one can create a seemingly well-functioning cloud application that operates correctly on small or moderate-sized problems, then the very nature of cloud programming abstractions means that the same application will run as well on potentially significantly larger problems. In this paper, we present our experiences taking MODISAzure, our satellite data processing system built on the Windows Azure cloud computing platform, from the proof-of-concept stage to a point of being able to run on significantly larger problem sizes (e.g., from national-scale data sizes to global-scale data sizes). To our knowledge, this is the longest-running eScience application on the nascent Windows Azure platform. We found that while many infrastructure-level issues were thankfully masked from us by the cloud infrastructure, it was valuable to design additional redundancy and fault-tolerance capabilities such as transparent idempotent task retry and logging to support debugging of user code encountering unanticipated data issues. Further, we found that using a commercial cloud means anticipating inconsistent performance and black-box behavior of virtualized compute instances, as well as leveraging changing platform capabilities over time. We believe that the experiences presented in this paper can help future eScience cloud application developers on Windows Azure and other commercial cloud providers. (A small retry-with-logging sketch follows this entry.)
Date: April 1, 2010
Creator: Li, Jie; Humphrey, Marty; Cheah, You-Wei; Ryu, Youngryel; Agarwal, Deb; Jackson, Keith et al.
Partner: UNT Libraries Government Documents Department
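
One of the measures mentioned above is transparent idempotent task retry with logging. The decorator below is a minimal sketch of that pattern in Python; it is not MODISAzure code, and the retry count, delay, and task name are illustrative assumptions.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def idempotent_retry(max_attempts=3, delay_s=1.0):
    """Retry a task that is safe to re-run (idempotent), logging each failure
    so that unanticipated data problems can be diagnosed afterwards."""
    def decorate(task):
        @functools.wraps(task)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return task(*args, **kwargs)
                except Exception:
                    logging.exception("task %s failed (attempt %d/%d)",
                                      task.__name__, attempt, max_attempts)
                    if attempt == max_attempts:
                        raise
                    time.sleep(delay_s)
        return wrapper
    return decorate

@idempotent_retry(max_attempts=3, delay_s=0.1)
def reproject_tile(tile_id):
    """Stand-in for a per-tile processing task; re-running it is harmless."""
    return f"tile {tile_id} processed"

print(reproject_tile(42))
```

Because cloud instances can fail or behave inconsistently at any time, making tasks idempotent is what allows blind retry to be safe.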

1994 SSRL Activity Report

Description: SSRL, a division of the Stanford Linear Accelerator Center, is a national user facility which provides synchrotron radiation, a name given to x-rays or light produced by electrons circulating in a storage ring at nearly the speed of light. The synchrotron radiation is produced by the 3.3 GeV storage ring, SPEAR. SPEAR is a fully dedicated synchrotron radiation facility which has been operating for user experiments 6 to 7 months per year. 1994, the third year of operation of SSRL as a fully dedicated, low-emittance, independent user facility, was superb. The facility ran extremely well, delivering 89% of the scheduled user beam to 25 experimental stations during 6.5 months of user running. Over 600 users came from 167 institutions to participate in 343 experiments. Users from private industry were involved in 31% of these experiments. The SPEAR accelerator ran very well with no major component failures and an unscheduled down time of only 2.9%. In addition to this increased reliability, there was a significant improvement in the stability of the beam. The enhancements to the SPEAR orbit as part of a concerted three-year program were particularly noticeable to users. The standard deviation of beam movement (in both planes) in the last part of the run was 80 microns, representing major progress toward the ultimate goal of 50-micron stability. This was a significant improvement from the previous year when the movement was 400 microns in the horizontal and 200 microns in the vertical. A new accelerator Personal Protection System (PPS), built with full redundancy and providing protection from both radiation exposure and electrical hazards, was installed in 1994.
Date: November 18, 2011
Partner: UNT Libraries Government Documents Department

Internal Technical Report, Software Requirements and Design Guide for the 5MW(e) Raft River Pilot Plant

Description: The 5MW(e) electrical generating plant is a demonstrational unit intended to provide engineering data basic to improvement of geothermal-electric technology. It is anticipated that the plant will operate on a production basis after initial testing, and that periodic re-testing will be done to measure the effects of fouling in heat exchangers and aging generally. The initial tests will confirm engineering estimates of performance, and will identify optimum feasible operating conditions and maximum power generating capacity. They will also identify any anomalous plant behavior not foreseen. Several tests will lead to quantification of constant and variable terms used in thermodynamic relationships descriptive of plant and subsystem behavior. Because the product of the testing will be confirmatory engineering data heretofore unavailable, the plant has been carefully instrumented, with either explicit or implicit instrumentation redundancy for most parameters to be measured. The 5MW(e) data system collects the data directly from the instrumentation. Enhancements such as on-line analytical routines may be added later, but initially, data capture shall be the sole activity of the data system. The 5MW(e) plant converts the energy of geothermally heated water to electrical energy conforming to 60 Hz commercial power standards. The plant capacity is sufficiently large to be useful in a demonstrational sense, but is limited to a capacity conservative of the geothermal energy resource, which has not been fully explored. The 5MW(e) plant will continue on a regular production basis after completion of the testing. Additional production units may be added at a later time.
Date: May 15, 1980
Creator: Metcalf, D.D. & Cole, M.S.
Partner: UNT Libraries Government Documents Department

Monitor and Evaluate the Genetic Characteristics of Supplemented Salmon and Steelhead, 2006-2007 Progress Report.

Description: This progress report offers a summary of genetic monitoring and evaluation research related to artificial propagation of Chinook salmon and steelhead in the Snake River basin. Our principal goal has been to characterize the relative (and net) reproductive success of hatchery fish spawning in the wild in multiple sub-basins. We address a critical uncertainty identified in essentially all tribal, state, and federal recovery planning efforts. Beyond simple description of those patterns of differential reproductive success, we seek to understand the biotic and abiotic factors that contribute to our observations, including genetic and environmental elements, and the real time effects of hatchery reform. We adopt two fundamentally different approaches that capture processes operating at different geographic scales. Our tier 2 design monitors changes in gene frequency through time in hatchery and wild populations. These studies monitor spatial and temporal genetic change over broad river basins and sub-basins. Tier 3 studies, by contrast, are able to construct pedigrees in naturally spawning populations that include hatchery and wild fish. We can then use actual matings to infer the fitness of hatchery versus wild individuals, based on the numbers of offspring we observe in our progeny samples. We get extraordinary detail from the tier 3 studies but only for a single river system. Thus, tier 2 studies provide breadth of information, whereas tier 3 studies offer unparalleled depth of insight for single discrete systems. We exceeded our goals in almost all areas for both tier 2 and tier 3 studies, and, where we did not, we offer an explanation of why, and what future action will be taken (Lessons Learned). All subcontracts were let as expected, providing smolt trapping, tissue sampling, genotyping, and analysis. Our inter-laboratory standardization efforts with tribal, state, and federal agencies were highly successful in this period. These standardization activities have ...
Date: November 20, 2008
Creator: Berntson, Ewann; Waples, Robin S. & Moran, Paul
Partner: UNT Libraries Government Documents Department

Reconfigurable computing in space: from current technology to reconfigurable systems-on-a-chip.

Description: The performance, in-system reprogrammability, flexibility, and reduced costs of SRAM-based FPGAs make them very interesting for high-speed, on-orbit data processing, but, because the current generation of radiation-tolerant SRAM-based FPGAs are derived directly from COTS versions of the chips, several issues must be dealt with for space, including SEU sensitivities, power consumption, thermal problems, and support logic. This paper will discuss Los Alamos National Laboratory's approach to using the Xilinx XQVR1000 FPGAs for on-orbit processing in the Cibola Flight Experiment (CFE) as well as the possibilities and challenges of using newer, system-on-a-reprogrammable-chip FPGAs, such as Virtex II Pro, in space-based reconfigurable computing. The reconfigurable computing payload for CFE includes three processing boards, each having three radiation-tolerant Xilinx XQVR1000 FPGAs. The reconfigurable computing architecture for this project is intended for in-flight, real-time processing of two radio frequency channels, each producing 12-bit samples at 100 million samples/second. In this system, SEU disruptions in data path operations can be tolerated while disruptions in the control path are much less tolerable. With this system in mind, LANL has developed an SEU management scheme with strategies for handling upsets in all of the FPGA resources known to be sensitive to radiation-induced SEUs. While mitigation schemes for many resources will be discussed, the paper will concentrate on SEU management strategies and tools developed at LANL for the configuration bitstream and 'half latches'. To understand the behavior of specific designs under SEUs in the configuration bitstream, LANL and Brigham Young University have developed an SEU simulator using ISI's SLAAC1-V reconfigurable computing board. The simulator can inject single-bit upsets into a design's configuration bitstream to simulate SEUs and observe how these simulated SEUs affect the design's operation. Using fast partial configuration, the simulator can cover the entire bitstream of a Xilinx XQVR1000 FPGA, which has 6 million ... (A small fault-injection sketch follows this entry.)
Date: January 1, 2002
Creator: Graham, R. C. (Robert C.); Caffrey, M. P. (Michael Paul); Johnson, D. E. (Darrel Eric) & Wirthlin, M. J. (Michael J.)
Partner: UNT Libraries Government Documents Department
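
The SEU simulator described above works by flipping one configuration bit at a time and checking whether the design still behaves correctly. The loop below captures that inject-and-compare idea in Python over a mock bitstream and a mock design; it is not the LANL/BYU simulator, and the functions and sizes are hypothetical.

```python
import random

BITSTREAM_BITS = 1_000   # mock configuration size (real devices have millions of bits)

def design_output(bitstream):
    """Mock 'design under test': the output depends on a few configuration bits,
    standing in for actually running the configured hardware."""
    return sum(bitstream[i] for i in (3, 57, 400)) % 2

def inject_seu(bitstream, bit_index):
    """Return a copy of the bitstream with one bit flipped (a simulated SEU)."""
    flipped = list(bitstream)
    flipped[bit_index] ^= 1
    return flipped

random.seed(1)
golden = [random.randint(0, 1) for _ in range(BITSTREAM_BITS)]
expected = design_output(golden)

# Flip every bit in turn, as a fast-partial-configuration simulator can,
# and record which upsets actually change the design's behavior.
sensitive = [i for i in range(BITSTREAM_BITS)
             if design_output(inject_seu(golden, i)) != expected]
print(f"{len(sensitive)} of {BITSTREAM_BITS} bits are sensitive: {sensitive}")
```

The fraction of bits that turn out to be sensitive for a given design is what drives the choice of mitigation, such as scrubbing or triple modular redundancy.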

USING COPULAS TO MODEL DEPENDENCE IN SIMULATION RISK ASSESSMENT

Description: Typical engineering systems in applications with high failure consequences, such as nuclear reactor plants, often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, which has been employed by the actuarial community for ten years or more but has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling. (A small copula-sampling sketch follows this entry.)
Date: November 1, 2007
Creator: Kelly, Dana L.
Partner: UNT Libraries Government Documents Department
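
A common concrete choice is the Gaussian copula: correlated standard-normal draws are mapped through the normal CDF to uniforms, which are then pushed through each component's marginal distribution. The sketch below (in Python rather than the R/WinBUGS tools named in the abstract) samples two dependent exponential failure times; the correlation and failure rates are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                   # assumed dependence between the two trains
rates = np.array([1 / 5000.0, 1 / 8000.0])  # assumed failure rates (per hour)

# Gaussian copula: correlated normals -> uniforms -> exponential marginals.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = stats.norm.cdf(z)                            # uniforms carrying the dependence
times = stats.expon.ppf(u, scale=1.0 / rates)    # dependent failure times in hours

# Probability that both redundant trains fail within one year (8760 h):
both_fail = np.mean(np.all(times < 8760.0, axis=1))
independent = np.prod(stats.expon.cdf(8760.0, scale=1.0 / rates))
print(f"P(both fail in a year): dependent ~ {both_fail:.3f}, independent ~ {independent:.3f}")
```

The gap between the dependent and independent estimates is the effect that implicit common-cause-failure parameters try to capture and that the copula makes explicit.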

DOE Partnerships with States, Tribes and Other Federal Programs Help Responders Prepare for Challenges Involving Transport of Radioactive Materials

Description: DOE Partnerships with States, Tribes and Other Federal Programs Help Responders Prepare for Challenges Involving Transport of Radioactive Materials Implementing adequate institutional programs and validating preparedness for emergency response to radiological transportation incidents along or near U.S. Department of Energy (DOE) shipping corridors poses unique challenges to transportation operations management. Delayed or insufficient attention to State and Tribal preparedness needs may significantly impact the transportation operations schedule and budget. The DOE Transportation Emergency Preparedness Program (TEPP) has successfully used a cooperative planning process to develop strong partnerships with States, Tribes, Federal agencies and other national programs to support responder preparedness across the United States. DOE TEPP has found that building solid partnerships with key emergency response agencies ensures responders have access to the planning, training, technical expertise and assistance necessary to safely, efficiently and effectively respond to a radiological transportation accident. Through the efforts of TEPP over the past fifteen years, partnerships have resulted in States and Tribal Nations either using significant portions of the TEPP planning resources in their programs and/or adopting the Modular Emergency Response Radiological Transportation Training (MERRTT) program into their hazardous material training curriculums to prepare their fire departments, law enforcement, hazardous materials response teams, emergency management officials, public information officers and emergency medical technicians for responding to transportation incidents involving radioactive materials. In addition, through strong partnerships with Federal Agencies and other national programs TEPP provided technical expertise to support a variety of radiological response initiatives and assisted several programs with integration of the nationally recognized MERRTT program into other training venues, thus ensuring consistency of radiological response curriculums delivered to responders. This presentation will provide an overview of the steps to achieve coordination, to avoid redundancy, and to highlight several of the successful partnerships TEPP has formed with States, Tribes, Federal agencies and other ...
Date: February 1, 2001
Creator: Keister, Marsha
Partner: UNT Libraries Government Documents Department