Search Results

On the Reversibility of Newton-Raphson Root-Finding Method

Description: Reversibility of a computational method is the ability to execute the method backward as well as forward. Reversible computational methods are useful for undoing incorrect computation in speculative execution settings designed for efficient parallel processing. Here, the reversibility of a common component of scientific codes, the Newton-Raphson root-finding method, is explored. A reverse method is proposed that retraces the sequence of points visited by the forward method during its iterations. Given the root and the number of iterations of the forward method, the reverse method backtracks along this sequence of points to recover the original starting point of the forward method. The operation of the reverse method is illustrated on a few example functions, highlighting the method's strengths and shortcomings.
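
The forward iteration is x_{k+1} = x_k - f(x_k)/f'(x_k); reversing a step means recovering x_k from x_{k+1}. Below is a minimal Python sketch of that idea, not the paper's algorithm: it assumes the forward iterates stay on one side of the root, where the update map is monotone, and uses the known root (an input to the reverse method, as the abstract notes) as one end of a bisection bracket.

```python
# A minimal sketch (not the paper's algorithm): reverse Newton-Raphson by
# numerically inverting the forward update g(x) = x - f(x)/f'(x).
import math

def newton_forward(f, df, x0, iters):
    """Forward Newton-Raphson, recording every visited point."""
    path = [x0]
    for _ in range(iters):
        path.append(path[-1] - f(path[-1]) / df(path[-1]))
    return path

def newton_reverse(f, df, x_end, iters, bracket, tol=1e-13):
    """Backtrack from x_end by solving g(x) = target for x at each step.
    Assumes the forward iterates lie in `bracket`, on one side of the root,
    where g is monotone; the known root is a natural lower bracket end."""
    g = lambda x: x - f(x) / df(x)
    path, target = [x_end], x_end
    for _ in range(iters):
        lo, hi = bracket
        while hi - lo > tol:                    # plain bisection on g(x) - target
            mid = 0.5 * (lo + hi)
            if (g(lo) - target) * (g(mid) - target) <= 0.0:
                hi = mid
            else:
                lo = mid
        target = 0.5 * (lo + hi)
        path.append(target)
    return path

f  = lambda x: x * x - 2.0                      # root at sqrt(2)
df = lambda x: 2.0 * x
fwd = newton_forward(f, df, x0=3.0, iters=4)
rev = newton_reverse(f, df, x_end=fwd[-1], iters=4, bracket=(math.sqrt(2.0), 4.0))
print([round(v, 9) for v in fwd])
print([round(v, 9) for v in rev])               # fwd retraced in reverse order
```

With more forward iterations the final iterates collapse onto the root at machine precision and the later reverse steps can no longer distinguish them, which illustrates the kind of shortcoming the abstract alludes to.
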
Date: July 1, 2008
Creator: Perumalla, Kalyan S.; Wright, John P. & Kuruganti, Phani Teja
Partner: UNT Libraries Government Documents Department

Using the Domain Name System to Thwart Automated Client-Based Attacks

Description: On the Internet, attackers can compromise systems owned by other people and then use these systems to launch attacks automatically. When attacks such as phishing or SQL injection are successful, they can have negative consequences, including server downtime and the loss of sensitive information. Current methods to prevent such attacks are limited in that they are application-specific or fail to block attackers. Phishing attempts can be stopped with email filters, but if an attacker manages to bypass these filters, the user must determine whether the email is legitimate, and users often are unable to do so. Since attackers have a low success rate, they compensate with volume. To achieve this high throughput, attackers take shortcuts and break protocols. We use this knowledge to build a system that detects such malicious activity and uses it to block attacks. If a client fails to follow proper procedure, it can be classified as an attacker. Once an attacker has been discovered, it is isolated and monitored. This can be accomplished using existing software in Ubuntu Linux, along with our custom wrapper application. After running the system and measuring its performance on three popular Web browsers (Chromium, Firefox, and Internet Explorer) and two popular email clients (Thunderbird and Evolution), we found that the system is not only feasible but also effective and low in overhead.
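
As a rough illustration of the "fails to follow proper procedure" idea, the sketch below flags clients that connect to a protected server without first resolving its hostname through DNS. The addresses, TTL, and detection rule are hypothetical and only gesture at the approach; they are not the authors' implementation.

```python
# Hedged illustration (not the authors' system): flag clients that connect to
# a protected server's IP without first resolving its hostname via DNS.
import time

SERVER_IP = "192.0.2.10"     # hypothetical protected server
DNS_TTL = 300                # seconds a resolution stays valid (assumed)

recent_lookups = {}          # client IP -> time of last DNS query for our name

def record_dns_query(client_ip: str) -> None:
    """Called when the DNS server answers a query for the protected hostname."""
    recent_lookups[client_ip] = time.time()

def classify_connection(client_ip: str, dest_ip: str) -> str:
    """Classify a new connection as 'legitimate' or 'suspicious'."""
    if dest_ip != SERVER_IP:
        return "legitimate"
    looked_up = recent_lookups.get(client_ip)
    if looked_up is None or time.time() - looked_up > DNS_TTL:
        return "suspicious"  # connected straight to the IP: likely automated
    return "legitimate"

# Example: a browser resolves the name first; a bot connects directly.
record_dns_query("198.51.100.7")
print(classify_connection("198.51.100.7", SERVER_IP))   # legitimate
print(classify_connection("203.0.113.99", SERVER_IP))   # suspicious
```
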
Date: September 1, 2011
Creator: Taylor, Curtis R & Shue, Craig A
Partner: UNT Libraries Government Documents Department

Scientific Application Requirements for Leadership Computing at the Exascale

Description: The Department of Energy's Leadership Computing Facility, located at Oak Ridge National Laboratory's National Center for Computational Sciences, recently polled scientific teams that had large allocations at the center in 2007, asking them to identify computational science requirements for future exascale systems (capable of an exaflop, or 10^18 floating point operations per second). These requirements are necessarily speculative, since an exascale system will not be realized until the 2015-2020 timeframe, and are expressed where possible relative to a recent petascale requirements analysis of similar science applications [1]. Our initial findings, which call for further data collection, validation, and analysis, did in fact align with many of our expectations and existing petascale requirements, yet they also contained some surprises, complete with new challenges and opportunities. First and foremost, the breadth and depth of science prospects and benefits on an exascale computing system are striking. Without a doubt, they justify a large investment, even with its inherent risks. The possibilities for return on investment (by any measure) are too large to let us ignore this opportunity. The software opportunities and challenges are enormous. In fact, as one notable computational scientist put it, the scale of questions being asked at the exascale is tremendous and the hardware has gotten way ahead of the software. We are in grave danger of failing because of a software crisis unless concerted investments and coordinating activities are undertaken to reduce and close this hardware-software gap over the next decade. Key to success will be a rigorous requirement for natural mapping of algorithms to hardware in a way that complements (rather than competes with) compilers and runtime systems. The level of abstraction must be raised, and more attention must be paid to functionalities and capabilities that incorporate intent into data structures, are aware of memory hierarchy, ...
Date: December 2007
Creator: Ahern, Sean; Alam, Sadaf R.; Fahey, Mark R.; Hartman-Baker, Rebecca J.; Barrett, Richard F.; Kendall, Ricky A. et al.
Partner: UNT Libraries Government Documents Department

Diagnosing Anomalous Network Performance with Confidence

Description: Variability in network performance is a major obstacle in effectively analyzing the throughput of modern high performance computer systems. High performance interconnection networks offer excellent best-case network latencies; however, highly parallel applications running on parallel machines typically require consistently high levels of performance to adequately leverage the massive amounts of available computing power. Performance analysts have usually quantified network performance using traditional summary statistics that assume the observational data is sampled from a normal distribution. In our examinations of network performance, we have found this method of analysis often provides too little data to understand anomalous network performance. Our tool, Confidence, instead uses an empirically derived probability distribution to characterize network performance. In this paper we describe several instances where the Confidence toolkit allowed us to understand and diagnose network performance anomalies that we could not adequately explore with the simple summary statistics provided by traditional measurement tools. In particular, we examine a multi-modal performance scenario encountered with an Infiniband interconnection network and we explore the performance repeatability on the custom Cray SeaStar2 interconnection network after a set of software and driver updates.
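
A small sketch of the underlying statistical idea: summarize latency samples with empirical percentiles and peak locations rather than a mean and standard deviation that assume normality. The synthetic bimodal data below is an assumption for illustration, not output of the Confidence toolkit.

```python
# Hedged sketch: characterize network latency with an empirical distribution
# instead of normal-theory summary statistics.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic bimodal latencies (microseconds), e.g. two distinct routing paths.
latencies = np.concatenate([rng.normal(5.0, 0.2, 8000),
                            rng.normal(9.0, 0.4, 2000)])

# Traditional summary: assumes one normal mode and hides the second peak.
print(f"mean={latencies.mean():.2f}  std={latencies.std():.2f}")

# Empirical characterization: quantiles and a histogram expose multi-modality.
qs = np.percentile(latencies, [1, 25, 50, 75, 99])
print("1/25/50/75/99th percentiles:", np.round(qs, 2))
counts, edges = np.histogram(latencies, bins=40)
peak_bins = edges[:-1][counts > 0.5 * counts.max()]
print("bins near distribution peaks:", np.round(peak_bins, 1))
```
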
Date: April 1, 2011
Creator: Settlemyer, Bradley W; Hodson, Stephen W; Kuehn, Jeffery A & Poole, Stephen W
Partner: UNT Libraries Government Documents Department

Emulation to simulate low resolution atmospheric data

Description: Climate simulations are complex and require significant compute power, which makes them time consuming to run. We have developed an emulator to approximate climate datasets that have not been simulated. The emulator uses stochastic collocation and multi-dimensional interpolation to reproduce the datasets. We have used the emulator to determine various physical quantities such as temperature, short- and long-wave cloud forcing, and zonal winds. The emulator gives results that are very close to those obtained by simulation. It was tested on 2 degree atmospheric datasets. The work evaluates the pros and cons of computing the mean first and then interpolating, versus interpolating first and then computing the mean. To determine the physical quantities, we treat them as functions of time, longitude, latitude, and a random parameter. We have looked at parameters that govern high stable clouds, low stable clouds, and the timescale for convection. The emulator is especially useful as it requires negligible compute time compared to the simulation itself.
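
The sketch below illustrates the general recipe of an interpolation-based emulator: run the expensive model only at a few collocation values of an uncertain parameter, store the gridded output, and interpolate to unsampled parameter values. The toy model, grid, and linear interpolation rule are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of an interpolation-based emulator (assumed setup).
import numpy as np

def expensive_model(param, lats, lons):
    """Stand-in for a climate run: a temperature field depending on latitude,
    longitude and one uncertain physics parameter (arbitrary formula)."""
    return (288.0 - 30.0 * np.abs(lats)[:, None] / 90.0
            + param * np.cos(np.radians(lons))[None, :]
            + 0.2 * param ** 2)

lats = np.linspace(-90, 90, 91)                      # coarse (~2 degree) grid
lons = np.linspace(0, 358, 180)
collocation = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # parameter samples
snapshots = np.stack([expensive_model(p, lats, lons) for p in collocation])

def emulate(param):
    """Linear interpolation between stored snapshots along the parameter axis."""
    idx = np.clip(np.searchsorted(collocation, param) - 1, 0, len(collocation) - 2)
    w = (param - collocation[idx]) / (collocation[idx + 1] - collocation[idx])
    return (1 - w) * snapshots[idx] + w * snapshots[idx + 1]

# Compare the emulator with a direct run at an unsampled parameter value.
approx = emulate(0.8)
truth = expensive_model(0.8, lats, lons)
print("max abs error:", np.abs(approx - truth).max())
```
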
Date: August 1, 2012
Creator: Hebbur Venkata Subba Rao, Vishwas; Archibald, Richard K & Evans, Katherine J
Partner: UNT Libraries Government Documents Department

High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

Description: Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaborating with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials, to determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are given in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (see Section 1), and, where appropriate, changes in Center metrics were introduced. This report covers CY 2010 and CY 2011 Year to ...
Date: August 1, 2011
Creator: Baker, Ann E; Bland, Arthur S Buddy; Hack, James J; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A et al.
Partner: UNT Libraries Government Documents Department

INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

Description: It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
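
To convey the flavor of the table-based dynamic programming, the sketch below solves maximum weighted independent set on a tree using the classic include/exclude recursion. INDDGO itself operates on tree decompositions of general graphs, where each table is indexed by subsets of a bag, but the recursion pattern is analogous; the example graph and weights here are made up.

```python
# Hedged illustration: MWIS on a tree via include/exclude dynamic programming.
# (The real software works over bags of a tree decomposition of a general graph.)

def mwis_on_tree(adj, weight, root=0):
    """adj: dict node -> list of neighbours (a tree); weight: dict node -> weight.
    Returns the maximum total weight of an independent set."""

    def dp(node, parent):
        # (best with node excluded, best with node included)
        exclude, include = 0, weight[node]
        for child in adj[node]:
            if child == parent:
                continue
            c_ex, c_in = dp(child, node)
            exclude += max(c_ex, c_in)   # child free to be in or out
            include += c_ex              # child must be excluded
        return exclude, include

    return max(dp(root, None))

# Small example: a path 0-1-2-3 with weights favouring the endpoints.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
weight = {0: 4, 1: 1, 2: 1, 3: 5}
print(mwis_on_tree(adj, weight))   # 9: pick nodes 0 and 3
```
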
Date: October 1, 2012
Creator: Groer, Christopher S; Sullivan, Blair D & Weerapurage, Dinesh P
Partner: UNT Libraries Government Documents Department

Sampling Within k-Means Algorithm to Cluster Large Datasets

Description: Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use ...
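
A minimal sketch of the sampling idea, with assumed details rather than the authors' code: run Lloyd's k-means on a random sample, then assign the full dataset to the resulting centers in a single pass.

```python
# Hedged sketch: k-means on a sample, then one assignment pass over all data.
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute centers (keep the old center if a cluster goes empty)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

def sampled_kmeans(data, k, sample_size, seed=0):
    rng = np.random.default_rng(seed)
    sample = data[rng.choice(len(data), size=sample_size, replace=False)]
    centers, _ = kmeans(sample, k, seed=seed)           # cluster only the sample
    labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels                               # one pass over full data

# Example: three well-separated Gaussian blobs, clustered using a 5% sample.
rng = np.random.default_rng(1)
blobs = np.vstack([rng.normal(c, 0.5, size=(10000, 2)) for c in ((0, 0), (5, 5), (0, 5))])
centers, labels = sampled_kmeans(blobs, k=3, sample_size=1500)
print(np.round(centers, 2))
```
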
Date: August 1, 2011
Creator: Bejarano, Jeremy; Bose, Koushiki; Brannan, Tyler; Thomas, Anita; Adragni, Kofi; Neerchal, Nagaraj et al.
Partner: UNT Libraries Government Documents Department

Parallel Algorithms for Graph Optimization using Tree Decompositions

Description: Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
Date: June 1, 2012
Creator: Sullivan, Blair D; Weerapurage, Dinesh P & Groer, Christopher S
Partner: UNT Libraries Government Documents Department

In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

Description: Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is a part of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces 5.7 GB of output comprising 256 files from 64 simulations. Four types of output data, covering monthly, daily, hourly, and 15-minute time steps, are produced for each annual simulation. More than 270 TB of data has been produced in total. In this project, the simulation data is statistically analyzed in situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA) capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation, leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, reducing by multiple orders of magnitude the time and the amount of data that need to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
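
The statistics named above can be computed without storing the raw output by reducing each chunk to a few partial sums and merging them, the same pattern a per-block GPU reduction followed by a final combine uses. The numpy sketch below is a stand-in for the project's CUDA/MPI implementation, with made-up data sizes.

```python
# Hedged sketch: per-chunk partial reductions merged into one summary, so only
# the summary (not the raw simulation output) needs to be stored.
import numpy as np

def partial_stats(chunk):
    """Per-chunk reduction: count, sum and sum of squares."""
    return len(chunk), chunk.sum(), (chunk ** 2).sum()

def merge(parts):
    """Combine partial results (associative, so order and parallelism are free).
    A Welford-style merge is more numerically robust; this keeps the sketch short."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    var = ss / n - mean ** 2          # population variance
    return {"sum": s, "mean": mean, "variance": var, "std": var ** 0.5}

# Simulation output arriving chunk by chunk (e.g. one chunk per node or GPU).
rng = np.random.default_rng(0)
chunks = [rng.normal(20.0, 3.0, size=100_000) for _ in range(64)]
summary = merge([partial_stats(c) for c in chunks])
print({k: round(v, 3) for k, v in summary.items()})
```
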
Date: August 1, 2013
Creator: Ranjan, Niloo; Sanyal, Jibonananda & New, Joshua Ryan
Partner: UNT Libraries Government Documents Department

High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

Description: Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users span all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to perform billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation billions of gallons of fuel.
Date: February 1, 2012
Creator: Baker, Ann E; Barker, Ashley D; Bland, Arthur S Buddy; Boudwin, Kathlyn J.; Hack, James J; Kendall, Ricky A et al.
Partner: UNT Libraries Government Documents Department

Science Prospects And Benefits with Exascale Computing

Description: Scientific computation has come into its own as a mature technology in all fields of science. Never before have we been able to accurately anticipate, analyze, and plan for complex events that have not yet occurred, from the operation of a reactor running at 100 million degrees centigrade to the changing climate a century down the road. Combined with the more traditional approaches of theory and experiment, scientific computation provides a profound tool for insight and solution as we look at complex systems containing billions of components. Nevertheless, it cannot yet do all we would like. Much of scientific computation's potential remains untapped in areas such as materials science, Earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security because the scientific challenges are far too enormous and complex for the computational resources at hand. Many of these challenges are of immediate global importance. These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace. Planned petascale systems (capable of a petaflop, or 10^15 floating point operations per second) in the next 3 years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) in the next decade will provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Exascale computers, with a processing capability similar to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. Table ES.1 summarizes these scientific opportunities, their key application areas, and the goals and associated benefits that would result from solutions afforded by exascale computing.
Date: December 1, 2007
Creator: Kothe, Douglas B
Partner: UNT Libraries Government Documents Department

Infiniband Based Cable Comparison

Description: As Infiniband continues to be more broadly adopted in High Performance Computing (HPC) and datacenter applications, one major challenge still plagues implementation: cabling. With the transition from SDR (single data rate) to DDR (double data rate), currently available Infiniband implementations such as standard CX4/IB4x style copper cables severely constrain system design (10 m maximum length for DDR copper cables, thermal management problems due to poor airflow, etc.). This paper examines some of the options available and compares their performance with the newly released Intel Connects Cables. In addition, we take a glance at Intel's dual-core and quad-core systems to see if core counts have a noticeable effect on expected IO patterns.
Date: July 1, 2007
Creator: Minich, Makia
Partner: UNT Libraries Government Documents Department

Running Infiniband on the Cray XT3

Description: In an effort to utilize the performance and cost benefits of the Infiniband interconnect, this paper discusses what was needed to install and load a single data rate Infiniband host channel adapter into a service node on the Cray XT3. Along with the discussion of how to do this, the paper also provides some performance numbers achieved over this connection to a remote system.
Date: May 1, 2007
Creator: Minich, Makia
Partner: UNT Libraries Government Documents Department

PREPARING FOR EXASCALE: ORNL Leadership Computing Application Requirements and Strategy

Description: In 2009 the Oak Ridge Leadership Computing Facility (OLCF), a U.S. Department of Energy (DOE) facility at the Oak Ridge National Laboratory (ORNL) National Center for Computational Sciences (NCCS), elicited petascale computational science requirements from leading computational scientists in the international science community. This effort targeted science teams whose projects received large computer allocation awards on OLCF systems. A clear finding of this process was that in order to reach their science goals over the next several years, multiple projects will require computational resources in excess of an order of magnitude more powerful than those currently available. Additionally, for the longer term, next-generation science will require computing platforms of exascale capability in order to reach DOE science objectives over the next decade. It is generally recognized that achieving exascale in the proposed time frame will require disruptive changes in computer hardware and software. Processor hardware will necessarily become heterogeneous and will include accelerator technologies. Software must undergo the concomitant changes needed to extract the available performance from this heterogeneous hardware. This disruption promises to be substantial, not unlike the change to the message passing paradigm in the computational science community over 20 years ago. Since technological disruptions take time to assimilate, we must aggressively embark on this course of change now, to ensure that science applications and their underlying programming models are mature and ready when exascale computing arrives. This includes initiation of application readiness efforts to adapt existing codes to heterogeneous architectures, support of relevant software tools, and procurement of next-generation hardware testbeds for porting and testing codes. The 2009 OLCF requirements process identified numerous actions necessary to meet this challenge: (1) Hardware capabilities must be advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must ...
Date: December 1, 2009
Creator: Joubert, Wayne; Kothe, Douglas B & Nam, Hai Ah
Partner: UNT Libraries Government Documents Department

Progress Report 2008: A Scalable and Extensible Earth System Model for Climate Change Science

Description: This project employs multi-disciplinary teams to accelerate development of the Community Climate System Model (CCSM), based at the National Center for Atmospheric Research (NCAR). A consortium of eight Department of Energy (DOE) National Laboratories collaborates with NCAR and the NASA Global Modeling and Assimilation Office (GMAO). The laboratories are Argonne (ANL), Brookhaven (BNL), Los Alamos (LANL), Lawrence Berkeley (LBNL), Lawrence Livermore (LLNL), Oak Ridge (ORNL), Pacific Northwest (PNNL), and Sandia (SNL). The work plan focuses on scalability for petascale computation and extensibility to a more comprehensive earth system model. Our stated goal is to support the DOE mission in climate change research by helping ... to determine the range of possible climate changes over the 21st century and beyond through simulations using a more accurate climate system model that includes the full range of human and natural climate feedbacks with increased realism and spatial resolution.
Date: January 1, 2009
Creator: Drake, John B; Worley, Patrick H; Hoffman, Forrest M & Jones, Phil
Partner: UNT Libraries Government Documents Department

Understanding Lustre Internals

Description: Lustre was initiated and funded, almost a decade ago, by the U.S. Department of Energy (DoE) Office of Science and National Nuclear Security Administration laboratories to address the need for an open source, highly scalable, high-performance parallel filesystem on present and future supercomputing platforms. Throughout the last decade, it has been deployed on numerous medium-to-large-scale supercomputing platforms and clusters, and it has performed well and met the expectations of the Lustre user community. At the time of writing this document, according to the Top500 list, 15 of the top 30 supercomputers in the world use the Lustre filesystem. This report aims to present a streamlined overview of how Lustre works internally, in reasonable detail, including the relevant data structures, APIs, protocols, and algorithms for the Lustre version 1.6 source code base. More importantly, it tries to explain how the various components interconnect with each other and function as a system. Portions of this report are based on discussions with Oak Ridge National Laboratory Lustre Center of Excellence team members, and portions of it are based on our own understanding of how the code works. We, the author team, bear all responsibility for any errors and omissions in this document. We can only hope it helps current and future Lustre users and Lustre code developers as much as it helped us understand the Lustre source code and its internal workings.
Date: April 1, 2009
Creator: Wang, Feiyi; Oral, H Sarp; Shipman, Galen M; Drokin, Oleg; Wang, Di & Huang, He
Partner: UNT Libraries Government Documents Department

A Vertical Grid Module for Baroclinic Models of the Atmosphere

Description: The vertical grid of an atmospheric model assigns dynamic and thermodynamic variables to grid locations. The vertical coordinate is typically not height but one of a class of meteorological variables that vary with atmospheric conditions. The grid system is chosen to facilitate numerical approximation of the boundary conditions so that the system is terrain following at the surface. Lagrangian vertical coordinates are useful in reducing the numerical errors from advection processes. This report explores how these choices affect the numerical properties and accuracy. A MATLAB class for Lorenz vertical grids is described and applied to the vertical structure equation and baroclinic atmospheric circulation. A generalized meteorological coordinate system is developed which can support σ, isentropic (θ), or Lagrangian vertical coordinates. The vertical atmospheric column is a MATLAB class that includes the kinematic and thermodynamic variables along with methods for computing geopotentials and terms relevant to a 3D baroclinic atmospheric model.
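
As a small illustration of the terrain-following idea among the coordinate choices listed above, the sketch below evaluates pressure on sigma surfaces, p(σ) = p_top + σ(p_sfc - p_top), so the lowest coordinate surface coincides with each column's surface pressure. The report's own implementation is a MATLAB class; this independent Python sketch uses assumed values.

```python
# Hedged illustration of a terrain-following (sigma) vertical coordinate.
import numpy as np

def sigma_levels(n_levels):
    """Evenly spaced sigma values from the model top (0) to the surface (1)."""
    return np.linspace(0.0, 1.0, n_levels)

def pressure_on_levels(sigma, p_surface, p_top=100.0):
    """p(sigma) = p_top + sigma * (p_surface - p_top), in hPa.
    Every column ends exactly at its own surface pressure, so the lowest
    coordinate surface follows the terrain."""
    return p_top + np.outer(sigma, p_surface - p_top)

sigma = sigma_levels(5)
p_sfc = np.array([1000.0, 850.0])        # sea-level column vs. high terrain
print(np.round(pressure_on_levels(sigma, p_sfc), 1))
# The last row equals [1000. 850.]: the coordinate surface hugs the topography.
```
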
Date: April 1, 2008
Creator: Drake, John B.
Partner: UNT Libraries Government Documents Department

Modeling of Gap Closure in Uranium-Zirconium Alloy Metal Fuel - A Test Problem

Description: Uranium based binary and ternary alloy fuel is a possible candidate for advanced fast spectrum reactors with long refueling intervals and reduced linear heat rating [1]. An important metal fuel issue that can impact the fuel performance is the fuel-cladding gap closure, and fuel axial growth. The dimensional change in the fuel during irradiation is due to a superposition of the thermal expansion of the fuel due to heating, volumetric changes due to possible phase transformations that occur during heating, and the swelling due to fission gas retention. The volumetric changes due to phase transformation depend both on the thermodynamics of the alloy system and the kinetics of phase change reactions that occur at the operating temperature. The nucleation and growth of fission gas bubbles that contributes to fuel swelling is also influenced by the local fuel chemistry and the microstructure. Once the fuel expands and contacts the clad, expansion in the radial direction is constrained by the clad, and the overall deformation of the fuel-clad assembly depends upon the dynamics of the contact problem. The neutronics portion of the problem is also inherently coupled with microstructural evolution in terms of constituent redistribution and phase transformation. Because of the complex nature of the problem, a series of test problems have been defined with increasing complexity, with the objective of capturing the fuel-clad interaction in complex fuels subjected to a wide range of irradiation and temperature conditions.
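
As a back-of-the-envelope companion to the superposition described above, the sketch below estimates the remaining radial fuel-clad gap from a linear thermal-expansion strain plus an assumed swelling strain, ignoring phase-change effects. All coefficients and dimensions are hypothetical placeholders, not values from the report.

```python
# Hypothetical numbers for illustration only (not taken from the report).
ALPHA_FUEL = 1.6e-5        # assumed linear thermal expansion coefficient (1/K)
R_FUEL_COLD = 2.9e-3       # assumed as-fabricated fuel slug radius (m)
GAP_COLD = 0.5e-3          # assumed as-fabricated radial fuel-clad gap (m)

def remaining_gap(delta_T, swelling_strain=0.0):
    """Radial gap left after superposing thermal expansion and an assumed
    linear swelling strain on the fuel radius (phase change ignored)."""
    expansion = R_FUEL_COLD * (ALPHA_FUEL * delta_T + swelling_strain)
    return max(GAP_COLD - expansion, 0.0)

for dT, sw in ((300, 0.0), (600, 0.0), (600, 0.10), (600, 0.20)):
    gap_um = remaining_gap(dT, sw) * 1e6
    print(f"dT = {dT} K, swelling strain = {sw:.2f} -> gap = {gap_um:.0f} um")
```
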
Date: October 1, 2009
Creator: Simunovic, Srdjan; Ott, Larry J; Gorti, Sarma B; Nukala, Phani K; Radhakrishnan, Balasubramaniam & Turner, John A
Partner: UNT Libraries Government Documents Department

Requirements Definition for ORNL Trusted Corridors Project

Description: The ORNL Trusted Corridors Project has several other names: SensorNet Transportation Pilot; Identification and Monitoring of Radiation (in commerce) Shipments (IMR(ic)S); and Southeastern Transportation Corridor Pilot (SETCP). The project involves acquisition and analysis of transportation data at two mobile and three fixed inspection stations in five jurisdictions (Kentucky, Mississippi, South Carolina, Tennessee, and Washington, DC). Collaborators include the State Police organizations that are responsible for highway safety, law enforcement, and incident response. The three states with fixed weigh-station deployments (KY, SC, TN) are interested in coordination of this effort for highway safety, law enforcement, and sorting/targeting/interdiction of potentially non-compliant vehicles/persons/cargo. The Domestic Nuclear Detection Office (DNDO) in the U.S. Department of Homeland Security (DHS) is interested in these deployments as a pilot test (SETCP) to identify Improvised Nuclear Devices (INDs) in highway transport. However, the level of DNDO integration among these state deployments is presently uncertain. Moreover, DHS issues are considered secondary by the states, which perceive this work as an opportunity to leverage these (new) dual-use technologies for state needs. In addition, present experience shows that radiation detectors alone cannot detect DHS-identified IND threats. Continued SETCP success depends on the level of integration of current state/local police operations with the new DHS task of detecting IND threats, in addition to emergency preparedness and homeland security. This document describes the enabling components for continued SETCP development and success, including: sensors and their use at existing deployments (Section 1); personnel training (Section 2); concept of operations (Section 3); knowledge discovery from the copious data (Section 4); smart data collection, integration and database development, advanced algorithms for multiple sensors, and network communications (Section 5); and harmonization of local, state, and Federal procedures and protocols (Section 6).
Date: February 1, 2008
Creator: Walker, Randy M; Hill, David E; Smith, Cyrus M; DeNap, Frank A; White, James D; Gross, Ian G et al.
Partner: UNT Libraries Government Documents Department

Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

Description: As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation's lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. The resulting middleware is general purpose while exploiting features unique to HPC platforms and architectures. We have implemented and tested this system on BlueGene/P with Linux and, using worst-case analysis, evaluated the service as scaling effectively to up to 1M nodes.
Date: January 1, 2013
Creator: Tock, Yoav; Mandler, Benjamin; Moreira, Jose & Jones, Terry R
Partner: UNT Libraries Government Documents Department

High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

Description: Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools and resources for next-generation systems.
Date: August 1, 2010
Creator: Bland, Arthur S Buddy; Hack, James J; Baker, Ann E; Barker, Ashley D; Boudwin, Kathlyn J.; Kendall, Ricky A et al.
Partner: UNT Libraries Government Documents Department

Proceedings of the 4th Annual Workshop on Cyber Security and Information Intelligence Research: Developing Strategies To Meet The Cyber Security And Information Intelligence Challenges Ahead

Description: As our dependence on the cyber infrastructure grows ever larger, more complex and more distributed, the systems that compose it become more prone to failures and/or exploitation. Intelligence is information valued for its currency and relevance rather than its detail or accuracy. Information explosion describes the pervasive abundance of (public/private) information and the effects of such. Gathering, analyzing, and making use of information constitutes a business- / sociopolitical- / military-intelligence gathering activity and ultimately poses significant advantages and liabilities to the survivability of "our" society. The combination of increased vulnerability, increased stakes and increased threats make cyber security and information intelligence (CSII) one of the most important emerging challenges in the evolution of modern cyberspace "mechanization." The goal of the workshop was to challenge, establish and debate a far-reaching agenda that broadly and comprehensively outlined a strategy for cyber security and information intelligence that is founded on sound principles and technologies. We aimed to discuss novel theoretical and applied research focused on different aspects of software security/dependability, as software is at the heart of the cyber infrastructure.
Date: January 1, 2008
Creator: Sheldon, Frederick T; Krings, Axel; Abercrombie, Robert K & Mili, Ali
Partner: UNT Libraries Government Documents Department

Proceedings of the 5th Annual Workshop on Cyber Security and Information Intelligence Research: Cyber Security and Information Intelligence Challenges and Strategies

Description: Our reliance on the cyber infrastructure has further grown and the dependencies have become more complex. The infrastructure and applications running on it are not generally governed by the rules of bounded systems and inherit the properties of unbounded systems, such as the absence of global control, borders and barriers. Furthermore, the quest for increasing functionality and ease of operation is often at the cost of controllability, potentially opening up avenues for exploitation and failures. Intelligence is information valued for its currency and relevance rather than its detail or accuracy. In the presence of information explosion, i.e., the pervasive abundance of (public/private) information and the effects of such, intelligence has the potential to shift the advantages in the dynamic game of defense and attacks in cyber space. Gathering, analyzing, and making use of information constitutes a business-/sociopolitical-/military-intelligence gathering activity and ultimately poses significant advantages and liabilities to the survivability of "our" society. The combination of increased vulnerability, increased stakes and increased threats make cyber security and information intelligence (CSII) one of the most important emerging challenges in the evolution of modern cyberspace. The goal of the workshop is to establish, debate and challenge the far-reaching agenda that broadly and comprehensively outlines a strategy for cyber security and information intelligence that is founded on sound principles and technologies.
Date: January 1, 2009
Creator: Sheldon, Frederick T; Peterson, Greg D; Krings, Axel; Abercrombie, Robert K & Mili, Ali
Partner: UNT Libraries Government Documents Department