
STATUS OF DATA RETRIEVAL SYSTEM AS OF APRIL 1, 1969

Description: Battelle-Northwest is developing a general-purpose data retrieval system. The system is planned to provide almost unrestricted flexibility in application and use, with no limitations on the quantity of data, except those imposed by hardware. This data retrieval system is being programmed in Fortran-V, for batch processing on a Univac-1108. The control, search-and-retrieval, query-input, output, and startup programs have been written in preliminary form, and are in the initial debugging and test stages. Current versions of these programs are presented.
Date: June 1, 1969
Creator: Reynolds, R. L.; Engel, R. L.; Toyooka, R. T. & Wells, E. L.
Partner: UNT Libraries Government Documents Department

An investigation of exploitation versus exploration in GBEA optimization of PORS 15 and 16 Problems

Description: It was hypothesized that the variations in time to solution are driven by the competing mechanisms of exploration and exploitation. This thesis explores that hypothesis by examining two contrasting problems that embody the hypothesized tradeoff between exploration and exploitation. Plus one recall store (PORS) is an optimization problem based on the idea of a simple calculator with four buttons: plus, one, store, and recall. Integer addition and store are classified as operations, and one and memory recall are classified as terminals. The goal is to arrange a fixed number of keystrokes in a way that maximizes the numerical result. PORS 15 (15 keystrokes) represents the subset of difficult PORS problems and PORS 16 (16 keystrokes) represents the subset of PORS problems that are easiest to optimize. The goal of this work is to examine the tradeoff between exploitation and exploration in graph based evolutionary algorithm (GBEA) optimization. To do this, computational experiments are used to examine how solutions evolve in PORS 15 and 16 problems when solved using GBEAs. The experiment comprises three components: the graphs and the population, the evolutionary algorithm rule set, and the example problems. The complete, hypercube, and cycle graphs were used for this experiment. A fixed population size was used.
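To make the PORS setup concrete, here is a small evaluator for a keystroke sequence, written as an illustrative sketch rather than code from the thesis; the postfix encoding, single memory register, and function name are assumptions made for the example.

```python
# Illustrative sketch (not from the thesis): evaluate a PORS keystroke sequence
# on a one-register "calculator". Assumed postfix convention: "1" pushes a one,
# "+" adds the top two stack values, "STO" stores the current result in memory,
# "RCL" pushes the stored value.

def evaluate_pors(keys):
    stack, memory = [], 0
    for key in keys:
        if key == "1":
            stack.append(1)
        elif key == "RCL":
            stack.append(memory)
        elif key == "STO":
            memory = stack[-1]          # store the current result in memory
        elif key == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

# One reasonable 10-keystroke program: build a value, store it, then add recalled copies.
program = ["1", "1", "+", "1", "+", "STO", "RCL", "+", "RCL", "+"]
print(evaluate_pors(program))  # -> 9
```

In a GBEA, a combinatorial graph (complete, hypercube, or cycle, as above) constrains which population members may interact during evolution, which is the exploration/exploitation knob the thesis studies.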
Date: May 8, 2012
Creator: Koch, Kaelynn
Partner: UNT Libraries Government Documents Department

Parallelization and checkpointing of GPU applications through program transformation

Description: GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose GPU applications tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running in multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with a large application memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, these impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. As in any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. ...
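As a rough, hypothetical sketch of the two ideas above, the snippet below splits a single-GPU-style loop's index space across several devices and takes an application-level checkpoint of each partition's state. numpy arrays stand in for per-device buffers and all names are invented; this is not the dissertation's transformation framework.

```python
# Minimal sketch (assumptions only): a workload whose index space is partitioned
# across N devices, with a host-side checkpoint of each partition's state.
import numpy as np
import pickle

def partition(n_elements, n_devices):
    """Split [0, n_elements) into near-equal contiguous ranges, one per device."""
    bounds = np.linspace(0, n_elements, n_devices + 1, dtype=int)
    return [(bounds[i], bounds[i + 1]) for i in range(n_devices)]

def run(n_elements=1_000_000, n_devices=2, steps=10, checkpoint_every=5):
    buffers = [np.zeros(hi - lo) for lo, hi in partition(n_elements, n_devices)]
    for step in range(steps):
        for buf in buffers:              # one "kernel launch" per device
            buf += 1.0                   # placeholder for the real kernel
        if (step + 1) % checkpoint_every == 0:
            with open(f"checkpoint_{step + 1}.pkl", "wb") as f:
                pickle.dump(buffers, f)  # application-level checkpoint

run()
```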
Date: November 15, 2012
Creator: Solano-Quinde, Lizandro Damián
Partner: UNT Libraries Government Documents Department

Multicore Architecture-aware Scientific Applications

Description: Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large scale physics application computing ab-initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to the changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. These capabilities, when included, were found to have a significant impact on the application performance, resulting in average speedups of as much as two to four times.
Date: November 28, 2011
Creator: Srinivasa, Avinash
Partner: UNT Libraries Government Documents Department

Framework for Adaptable Operating and Runtime Systems: Final Project Report

Description: In this grant, we examined a wide range of techniques for constructing high-performance configurable system software for HPC systems and its application to DOE-relevant problems. Overall, research and development on this project focused on three specific areas: (1) software frameworks for constructing and deploying configurable system software, (2) application of these frameworks to HPC-oriented adaptable networking software, and (3) performance analysis of HPC system software to understand opportunities for performance optimization.
Date: February 1, 2012
Creator: Bridges, Patrick G.
Partner: UNT Libraries Government Documents Department

RATMAC PRIMER

Description: The language RATMAC is a direct descendant of one of the most successful structured FORTRAN languages, Rational FORTRAN (RATFOR). RATMAC has all of the characteristics of RATFOR, but is augmented by a powerful recursive macro processor which is extremely useful in generating transportable FORTRAN programs. A macro is a collection of programming steps that are associated with a keyword. This keyword uniquely identifies the macro, and whenever it appears in a RATMAC program it is replaced by the collection of steps. This primer covers the language's control and decision structures, macros, file inclusion, symbolic constants, and error messages.
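The keyword-replacement idea can be illustrated with a toy recursive macro expander; this is a sketch in Python, not RATMAC, and the macro names and bodies are invented for the example.

```python
# Toy sketch of recursive keyword macro expansion: each keyword is replaced by
# its body, and bodies may themselves contain further keywords.
import re

macros = {
    "MAXINT": "2147483647",
    "HALFMAX": "(MAXINT / 2)",   # refers to another macro, so expansion recurses
}

def expand(text, macros, max_depth=20):
    pattern = re.compile("|".join(re.escape(k) for k in macros))
    for _ in range(max_depth):
        new_text = pattern.sub(lambda m: macros[m.group(0)], text)
        if new_text == text:
            return new_text
        text = new_text
    raise RuntimeError("macro expansion did not terminate (circular definition?)")

print(expand("x = HALFMAX + 1", macros))   # -> x = (2147483647 / 2) + 1
```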
Date: October 1, 1980
Creator: Munn, R.J.; Stewart, J.M.; Norden, A.P. & Pagoaga, M. Katherine
Partner: UNT Libraries Government Documents Department

Scientific Data Services -- A High-Performance I/O System with Array Semantics

Description: As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
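The log-then-reassemble idea mentioned above can be sketched in a few lines; the function names and the in-memory log are illustrative assumptions, since the actual system would do this inside the parallel I/O layer rather than in user code.

```python
# Sketch: keep subarray writes in an append-only log during the write phase,
# then materialize the logical 2-D array layout later, as resources permit.
import numpy as np

log = []                                   # append-only log of (slices, subarray)

def write_subarray(row_slice, col_slice, data):
    log.append(((row_slice, col_slice), np.array(data)))   # cheap during writes

def reassemble(shape):
    """Later: rebuild the physical layout from the logical array view."""
    out = np.zeros(shape)
    for (rows, cols), data in log:
        out[rows, cols] = data
    return out

write_subarray(slice(0, 2), slice(0, 2), [[1, 2], [3, 4]])
write_subarray(slice(2, 4), slice(2, 4), [[5, 6], [7, 8]])
print(reassemble((4, 4)))
```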
Date: September 21, 2011
Creator: Wu, Kesheng; Byna, Surendra; Rotem, Doron & Shoshani, Arie
Partner: UNT Libraries Government Documents Department

Web-based Tool Identifies and Quantifies Potential Cost Savings Measures at the Hanford Site - 14366

Description: The Technical Improvement system is an approachable web-based tool that is available to Hanford DOE staff, site contractors, and general support service contractors as part of the baseline optimization effort underway at the Hanford Site. Finding and implementing technical improvements are a large part of DOE’s cost savings efforts. The Technical Improvement dashboard is a key tool for brainstorming and monitoring the progress of submitted baseline optimization and potential cost/schedule efficiencies. The dashboard is accessible to users over the Hanford Local Area Network (HLAN) and provides a highly visual and straightforward status to management on the ideas provided, alleviating the need for resource intensive weekly and monthly reviews.
Date: January 9, 2014
Creator: Renevitz, Marisa J.; Peschong, Jon C.; Charboneau, Briant L. & Simpson, Brett C.
Partner: UNT Libraries Government Documents Department

PDP-7 HYBRID COMPUTER PROGRAM RECTANGULAR INTEGRATION SUBROUTINE

Description: The hybrid subroutine library contains three subroutines used to perform a rectangular integration: RINT$, CNP$, and ICL$. Subroutine RINT$ performs the calculation of the integral and can be used in either synchronous or asynchronous mode. Subroutine CNP$ calculates Δt, the time between integral calculations, for an asynchronous integration. Subroutine ICL$ initializes the real-time clock and also contains the save and restore portions of the clock service subroutine.
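A minimal Python analogue of the rectangular-integration scheme might look like the following; the originals are PDP-7 hybrid-computer subroutines and are not reproduced here, so the function names only loosely mirror RINT$ and CNP$.

```python
# Sketch of asynchronous rectangular integration: CNP$-style code computes the
# elapsed time dt since the last step, RINT$-style code accumulates value * dt.
import time

def rint(integral, value, dt):
    """One rectangular-integration step: integral += value * dt."""
    return integral + value * dt

def cnp(last_time):
    """Asynchronous mode: dt is the wall-clock time since the previous step."""
    now = time.monotonic()
    return now - last_time, now

integral, last = 0.0, time.monotonic()
for _ in range(5):
    time.sleep(0.01)                     # stand-in for other asynchronous work
    dt, last = cnp(last)
    integral = rint(integral, 2.0, dt)   # integrate the constant value 2.0
print(integral)                          # roughly 2.0 * elapsed time
```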
Date: October 1, 1969
Creator: Gerhardstein, L. H.
Partner: UNT Libraries Government Documents Department

PDP-7 HYBRID COMPUTER PROGRAM, VARIABLE MONITORING AND TRIP SYSTEM PROGRAM TRIP$

Description: TRIP$ provides the capability of monitoring any number of program variables, comparing their values to a pre-determined trip point. If the trip point is reached, a selected user subroutine is entered continuously until the trip system is re-initialized.
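The monitor-and-trip behavior can be sketched as follows; the class and method names are invented, and this illustrates only the latching behavior described above, not the TRIP$ interface itself.

```python
# Sketch: compare monitored variables against trip points and, once tripped,
# keep calling the user routine on every scan until the system is reset.
class TripSystem:
    def __init__(self):
        self.entries = []    # each entry: [get_value, trip_point, user_routine, tripped]

    def monitor(self, get_value, trip_point, user_routine):
        self.entries.append([get_value, trip_point, user_routine, False])

    def scan(self):
        for entry in self.entries:
            get_value, trip_point, user_routine, tripped = entry
            if tripped or get_value() >= trip_point:
                entry[3] = True          # latch until re-initialized
                user_routine()

    def reinitialize(self):
        for entry in self.entries:
            entry[3] = False

temperature = {"value": 0}
trips = TripSystem()
trips.monitor(lambda: temperature["value"], 100, lambda: print("trip: over limit"))
for t in (50, 120, 80):                  # once tripped, the routine keeps firing
    temperature["value"] = t
    trips.scan()
```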
Date: October 1, 1969
Creator: Worth, Grant A.
Partner: UNT Libraries Government Documents Department

DOE Hanford Network Upgrades and Disaster Recovery Exercise Support the Cleanup Mission Now and into the Future - 14303

Description: In 2013, the U.S. Department of Energy's (DOE) Hanford Site, located in Washington State, funded an update to the critical network infrastructure supporting the Hanford Federal Cloud (HFC). The project, called ET-50, was the final step in a plan initiated five years earlier, called "Hanford's IT Vision, 2015 and Beyond." The ET-50 project upgraded Hanford's core data center switches and routers along with a majority of the distribution layer switches. The upgrades gave the HFC the network intelligence to provide Hanford with a more reliable and resilient network architecture. The culmination of the five-year plan improved network intelligence and high performance computing, and helped provide 10 Gbps capable links between core backbone devices (10 times the previous bandwidth). These improvements allow Hanford to further support bandwidth-intense applications, such as video teleconferencing. The ET-50 switch upgrade, along with other upgrades implemented from the five-year plan, has prepared Hanford's network for the next evolution of technology in voice, video, and data. Hand-in-hand with ET-50's major data center outage, Mission Support Alliance's (MSA) Information Management (IM) organization executed a disaster recovery (DR) exercise to perform a true integration test and capability study. The DR scope was planned within the constraints of ET-50's 14-hour datacenter outage window. This DR exercise tested Hanford's Continuity of Operations (COOP) capability and failover plans for safety and business critical Hanford Federal Cloud applications. The planned suite of services to be tested was identified prior to the outage, and plans were prepared to test the services' ability to fail over from the primary Hanford datacenter to the backup datacenter. The services tested were: core network (backbone, firewall, load balancers); voicemail; Voice over IP (VoIP); emergency notification; virtual desktops; and a select set of production applications and ...
Date: November 7, 2013
Creator: Eckman, Todd J.; Hertzel, Ali K. & Lane, James J.
Partner: UNT Libraries Government Documents Department

Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.

Description: Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrödinger equation through a stochastic projection, it achieves the greatest accuracy and reliability among methods available for physical systems containing more than a few quantum particles. QMC enjoys scaling favorable to that of quantum chemical methods, with a computational effort that grows with the second or third power of system size. This accuracy and scalability have enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency, and range of applicability of QMC and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems, and ultracold atoms.
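As a generic illustration of stochastic sampling in QMC-style calculations (not the Endstation algorithms themselves), the following variational Monte Carlo sketch estimates the ground-state energy of a 1-D harmonic oscillator with a Gaussian trial wavefunction.

```python
# Variational Monte Carlo for the 1-D harmonic oscillator with trial
# wavefunction psi(x) = exp(-a x^2); at a = 0.5 the local energy is exactly 0.5.
import math
import random

def local_energy(x, a):
    # E_L(x) = a + x^2 (1/2 - 2 a^2), in units hbar = m = omega = 1
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, n_steps=200_000, step_size=1.0, seed=1):
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step_size, step_size)
        # Metropolis test with acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * a * (x_new * x_new - x * x)):
            x = x_new
        total += local_energy(x, a)
    return total / n_steps

print(vmc_energy(0.5))   # ~0.5, the exact ground-state energy for a = 0.5
```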
Date: October 1, 2008
Creator: Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W et al.
Partner: UNT Libraries Government Documents Department

A Comparison of Three Voting Methods for Bagging with the MLEM2 Algorithm

Description: This paper presents results of experiments on some data sets using bagging with the MLEM2 rule induction algorithm. Three different methods of ensemble voting were used: voting based on support (a non-democratic voting in which ensembles vote with their strengths), strength-only voting (the ensemble with the largest strength decides to which concept a case belongs), and democratic voting (each ensemble has at most one vote). Our conclusion is that though in most cases democratic voting was the best, it is not significantly better than voting based on support. Strength-only voting was the worst voting method.
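The three voting rules can be illustrated on a toy ensemble whose members each return a predicted concept and a rule strength; the data and names below are invented for the example and are not taken from the paper.

```python
# Three ensemble voting rules: support-weighted, strength-only, and democratic.
from collections import defaultdict

ensemble_votes = [("flu", 3.0), ("cold", 5.0), ("flu", 2.5)]

def support_voting(votes):
    """Non-democratic: each member votes with its strength; largest total wins."""
    totals = defaultdict(float)
    for concept, strength in votes:
        totals[concept] += strength
    return max(totals, key=totals.get)

def strength_only_voting(votes):
    """The single member with the largest strength decides."""
    return max(votes, key=lambda v: v[1])[0]

def democratic_voting(votes):
    """Each member has at most one vote; the most common concept wins."""
    counts = defaultdict(int)
    for concept, _ in votes:
        counts[concept] += 1
    return max(counts, key=counts.get)

print(support_voting(ensemble_votes))        # flu  (5.5 total support vs. 5.0)
print(strength_only_voting(ensemble_votes))  # cold (single strongest rule, 5.0)
print(democratic_voting(ensemble_votes))     # flu  (2 votes vs. 1)
```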
Date: March 17, 2010
Creator: Cohagan, Clinton; Grzymala-Busse, Jerzy W. & Hippe, Zdzislaw S.
Partner: UNT Libraries Government Documents Department

MARIANE: MApReduce Implementation Adapted for HPC Environments

Description: MapReduce is becoming an increasingly popular framework and a potent programming model. The most popular open-source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as TeraGrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices behind better performance gains in those settings. By leveraging the functions inherent in distributed file systems, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows the model to be used in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach over Apache Hadoop in a data-intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
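The shared-file-system assumption can be illustrated with a minimal map/reduce word count in which every worker reads its input split directly by path, so no HDFS-style block layer is needed; this is a sketch of the general pattern, not MARIANE's implementation, and the directory path is a placeholder.

```python
# Minimal map/reduce word count over files in a globally shared directory.
import os
from collections import defaultdict
from multiprocessing import Pool

def map_wordcount(path):
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            for word in line.split():
                counts[word] += 1
    return counts

def reduce_wordcount(partials):
    totals = defaultdict(int)
    for partial in partials:
        for word, n in partial.items():
            totals[word] += n
    return dict(totals)

if __name__ == "__main__":
    shared_dir = "/path/to/shared/input"            # e.g., an NFS or GPFS mount
    splits = [os.path.join(shared_dir, f) for f in os.listdir(shared_dir)]
    with Pool() as pool:                            # stands in for cluster workers
        print(reduce_wordcount(pool.map(map_wordcount, splits)))
```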
Date: July 6, 2011
Creator: Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan & Ramakrishnan, Lavanya
Partner: UNT Libraries Government Documents Department

Distributed Sensor Coordination for Advanced Energy Systems

Description: The ability to collect key system-level information is critical to the safe, efficient, and reliable operation of advanced energy systems. With recent advances in sensor development, it is now possible to push some level of decision making directly to computationally sophisticated sensors, rather than wait for data to arrive at a massive centralized location before a decision is made. This type of approach relies on networked sensors (called “agents” from here on) to actively collect and process data, and to provide key control decisions that significantly improve both the quality/relevance of the collected data and the associated decision making. The technological bottlenecks for such sensor networks stem from a lack of mathematics and algorithms to manage the systems, rather than difficulties associated with building and deploying them. Indeed, traditional sensor coordination strategies do not provide adequate solutions for this problem. Passive data collection methods (e.g., large sensor webs) can scale to large systems, but are generally not suited to highly dynamic environments, such as advanced energy systems, where crucial decisions may need to be reached quickly and locally. Approaches based on local decisions, on the other hand, cannot guarantee that each agent performing its task (maximizing an agent objective) will lead to a good network-wide solution (maximizing a network objective) without invoking cumbersome coordination routines. There is currently a lack of algorithms that will enable self-organization and blend the efficiency of local decision making with the system-level guarantees of global decision making, particularly when the systems operate in dynamic and stochastic environments. In this work we addressed this critical gap and provided a comprehensive solution to the problem of sensor coordination to ensure the safe, reliable, and robust operation of advanced energy systems. The differentiating aspect of the proposed work is in shift- ...
Date: July 31, 2013
Creator: Tumer, Kagan
Partner: UNT Libraries Government Documents Department

CDP - Adaptive Supervisory Control and Data Acquisition (SCADA) Technology for Infrastructure Protection

Description: Supervisory Control and Data Acquisition (SCADA) Systems are a type of Industrial Control System characterized by the centralized (or hierarchical) monitoring and control of geographically dispersed assets. SCADA systems combine acquisition and network components to provide data gathering, transmission, and visualization for centralized monitoring and control. However these integrated capabilities, especially when built over legacy systems and protocols, generally result in vulnerabilities that can be exploited by attackers, with potentially disastrous consequences. Our research project proposal was to investigate new approaches for secure and survivable SCADA systems. In particular, we were interested in the resilience and adaptability of large-scale mission-critical monitoring and control infrastructures. Our research proposal was divided in two main tasks. The first task was centered on the design and investigation of algorithms for survivable SCADA systems and a prototype framework demonstration. The second task was centered on the characterization and demonstration of the proposed approach in illustrative scenarios (simulated or emulated).
Date: May 14, 2012
Creator: Carvalho, Marco & Ford, Richard
Partner: UNT Libraries Government Documents Department

National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

Description: As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory, and the Illinois Institute of Technology developed a real-time, fault-tolerant cluster monitoring framework. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g., physics parameters such as masses; a system to manage and permit the reuse of templates describing workflows; a system to capture data provenance information; a system to manage produced data; a means of monitoring workflow progress and status; a means of resuming or extending a stopped workflow; and fault tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported: • Implemented a software system to manage parameters. This includes a parameter set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC. • Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch system queries such as job status are available in command line tools and are used in actual workflow-based production by a subset of Fermilab users. • Developed a formal state machine model for scientific workflow and reliability systems. This includes the use of Vanderbilt's Generic Modeling Environment (GME) tool for code generation for the ...
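A minimal sketch of the parameter-set-plus-template idea described above, using plain JSON and Python string templates; the project's actual parameter-set language is a superset of JSON with its own parsers, and the keys and template text below are invented for illustration.

```python
# Load physics parameters from a JSON parameter set and render application
# input text from a template (template fields and values are hypothetical).
import json
from string import Template

parameter_set = json.loads('{"mass_light": 0.005, "mass_strange": 0.05, "beta": 6.76}')

input_template = Template(
    "beta $beta\n"
    "mass $mass_light\n"
    "mass $mass_strange\n"
)

print(input_template.substitute(parameter_set))   # input text for an application run
```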
Date: July 18, 2013
Creator: Bapty, Theodore & Dubey, Abhishek
Partner: UNT Libraries Government Documents Department

Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

Description: This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
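The rare-event difficulty described above can be seen in a standard textbook example (not the project's large deviation machinery): estimating a small Gaussian tail probability by plain Monte Carlo versus an importance-sampled estimator whose proposal is shifted into the rare region.

```python
# Estimate P(X > 4) for a standard normal: plain Monte Carlo is dominated by
# the rarity of the event, while importance sampling from N(4, 1) with the
# exact likelihood ratio concentrates samples where they matter.
import math
import random

rng = random.Random(0)
a, n = 4.0, 100_000

plain = sum(rng.gauss(0.0, 1.0) > a for _ in range(n)) / n

weighted = 0.0
for _ in range(n):
    y = rng.gauss(a, 1.0)                           # sample from the shifted proposal
    if y > a:
        weighted += math.exp(a * a / 2.0 - a * y)   # likelihood ratio N(0,1)/N(a,1)
weighted /= n

print(plain)      # very noisy: only a handful of the 100,000 samples hit the tail
print(weighted)   # close to the true value, about 3.17e-5
```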
Date: March 14, 2014
Creator: Dupuis, Paul
Partner: UNT Libraries Government Documents Department

A Framework for Adaptable Operating and Runtime Systems

Description: New classes of HPC systems have emerged in which the performance improvement enabled by Moore's Law is realized through multi-core architectures, including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s, multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model, with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in the number of cores needed to achieve performance gains through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken under the leadership of Sandia National Laboratories and in partnership with the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application, to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration of possible strategies and methods for composable lightweight kernel operating systems to support extreme-scale systems.
Date: March 4, 2014
Creator: Sterling, Thomas
Partner: UNT Libraries Government Documents Department

Efficient Feature-Driven Visualization of Large-Scale Scientific Data

Description: Very large, complex scientific data acquired in many research areas creates critical challenges for scientists to understand, analyze, and organize their data. The objective of this project is to expand the feature extraction and analysis capabilities to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.
Date: December 12, 2012
Creator: Lu, Aidong
Partner: UNT Libraries Government Documents Department

Large Scale Computational Problems in Numerical Optimization

Description: Our work under this support broadly falls into five categories: automatic differentiation, sparsity, constraints, parallel computation, and applications. Automatic Differentiation (AD): We developed strong practical methods for computing sparse Jacobian and Hessian matrices which arise frequently in large scale optimization problems [10,35]. In addition, we developed a novel view of "structure" in applied problems along with AD techniques that allowed for the efficient application of sparse AD techniques to dense, but structured, problems. Our AD work included development of freely available MATLAB AD software. Sparsity: We developed new effective and practical techniques for exploiting sparsity when solving a variety of optimization problems. These problems include: bound constrained problems, robust regression problems, the null space problem, and sparse orthogonal factorization. Our sparsity work included development of freely available and published software [38,39]. Constraints: Effectively handling constraints in large scale optimization remains a challenge. We developed a number of new approaches to constrained problems with emphasis on trust region methodologies. Parallel Computation: Our work included the development of specifically parallel techniques for the linear algebra tasks underpinning optimization algorithms. Our work contributed to the nonlinear least-squares problem, nonlinear equations, triangular systems, orthogonalization, and linear programming. Applications: Our optimization work is broadly applicable across numerous application domains. Nevertheless we have specifically worked in several application areas including molecular conformation, molecular energy minimization, computational finance, and bone remodeling.
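As a flavor of the automatic differentiation theme above (a sketch only, not the project's MATLAB AD software), forward-mode AD with dual numbers computes a derivative alongside the function value.

```python
# Forward-mode automatic differentiation with dual numbers: each Dual carries a
# value and its derivative, and arithmetic propagates both by the chain rule.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x + 5      # f(x) = x^3 + 2x + 5, so f'(x) = 3x^2 + 2

x = Dual(2.0, 1.0)                    # seed dx/dx = 1
y = f(x)
print(y.value, y.deriv)               # 17.0 and 14.0
```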
Date: July 1, 2000
Creator: coleman, thomas f.
Partner: UNT Libraries Government Documents Department