
Search Results


STATUS OF DATA RETRIEVAL SYSTEM AS OF APRIL 1, 1969

Description: Battelle-Northwest is developing a general-purpose data retrieval system. The system is planned to provide almost unrestricted flexibility in application and use, with no limitations on the quantity of data, except those imposed by hardware. This data retrieval system is being programmed in Fortran-V, for batch processing on a Univac-1108. The control, search-and-retrieval, query-input, output, and startup programs have been written in preliminary form, and are in the initial debugging and test stages. Current versions of these programs are presented.
Date: June 1, 1969
Creator: Reynolds, R. L.; Engel, R. L.; Toyooka, R. T. & Wells, E. L.
Partner: UNT Libraries Government Documents Department

An investigation of exploitation versus exploration in GBEA optimization of PORS 15 and 16 Problems

Description: It was hypothesized that the variations in time to solution are driven by the competing mechanisms of exploration and exploitation. This thesis explores this hypothesis by examining two contrasting problems that embody the hypothesized tradeoff between exploration and exploitation. Plus one recall store (PORS) is an optimization problem based on the idea of a simple calculator with four buttons: plus, one, store, and recall. Integer addition and store are classified as operations; one and memory recall are classified as terminals. The goal is to arrange a fixed number of keystrokes in a way that maximizes the numerical result. PORS 15 (15 keystrokes) represents the subset of difficult PORS problems, and PORS 16 (16 keystrokes) represents the subset of PORS problems that are easiest to optimize. The goal of this work is to examine the tradeoff between exploitation and exploration in graph-based evolutionary algorithm (GBEA) optimization. To do this, computational experiments are used to examine how solutions evolve in PORS 15 and 16 problems when solved using GBEAs. The experiment comprises three components: the graphs and the population, the evolutionary algorithm rule set, and the example problems. The complete, hypercube, and cycle graphs were used for this experiment. A fixed population size was used.
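As an illustration of the four-button calculator described above, here is a minimal Python sketch of one plausible evaluation semantics (the exact key behavior is an assumption; the thesis defines the problem formally): '1' and recall enter a value, plus adds the next entered value to the display, and store copies the display into memory.

```python
def evaluate(keys):
    """Evaluate a PORS keystroke sequence on a toy four-key calculator.
    Semantics here are one plausible reading of the problem statement."""
    display, memory, pending_add = 0, 0, False
    for key in keys:
        if key in ('1', 'RCL'):
            value = 1 if key == '1' else memory
            display = display + value if pending_add else value
            pending_add = False
        elif key == '+':
            pending_add = True               # add the next entered value
        elif key == 'STO':
            memory = display                 # store the current display

    return display

# 1 + 1 STO + RCL : build 2, store it, then double it to get 4
print(evaluate(['1', '+', '1', 'STO', '+', 'RCL']))  # -> 4
```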
Date: May 8, 2012
Creator: Koch, Kaelynn
Partner: UNT Libraries Government Documents Department

Parallelization and checkpointing of GPU applications through program transformation

Description: GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose GPU applications tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized to run on multi-GPU systems. Furthermore, multi-GPU systems help to overcome the GPU memory limitation for applications with a large memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, these impose execution overhead and are not portable. On traditional CPU systems, by contrast, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and avoids the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. As in any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures, and current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. ...
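The application-level workload distribution described above can be sketched without real GPU code. In this hypothetical Python sketch, run_kernel_on_device is a stand-in for an actual kernel launch (here it just squares elements on the CPU so the example stays runnable); the point is the block decomposition and reassembly that a parallelizing transformation would insert into a single-GPU application.

```python
from concurrent.futures import ThreadPoolExecutor

def run_kernel_on_device(device_id, chunk):
    # Hypothetical stand-in for a GPU kernel launch on device `device_id`.
    return [x * x for x in chunk]

def run_multi_gpu(data, num_devices):
    # Block-decompose the input so each device gets one contiguous slice,
    # mirroring the application-level transformation described above.
    size = len(data)
    bounds = [(d * size) // num_devices for d in range(num_devices + 1)]
    chunks = [data[bounds[d]:bounds[d + 1]] for d in range(num_devices)]
    with ThreadPoolExecutor(max_workers=num_devices) as pool:
        results = pool.map(run_kernel_on_device, range(num_devices), chunks)
    out = []
    for partial in results:
        out.extend(partial)                  # reassemble in device order
    return out

print(run_multi_gpu(list(range(10)), num_devices=3))
```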
Date: November 15, 2012
Creator: Solano-Quinde, Lizandro Damián
Partner: UNT Libraries Government Documents Department

Multicore Architecture-aware Scientific Applications

Description: Modern high performance systems are becoming increasingly complex and powerful due to advancements in processor and memory architecture. In order to keep up with this increasing complexity, applications have to be augmented with certain capabilities to fully exploit such systems. These may be at the application level, such as static or dynamic adaptations, or at the system level, like having strategies in place to override some of the default operating system policies, the main objective being to improve the computational performance of the application. The current work proposes two such capabilities with respect to multi-threaded scientific applications, in particular a large-scale physics application computing ab initio nuclear structure. The first involves using a middleware tool to invoke dynamic adaptations in the application, so as to be able to adjust to changing computational resource availability at run-time. The second involves a strategy for effective placement of data in main memory, to optimize memory access latencies and bandwidth. When included, these capabilities were found to have a significant impact on application performance, resulting in average speedups of as much as two to four times.
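A minimal, Linux-only Python sketch of the second capability, overriding default OS placement: pinning worker processes to cores so that, under the kernel's usual first-touch policy, the memory each worker initializes is allocated near the core that uses it. This is an illustrative assumption, not the report's middleware tool or its actual placement strategy.

```python
import os
from multiprocessing import Process

def worker(core_id, n):
    # Pin this worker to one core (Linux-only system call); under the
    # kernel's default first-touch policy, the memory initialized below
    # is then allocated on that core's local NUMA node.
    os.sched_setaffinity(0, {core_id})
    data = [float(i) for i in range(n)]   # first touch on the pinned core
    print(f"core {core_id}: sum = {sum(data):.1f}")

if __name__ == "__main__":
    workers = [Process(target=worker, args=(core, 100_000)) for core in (0, 1)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
```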
Date: November 28, 2011
Creator: Srinivasa, Avinash
Partner: UNT Libraries Government Documents Department

Framework for Adaptable Operating and Runtime Systems: Final Project Report

Description: In this grant, we examined a wide range of techniques for constructing high-performance configurable system software for HPC systems and its application to DOE-relevant problems. Overall, research and development on this project focused on three specific areas: (1) software frameworks for constructing and deploying configurable system software, (2) application of these frameworks to HPC-oriented adaptable networking software, and (3) performance analysis of HPC system software to understand opportunities for performance optimization.
Date: February 1, 2012
Creator: Bridges, Patrick G.
Partner: UNT Libraries Government Documents Department

RATMAC PRIMER

Description: The language RATMAC is a direct descendant of one of the most successful structured FORTRAN languages, Rational FORTRAN (RATFOR). RATMAC has all of the characteristics of RATFOR, but is augmented by a powerful recursive macro processor which is extremely useful in generating transportable FORTRAN programs. A macro is a collection of programming steps associated with a keyword. This keyword uniquely identifies the macro, and wherever it appears in a RATMAC program it is replaced by the collection of steps. This primer covers the language's control and decision structures, macros, file inclusion, symbolic constants, and error messages.
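RATMAC itself is a FORTRAN preprocessor; as a language-neutral illustration only, the following Python sketch shows the core idea of keyword macros expanded recursively, so that a macro body may itself contain other macros.

```python
def expand(text, macros, depth=0):
    """Recursively replace macro keywords with their definitions,
    re-scanning the result so macros may themselves contain macros."""
    if depth > 20:                      # guard against circular definitions
        raise RecursionError("macro expansion too deep")
    out, changed = [], False
    for word in text.split():
        if word in macros:
            out.append(macros[word])
            changed = True
        else:
            out.append(word)
    result = " ".join(out)
    return expand(result, macros, depth + 1) if changed else result

macros = {
    "INCR": "X = X + STEP",   # a macro whose body uses another macro
    "STEP": "1",
}
print(expand("INCR", macros))   # -> "X = X + 1"
```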
Date: October 1, 1980
Creator: Munn, R.J.; Stewart, J.M.; Norden, A.P. & Pagoaga, M. Katherine
Partner: UNT Libraries Government Documents Department

Synchronization in complex networks

Description: Synchronization processes in populations of locally interacting elements are the focus of intense research in physical, biological, chemical, technological and social systems. The many efforts devoted to understanding synchronization phenomena in natural systems now take advantage of the recent theory of complex networks. In this review, we report the advances in the comprehension of synchronization phenomena when oscillating elements are constrained to interact in a complex network topology. We also overview the new emergent features arising from the interplay between the structure and the function of the underlying pattern of connections. Extensive numerical work as well as analytical approaches to the problem are presented. Finally, we review several applications of synchronization in complex networks to different disciplines: biological systems and neuroscience, engineering and computer science, and economics and the social sciences.
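The abstract names no specific model, but a standard toy example in this literature is the Kuramoto model of coupled phase oscillators on a graph. A minimal numpy sketch, with the usual order parameter r measuring global synchrony (r near 1 means the phases have locked):

```python
import numpy as np

def kuramoto_order(adj, omega, K=2.0, dt=0.01, steps=5000, seed=0):
    """Euler-integrate Kuramoto phase oscillators coupled along the edges
    of `adj`; return the order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    for _ in range(steps):
        # coupling on node i: (K/n) * sum_j adj[i, j] * sin(theta_j - theta_i)
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + (K / n) * (adj * np.sin(diff)).sum(axis=1))
    return abs(np.exp(1j * theta).mean())

n = 20
adj = np.ones((n, n)) - np.eye(n)                      # complete graph
omega = np.random.default_rng(1).normal(0.0, 0.5, n)   # natural frequencies
print(f"order parameter r = {kuramoto_order(adj, omega):.3f}")  # close to 1
```

Replacing `adj` with the adjacency matrix of a cycle or a sparse random graph shows how topology changes the coupling strength needed to synchronize, which is the interplay the review surveys.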
Date: December 12, 2007
Creator: Arenas, A.; Diaz-Guilera, A.; Moreno, Y.; Zhou, C. & Kurths, J.
Partner: UNT Libraries Government Documents Department

Scientific Data Services -- A High-Performance I/O System with Array Semantics

Description: As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
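A toy sketch of the log-structured write idea mentioned above (the class and method names are assumptions, not the proposed system's API): subarray writes are appended cheaply to a log during the write phase and only later reassembled into a contiguous physical layout, as resources permit.

```python
import numpy as np

class ArrayLog:
    """Append subarray writes to a log, then reassemble them into a
    contiguous array later -- a toy version of the write-behind
    reorganization described above (a real system would persist the log)."""
    def __init__(self, shape):
        self.shape = shape
        self.log = []                        # (offsets, block) records

    def write(self, offsets, block):
        self.log.append((offsets, np.asarray(block)))   # cheap append

    def reassemble(self, fill=0.0):
        full = np.full(self.shape, fill)
        for (r, c), block in self.log:       # later records win overlaps
            h, w = block.shape
            full[r:r + h, c:c + w] = block
        return full

log = ArrayLog((4, 4))
log.write((0, 0), np.ones((2, 2)))
log.write((2, 2), 2 * np.ones((2, 2)))
print(log.reassemble())
```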
Date: September 21, 2011
Creator: Wu, Kesheng; Byna, Surendra; Rotem, Doron & Shoshani, Arie
Partner: UNT Libraries Government Documents Department

Web-based Tool Identifies and Quantifies Potential Cost Savings Measures at the Hanford Site - 14366

Description: The Technical Improvement system is an approachable web-based tool that is available to Hanford DOE staff, site contractors, and general support service contractors as part of the baseline optimization effort underway at the Hanford Site. Finding and implementing technical improvements are a large part of DOE’s cost savings efforts. The Technical Improvement dashboard is a key tool for brainstorming and monitoring the progress of submitted baseline optimizations and potential cost/schedule efficiencies. The dashboard is accessible to users over the Hanford Local Area Network (HLAN) and provides a highly visual and straightforward status to management on the ideas provided, alleviating the need for resource-intensive weekly and monthly reviews.
Date: January 9, 2014
Creator: Renevitz, Marisa J.; Peschong, Jon C.; Charboneau, Briant L. & Simpson, Brett C.
Partner: UNT Libraries Government Documents Department

PDP-7 HYBRID COMPUTER PROGRAM RECTANGULAR INTEGRATION SUBROUTINE

Description: The hybrid subroutine library contains three subroutines to be used in performing a rectangular integration: RINT$, CNP$, and ICL$. Subroutine RINT$ performs the calculation of the integral and can be used in either synchronous or asynchronous mode. CNP$ calculates Δt, the time between integral calculations, for an asynchronous integration. Subroutine ICL$ initializes the real-time clock and also contains the save and restore portions of the clock service subroutine.
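A short Python sketch of the rectangular rule these subroutines implement (the class and names here are illustrative, not the original PDP-7 calling sequence): each step adds one rectangle, value × Δt, where Δt is computed from successive clock readings as CNP$ does in the asynchronous case.

```python
class RectangularIntegrator:
    """Accumulate integral += value * dt -- the rectangular rule."""
    def __init__(self):
        self.total = 0.0
        self.last_time = None

    def step(self, value, now):
        # CNP$'s role in the asynchronous case: dt from clock readings.
        dt = 0.0 if self.last_time is None else now - self.last_time
        self.last_time = now
        self.total += value * dt          # RINT$'s role: add one rectangle
        return self.total

integ = RectangularIntegrator()
for t in [0.0, 0.1, 0.25, 0.5]:           # unevenly spaced (asynchronous) samples
    integ.step(value=2.0, now=t)
print(integ.total)                         # 2.0 over [0, 0.5] -> 1.0
```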
Date: October 1, 1969
Creator: Gerhardstein, L. H.
Partner: UNT Libraries Government Documents Department

PDP-7 HYBRID COMPUTER PROGRAM, VARIABLE MONITORING AND TRIP SYSTEM PROGRAM TRIP$

Description: TRIP$ provides the capability of monitoring any number of program variables, comparing their values to predetermined trip points. If a trip point is reached, a selected user subroutine will be entered continuously until the trip system is re-initialized.
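A small Python sketch of the behavior just described (the API is hypothetical; TRIP$ itself is a PDP-7 hybrid-computer routine): each poll compares monitored variables to their trip points, and once a variable trips, the user routine is called on every subsequent poll until the system is reset.

```python
def make_trip_monitor(variables, trip_points, on_trip):
    """Return (poll, reset): poll() compares each variable to its trip
    point and, once tripped, calls the user routine on every poll."""
    tripped = set()

    def poll():
        for name, read in variables.items():
            if name in tripped or read() >= trip_points[name]:
                tripped.add(name)
                on_trip(name)           # entered continuously once tripped

    def reset():
        tripped.clear()                 # re-initialize the trip system

    return poll, reset

temperature = iter([95, 98, 101, 99])
poll, reset = make_trip_monitor(
    {"temp": lambda: next(temperature)},
    {"temp": 100},
    on_trip=lambda name: print(f"TRIP: {name}"),
)
for _ in range(4):
    poll()                              # trips on the third poll, then repeats
```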
Date: October 1, 1969
Creator: Worth, Grant A.
Partner: UNT Libraries Government Documents Department

DOE Hanford Network Upgrades and Disaster Recovery Exercise Support the Cleanup Mission Now and into the Future - 14303

Description: In 2013, the U.S. Department of Energy's (DOE) Hanford Site, located in Washington State, funded an update to the critical network infrastructure supporting the Hanford Federal Cloud (HFC). The project, called ET-50, was the final step in a plan initiated five years earlier called "Hanford's IT Vision, 2015 and Beyond." The ET-50 project upgraded Hanford's core data center switches and routers, along with a majority of the distribution-layer switches. The upgrades gave the HFC the network intelligence to provide Hanford with a more reliable and resilient network architecture. The culmination of the five-year plan improved network intelligence and high performance computing, and provided 10 Gbps-capable links between core backbone devices (10 times the previous bandwidth). These improvements allow Hanford to further support bandwidth-intensive applications, such as video teleconferencing. The ET-50 switch upgrade, along with other upgrades implemented from the five-year plan, has prepared Hanford's network for the next evolution of technology in voice, video, and data. Hand-in-hand with ET-50's major data center outage, Mission Support Alliance's (MSA) Information Management (IM) organization executed a disaster recovery (DR) exercise to perform a true integration test and capability study. The DR scope was planned within the constraints of ET-50's 14-hour datacenter outage window. This DR exercise tested Hanford's Continuity of Operations (COOP) capability and failover plans for safety- and business-critical Hanford Federal Cloud applications. The planned suite of services to be tested was identified prior to the outage, and plans were prepared to test each service's ability to fail over from the primary Hanford datacenter to the backup datacenter. The services tested were: core network (backbone, firewall, load balancers), voicemail, Voice over IP (VoIP), emergency notification, virtual desktops, and a select set of production applications and ...
Date: November 7, 2013
Creator: Eckman, Todd J.; Hertzel, Ali K. & Lane, James J.
Partner: UNT Libraries Government Documents Department

Quantum Monte Carlo algorithms for electronic structure at the petascale; the endstation project.

Description: Over the past two decades, continuum quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles. By solving the Schrödinger equation through a stochastic projection, it achieves the greatest accuracy and reliability of methods available for physical systems containing more than a few quantum particles. QMC enjoys favorable scaling compared with quantum chemical methods, with a computational effort which grows with the second or third power of system size. This accuracy and scalability have enabled scientific discovery across a broad spectrum of disciplines. The current methods perform very efficiently at the terascale. The quantum Monte Carlo Endstation project is a collaborative effort among researchers in the field to develop a new generation of algorithms, and their efficient implementations, which will take advantage of the upcoming petaflop architectures. Some aspects of these developments are discussed here. These tools will expand the accuracy, efficiency and range of QMC applicability and enable us to tackle challenges which are currently out of reach. The methods will be applied to several important problems including electronic and structural properties of water, transition metal oxides, nanosystems and ultracold atoms.
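The stochastic projection methods described above are beyond a few lines, but the flavor of Monte Carlo electronic structure can be shown with a toy variational Monte Carlo estimate (an illustration only, unrelated to the Endstation codes): Metropolis sampling of |psi|^2 for a 1D harmonic oscillator with Gaussian trial function psi = exp(-a x^2), whose local energy works out to a + x^2 (1/2 - 2 a^2).

```python
import random, math

def local_energy(x, a):
    # H = -1/2 d^2/dx^2 + 1/2 x^2 with trial psi = exp(-a x^2)
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, steps=200_000, step_size=1.0, seed=42):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        trial = x + rng.uniform(-step_size, step_size)
        # accept with probability |psi(trial)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * a * (trial * trial - x * x)):
            x = trial
        e_sum += local_energy(x, a)
    return e_sum / steps

print(vmc_energy(a=0.5))   # exact ground-state energy is 0.5
```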
Date: October 1, 2008
Creator: Kim, J; Ceperley, D M; Purwanto, W; Walter, E J; Krakauer, H; Zhang, S W et al.
Partner: UNT Libraries Government Documents Department

MARIANE: MApReduce Implementation Adapted for HPC Environments

Description: MapReduce is an increasingly popular framework and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging the functions inherent in distributed file systems, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
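MARIANE's source is not shown in the abstract; for readers new to the model the paper adapts, here is a minimal, framework-free map/shuffle/reduce word count in Python, with the three phases made explicit.

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit (word, 1) pairs from each input record
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # shuffle: group intermediate values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: combine each key's values into a final count
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle(map_phase(docs))))   # {'the': 3, 'fox': 2, ...}
```

In Hadoop the shuffle stage is mediated by HDFS and the network; an implementation such as MARIANE instead relies on the shared file system the HPC site already provides.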
Date: July 6, 2011
Creator: Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan & Ramakrishnan, Lavanya
Partner: UNT Libraries Government Documents Department

Distributed Sensor Coordination for Advanced Energy Systems

Description: The ability to collect key system-level information is critical to the safe, efficient and reliable operation of advanced energy systems. With recent advances in sensor development, it is now possible to push some level of decision making directly to computationally sophisticated sensors, rather than wait for data to arrive at a massive centralized location before a decision is made. This type of approach relies on networked sensors (called "agents" from here on) to actively collect and process data, and to provide key control decisions that significantly improve both the quality/relevance of the collected data and the associated decision making. The technological bottlenecks for such sensor networks stem from a lack of mathematics and algorithms to manage the systems, rather than difficulties associated with building and deploying them. Indeed, traditional sensor coordination strategies do not provide adequate solutions for this problem. Passive data collection methods (e.g., large sensor webs) can scale to large systems, but are generally not suited to highly dynamic environments, such as advanced energy systems, where crucial decisions may need to be reached quickly and locally. Approaches based on local decisions, on the other hand, cannot guarantee that each agent performing its task (maximizing an agent objective) will lead to a good network-wide solution (maximizing a network objective) without invoking cumbersome coordination routines. There is currently a lack of algorithms that will enable self-organization and blend the efficiency of local decision making with the system-level guarantees of global decision making, particularly when the systems operate in dynamic and stochastic environments. In this work we addressed this critical gap and provided a comprehensive solution to the problem of sensor coordination to ensure the safe, reliable, and robust operation of advanced energy systems. The differentiating aspect of the proposed work is in the shift ...
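A toy Python sketch of the local-versus-global tension described above (the coverage problem and payoffs are invented for illustration, not taken from the report): agents that greedily maximize their own gain can miss the joint assignment that maximizes the network objective.

```python
import itertools

SECTORS = ("north", "south")
LOCAL_GAIN = {"s1": {"north": 5, "south": 4},   # hypothetical per-agent payoffs
              "s2": {"north": 5, "south": 1}}

def network_objective(assignment):
    # system-level goal: how many distinct sectors are covered
    return len(set(assignment.values()))

# Local decisions: each agent greedily maximizes its own gain.
local = {s: max(SECTORS, key=lambda sec: LOCAL_GAIN[s][sec])
         for s in LOCAL_GAIN}

# Global decision: search all joint assignments for the network optimum.
best = max(
    (dict(zip(LOCAL_GAIN, choice))
     for choice in itertools.product(SECTORS, repeat=len(LOCAL_GAIN))),
    key=network_objective,
)

print("local choices:", local, "->", network_objective(local))   # both north -> 1
print("global choices:", best, "->", network_objective(best))    # split     -> 2
```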
Date: July 31, 2013
Creator: Tumer, Kagan
Partner: UNT Libraries Government Documents Department