8 Matching Results


The grand challenge of managing the petascale facility.

Description: This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers operating in support of science before the end of the decade, and the DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from that strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and of high-end computing with data-intensive computing; we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need to include capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure.
The appendices of the report document current and projected DOE computation facilities, science trends, and ...
Date: February 28, 2007
Creator: Aiken, R. J. & Mathematics and Computer Science
Partner: UNT Libraries Government Documents Department

DSDP5 user guide - software for semidefinite programming.

Description: DSDP implements the dual-scaling algorithm for semidefinite programming. The source code of this interior-point solver, written entirely in ANSI C, is freely available. The solver can be used as a subroutine library, as a function within the Matlab environment, or as an executable that reads and writes to files. Initiated in 1997, DSDP has developed into an efficient and robust general-purpose solver for semidefinite programming. Although the solver is written with semidefinite programming in mind, it can also be used for linear programming and other constraint cones. The features of DSDP include the following:
- a robust algorithm with a convergence proof and polynomially bounded complexity under mild assumptions on the data;
- primal and dual solutions;
- feasible solutions when they exist, or approximate certificates of infeasibility;
- initial points that can be feasible or infeasible;
- relatively low memory requirements for an interior-point method;
- sparse and low-rank data structures;
- extensibility that allows applications to customize the solver and improve its performance;
- a subroutine library that enables it to be linked to larger applications;
- scalable performance for large problems on parallel architectures;
- a well-documented interface and examples of its use.
The package has been used in many applications and tested for efficiency, robustness, and ease of use. We welcome and encourage further use under the terms of the license included in the distribution.
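The primal-dual pairing that the dual-scaling algorithm exploits can be illustrated numerically. The sketch below is a hand-built two-by-two toy instance in NumPy, not DSDP's C or Matlab interface: it checks weak duality, trace(C X) - b'y = trace(X S) >= 0, for a feasible primal-dual pair.

```python
import numpy as np

# Toy semidefinite program (illustration only; this is NOT DSDP's API):
#   minimize   trace(C @ X)
#   subject to trace(A1 @ X) = b1,  X positive semidefinite.
C = np.diag([1.0, 2.0])
A1 = np.eye(2)
b1 = 1.0

# A primal-feasible X and a dual-feasible y; the dual slack S = C - y*A1
# must stay positive semidefinite, just as in the dual-scaling setting.
X = np.diag([0.5, 0.5])      # trace(A1 @ X) == 1.0 == b1
y = 1.0
S = C - y * A1               # eigenvalues 0 and 1, so S is PSD

primal = np.trace(C @ X)     # primal objective at X
dual = b1 * y                # dual objective at y
gap = primal - dual          # weak duality: gap == trace(X @ S) >= 0
print(primal, dual, gap)     # prints 1.5 1.0 0.5
```

An interior-point method such as DSDP's drives this gap toward zero while keeping S (and, at the end, X) positive semidefinite.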
Date: January 24, 2006
Creator: Benson, S. J.; Ye, Y. & Science, Mathematics and Computer
Partner: UNT Libraries Government Documents Department

Optimal explicit strong-stability-preserving general linear methods : complete results.

Description: This paper constructs strong-stability-preserving general linear time-stepping methods that are well suited for hyperbolic PDEs discretized by the method of lines. These methods generalize both Runge-Kutta (RK) and linear multistep schemes. They have high stage orders and hence are less susceptible than RK methods to order reduction from source terms or nonhomogeneous boundary conditions. A global optimization strategy is used to find the most efficient schemes that have low storage requirements. Numerical results illustrate the theoretical findings.
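A minimal method-of-lines sketch, using first-order upwind differencing and the classic three-stage SSP Runge-Kutta scheme rather than the paper's optimized general linear methods, shows the strong-stability property in action: the total variation of a discontinuous profile does not grow.

```python
import numpy as np

# Method of lines for u_t + u_x = 0: upwind in space, SSPRK(3,3) in time.
# (Our own minimal example, not one of the paper's optimized GLMs.)
n = 100
dx = 1.0 / n
dt = 0.5 * dx                                   # within the SSP (CFL) limit
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # discontinuous initial data

def L(u):
    return -(u - np.roll(u, 1)) / dx            # periodic upwind derivative

def total_variation(u):
    return np.abs(np.diff(np.append(u, u[0]))).sum()

tv0 = total_variation(u)
for _ in range(50):                             # Shu-Osher form of SSPRK3
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    u = u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Strong stability: each stage is a convex combination of forward Euler
# steps, so the total variation cannot increase.
assert total_variation(u) <= tv0 + 1e-12
```

Each stage is a convex combination of forward Euler steps, which is exactly the structure the paper's general linear methods generalize while seeking larger effective step sizes.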
Date: March 3, 2009
Creator: Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science & Virginia Polytechnic Inst. and State Univ.
Partner: UNT Libraries Government Documents Department

HEIGHTS integrated model as an instrument for simulation of hydrodynamic, radiation transport, and heat conduction phenomena of laser-produced plasma in EUV applications.

Description: The HEIGHTS integrated model has been developed as an instrument for simulation and optimization of laser-produced plasma (LPP) sources relevant to extreme ultraviolet (EUV) lithography. The model combines three general parts: hydrodynamics, radiation transport, and heat conduction. The first part employs a total variation diminishing scheme in the Lax-Friedrichs formulation (TVD-LF); the second part, a Monte Carlo model; and the third part, implicit schemes with sparse matrix technology. All model parts consider physical processes in three-dimensional geometry. The influence of a generated magnetic field on laser plasma behavior was estimated, and it was found that this effect could be neglected for laser intensities relevant to EUV (up to ~10^12 W/cm^2). All applied schemes were tested separately on analytical problems. Benchmark modeling of the full EUV source problem with a planar tin target showed good correspondence with experimental and theoretical data. Preliminary results are presented for tin droplet- and planar-target LPP devices. The influence of three-dimensional effects on the EUV properties of the source is discussed.
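As an illustration of the scheme family only (HEIGHTS itself solves the full three-dimensional hydrodynamic system), a basic Lax-Friedrichs update for scalar advection shows the monotone behavior that TVD formulations build on: at an admissible CFL number, no new extrema appear.

```python
import numpy as np

# Basic Lax-Friedrichs step for u_t + a u_x = 0 on a periodic grid.
# (A sketch of the scheme family, not the HEIGHTS implementation.)
a, n = 1.0, 200
dx = 1.0 / n
dt = 0.5 * dx / abs(a)                    # CFL number 0.5
x = np.arange(n) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)       # smooth initial pulse in [0, 1]

for _ in range(100):
    up = np.roll(u, -1)                   # u_{i+1} (periodic)
    um = np.roll(u, 1)                    # u_{i-1}
    u = 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)

# At CFL 0.5 each update is a convex combination of neighbors, so the
# solution stays within its initial bounds (monotone scheme).
assert u.min() >= -1e-12 and u.max() <= 1.0 + 1e-12
```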
Date: January 16, 2007
Creator: Sizyuk, V.; Hassanein, A.; Morozov, V.; Sizyuk, T. & Mathematics and Computer Science
Partner: UNT Libraries Government Documents Department

Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

Description: This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the ...
Date: October 15, 2008
Creator: Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science & Univ. of Chicago
Partner: UNT Libraries Government Documents Department

Extending the POSIX I/O interface: a parallel file system perspective.

Description: The POSIX interface does not lend itself to good performance for high-end applications. Extensions are needed in the POSIX I/O interface so that high-concurrency HPC applications running on top of parallel file systems perform well. This paper presents the rationale, design, and evaluation of a reference implementation of a subset of the POSIX I/O interfaces on a widely used parallel file system (PVFS) on clusters. Experimental results on a set of micro-benchmarks confirm that the extensions to the POSIX interface greatly improve scalability and performance.
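To see why per-call POSIX semantics are costly for noncontiguous access, the small Python sketch below uses the standard os.pread interface (the file and region sizes are made up): every noncontiguous region costs a separate system call, which is the kind of per-call overhead that batched, list-I/O-style extensions aim to amortize.

```python
import os
import tempfile

# Standard POSIX-style I/O from Python: one pread() per noncontiguous
# region. A strided read of 8 regions takes 8 separate syscalls.
data = bytes(range(256)) * 16             # 4 KiB of test data
fd, path = tempfile.mkstemp()
try:
    os.pwrite(fd, data, 0)
    # Read every other 256-byte block: offsets 0, 512, 1024, ...
    chunks = [os.pread(fd, 256, off) for off in range(0, 4096, 512)]
finally:
    os.close(fd)
    os.remove(path)

assert len(chunks) == 8
assert all(c == bytes(range(256)) for c in chunks)
```

A batched interface would describe all eight regions in a single request, letting the file system schedule them together instead of paying the round-trip cost eight times.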
Date: December 11, 2008
Creator: Vilayannur, M.; Lang, S.; Ross, R.; Klundt, R.; Ward, L.; Mathematics and Computer Science et al.
Partner: UNT Libraries Government Documents Department

Virtual venue management users manual : access grid toolkit documentation, version 2.3.

Description: An Access Grid Venue Server provides access to individual Virtual Venues, virtual spaces where users can collaborate using the Access Grid Venue Client software. This manual describes the Venue Server component of the Access Grid Toolkit, version 2.3. Covered here are the basic operations of starting a venue server, modifying its configuration, and modifying the configuration of the individual venues.
Date: October 24, 2007
Creator: Judson, I. R.; Lefvert, S.; Olson, E.; Uram, T. D. & Mathematics and Computer Science
Partner: UNT Libraries Government Documents Department

Structured hints : extracting and abstracting domain expertise.

Description: We propose a new framework for providing information to help optimize domain-specific application codes. Its design addresses problems that derive from the widening gap between the domain problem statement by domain experts and the architectural details of new and future high-end computing systems. The design is particularly well suited to program execution models that incorporate dynamic adaptive methodologies for live tuning of program performance and resource utilization. This new framework, which we call 'structured hints', couples a vocabulary of annotations to a suite of performance metrics. The immediate target is development of a process by which a domain expert describes characteristics of objects and methods in the application code that would not be readily apparent to the compiler; the domain expert provides further information about what quantities might provide the best indications of desirable effect; and the interactive preprocessor identifies potential opportunities for the domain expert to evaluate. Our development of these ideas is progressing in stages from case study, through manual implementation, to automatic or semi-automatic implementation. In this paper we discuss results from our case study, an examination of a large simulation of a neural network modeled after the neocortex.
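One way to picture the idea, with entirely invented names and no claim to the authors' actual design, is an annotation decorator that couples expert-supplied hints to the performance metric a tuning tool should watch:

```python
# Hypothetical sketch of the 'structured hints' idea: a domain expert
# annotates code objects with hints, and a preprocessor couples each
# hint to a performance metric. All names here are invented.
HINT_REGISTRY = {}

def hint(**annotations):
    """Attach domain-expert annotations to a function."""
    def wrap(fn):
        HINT_REGISTRY[fn.__name__] = annotations
        return fn
    return wrap

@hint(access_pattern="sparse", reuse="high", watch_metric="cache_miss_rate")
def update_synapses(weights, spikes):
    # Placeholder for the neural-network kernel from the case study.
    return [w + 0.01 * s for w, s in zip(weights, spikes)]

# A preprocessor could now scan the registry for tuning opportunities.
hints = HINT_REGISTRY["update_synapses"]
print(hints["watch_metric"])              # prints cache_miss_rate
```

The point of the framework is precisely this coupling: the annotation records what the compiler cannot infer (sparsity, reuse), and the metric tells the runtime what evidence of benefit to look for.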
Date: March 16, 2009
Creator: Hereld, M.; Stevens, R.; Sterling, T.; Gao, G. R.; Mathematics and Computer Science; California Inst. of Tech. et al.
Partner: UNT Libraries Government Documents Department