9 Matching Results

Language constructs and runtime systems for compositional parallel programming

Description: In task-parallel programs, diverse activities can take place concurrently, and communication and synchronization patterns are complex and not easily predictable. Previous work has identified compositionality as an important design principle for task-parallel programs. In this paper, we discuss alternative approaches to the realization of this principle. We first provide a review and critical analysis of Strand, an early compositional programming language. We examine the strengths of the Strand approach and also its weaknesses, which we attribute primarily to the use of a specialized language. Then, we present an alternative programming language framework that overcomes these weaknesses. This framework uses simple extensions to existing sequential languages (C++ and Fortran) and a common runtime system to provide a basis for the construction of large, task-parallel programs. We also discuss the runtime system techniques required to support these languages on parallel and distributed computer systems.
Date: March 1, 1995
Creator: Foster, I. & Kesselman, C.
Partner: UNT Libraries Government Documents Department
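
The compositional principle this abstract describes is easiest to see concretely: components run concurrently and interact only through channels, so composing two components in parallel preserves what each does in isolation. Below is a minimal Python analogue of that idea; the Channel class and the producer/consumer tasks are illustrative stand-ins, not Strand, CC++, or Fortran M syntax.

```python
# Minimal sketch of compositional task parallelism, in the spirit of
# Strand/CC++: concurrently executing components interact only through
# channels, so composing them in parallel preserves their behavior.
import threading
import queue

class Channel:
    """An illustrative communication channel, not a real CC++/Strand construct."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, value):
        self._q.put(value)
    def receive(self):
        return self._q.get()

def producer(out: Channel, n: int):
    # One component: emits a stream of values, then a sentinel.
    for i in range(n):
        out.send(i * i)
    out.send(None)

def consumer(inp: Channel, done: Channel):
    # Another component: consumes the stream; it knows nothing about
    # the producer beyond the channel protocol.
    total = 0
    while (v := inp.receive()) is not None:
        total += v
    done.send(total)

if __name__ == "__main__":
    c, d = Channel(), Channel()
    # Parallel composition: run both components concurrently.
    threads = [threading.Thread(target=producer, args=(c, 5)),
               threading.Thread(target=consumer, args=(c, d))]
    for t in threads:
        t.start()
    print("sum of squares:", d.receive())   # 0 + 1 + 4 + 9 + 16 = 30
    for t in threads:
        t.join()
```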

The Nexus task-parallel runtime system

Description: A runtime system provides a parallel language compiler with an interface to the low-level facilities required to support interaction between concurrently executing program components. Nexus is a portable runtime system for task-parallel programming languages. Distinguishing features of Nexus include its support for multiple threads of control, dynamic processor acquisition, dynamic address space creation, a global memory model via interprocessor references, and asynchronous events. In addition, it supports heterogeneity at multiple levels, allowing a single computation to utilize different programming languages, executables, processors, and network protocols. Nexus is currently being used as a compiler target for two task-parallel languages: Fortran M and Compositional C++. In this paper, we present the Nexus design, outline techniques used to implement Nexus on parallel computers, show how it is used in compilers, and compare its performance with that of another runtime system.
Date: December 31, 1994
Creator: Foster, I.; Tuecke, S. & Kesselman, C.
Partner: UNT Libraries Government Documents Department
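
The global memory model and asynchronous events this abstract mentions rest on two Nexus abstractions: a global pointer that names data in some address space, and an asynchronous remote service request (RSR) that runs a handler in that address space. The sketch below models this in a single Python process, with threads standing in for address spaces; the names (GlobalPointer, rsr) are illustrative, not the actual Nexus C API.

```python
# Toy model of Nexus-style remote service requests: a global pointer
# names data in a remote "address space"; an RSR asynchronously runs a
# handler there. Names are illustrative, not the Nexus C API.
import threading
import queue
from dataclasses import dataclass

@dataclass(frozen=True)
class GlobalPointer:
    space_id: int      # which address space holds the data
    address: str       # key of the datum within that space

class AddressSpace:
    """One 'context': local data plus a thread that serves RSRs."""
    def __init__(self, space_id):
        self.space_id = space_id
        self.data = {}
        self.inbox = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Each incoming RSR spawns a new thread of control, mirroring
        # Nexus's support for multiple threads per address space.
        while True:
            handler, gp, args = self.inbox.get()
            threading.Thread(target=handler, args=(self, gp, *args)).start()

SPACES = {i: AddressSpace(i) for i in range(2)}

def rsr(gp: GlobalPointer, handler, *args):
    """Asynchronous remote service request: no reply, no blocking."""
    SPACES[gp.space_id].inbox.put((handler, gp, args))

def store_handler(space, gp, value):
    space.data[gp.address] = value
    print(f"space {space.space_id}: {gp.address} = {value}")

if __name__ == "__main__":
    gp = GlobalPointer(space_id=1, address="x")
    rsr(gp, store_handler, 42)    # runs in "address space" 1
    import time; time.sleep(0.1)  # let the daemon thread run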

High performance computing and communications grand challenges program

Description: The so-called protein folding problem has numerous aspects; however, it is principally concerned with the de novo prediction of three-dimensional (3D) structure from the protein's primary amino acid sequence, and with the kinetics of the protein folding process. Our current project focuses on the 3D structure prediction problem, which has proved to be an elusive goal of molecular biology and biochemistry. The number of local energy minima is exponential in the number of amino acids in the protein. All current methods of 3D structure prediction attempt to alleviate this problem by imposing various constraints that effectively limit the volume of conformational space which must be searched. Our Grand Challenge project consists of two elements: (1) a hierarchical methodology for 3D protein structure prediction; and (2) development of a parallel computing environment, the Protein Folding Workbench, for carrying out a variety of protein structure prediction/modeling computations. During the first three years of this project, we are focusing on the use of two proteins selected from the Brookhaven Protein Data Bank (PDB) of known structure to provide validation of our prediction algorithms and their software implementation, both serial and parallel. Both proteins, protein L from Peptostreptococcus magnus and streptococcal protein G, are known to bind to IgG, and both have an α + β sandwich conformation. Although both proteins bind to IgG, they do so at different sites on the immunoglobulin, and it is of considerable biological interest to understand structurally why this is so. 12 refs., 1 fig.
Date: October 1, 1994
Creator: Solomon, J.E.; Barr, A.; Chandy, K.M.; Goddard, W.A., III & Kesselman, C.
Partner: UNT Libraries Government Documents Department
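
The abstract's observation that local energy minima grow exponentially with chain length is the crux of why unconstrained search fails, and a back-of-envelope estimate makes the scale concrete. The three-conformations-per-residue figure and the roughly 60-residue chain length used below are illustrative assumptions, not numbers from the paper.

```python
# Back-of-envelope Levinthal-style estimate: if each residue can adopt
# k backbone conformations, a chain of n residues has k**n states.
# k = 3 and n = 60 are illustrative assumptions, not from the paper.
k, n = 3, 60
states = k ** n
print(f"{states:.3e} conformations")            # ~4.2e28
# Even examining 1e12 conformations per second, exhaustive search would
# take on the order of a billion years, which is why prediction methods
# must constrain the volume of conformational space that is searched.
print(f"{states / 1e12 / 3.15e7:.2e} years at 1e12 states/s")
```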

Multimethod communication for high-performance metacomputing applications

Description: Metacomputing systems use high-speed networks to connect supercomputers, mass storage systems, scientific instruments, and display devices with the objective of enabling parallel applications to access geographically distributed computing resources. However, experience shows that high performance can often be achieved only if applications can integrate diverse communication substrates, transport mechanisms, and protocols, chosen according to where communication is directed, what is communicated, or when communication is performed. In this article, we describe a software architecture that addresses this requirement. This architecture allows multiple communication methods to be supported transparently in a single application, with either automatic or user-specified selection criteria guiding the methods used for each communication. We describe an implementation of this architecture, based on the Nexus communication library, and use this implementation to evaluate performance issues. The implementation supported a wide variety of applications in the I-WAY metacomputing experiment at Supercomputing '95; we use one of these applications to provide a quantitative demonstration of the advantages of multimethod communication in a heterogeneous networked environment.
Date: December 31, 1996
Creator: Foster, I.; Geisler, J.; Tuecke, S. & Kesselman, C.
Partner: UNT Libraries Government Documents Department
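
The selection mechanism this abstract describes, choosing a communication method per destination, message, or phase, can be sketched as a rule consulted at send time. The method names and thresholds below are illustrative assumptions, not the actual Nexus rule set.

```python
# Sketch of multimethod communication selection: at send time, pick a
# transport based on where the message goes and how large it is. The
# methods and thresholds are illustrative, not the Nexus implementation.
from dataclasses import dataclass

@dataclass
class Link:
    dest_host: str
    same_cluster: bool

def default_rule(link: Link, nbytes: int) -> str:
    # Automatic selection: fast local transport inside a cluster,
    # TCP across the wide area.
    if link.same_cluster:
        return "shared-memory" if nbytes < 4096 else "MPI"
    return "TCP"

class Communicator:
    def __init__(self, rule=default_rule):
        self.rule = rule          # user-specified rules may replace this

    def send(self, link: Link, payload: bytes):
        method = self.rule(link, len(payload))
        print(f"-> {link.dest_host} via {method} ({len(payload)} bytes)")
        # ... dispatch to the chosen transport here ...

if __name__ == "__main__":
    comm = Communicator()
    comm.send(Link("node7", same_cluster=True), b"x" * 128)
    comm.send(Link("remote.site.edu", same_cluster=False), b"x" * 10**6)
```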

A resource management architecture for metacomputing systems

Description: Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.
Date: August 24, 1999
Creator: Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W. et al.
Partner: UNT Libraries Government Documents Department
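
The division of labor this abstract describes, with a co-allocator splitting a multirequest expressed in an extensible specification language across local managers, can be sketched with a simplified RSL-like syntax. The grammar and the manager interface below are simplified illustrations, not the actual Globus RSL parser or GRAM API.

```python
# Sketch of co-allocation over an RSL-like specification: a multirequest
# is split into per-site requests handed to local resource managers.
# The syntax and manager interface are simplified illustrations, not
# the actual Globus RSL grammar or GRAM API.
import re

def parse_request(spec: str) -> dict:
    """Parse one '(key=value)(key=value)...' clause into a dict."""
    return dict(re.findall(r"\((\w+)=([^)]+)\)", spec))

def split_multirequest(spec: str) -> list[dict]:
    """A leading '+' marks a multirequest: one clause per site."""
    assert spec.startswith("+"), "expected a multirequest"
    clauses = re.findall(r"\((?:[^()]|\([^()]*\))*\)", spec[1:])
    return [parse_request(c) for c in clauses]

def coallocate(spec: str, managers: dict):
    """Hand each component to its site's local manager. All-or-nothing
    semantics (the 'co' in co-allocation) is elided in this sketch."""
    for req in split_multirequest(spec):
        site = req.pop("site")
        managers[site](req)

if __name__ == "__main__":
    multirequest = ("+((site=argonne)(count=64)(executable=a.out))"
                    "((site=isi)(count=32)(executable=a.out))")
    coallocate(multirequest,
               {"argonne": lambda r: print("ANL starts", r),
                "isi":     lambda r: print("ISI starts", r)})
```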

A directory service for configuring high-performance distributed computations

Description: High-performance execution in distributed computing environments often requires careful selection and configuration not only of computers, networks, and other resources but also of the protocols and algorithms used by applications. Selection and configuration in turn require access to accurate, up-to-date information on the structure and state of available resources. Unfortunately, no standard mechanism exists for organizing or accessing such information. Consequently, different tools and applications adopt ad hoc mechanisms, or they compromise their portability and performance by using default configurations. We propose a Metacomputing Directory Service that provides efficient and scalable access to diverse, dynamic, and distributed information about resource structure and state. We define an extensible data model to represent required information and present a scalable, high-performance, distributed implementation. The data representation and application programming interface are adopted from the Lightweight Directory Access Protocol; the data model and implementation are new. We use the Globus distributed computing toolkit to illustrate how this directory service enables the development of more flexible and efficient distributed computing services and applications.
Date: August 1, 1997
Creator: Fitzgerald, S.; Kesselman, C. & Foster, I.
Partner: UNT Libraries Government Documents Department
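
Because the directory service adopts the LDAP data representation and API, any standard LDAP client can query it. The fragment below uses the third-party python-ldap package to show the shape of such a query; the host, port, base DN, object class, and attribute names are hypothetical placeholders, not the actual MDS schema.

```python
# Querying an MDS-style directory with a standard LDAP client.
# Requires the third-party python-ldap package (pip install python-ldap).
# The host, port, base DN, filter, and attribute names below are
# hypothetical placeholders, not the actual MDS schema.
import ldap

def find_compute_resources(uri="ldap://mds.example.org:2135",
                           base="o=Grid"):
    conn = ldap.initialize(uri)
    conn.simple_bind_s()          # anonymous bind
    # A plain LDAP subtree search; only the filter/attrs are MDS-specific.
    entries = conn.search_s(
        base,
        ldap.SCOPE_SUBTREE,
        "(objectClass=ComputeResource)",      # hypothetical class
        ["cn", "freeNodes", "cpuLoad"],       # hypothetical attributes
    )
    conn.unbind_s()
    return entries

if __name__ == "__main__":
    for dn, attrs in find_compute_resources():
        print(dn, attrs)
```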

Secure, efficient data transport and replica management for high-performance data-intensive computing

Description: An emerging class of data-intensive applications involves the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.
Date: February 23, 2001
Creator: Allcock, B.; Bester, J.; Bresnahan, J.; Chervenak, A.L.; Foster, I.; Kesselman, C. et al.
Partner: UNT Libraries Government Documents Department
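
Partial file access, one of the GridFTP extensions the abstract mentions, generalizes the standard FTP restart (REST) marker, so plain ftplib can mimic the "read N bytes starting at an offset" pattern, as sketched below. Striping and GridFTP's extended block mode have no ftplib equivalent, and the host and path here are placeholders.

```python
# Partial file access in the style GridFTP generalizes: request a byte
# range of a remote file using the standard FTP restart (REST) marker.
# Host, credentials, and path are placeholders; GridFTP striping and
# extended block mode are not expressible with plain ftplib.
from ftplib import FTP

def fetch_range(host, path, offset, length, user="anonymous", passwd=""):
    with FTP(host) as ftp:
        ftp.login(user, passwd)
        ftp.voidcmd("TYPE I")                  # binary mode, needed for REST
        # transfercmd sends REST <offset> before RETR when rest= is given.
        with ftp.transfercmd(f"RETR {path}", rest=offset) as sock:
            chunks, remaining = [], length
            while remaining > 0:
                data = sock.recv(min(8192, remaining))
                if not data:
                    break
                chunks.append(data)
                remaining -= len(data)
        try:
            ftp.voidresp()                     # drain the transfer reply
        except Exception:
            pass                               # server may object to the early close
        return b"".join(chunks)

if __name__ == "__main__":
    block = fetch_range("ftp.example.org", "pub/dataset.bin",
                        offset=10 * 2**20, length=2**20)   # 1 MiB at 10 MiB
    print(len(block), "bytes")
```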

A quasi-realtime x-ray microtomography system at the Advanced Photon Source.

Description: The combination of high-brilliance x-ray sources, fast detector systems, wide-bandwidth networks, and parallel computers can substantially reduce the time required to acquire, reconstruct, and visualize high-resolution three-dimensional tomographic datasets. A quasi-real-time computed x-ray microtomography system has been implemented at the 2-BM beamline at the Advanced Photon Source at Argonne National Laboratory. With this system, a complete tomographic data set can be collected in about 15 minutes. Immediately after each projection is obtained, it is rapidly transferred to the Mathematics and Computer Science Division, where preprocessing and reconstruction calculations are performed concurrently with the data acquisition by an SGI parallel computer. The reconstruction results, once completed, are transferred to a visualization computer that performs the volume rendering calculations. Rendered images of the reconstructed data are available for viewing back at the beamline experiment station minutes after the data acquisition is complete. The fully pipelined data acquisition and reconstruction system also gives us the option to acquire the tomographic data set in several cycles, initially with coarse and then with fine angular steps. At present the projections are acquired with a straight-ray projection imaging scheme using 5-20 keV hard x rays in either phase or amplitude contrast mode at a 1-10 µm resolution. In the future, we expect to increase the resolution of the projections to below 100 nm by using a focused x-ray beam at the 2-ID-B beamline and to reduce the combined acquisition and computation time to the 1-minute scale with improvements in the detectors, network links, software pipeline, and computation algorithms.
Date: July 16, 1999
Creator: DeCarlo, F.; Foster, I.; Insley, J.; Kesselman, C.; Lane, P.; Mancini, D. et al.
Partner: UNT Libraries Government Documents Department
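
The pipelining this abstract describes, with reconstruction proceeding concurrently with acquisition, is a classic producer-consumer structure. The sketch below overlaps a simulated detector with a reconstruction worker through a bounded queue; the stage names and timings are illustrative, not the beamline software.

```python
# Sketch of the pipelined acquisition/reconstruction structure: each
# projection is handed to the reconstruction stage as soon as it is
# acquired, so computation overlaps data collection. Stage names and
# timings are illustrative, not the beamline software.
import threading
import queue
import time

projections = queue.Queue(maxsize=8)   # bounded: back-pressure on detector

def acquire(n_projections):
    for i in range(n_projections):
        time.sleep(0.01)                       # detector exposure time
        projections.put(("projection", i))     # stand-in for image data
    projections.put(None)                      # end of scan

def reconstruct():
    done = 0
    while projections.get() is not None:
        time.sleep(0.005)                      # reconstruction step
        done += 1
    print(f"reconstructed {done} projections concurrently with acquisition")

if __name__ == "__main__":
    t0 = time.time()
    threads = [threading.Thread(target=acquire, args=(50,)),
               threading.Thread(target=reconstruct)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Elapsed time tracks the slowest stage, not the sum of both stages:
    # that overlap is the pipeline win the abstract describes.
    print(f"elapsed {time.time() - t0:.2f}s")
```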

Real-time analysis, visualization, and steering of microtomography experiments at photon sources

Description: A new generation of specialized scientific instruments called synchrotron light sources allows the imaging of materials at very fine scales. However, in contrast to a traditional microscope, interactive use has not previously been possible because of the large amounts of data generated and the considerable computation required to translate these data into a useful image. The authors describe a new software architecture that uses high-speed networks and supercomputers to enable quasi-real-time and hence interactive analysis of synchrotron light source data. This architecture uses technologies provided by the Globus computational grid toolkit to allow dynamic creation of a reconstruction pipeline that transfers data from a synchrotron source beamline first to a preprocessing station, next to a parallel reconstruction system, and then to multiple visualization stations. Collaborative analysis tools allow multiple users to control data visualization. As a result, local and remote scientists can see and discuss preliminary results just minutes after data collection starts. The implications for more efficient use of this scarce resource and for more effective science appear tremendous.
Date: February 29, 2000
Creator: von Laszewski, G.; Insley, J. A.; Foster, I.; Bresnahan, J.; Kesselman, C.; Su, M. et al.
Partner: UNT Libraries Government Documents Department
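
The collaborative piece of this architecture, multiple visualization stations receiving results as they are produced, amounts to fanning out each reconstructed slice to every subscriber. A minimal in-process publish/subscribe sketch follows; the class and station names are illustrative, not the Globus-based implementation.

```python
# Minimal fan-out sketch for collaborative analysis: each reconstructed
# slice is broadcast to every subscribed visualization station, so all
# collaborators see the same preliminary results as they appear.
# Class and station names are illustrative, not the Globus implementation.
import threading
import queue

class SliceBroadcaster:
    def __init__(self):
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self, name):
        q = queue.Queue()
        with self._lock:
            self._subscribers.append((name, q))
        return q

    def publish(self, slice_id):
        with self._lock:
            for name, q in self._subscribers:
                q.put(slice_id)       # in reality: image data over the network

def viewer(name, inbox):
    while (s := inbox.get()) is not None:
        print(f"{name}: rendering slice {s}")

if __name__ == "__main__":
    bus = SliceBroadcaster()
    stations = []
    for name in ("beamline-station", "remote-collaborator"):
        inbox = bus.subscribe(name)
        t = threading.Thread(target=viewer, args=(name, inbox))
        t.start()
        stations.append((inbox, t))
    for slice_id in range(3):
        bus.publish(slice_id)
    for inbox, t in stations:
        inbox.put(None)               # shut down each viewer
        t.join()
```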