5 Matching Results

Search Results


I/O Performance of Virtualized Cloud Environments

Description: The scientific community is exploring the suitability of cloud infrastructure to handle High Performance Computing (HPC) applications. The goal of Magellan, a project funded through DOE ASCR, is to investigate the potential role of cloud computing in addressing the computing needs of the Department of Energy's Office of Science, especially for mid-range computing and data-intensive applications that are not served by existing DOE centers today. Prior work has shown that applications with significant communication or I/O tend to perform poorly in virtualized cloud environments. However, there is a limited understanding of the I/O characteristics of virtualized cloud environments. This paper presents our results in benchmarking I/O performance across different cloud and HPC platforms to identify the major bottlenecks in existing infrastructure. We compare I/O performance using the IOR benchmark on two cloud platforms - Amazon and Magellan. We analyze the performance of the different storage options available and of different instance types in multiple availability zones. Finally, we perform large-scale tests in order to analyze the variability in I/O patterns over time and region. Our results highlight the overhead and variability in I/O performance on both public and private cloud solutions, and will help applications choose effectively among the different storage options. (An illustrative benchmarking sketch follows this entry.)
Date: November 3, 2011
Creator: Ghoshal, Devarshi; Canon, Shane & Ramakrishnan, Lavanya
Partner: UNT Libraries Government Documents Department
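
The following is a minimal sketch of how an I/O benchmarking sweep of the kind described above might be scripted: it launches the IOR benchmark against several storage targets and transfer sizes. The mount points, rank count, and flag values are assumptions for illustration, not the authors' actual configuration.

```python
# Illustrative sketch only: sweep the IOR benchmark over several storage
# targets and transfer sizes. Paths and parameters are assumptions.
import itertools
import subprocess

STORAGE_TARGETS = {              # hypothetical mount points for each storage option
    "local-ephemeral": "/mnt/ephemeral/ior_test",
    "block-volume": "/mnt/ebs/ior_test",
    "shared-fs": "/global/scratch/ior_test",
}
TRANSFER_SIZES = ["1m", "4m"]    # per-call transfer sizes to vary
NUM_PROCS = 8                    # MPI ranks per run

def run_ior(label: str, test_file: str, transfer_size: str) -> None:
    """Run one IOR write/read test against a single storage target."""
    cmd = [
        "mpirun", "-np", str(NUM_PROCS),
        "ior",
        "-a", "POSIX",           # POSIX I/O interface
        "-t", transfer_size,     # transfer size per I/O call
        "-b", "256m",            # block size written per rank
        "-w", "-r",              # perform both write and read phases
        "-o", test_file,         # target file on the storage under test
    ]
    print(f"[{label}] running: {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for (label, path), tsize in itertools.product(STORAGE_TARGETS.items(), TRANSFER_SIZES):
        run_ior(label, path, tsize)
```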

MARIANE: MApReduce Implementation Adapted for HPC Environments

Description: MapReduce is becoming an increasingly popular framework and a potent programming model. The most popular open-source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, because HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as TeraGrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues but also affects overall performance due to the added overhead of HDFS. This paper presents a MapReduce implementation directly suitable for HPC environments and exposes the design choices that yield better performance in those settings. By leveraging the functions inherent to distributed file systems and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) both allows the model to be used in an expanding number of HPC environments and delivers better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not tied to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model and quantifies the performance gains exhibited by our approach over Apache Hadoop in a data-intensive setting on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC). (A minimal MapReduce sketch follows this entry.)
Date: July 6, 2011
Creator: Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan & Ramakrishnan, Lavanya
Partner: UNT Libraries Government Documents Department
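
The following is a minimal sketch of the MapReduce programming model running directly over a shared POSIX file system, the setting MARIANE targets. It is an illustration of the pattern, not MARIANE's implementation; the input directory is an assumption.

```python
# Minimal MapReduce-style word count over files on a shared POSIX file system.
# Illustrates the programming model only; not MARIANE's actual code.
from collections import Counter
from multiprocessing import Pool
from pathlib import Path

INPUT_DIR = Path("/global/shared/dataset")   # hypothetical shared-FS location

def map_phase(path: Path) -> Counter:
    """Map: emit (word, count) pairs for one input split (here, one file)."""
    counts = Counter()
    with path.open() as f:
        for line in f:
            counts.update(line.split())
    return counts

def reduce_phase(partials: list[Counter]) -> Counter:
    """Reduce: merge per-split counts into a global result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    splits = sorted(INPUT_DIR.glob("*.txt"))
    with Pool() as pool:                      # workers read splits directly from the shared FS
        partials = pool.map(map_phase, splits)
    print(reduce_phase(partials).most_common(10))
```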

A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

Description: Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used to orchestrate science discovery tasks that use distributed resources and web-service environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows. (An illustrative classification sketch follows this entry.)
Date: April 5, 2010
Creator: Ramakrishnan, Lavanya & Plale, Beth
Partner: UNT Libraries Government Documents Department
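
The following is a small sketch of recording workflow characteristics along several dimensions and grouping workflows by any one of them. The dimension names, example workflows, and values are placeholders invented for illustration; they are not the paper's actual taxonomy or survey data.

```python
# Illustrative multi-dimensional workflow profile; dimension names and the
# example records are placeholders, not the paper's classification model.
from dataclasses import dataclass

@dataclass
class WorkflowProfile:
    name: str
    domain: str              # e.g. bioinformatics, ocean modeling, astronomy
    data_volume_gb: float    # approximate data handled per run
    compute_hours: float     # approximate core-hours per run
    structure: str           # e.g. "pipeline", "parameter sweep", "DAG"
    resource_model: str      # e.g. "grid", "cloud", "local cluster"

def group_by(profiles: list[WorkflowProfile], dimension: str) -> dict[str, list[str]]:
    """Group workflow names by the value of one classification dimension."""
    groups: dict[str, list[str]] = {}
    for p in profiles:
        groups.setdefault(str(getattr(p, dimension)), []).append(p.name)
    return groups

if __name__ == "__main__":
    survey = [   # hypothetical entries for demonstration only
        WorkflowProfile("ocean-forecast", "ocean modeling", 500.0, 2000.0, "DAG", "grid"),
        WorkflowProfile("gene-align", "bioinformatics", 50.0, 300.0, "pipeline", "cloud"),
    ]
    print(group_by(survey, "resource_model"))
```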

On-demand Overlay Networks for Large Scientific Data Transfers

Description: Large-scale scientific data transfers are central to scientific processes. Data from large experimental facilities have to be moved to local institutions for analysis, and data often needs to be moved between local clusters and large supercomputing centers. In this paper, we propose and evaluate a network overlay architecture to enable high-throughput, on-demand, coordinated data transfers over wide-area networks. Our work leverages Phoebus and the On-demand Secure Circuits and Advance Reservation System (OSCARS) to provide high-performance wide-area network connections. OSCARS enables dynamic provisioning of network paths with guaranteed bandwidth, and Phoebus enables the coordination and effective utilization of the OSCARS network paths. Our evaluation shows that this approach leads to improved end-to-end data transfer throughput with minimal overheads. The achieved throughput using our overlay was limited only by the ability of the end hosts to sink the data. (A conceptual coordination sketch follows this entry.)
Date: October 12, 2009
Creator: Ramakrishnan, Lavanya; Guok, Chin; Jackson, Keith; Kissel, Ezra; Swany, D. Martin & Agarwal, Deborah
Partner: UNT Libraries Government Documents Department
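
The following is a conceptual sketch of the coordination idea described above: reserve a bandwidth-guaranteed wide-area path on demand, run the transfer over it, and release the path afterwards. The classes, endpoint names, and fields are invented for illustration and do not correspond to the actual OSCARS or Phoebus APIs.

```python
# Conceptual sketch of on-demand circuit coordination; not the OSCARS/Phoebus API.
from dataclasses import dataclass
import time

@dataclass
class CircuitRequest:
    src_endpoint: str
    dst_endpoint: str
    bandwidth_mbps: int
    duration_s: int

class ReservationService:
    """Stand-in for a dynamic circuit provisioning service."""

    def reserve(self, req: CircuitRequest) -> str:
        circuit_id = f"circuit-{req.src_endpoint}-{req.dst_endpoint}"
        print(f"reserved {req.bandwidth_mbps} Mb/s for {req.duration_s}s: {circuit_id}")
        return circuit_id

    def release(self, circuit_id: str) -> None:
        print(f"released {circuit_id}")

def transfer_over_circuit(service: ReservationService, req: CircuitRequest, dataset: str) -> None:
    """Tie the data movement to the lifetime of the reserved path."""
    circuit_id = service.reserve(req)
    try:
        print(f"transferring {dataset} over {circuit_id} ...")
        time.sleep(0.1)   # placeholder for the actual data movement
    finally:
        service.release(circuit_id)

if __name__ == "__main__":
    req = CircuitRequest("site-a-dtn", "site-b-dtn", bandwidth_mbps=5000, duration_s=3600)
    transfer_over_circuit(ReservationService(), req, "experiment-run.tar")
```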

A Noisy 10GB Provenance Database

Description: Provenance of scientific data is a key piece of the metadata record for the data's ongoing discovery and reuse. Provenance collection systems capture provenance on the fly; however, the protocol between the application and the provenance tool may not be reliable. Consequently, the provenance record can be partial, partitioned, and simply inaccurate. We use a workflow emulator that models faults to construct a large 10GB database of provenance that we know is noisy (that is, has errors). We discuss the process of generating the provenance database and show early results on the kinds of provenance analysis enabled by the large provenance database. (An illustrative noise-injection sketch follows this entry.)
Date: June 6, 2011
Creator: Cheah, You-Wei; Plale, Beth; Kendall-Morwick, Joey; Leake, David & Ramakrishnan, Lavanya
Partner: UNT Libraries Government Documents Department
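
The following is a minimal sketch of the general idea of generating noisy provenance: emulate a simple workflow, emit one provenance event per step, and randomly drop or duplicate events to model an unreliable collection protocol. The fault types, rates, and event schema are assumptions, not the authors' emulator.

```python
# Illustrative noisy-provenance generator; fault model and schema are assumptions.
import random

def emulate_workflow(run_id: int, steps: int = 5) -> list[dict]:
    """Emit one provenance event per workflow step for a single emulated run."""
    return [
        {"run": run_id, "step": s, "activity": f"task-{s}", "status": "completed"}
        for s in range(steps)
    ]

def inject_noise(events: list[dict], drop_p: float = 0.05, dup_p: float = 0.05) -> list[dict]:
    """Randomly drop or duplicate events to simulate an unreliable protocol."""
    noisy: list[dict] = []
    for event in events:
        if random.random() < drop_p:
            continue                   # event lost: the record becomes partial
        noisy.append(event)
        if random.random() < dup_p:
            noisy.append(dict(event))  # event duplicated: the record becomes inaccurate
    return noisy

if __name__ == "__main__":
    random.seed(0)
    database: list[dict] = []
    for run_id in range(1000):         # scale up the run count to grow the database
        database.extend(inject_noise(emulate_workflow(run_id)))
    print(f"generated {len(database)} provenance events")
```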