4 Matching Results

Scalable Computation of Streamlines on Very Large Datasets

Description: Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Applying streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of the computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm is aimed at good scalability and performance across the widely varying computational characteristics of streamline-based problems. We perform performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme performs well across these different settings. An illustrative sketch of the underlying streamline integration appears after this record.
Date: September 1, 2009
Creator: Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean & Weber, Gunther H.
Partner: UNT Libraries Government Documents Department
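
The abstract above defines a streamline as a curve that is everywhere tangential to the vector field, i.e. the solution of dx/dt = v(x(t)) traced from a seed point. As a point of reference only, here is a minimal Python sketch of that integration using fixed-step fourth-order Runge-Kutta on a toy analytic field; the field, step size, and function names are assumptions for illustration and do not represent the parallel algorithms evaluated in the paper.

# Illustrative streamline tracer: advects a seed point through a vector
# field with fixed-step RK4 integration. The analytic circular field
# below stands in for simulation data; this is not the paper's algorithm.
import numpy as np

def velocity(p):
    """Toy vector field v(x, y) = (-y, x): concentric circular flow."""
    x, y = p
    return np.array([-y, x])

def trace_streamline(seed, h=0.01, n_steps=1000):
    """Integrate dx/dt = v(x) with classical RK4 from a seed point."""
    points = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = points[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        points.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(points)

if __name__ == "__main__":
    curve = trace_streamline(seed=(1.0, 0.0))
    print(curve[:3])  # first few points of the traced curve

In a parallel setting the interesting question, as the abstract notes, is not this inner loop but where the vector field data for each integration step lives and which processor advances which curve.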

Extreme Scaling of Production Visualization Software on Diverse Architectures

Description: We present the results of a series of experiments studying how visualization software scales to massive data sets. Although several paradigms exist for processing large data, we focus on pure parallelism, the dominant approach for production software. These experiments utilized multiple visualization algorithms and were run on multiple architectures. Two types of experiments were performed. For the first, we examined performance at massive scale: 16,000 or more cores and one trillion or more cells. For the second, we studied weak scaling performance. These experiments were performed on the largest data set sizes published to date in the visualization literature, and the findings on scaling characteristics and bottlenecks contribute to the understanding of how pure parallelism will perform at high levels of concurrency and with very large data sets. A small sketch of the weak-scaling efficiency calculation follows this record.
Date: December 22, 2009
Creator: Childs, Henry; Pugmire, David; Ahern, Sean; Whitlock, Brad; Howison, Mark; Weber, Gunther et al.
Partner: UNT Libraries Government Documents Department
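
The weak-scaling experiments mentioned above grow the problem size in proportion to the core count, so ideal behavior is constant runtime, and efficiency at a given concurrency is commonly reported as the baseline runtime divided by the runtime at that concurrency. The short Python sketch below computes that ratio from hypothetical timings; the numbers and function name are placeholders, not measurements or code from the paper.

# Weak-scaling efficiency for a hypothetical run series: the problem size
# grows with the core count, so the ideal runtime is flat. Efficiency at
# N cores is T(baseline) / T(N). The timings below are made-up values.

def weak_scaling_efficiency(timings):
    """timings: dict mapping core count -> runtime (seconds) at a
    proportionally scaled problem size. Returns core count -> efficiency."""
    base_cores = min(timings)
    t_base = timings[base_cores]
    return {cores: t_base / t for cores, t in sorted(timings.items())}

if __name__ == "__main__":
    # Hypothetical timings at 1x, 4x, and 16x the baseline concurrency.
    sample = {1000: 42.0, 4000: 47.5, 16000: 55.0}
    for cores, eff in weak_scaling_efficiency(sample).items():
        print(f"{cores:6d} cores: efficiency = {eff:.2f}")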

Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization

Description: Adaptive Mesh Refinement (AMR) is a highly effective method for simulations spanning a large range of spatiotemporal scales, such as those encountered in astrophysical simulations. Combining research in novel AMR visualization algorithms and basic infrastructure work, the Department of Energy's (DOE) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) has extended VisIt, an open source visualization tool that can handle AMR data without converting it to alternate representations. This paper focuses on two recent advances in the development of VisIt. First, we have developed streamline computation methods that properly handle multi-domain data sets and effectively utilize multiple processors on parallel machines. Furthermore, we are working on streamline calculation methods that take the AMR hierarchy into account, detect transitions from a lower-resolution patch into a finer patch, and improve interpolation at level boundaries. Second, we focus on visualization of large-scale particle data sets. By integrating the DOE Scientific Data Management (SDM) Center's FastBit indexing technology into VisIt, we are able to reduce particle counts effectively by thresholding and by loading from disk only those particles that satisfy the thresholding criteria. Furthermore, using FastBit, it becomes possible to compute parallel coordinate views efficiently, thus facilitating interactive data exploration of massive particle data sets. A simplified sketch of index-accelerated thresholding follows this record.
Date: November 12, 2009
Creator: Weber, Gunther; Ahern, Sean; Bethel, Wes; Borovikov, Sergey; Childs, Hank; Deines, Eduard et al.
Partner: UNT Libraries Government Documents Department
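
FastBit itself is a bitmap-indexing library; the sketch below is a deliberately simplified stand-in (plain attribute binning in Python, with assumed names such as build_bin_index and threshold_query) that only illustrates the general idea of answering a threshold query by touching just the candidate portion of the data instead of scanning every particle. It is not FastBit's API or VisIt's implementation.

# Simplified stand-in for index-accelerated thresholding: particles are
# binned on one attribute ahead of time, and a threshold query visits
# only bins that can contain qualifying particles. This mimics selecting
# particles with an index before loading them; it is not FastBit.
import numpy as np

def build_bin_index(values, n_bins=64):
    """Precompute bin edges and, per bin, the ids of the particles in it."""
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    bin_of = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    ids_per_bin = [np.where(bin_of == b)[0] for b in range(n_bins)]
    return edges, ids_per_bin

def threshold_query(values, edges, ids_per_bin, lo):
    """Return ids of particles with value >= lo, scanning only candidate
    bins; the single boundary bin is checked against the exact values."""
    first = np.searchsorted(edges, lo, side="right") - 1
    first = min(max(first, 0), len(ids_per_bin) - 1)
    hits = []
    for b in range(first, len(ids_per_bin)):
        ids = ids_per_bin[b]
        if edges[b] >= lo:                    # bin lies entirely above lo
            hits.append(ids)
        else:                                 # boundary bin: exact check
            hits.append(ids[values[ids] >= lo])
    return np.concatenate(hits)

if __name__ == "__main__":
    energy = np.random.default_rng(0).random(100_000)   # toy attribute
    edges, index = build_bin_index(energy)
    selected = threshold_query(energy, edges, index, lo=0.95)
    print(len(selected), "of", len(energy), "particles pass the threshold")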

Occam's Razor and Petascale Visual Data Analysis

Description: One of the central challenges facing visualization research is how to effectively enable knowledge discovery. An effective approach will likely combine application architectures capable of running on today's largest platforms, to address the challenges posed by large data, with visual data analysis techniques that help find, represent, and effectively convey scientifically interesting features and phenomena.
Date: June 12, 2009
Creator: Bethel, E. Wes; Johnson, Chris; Ahern, Sean; Bell, John; Bremer, Peer-Timo; Childs, Hank et al.
Partner: UNT Libraries Government Documents Department