Search Results

Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

Description: This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.
Date: July 12, 2010
Creator: Howison, Mark; Bethel, E. Wes & Childs, Hank
Partner: UNT Libraries Government Documents Department
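
The entry above describes pairing distributed-memory MPI across nodes with shared-memory threading within each node. As a hedged illustration of that general pattern (not the authors' implementation), the following minimal C++ sketch combines MPI with OpenMP; render_scanline() is a hypothetical stand-in for the per-ray work, and real volume rendering would composite partial images rather than reduce a checksum.

```cpp
// Minimal hybrid MPI+OpenMP sketch: one MPI rank per node, OpenMP
// threads across that node's cores. Illustrative only.
#include <mpi.h>
#include <omp.h>
#include <cstdio>

static double render_scanline(int y) {
    // Placeholder for ray casting one scanline of the local image tile.
    return static_cast<double>(y);
}

int main(int argc, char** argv) {
    int provided = 0;
    // Request thread support so MPI and OpenMP can coexist safely.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int image_height = 1024;
    // Distributed-memory decomposition: each rank owns a slab of scanlines.
    const int rows_per_rank = image_height / nranks;
    const int y0 = rank * rows_per_rank;

    double local_sum = 0.0;
    // Shared-memory parallelism within the node: threads split the slab.
    #pragma omp parallel for reduction(+ : local_sum)
    for (int y = y0; y < y0 + rows_per_rank; ++y)
        local_sum += render_scanline(y);

    // A single reduction stands in for compositing; real ray casting
    // exchanges and blends partial images instead.
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0) std::printf("checksum: %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```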

Hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

Description: This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.
Date: June 14, 2010
Creator: Howison, Mark; Bethel, E. Wes & Childs, Hank
Partner: UNT Libraries Government Documents Department

MPI-hybrid Parallelism for Volume Rendering on Large, Multi-core Systems

Description: This work studies the performance and scalability characteristics of "hybrid" parallel programming and execution as applied to raycasting volume rendering -- a staple visualization algorithm -- on a large, multi-core platform. Historically, the Message Passing Interface (MPI) has become the de facto standard for parallel programming and execution on modern parallel systems. As the computing industry trends towards multi-core processors, with four- and six-core chips common today and 128-core chips coming soon, we wish to better understand how algorithmic and parallel programming choices impact performance and scalability on large, distributed-memory multi-core systems. Our findings indicate that the hybrid-parallel implementation, at levels of concurrency ranging from 1,728 to 216,000, performs better, uses a smaller absolute memory footprint, and consumes less communication bandwidth than the traditional, MPI-only implementation.
Date: March 20, 2010
Creator: Howison, Mark; Bethel, E. Wes & Childs, Hank
Partner: UNT Libraries Government Documents Department

Data-Parallel Mesh Connected Components Labeling and Analysis

Description: We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
Date: April 10, 2011
Creator: Harrison, Cyrus; Childs, Hank & Gaither, Kelly
Partner: UNT Libraries Government Documents Department
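
The labeling algorithm in the entry above is built around union-find, a disjoint-set structure that merges labels as connectivity is discovered. Below is a minimal serial union-find with path compression and union by rank, offered as an illustration of the building block; the paper's multi-stage, cross-processor merging is not reproduced.

```cpp
// Minimal union-find (disjoint set): the serial building block behind
// connected-component labeling. Illustrative sketch only.
#include <numeric>
#include <utility>
#include <vector>

class UnionFind {
public:
    explicit UnionFind(int n) : parent_(n), rank_(n, 0) {
        std::iota(parent_.begin(), parent_.end(), 0); // each cell is its own root
    }
    int find(int x) {
        while (parent_[x] != x) {
            parent_[x] = parent_[parent_[x]]; // path compression (halving)
            x = parent_[x];
        }
        return x;
    }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;                 // already the same sub-mesh
        if (rank_[a] < rank_[b]) std::swap(a, b);
        parent_[b] = a;                     // attach shorter tree under taller
        if (rank_[a] == rank_[b]) ++rank_[a];
    }
private:
    std::vector<int> parent_, rank_;
};
```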

Hybrid Parallelism for Volume Rendering on Large, Multi- and Many-core Systems

Description: With the computing industry trending towards multi- and many-core processors, we study how a standard visualization algorithm, ray-casting volume rendering, can benefit from a hybrid parallelism approach. Hybrid parallelism provides the best of both worlds: using distributed-memory parallelism across a large number of nodes increases available FLOPs and memory, while exploiting shared-memory parallelism among the cores within each node ensures that each node performs its portion of the larger calculation as efficiently as possible. We demonstrate results from weak and strong scaling studies, at levels of concurrency ranging up to 216,000, and with datasets as large as 12.2 trillion cells. The greatest benefit from hybrid parallelism lies in the communication portion of the algorithm, the dominant cost at higher levels of concurrency. We show that reducing the number of participants with a hybrid approach significantly improves performance.
Date: January 1, 2011
Creator: Howison, Mark; Bethel, E. Wes & Childs, Hank
Partner: UNT Libraries Government Documents Department

Scalable Computation of Streamlines on Very Large Datasets

Description: Understanding vector fields resulting from large scientific simulations is an important and often difficult task. Streamlines, curves that are tangential to a vector field at each point, are a powerful visualization method in this context. Applying streamline-based visualization to very large vector field data represents a significant challenge due to the non-local and data-dependent nature of streamline computation, and requires careful balancing of the computational demands placed on I/O, memory, communication, and processors. In this paper we review two parallelization approaches based on established parallelization paradigms (static decomposition and on-demand loading) and present a novel hybrid algorithm for computing streamlines. Our algorithm aims for good scalability and performance across the widely varying computational characteristics of streamline-based problems. We conduct performance and scalability studies of all three algorithms on a number of prototypical application problems and demonstrate that our hybrid scheme performs well in different settings.
Date: September 1, 2009
Creator: Pugmire, David; Childs, Hank; Garth, Christoph; Ahern, Sean & Weber, Gunther H.
Partner: UNT Libraries Government Documents Department
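
Streamline computation itself is numerical integration of the vector field from a seed point. As a minimal sketch of that core step (not the paper's code), the following traces a streamline with a fourth-order Runge-Kutta integrator, assuming a caller-supplied field sampler f.

```cpp
// Illustrative streamline tracing with a fourth-order Runge-Kutta step.
// The field sampler is an assumed callback; the paper's parallel I/O
// and block management are not shown.
#include <array>
#include <functional>
#include <vector>

using Vec3 = std::array<double, 3>;
using Field = std::function<Vec3(const Vec3&)>;

static Vec3 axpy(const Vec3& p, const Vec3& v, double h) {
    return {p[0] + h * v[0], p[1] + h * v[1], p[2] + h * v[2]};
}

// One RK4 step of size h along the field from point p.
static Vec3 rk4_step(const Field& f, const Vec3& p, double h) {
    Vec3 k1 = f(p);
    Vec3 k2 = f(axpy(p, k1, h / 2));
    Vec3 k3 = f(axpy(p, k2, h / 2));
    Vec3 k4 = f(axpy(p, k3, h));
    return {p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]),
            p[2] + h / 6 * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2])};
}

// Trace a streamline of at most n steps from a seed point.
std::vector<Vec3> trace(const Field& f, Vec3 seed, double h, int n) {
    std::vector<Vec3> line{seed};
    for (int i = 0; i < n; ++i)
        line.push_back(rk4_step(f, line.back(), h));
    return line;
}
```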

Streamline Integration using MPI-Hybrid Parallelism on a Large Multi-Core Architecture

Description: Streamline computation in a very large vector field data set represents a significant challenge due to the non-local and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multi-core platform. With multi-core processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize-over-seeds and parallelize-over-blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, non-hybrid distributed implementation.
Date: November 1, 2010
Creator: Camp, David; Garth, Christoph; Childs, Hank; Pugmire, Dave & Joy, Kenneth I.
Partner: UNT Libraries Government Documents Department
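
Of the two paradigms named above, parallelize-over-seeds assigns seed points to MPI ranks, and each rank loads whatever data blocks its streamlines wander into. The sketch below illustrates only the seed assignment, using a simple round-robin scheme as an assumption; block loading and the hybrid threading layer are elided.

```cpp
// Hedged sketch of parallelize-over-seeds: deal seeds round-robin to
// MPI ranks; each rank then traces only its own seeds. The data-block
// loading that tracing triggers is elided.
#include <mpi.h>
#include <array>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

std::vector<Vec3> my_seeds(const std::vector<Vec3>& all_seeds, MPI_Comm comm) {
    int rank = 0, nranks = 1;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nranks);
    std::vector<Vec3> mine;
    for (std::size_t i = static_cast<std::size_t>(rank);
         i < all_seeds.size(); i += static_cast<std::size_t>(nranks))
        mine.push_back(all_seeds[i]);  // round-robin keeps the load even
    return mine;
}
```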

Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

Description: Supercomputing centers are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should application scientists be expected to do on their own?"
Date: December 1, 2010
Creator: Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric et al.
Partner: UNT Libraries Government Documents Department

Modern Scientific Visualization is more than Just Pretty Pictures

Description: While the primary products of scientific visualization are images and movies, its primary objective is really scientific insight. Too often, the focus of visualization research is on the product, not the mission. This paper presents two case studies, both of which appeared in previous publications, that focus on using visualization technology to produce insight. The first applies "Query-Driven Visualization" concepts to laser wakefield simulation data to help identify and analyze the process of beam formation. The second uses topological analysis to provide a quantitative basis for (i) understanding the mixing process in hydrodynamic simulations, and (ii) performing comparative analysis of data from two different types of simulations that model hydrodynamic instability.
Date: December 5, 2008
Creator: Bethel, E. Wes; Rubel, Oliver; Wu, Kesheng; Weber, Gunther; Pascucci, Valerio; Childs, Hank et al.
Partner: UNT Libraries Government Documents Department

Production-quality Tools for Adaptive Mesh Refinement Visualization

Description: Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.
Date: October 25, 2007
Creator: Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith, Jeremy; Miller, Mark; Whitlock, Brad et al.
Partner: UNT Libraries Government Documents Department
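
AMR data is organized as a hierarchy of refinement levels, each holding rectangular patches at successively finer grid spacing. The types below are a minimal, hypothetical rendition of that layout, not VisIt's internal representation.

```cpp
// Minimal, illustrative layout of an AMR hierarchy: a stack of
// refinement levels, each holding rectangular patches of cells.
// Hypothetical types, not VisIt's actual data model.
#include <array>
#include <vector>

struct Patch {
    std::array<int, 3> lo, hi;   // cell index bounds on this level
    std::vector<double> cells;   // scalar field, one value per cell
};

struct Level {
    double dx;                   // grid spacing; finer levels shrink dx
    int refinement_ratio;        // e.g. 2: each parent cell splits in 2^3
    std::vector<Patch> patches;
};

struct AMRHierarchy {
    std::vector<Level> levels;   // levels[0] is the coarsest
};
```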

Visualization of Scalar Adaptive Mesh Refinement Data

Description: Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
Date: December 6, 2007
Creator: VACET; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J. et al.
Partner: UNT Libraries Government Documents Department

Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization

Description: Adaptive Mesh Refinement (AMR) is a highly effective method for simulations spanning a large range of spatiotemporal scales, such as those encountered in astrophysical simulations. Combining research in novel AMR visualization algorithms and basic infrastructure work, the Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) has extended VisIt, an open source visualization tool that can handle AMR data without converting it to alternate representations. This paper focuses on two recent advances in the development of VisIt. First, we have developed streamline computation methods that properly handle multi-domain data sets and effectively utilize multiple processors on parallel machines. Furthermore, we are working on streamline calculation methods that consider an AMR hierarchy, detect transitions from a lower resolution patch into a finer patch, and improve interpolation at level boundaries. Second, we focus on visualization of large-scale particle data sets. By integrating the DOE Scientific Data Management (SDM) Center's FastBit indexing technology into VisIt, we are able to reduce particle counts effectively by thresholding and by loading only those particles from disk that satisfy the thresholding criteria. Furthermore, using FastBit, it becomes possible to compute parallel coordinate views efficiently, thus facilitating interactive data exploration of massive particle data sets.
Date: November 12, 2009
Creator: Weber, Gunther; Ahern, Sean; Bethel, Wes; Borovikov, Sergey; Childs, Hank; Deines, Eduard et al.
Partner: UNT Libraries Government Documents Department

Visualization Tools for Adaptive Mesh Refinement Data

Description: Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Date: May 9, 2007
Creator: Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian et al.
Partner: UNT Libraries Government Documents Department

Seeing the Unseeable

Description: The SciDAC Visualization and Analytics Center for Enabling Technologies (VACET) is a highly productive effort combining the forces of leading visualization researchers from five different institutions to solve some of the most challenging data understanding problems in modern science. The VACET technology portfolio is diverse, spanning all typical visual data analysis use models and effectively balancing forward-looking research with focused software architecture and engineering, resulting in a production-quality software infrastructure. One of the key elements in VACET's success is a rich set of projects that are collaborations with science stakeholders: these efforts focus on identifying and overcoming obstacles to scientific knowledge discovery in modern, large, and complex scientific datasets.
Date: May 30, 2008
Creator: Bethel, E. Wes; Johnson, Chris; Hansen, Charles; Silva, Claudio; Parker, Steven et al.
Partner: UNT Libraries Government Documents Department

High Performance Multivariate Visual Data Exploration for Extremely Large Data

Description: One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
Date: August 22, 2008
Creator: Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle et al.
Partner: UNT Libraries Government Documents Department
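
Histogram-based parallel coordinates, mentioned above, draw binned counts between adjacent axes instead of one polyline per record. The sketch below shows the underlying binning as a plain 2D histogram over a pair of variables; it is an illustration under that reading, not the paper's production code, which computes histograms in parallel with index/query support.

```cpp
// Illustrative 2D histogram over two variables: the binned counts that
// histogram-based parallel coordinates draw between adjacent axes.
#include <algorithm>
#include <vector>

std::vector<std::vector<long>> histogram2d(
    const std::vector<double>& x, const std::vector<double>& y,
    double xmin, double xmax, double ymin, double ymax, int bins) {
    std::vector<std::vector<long>> h(bins, std::vector<long>(bins, 0));
    for (std::size_t i = 0; i < x.size(); ++i) {
        // Map each sample to a bin index, clamping to the edges.
        int bx = std::clamp(
            static_cast<int>((x[i] - xmin) / (xmax - xmin) * bins),
            0, bins - 1);
        int by = std::clamp(
            static_cast<int>((y[i] - ymin) / (ymax - ymin) * bins),
            0, bins - 1);
        ++h[bx][by];
    }
    return h;
}
```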

Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

Description: Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use both in visual information display and as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize, and analyze a particle beam in a large, time-varying dataset.
Date: August 28, 2008
Creator: Rubel, Oliver; Prabhat; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R. et al.
Partner: UNT Libraries Government Documents Department

FastBit: Interactively Searching Massive Data

Description: As scientific instruments and computer simulations produce more and more data, the task of locating the essential information to gain insight becomes increasingly difficult. FastBit is an efficient software tool to address this challenge. In this article, we present a summary of the key underlying technologies, namely bitmap compression, encoding, and binning. Together these techniques enable FastBit to answer structured (SQL) queries orders of magnitude faster than popular database systems. To illustrate how FastBit is used in applications, we present three examples involving a high-energy physics experiment, a combustion simulation, and an accelerator simulation. In each case, FastBit significantly reduces the response time and enables interactive exploration on terabytes of data.
Date: June 23, 2009
Creator: Wu, Kesheng; Ahern, Sean; Bethel, E. Wes; Chen, Jacqueline; Childs, Hank; Cormier-Michel, Estelle et al.
Partner: UNT Libraries Government Documents Department
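
The bitmap index at the heart of FastBit keeps one bitmask per (binned) value of a column, so range and compound predicates reduce to bitwise OR and AND. The sketch below shows the idea in a minimal, uncompressed form; FastBit's actual contribution, the bitmap compression, encoding, and binning schemes named above, is not reproduced.

```cpp
// Minimal uncompressed bitmap index: one bitmask per binned value, so
// "lo <= v < hi" becomes an OR of bin masks, and compound predicates
// become ANDs of such results. Illustrative only.
#include <cstddef>
#include <cstdint>
#include <vector>

struct BitmapIndex {
    int bins;
    std::size_t rows;
    std::vector<std::vector<std::uint64_t>> masks;  // one bitmap per bin

    // 'binned' holds each row's bin number in [0, nbins).
    BitmapIndex(const std::vector<int>& binned, int nbins)
        : bins(nbins), rows(binned.size()),
          masks(nbins,
                std::vector<std::uint64_t>((binned.size() + 63) / 64, 0)) {
        for (std::size_t r = 0; r < rows; ++r)
            masks[binned[r]][r / 64] |= std::uint64_t{1} << (r % 64);
    }

    // Rows whose bin falls in [lo, hi): OR the bin bitmaps together.
    std::vector<std::uint64_t> range(int lo, int hi) const {
        std::vector<std::uint64_t> out(masks[0].size(), 0);
        for (int b = lo; b < hi; ++b)
            for (std::size_t w = 0; w < out.size(); ++w)
                out[w] |= masks[b][w];
        return out;
    }
};
```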

Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

Description: Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
Date: June 8, 2010
Creator: Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle et al.
Partner: UNT Libraries Government Documents Department

VACET: Proposed SciDAC2 Visualization and Analytics Center for Enabling Technologies

Description: This paper accompanies a poster that is being presented at the SciDAC 2006 meeting in Denver, CO. This project focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision for our Center is to respond directly to that challenge by adapting, extending, creating when necessary, and deploying visualization and data understanding technologies for our science stakeholders. Using an organizational model as a Visualization and Analytics Center for Enabling Technologies (VACET), we are well positioned to be responsive to the needs of a diverse set of scientific stakeholders in a coordinated fashion, using a range of visualization, mathematics, statistics, computer and computational science, and data management technologies.
Date: June 19, 2006
Creator: Bethel, W.; Johnson, Chris; Hansen, Charles; Parker, Steve; Sanderson, Allen; Silva, Claudio et al.
Partner: UNT Libraries Government Documents Department

SciDAC Visualization and Analytics Center for Enabling Technologies

Description: The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight. Advances in computational technology have resulted in an "information big bang," which in turn has created a significant data understanding challenge. This challenge is widely acknowledged to be one of the primary bottlenecks in contemporary science. The vision of VACET is to adapt, extend, create when necessary, and deploy visual data analysis solutions that are responsive to the needs of DOE's computational and experimental scientists. Our center is engineered to be directly responsive to those needs and to deliver solutions for use in DOE's large open computing facilities. The research and development directly target data understanding problems provided by our scientific application stakeholders. VACET draws from a diverse set of visualization technology, ranging from production quality applications and application frameworks to state-of-the-art algorithms for visualization, analysis, analytics, data manipulation, and data management.
Date: June 30, 2007
Creator: Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank et al.
Partner: UNT Libraries Government Documents Department

SciDAC Visualization and Analytics Center for Enabling Technology

Description: The SciDAC2 Visualization and Analytics Center for Enabling Technologies (VACET) began operation on 10/1/2006. This document, dated 11/27/2006, is the first version of the VACET project management plan. It was requested by and delivered to ASCR/DOE. It outlines the Center's accomplishments in the first six weeks of operation, along with broad objectives for the upcoming 12-24 months.
Date: November 28, 2006
Creator: Bethel, E. Wes; Johnson, Chris; Joy, Ken; Ahern, Sean; Pascucci, Valerio; Childs, Hank et al.
Partner: UNT Libraries Government Documents Department

DOE's SciDAC Visualization and Analytics Center for Enabling Technologies -- Strategy for Petascale Visual Data Analysis Success

Description: The focus of this article is on how one group of researchers, the DOE SciDAC Visualization and Analytics Center for Enabling Technologies (VACET), is tackling the daunting task of enabling knowledge discovery through visualization and analytics on some of the world's largest and most complex datasets and on some of the world's largest computational platforms. As a Center for Enabling Technology, VACET's mission is the creation of usable, production-quality visualization and knowledge discovery software infrastructure that runs on large, parallel computer systems at DOE's Open Computing facilities and that provides solutions to the challenging visual data exploration and knowledge discovery needs of modern science, particularly the DOE science community.
Date: October 1, 2007
Creator: Bethel, E. Wes; Johnson, Chris; Aragon, Cecilia; Prabhat; Rubel, Oliver; Weber, Gunther et al.
Partner: UNT Libraries Government Documents Department

Occam's Razor and Petascale Visual Data Analysis

Description: One of the central challenges facing visualization research is how to effectively enable knowledge discovery. An effective approach will likely combine application architectures that are capable of running on today's largest platforms, to address the challenges posed by large data, with visual data analysis techniques that help find, represent, and effectively convey scientifically interesting features and phenomena.
Date: June 12, 2009
Creator: Bethel, E. Wes; Johnson, Chris; Ahern, Sean; Bell, John; Bremer, Peer-Timo; Childs, Hank et al.
Partner: UNT Libraries Government Documents Department