
Search Results


Compressing bitmap indexes for faster search operations

Description: In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, and NOT. With general-purpose compression schemes such as gzip, logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), perform logical operations faster than the general-purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well known for its operational efficiency, WAH performs logical operations about 12 times faster while using only 60 percent more space. Compared to the uncompressed scheme, WAH is faster in most test cases while still using less space. Further tests verified that the improvement in logical operation speed translates to a similar improvement in query processing speed. (A simplified sketch of the word-aligned encoding idea follows this entry.)
Date: April 25, 2002
Creator: Wu, Kesheng; Otoo, Ekow J. & Shoshani, Arie
Partner: UNT Libraries Government Documents Department
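
A minimal Python sketch of the word-aligned run-length idea behind WAH, under simplifying assumptions: the real WAH format packs literal and fill words into 32-bit machine words so logical operations apply word-at-a-time, whereas this toy models the words as tuples and shows only the encoder.

    GROUP = 31  # payload bits per 32-bit word in real WAH

    def wah_encode(bits):
        # Compress a list of 0/1 bits into a list of "words":
        #   ('lit', value)          -- one 31-bit group stored verbatim
        #   ('fill', bit, n_groups) -- a run of all-0 or all-1 groups
        words = []
        for i in range(0, len(bits), GROUP):
            group = bits[i:i + GROUP]
            value = int("".join(map(str, group)), 2)
            full = (1 << len(group)) - 1
            if value == 0 or value == full:
                fill_bit = 0 if value == 0 else 1
                if words and words[-1][0] == 'fill' and words[-1][1] == fill_bit:
                    words[-1] = ('fill', fill_bit, words[-1][2] + 1)
                else:
                    words.append(('fill', fill_bit, 1))
            else:
                words.append(('lit', value))
        return words

    print(wah_encode([0] * 93 + [1, 0, 1] + [0] * 28))
    # [('fill', 0, 3), ('lit', 1342177280)]

An AND of two encoded bitmaps can consume whole fill runs at a time rather than individual bits or bytes, which is where the speedup over byte-aligned codes comes from.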

Dynamic Group Diffie-Hellman Key Exchange under standard assumptions

Description: Authenticated Diffie-Hellman key exchange allows two principals communicating over a public network, each holding a public-private key pair, to agree on a shared secret value. In this paper we study the natural extension of this cryptographic problem to a group of principals. We begin from existing formal security models and refine them to incorporate major missing details (e.g., strong corruption and concurrent sessions). Within this model we define the execution of a protocol for authenticated dynamic group Diffie-Hellman and show that it is provably secure under the decisional Diffie-Hellman assumption. Our security result holds in the standard model and thus provides better security guarantees than previously published results in the random oracle model. (A toy sketch of the underlying group key-agreement arithmetic follows this entry.)
Date: February 14, 2002
Creator: Bresson, Emmanuel; Chevassut, Olivier & Pointcheval, David
Partner: UNT Libraries Government Documents Department
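
A toy, unauthenticated sketch of the group Diffie-Hellman arithmetic the paper builds on: n parties agree on g^(x1*x2*...*xn) mod p. This is the textbook idea only, not the authenticated dynamic protocol the paper proves secure, and the parameters are deliberately tiny.

    import secrets

    # Textbook toy parameters (NOT secure); real protocols use a
    # standardized large prime-order group.
    p, g = 23, 5

    def group_dh_key(n):
        # Each of the n parties holds a private exponent x_i; the group key
        # is g^(x1*...*xn) mod p. Computed centrally here for clarity: the
        # actual protocol circulates partial values g^(product of a subset)
        # so every party can finish with its own exponent, and the paper's
        # contribution is authenticating this against active attackers.
        exponents = [secrets.randbelow(p - 2) + 1 for _ in range(n)]
        value = g
        for x in exponents:
            value = pow(value, x, p)
        return value

    print(group_dh_key(4))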

Top500 list's twice-yearly snapshots of world's fastest supercomputers develop into big picture of changing technology

Description: Now in its 10th year, the twice-yearly TOP500 list of supercomputers serves as a ''Who's Who'' in the field of High Performance Computing (HPC). The TOP500 list was started in 1993 as a project to compile and publish, twice a year, a list of the most powerful supercomputers in the world. But it is more than just a ranking system: it serves as a major source of information for analyzing trends in HPC. The list of manufacturers active in this market segment has changed continuously and quite dramatically during the 10-year history of the project. And while the architectures of the systems on the list have also seen constant change, the overall increase in recorded performance levels is rather smooth and predictable: HPC performance levels grow exponentially. The most important single factor in this growth is, of course, the increase in processor performance described by Moore's Law. However, the TOP500 list clearly illustrates that HPC performance has actually outpaced Moore's Law, due to the increasing processor counts in HPC systems. On the other hand, changes in computer architecture make it more and more of a challenge to achieve high performance efficiency in the Linpack benchmark used to rank the 500 systems. With knowledge and effort, the Linpack benchmark can still be implemented in very efficient ways, as recently demonstrated by a new implementation developed at the U.S. Department of Energy's National Energy Research Scientific Computing Center for their 6,656-processor IBM SP system.
Date: July 3, 2003
Creator: Strohmaier, Erich
Partner: UNT Libraries Government Documents Department

Ordering sparse matrices for cache-based systems

Description: The Conjugate Gradient (CG) algorithm is the oldest and best-known Krylov subspace method used to solve sparse linear systems. Most of the floating-point operations within each CG iteration are spent performing sparse matrix-vector multiplication (SPMV). We examine how various ordering and partitioning strategies affect the performance of CG and SPMV when different programming paradigms are used on current commercial cache-based computers. In contrast, a multithreaded implementation on the cacheless Cray MTA demonstrates high efficiency and scalability without any special ordering or partitioning. (A sketch of the SPMV kernel whose cache behavior these orderings target follows this entry.)
Date: January 11, 2001
Creator: Biswas, Rupak & Oliker, Leonid
Partner: UNT Libraries Government Documents Department
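
A minimal sketch of the SPMV kernel in Compressed Sparse Row form, assuming a standard CSR layout (not the paper's specific implementation). The comments note where ordering strategies matter.

    import numpy as np

    def spmv_csr(indptr, indices, data, x):
        # y = A @ x for A in CSR form. The irregular reads x[indices[k]]
        # dominate cache behavior: a bandwidth-reducing ordering (e.g.
        # reverse Cuthill-McKee) clusters each row's column indices,
        # improving reuse of x in cache.
        n = len(indptr) - 1
        y = np.zeros(n)
        for i in range(n):
            s = 0.0
            for k in range(indptr[i], indptr[i + 1]):
                s += data[k] * x[indices[k]]
            y[i] = s
        return y

    # 3x3 example: A = [[4, 1, 0], [1, 4, 1], [0, 1, 4]]
    indptr  = [0, 2, 5, 7]
    indices = [0, 1, 0, 1, 2, 1, 2]
    data    = [4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0]
    print(spmv_csr(indptr, indices, data, np.ones(3)))  # [5. 6. 5.]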

Subjective surfaces: a geometric model for boundary completion

Description: We present a geometric model and a computational method for segmentation of images with missing boundaries. In many situations, the human visual system fills in missing gaps in edges and boundaries, building and completing information that is not present. Boundary completion presents a considerable challenge in computer vision, since most algorithms attempt to exploit existing data. A large body of work concerns completion models, which postulate how to construct missing data; these models are often trained on, and specific to, particular images. In this paper, we take an alternative perspective: we treat a reference point within the image as given and develop an algorithm that builds the missing information from that point of view, using the available image information as boundary data. Starting from this point of view, a surface is constructed and then evolved under mean curvature flow in the metric induced by the image until a piecewise constant solution is reached. We test the computational model on modal completion, amodal completion, texture, photographic, and medical images. We extend the geometric model and the algorithm to 3D in order to extract shapes from medical volumes with low signal-to-noise ratios. Results in 3D echocardiography and 3D fetal echography are presented. (A minimal sketch of the underlying mean curvature flow follows this entry.)
Date: June 1, 2000
Creator: Sarti, Alessandro; Malladi, Ravi & Sethian, J.A.
Partner: UNT Libraries Government Documents Department
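
A minimal sketch of level-set mean curvature flow, the basic evolution the subjective-surfaces model builds on. Assumptions: this plain version omits the paper's image-induced metric (edge-indicator weighting), uses explicit Euler with central differences, unit grid spacing, and periodic boundaries.

    import numpy as np

    def mcf_step(phi, dt=0.1, eps=1e-8):
        # One explicit step of phi_t = kappa * |grad phi|, the level-set
        # form of mean curvature flow, in non-divergence form.
        px  = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / 2.0
        py  = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / 2.0
        pxx = np.roll(phi, -1, 1) - 2.0 * phi + np.roll(phi, 1, 1)
        pyy = np.roll(phi, -1, 0) - 2.0 * phi + np.roll(phi, 1, 0)
        pxy = (np.roll(np.roll(phi, -1, 0), -1, 1)
               - np.roll(np.roll(phi, -1, 0), 1, 1)
               - np.roll(np.roll(phi, 1, 0), -1, 1)
               + np.roll(np.roll(phi, 1, 0), 1, 1)) / 4.0
        speed = (pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2) \
                / (px**2 + py**2 + eps)
        return phi + dt * speed

    # Evolve a signed-distance-like blob; its zero level set shrinks and rounds.
    y, x = np.mgrid[0:64, 0:64]
    phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 20.0
    for _ in range(100):
        phi = mcf_step(phi)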

The intergroup protocols: Scalable group communication for the internet

Description: Reliable, ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is increasing interest in scaling the protocols that provide this service to the Internet environment. The InterGroup protocol suite, described in this dissertation, provides such a service and is intended for the Internet environment, scaling to large numbers of nodes and high-latency links. The InterGroup protocols approach the scalability problem from several directions: they redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java and have tested the system's performance in both local-area and wide-area networks.
Date: November 1, 2000
Creator: Berket, K.
Partner: UNT Libraries Government Documents Department

High-speed wide area, data intensive computing: A Ten Year Retrospective

Description: Modern scientific computing involves organizing, moving, visualizing, and analyzing massive amounts of data from around the world, as well as employing large-scale computation. The distributed systems that solve large-scale problems will always involve aggregating and scheduling many resources. Data must be located and staged, and cache and network capacity must be available at the same time as computing capacity. Every aspect of such a system is dynamic: locating and scheduling resources, adapting running applications to availability and congestion in the middleware and infrastructure, responding to human interaction, and so on. The technologies, the middleware services, and the architectures that are used to build useful high-speed, wide area distributed systems constitute the field of data intensive computing. This paper explores some of the history and future directions of that field.
Date: May 1, 1998
Creator: Johnston, William E.
Partner: UNT Libraries Government Documents Department

Security Implications of Typical Grid Computing Usage Scenarios

Description: A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains, with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match an application, thereby facilitating security-aware application use and development from the initial stages of design and invocation. A broader goal of these scenarios is to increase awareness of security issues in Grid Computing.
Date: June 5, 2001
Creator: Humphrey, Marty & Thompson, Mary R.
Partner: UNT Libraries Government Documents Department

On the use of the singular value decomposition for text retrieval

Description: The use of the Singular Value Decomposition (SVD) has been proposed for text retrieval in several recent works. This technique uses the SVD to project very high-dimensional document and query vectors into a low-dimensional space. In this new space it is hoped that the underlying structure of the collection is revealed, thus enhancing retrieval performance. Theoretical results have provided some evidence for this claim, and to some extent experiments have confirmed it. However, these studies have mostly used small test collections and simplified document models. In this work we investigate the use of the SVD on large document collections. We show that, if interpreted as a mechanism for representing the terms of the collection, this technique alone is insufficient for dealing with the variability in term occurrence. Section 2 introduces the text retrieval concepts necessary for our work. A short description of our experimental architecture is presented in Section 3. Section 4 describes how term occurrence variability affects the SVD and then shows how the decomposition influences retrieval performance. A possible way of improving SVD-based techniques is presented in Section 5, and Section 6 concludes. (A minimal sketch of SVD-based retrieval follows this entry.)
Date: December 4, 2000
Creator: Husbands, P.; Simon, H.D. & Ding, C.
Partner: UNT Libraries Government Documents Department
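
A minimal sketch of SVD-based retrieval as described above: project term-document and query vectors into a rank-k space and rank documents by cosine similarity. The random matrix is a toy stand-in for a real term-document collection; the rank k and the matrix sizes are arbitrary choices for illustration.

    import numpy as np

    # Toy stand-in for a term-document matrix (terms x documents).
    rng = np.random.default_rng(0)
    A = rng.poisson(0.3, size=(800, 200)).astype(float)
    k = 50  # rank of the truncated SVD

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k]                  # term directions
    docs = Vt[:k, :].T * s[:k]     # document coordinates = V_k * Sigma_k

    def retrieve(q):
        # Project a term-space query into the rank-k space and rank the
        # documents by cosine similarity; note q^T A_k factors as
        # (U_k^T q) . (Sigma_k V_k^T).
        qk = Uk.T @ q
        sims = docs @ qk / (np.linalg.norm(docs, axis=1)
                            * np.linalg.norm(qk) + 1e-12)
        return np.argsort(-sims)

    print(retrieve(A[:, 42])[:5])  # a document's own terms rank it highly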

Provably authenticated group Diffie-Hellman key exchange - The dynamic case (Extended abstract)

Description: Dynamic group Diffie-Hellman protocols for Authenticated Key Exchange (AKE) are designed to work in a scenario in which the group membership is not known in advance, and parties may join and leave the multicast group at any given time. While several schemes have been proposed to deal with this scenario, no formal treatment of this cryptographic problem has previously been suggested. In this paper, we define a security model for the problem and use it to precisely define Authenticated Key Exchange (AKE), with ''implicit'' authentication as the fundamental goal and entity authentication as a further goal. We then define in this model the execution of a protocol, modified from a dynamic group Diffie-Hellman scheme in the literature, and prove its security.
Date: September 20, 2001
Creator: Bresson, Emmanuel; Chevassut, Olivier & Pointcheval, David
Partner: UNT Libraries Government Documents Department

A survey of packages for large linear systems

Description: This paper evaluates portable software packages for the iterative solution of very large sparse linear systems on parallel architectures. While we cannot hope to tell individual users which package will best suit their needs, we do hope that our systematic evaluation provides essential unbiased information about the packages, and that the evaluation process may serve as an example of how to evaluate such packages. The information contained here includes feature comparisons, usability evaluations, and performance characterizations. This review focuses primarily on self-contained packages that can be easily integrated into an existing program and are capable of computing solutions to very large sparse linear systems of equations. More specifically, it concentrates on portable parallel linear system solution packages that provide iterative solution schemes and related preconditioning schemes, because iterative methods are more frequently used than competing approaches such as direct methods. The eight packages evaluated are: Aztec, BlockSolve, ISIS++, LINSOL, P-SPARSLIB, PARASOL, PETSc, and PINEAPL. Among the eight portable parallel iterative linear system solvers reviewed, we recommend PETSc and Aztec for most application programmers because they have well-designed user interfaces, extensive documentation, and very responsive user support. Both PETSc and Aztec are written in the C language and are callable from Fortran. For those users interested in using Fortran 90, PARASOL is a good alternative. ISIS++ is a good alternative for those who prefer the C++ language. Both PARASOL and ISIS++ are relatively new and are continuously evolving, so their user interfaces may change. In general, the packages written in Fortran 77 are more cumbersome to use, because the user may need to deal directly with a number of arrays of varying sizes. Languages like C++ and Fortran 90 offer more convenient data encapsulation mechanisms, which make it easier to implement a clean and intuitive user interface. In addition to reviewing these ... (A minimal sketch of the kind of iterative scheme these packages provide follows this entry.)
Date: February 11, 2000
Creator: Wu, Kesheng & Milne, Brent
Partner: UNT Libraries Government Documents Department
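
A minimal conjugate gradient solver, the kind of iterative scheme the packages above provide. This is a sketch only: real packages add parallel data distribution, a range of preconditioners, and robust stopping logic, none of which appear here.

    import numpy as np

    def cg(A, b, tol=1e-8, maxiter=1000):
        # Unpreconditioned CG for symmetric positive definite A.
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, SPD
    x = cg(A, np.ones(n))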

Visapult: A Prototype Remote and Distributed Visualization Application and Framework

Description: We describe an approach to implementing a highly efficient and scalable method for direct volume rendering. Our approach uses a pipelined-parallel decomposition spanning parallel computers and commodity desktop hardware. With our approach, desktop interactivity is divorced from the latency inherent in network-based applications.
Date: April 17, 2000
Creator: Bethel, Wes
Partner: UNT Libraries Government Documents Department

How are we doing? A self-assessment of the quality of services and systems at NERSC (Oct. 1, 1999 - Sept. 30, 2000)

Description: This fourth annual self-assessment of the systems and services provided by the U.S. Department of Energy's National Energy Research Scientific Computing Center describes the efforts of the NERSC staff to support advanced computing for scientific discovery. Our staff applies experience and expertise to provide world-class systems and unparalleled services for NERSC users. At the same time, members of our organization are leading contributors to advancing the field of high-performance computing through conference presentations, published papers, collaborations with scientific researchers and through regular meetings with members of similar institutions. We believe that, by any measure, the results of our efforts underscore NERSC's position as a global leader in scientific computing.
Date: April 1, 2001
Creator: Kramer, William T.
Partner: UNT Libraries Government Documents Department

An unsymmetrized multifrontal LU factorization

Description: A well-known approach to computing the LU factorization of a general unsymmetric matrix A is to build the elimination tree associated with the pattern of the symmetric matrix A + A^T and use it as a computational graph to drive the numerical factorization. This approach, although very efficient on a large range of unsymmetric matrices, does not capture the unsymmetric structure of the matrices. We introduce a new algorithm which detects and exploits the structural unsymmetry of the submatrices involved during the processing of the elimination tree. We show that the new algorithm yields significant gains both in memory and in the time to perform the factorization. (A sketch of the classical elimination-tree construction follows this entry.)
Date: July 17, 2000
Creator: Amestoy, Patrick R. & Puglisi, Chiara
Partner: UNT Libraries Government Documents Department
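
A sketch of the classical elimination-tree construction that the standard approach described above relies on (Liu's algorithm with path compression). This is the symmetric-pattern baseline, not the paper's new unsymmetric algorithm; the input format is an assumption made for illustration.

    def elimination_tree(lower):
        # parent[] of the elimination tree for a symmetric pattern, where
        # lower[j] lists the row indices i < j with (A + A^T)[i, j] nonzero.
        n = len(lower)
        parent = [-1] * n
        ancestor = [-1] * n
        for j in range(n):
            for i in lower[j]:
                r = i
                while ancestor[r] != -1 and ancestor[r] != j:
                    nxt = ancestor[r]
                    ancestor[r] = j      # compress the path toward j
                    r = nxt
                if ancestor[r] == -1:
                    ancestor[r] = j
                    parent[r] = j
        return parent

    # Arrow pattern: column 4 is coupled to all earlier columns.
    print(elimination_tree([[], [], [], [], [0, 1, 2, 3]]))  # [4, 4, 4, 4, -1]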

Interfacing interactive data analysis tools with the grid: The PPDG CS-11 activity

Description: For today's physicists, who work in large geographically distributed collaborations, the data grid promises significantly greater capabilities for analysis of experimental data and production of physics results than are possible with today's ''remote access'' technologies. The goal of letting scientists at their home institutions interact with and analyze data as if they were physically present at the major laboratory that houses their detector and computer center has yet to be accomplished. The Particle Physics DataGrid project (www.ppdg.net) has recently embarked on an effort to ''Interface and Integrate Interactive Data Analysis Tools with the grid and identify Common Components and Services.'' The initial activities are to collect known requirements, and identify new ones, for grid services and analysis tools from a range of current and future experiments (ALICE, ATLAS, BaBar, D0, CMS, JLab, STAR, others welcome), and to determine whether existing plans for tools and services meet these requirements. Follow-on activities will foster interaction between grid service developers, analysis tool developers, experiment analysis framework developers, and end-user physicists, and will identify and carry out specific development and integration work so that interactive analysis tools utilizing grid services actually provide the capabilities users need. This talk summarizes what we know of the requirements for analysis tools and grid services, and describes the identified areas where more development work is needed.
Date: October 9, 2002
Creator: Olson, Douglas L. & Perl, Joseph
Partner: UNT Libraries Government Documents Department

Supporting collaborative computing and interaction

Description: To enable collaboration on the daily tasks involved in scientific research, collaborative frameworks should provide lightweight and ubiquitous components that support a wide variety of interaction modes. We envision a collaborative environment as one that provides a persistent space within which participants can locate each other, exchange synchronous and asynchronous messages, share documents and applications, share workflow, and hold videoconferences. We are developing the Pervasive Collaborative Computing Environment (PCCE) as such an environment. The PCCE will provide integrated tools to support shared computing and task control and monitoring. This paper describes the PCCE and the rationale for its design.
Date: May 22, 2002
Creator: Agarwal, Deborah; McParland, Charles & Perry, Marcia
Partner: UNT Libraries Government Documents Department

Numerical simulation of laminar reacting flows with complex chemistry

Description: We present an adaptive algorithm for low Mach number reacting flows with complex chemistry. Our approach uses a form of the low Mach number equations that discretely conserves both mass and energy. The discretization methodology is based on a robust projection formulation that accommodates large density contrasts. The algorithm uses an operator-split treatment of stiff reaction terms and includes the effects of differential diffusion. The basic computational approach is embedded in an adaptive projection framework that uses structured hierarchical grids with subcycling in time, preserving the discrete conservation properties of the underlying single-grid algorithm. We present numerical examples illustrating the performance of the method on both premixed and non-premixed flames. (A toy illustration of the operator-splitting idea follows this entry.)
Date: December 1, 1999
Creator: Day, Marcus S. & Bell, John B.
Partner: UNT Libraries Government Documents Department
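
A toy illustration of operator splitting for stiff reaction terms, using Strang splitting on a scalar reaction-diffusion model. This is not the paper's low Mach number algorithm, which couples the split chemistry with a projection method and adaptive mesh refinement; the decay rate, grid, and time step here are arbitrary illustrative choices.

    import numpy as np

    def react(u, dt, k=50.0):
        # Stiff linear decay R(u) = -k u, integrated exactly over dt. A real
        # chemistry network would call a stiff ODE integrator here instead.
        return u * np.exp(-k * dt)

    def strang_step(u, dt, dx, D):
        # Advance u_t = D u_xx + R(u) by one Strang-split step:
        # half-step reaction, full-step diffusion, half-step reaction.
        u = react(u, dt / 2)
        u = u + dt * D * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        return react(u, dt / 2)

    # Smooth bump on a periodic interval.
    x = np.linspace(0, 1, 200, endpoint=False)
    u = np.exp(-200 * (x - 0.5)**2)
    for _ in range(500):
        u = strang_step(u, dt=1e-4, dx=1 / 200, D=0.1)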

The group Diffie-Hellman problems

Description: In this paper we study generalizations of the Diffie-Hellman problems recently used to construct cryptographic schemes for practical purposes. The Group Computational Diffie-Hellman (GCDH) and Group Decisional Diffie-Hellman (GDDH) assumptions not only enable one to construct efficient pseudo-random functions but also naturally extend the Diffie-Hellman protocol to allow more than two parties to agree on a secret key. In this paper we provide results that add to our confidence in the GCDH problem; we reach this aim by showing exact relations among the GCDH, GDDH, CDH, and DDH problems.
Date: July 20, 2002
Creator: Bresson, Emmanuel; Chevassut, Olivier & Pointcheval, David
Partner: UNT Libraries Government Documents Department