Search Results

DeepView: A collaborative framework for distributed microscopy

Description: This paper outlines the motivation, requirements, and architecture of a collaborative framework for distributed virtual microscopy. In this context, the requirements are specified in terms of (1) functionality, (2) scalability, (3) interactivity, and (4) safety and security. Functionality refers to what an instrument does and how it does it. Scalability refers to the number of instruments, vendor-specific desktop workstations, analysis programs, and collaborators that can be accessed. Interactivity refers to how well the system can be steered for either static or dynamic experiments. Safety and security refers to safe operation of an instrument coupled with user authentication, privacy, and integrity of data communication. To meet these requirements, we introduce three types of services in the architecture: Instrument Services (IS), Exchange Services (ES), and Computational Services (CS). These services may reside on any host in the distributed system. The IS provide an abstraction for manipulating different types of microscopes; the ES provide common services that are required between different resources; and the CS provide analytical capabilities for data analysis and simulation. These services are brought together through CORBA and its enabling services, e.g., Event Services, Time Services, Naming Services, and Security Services. Two unique applications have been introduced into the CS for analyzing scientific images, either for instrument control or for recovery of a model of objects of interest. These are in-situ electron microscopy and recovery of 3D shape from holographic microscopy. The first application provides near real-time processing of the video stream for on-line quantitative analysis and uses that information for closed-loop servo control. The second application reconstructs a 3D representation of an inclusion (a crystal structure in a matrix) from multiple views through holographic electron microscopy. These applications require steering of external stimuli or computational parameters to achieve a particular result. In a sense, ''computational instruments'' (symmetric multiprocessors) interact closely with data ...
Date: August 10, 1998
Creator: Parvin, B.; Taylor, J. & Cong, G.
Partner: UNT Libraries Government Documents Department
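
The entry above is architectural rather than algorithmic, but the three-tier split can be made concrete with a small sketch. The following Python stand-ins for the Instrument, Exchange, and Computational Services are purely illustrative; the actual framework exposes these as CORBA interfaces, and every class and method name below is an assumption made for this sketch, not part of the paper.

# Illustrative stand-ins for the three service tiers (IS, ES, CS).
# All names here are invented for clarity; the real framework uses CORBA.
from abc import ABC, abstractmethod


class InstrumentService(ABC):
    """Abstracts a particular type of microscope (IS)."""

    @abstractmethod
    def acquire_frame(self) -> bytes: ...

    @abstractmethod
    def set_stimulus(self, name: str, value: float) -> None: ...


class ExchangeService:
    """Common services shared between resources (ES), e.g. naming/lookup."""

    def __init__(self):
        self._registry = {}

    def register(self, name, service):
        self._registry[name] = service

    def lookup(self, name):
        return self._registry[name]


class ComputationalService:
    """Analysis/simulation capability (CS) that consumes instrument data."""

    def analyze(self, frame: bytes) -> dict:
        # Placeholder for e.g. on-line quantitative video analysis.
        return {"n_bytes": len(frame)}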

Surface reconstruction from sparse fringe contours

Description: A new approach for reconstruction of 3D surfaces from 2D cross-sectional contours is presented. By using the so-called ''Equal Importance Criterion,'' we reconstruct the surface based on the assumption that every point in the region contributes equally to the surface reconstruction process. In this context, the problem is formulated in terms of a partial differential equation (PDE), and we show that the solution for dense contours can be efficiently derived from the distance transform. In the case of sparse contours, we add a regularization term to ensure smoothness in surface recovery. The proposed technique allows for surface recovery at any desired resolution. The main advantage of the proposed method is that inherent problems due to correspondence, tiling, and branching are avoided. Furthermore, the computed high-resolution surface is better suited for subsequent geometric analysis. We present results on both synthetic and real data.
Date: August 10, 1998
Creator: Cong, G. & Parvin, B.
Partner: UNT Libraries Government Documents Department
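
The dense-contour case described above can be illustrated with a short sketch: blend signed distance transforms of two neighboring cross-sections and take the zero level set. This is only a minimal Python/SciPy illustration of the distance-transform idea; it omits the paper's PDE formulation, the Equal Importance Criterion, and the regularization term used for sparse contours, and the grid sizes and shapes are made up for the demo.

# Interpolate an intermediate cross-section between two contour slices by
# blending their signed distance transforms (dense-contour illustration only).
import numpy as np
from scipy import ndimage


def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Negative inside the region, positive outside."""
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(~mask)
    return outside - inside


def interpolate_slice(mask_a, mask_b, t):
    """Region of the reconstructed surface at fraction t between two slices."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d <= 0.0  # zero level set of the blended distance field


if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    small = (x - 32) ** 2 + (y - 32) ** 2 <= 10 ** 2   # contour in slice k
    large = (x - 32) ** 2 + (y - 32) ** 2 <= 20 ** 2   # contour in slice k+1
    mid = interpolate_slice(small, large, 0.5)
    print("interpolated area:", mid.sum())             # between the two areas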

Experiences with TCP/IP over an ATM OC12 WAN

Description: This paper discusses the performance testing experiences of a 622.08 Mbps OC12 link. The link will be used for large bulk data transfer, so both the ATM-level throughput rates and the end-to-end TCP/IP throughput rates are of interest. Tests were done to evaluate the ATM switches, the IP routers, the end hosts, as well as the underlying ATM service provided by the carrier. A low level of cell loss (resulting in <0.01% packet loss) decreased the TCP throughput rate considerably when one TCP flow was trying to use the entire OC12 bandwidth. Identifying and correcting cell loss in the network proved to be extremely difficult. TCP Selective Acknowledgement (SACK) improved performance dramatically, and the maximum throughput rate increased from 300 Mbps to 400 Mbps. The effects of TCP slow start on performance at OC12 rates are also examined, and found to be insignificant for very large file transfers (e.g., for a 10 GB file). Finally, a history of TCP performance over high-speed networks is presented.
Date: December 23, 1999
Creator: Nitzan, Rebecca L. & Tierney, Brian L.
Partner: UNT Libraries Government Documents Department
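
A back-of-the-envelope calculation helps explain the slow-start finding quoted above. The RTT and MSS figures below are illustrative assumptions, not values from the paper; only the 400 Mbps throughput and the 10 GB file size come from the abstract.

# Rough estimate of slow-start ramp time versus total transfer time.
import math

rate_bps = 400e6          # ~400 Mbps TCP throughput reported after SACK
rtt_s = 0.05              # assumed 50 ms round-trip time
mss_bytes = 9000          # assumed jumbo-frame MSS

bdp_bytes = rate_bps / 8 * rtt_s                     # bandwidth-delay product
rtts_to_fill = math.ceil(math.log2(bdp_bytes / mss_bytes))
slow_start_s = rtts_to_fill * rtt_s                  # window doubles per RTT
transfer_s = 10e9 / (rate_bps / 8)                   # 10 GB at full rate

print(f"BDP             : {bdp_bytes/1e6:.1f} MB")
print(f"slow-start ramp : ~{slow_start_s:.2f} s")
print(f"10 GB transfer  : ~{transfer_s:.0f} s")
# The ramp-up is well under a second versus minutes of transfer time, so slow
# start is insignificant for very large files, consistent with the abstract.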

Raman Spectroscopy Characterization of amorphous carbon coatings for computer hard disks

Description: Amorphous carbon films are used as protective coatings on magnetic media to protect the magnetic layer from wear and abrasion caused by the read/write head during hard disk drive start-up and operation. A key requirement in increasing the storage capacity and reliability of hard-disk drives is improving the performance of these coatings. This cooperative agreement used optical characterization techniques developed at LBNL to study thin-film hard disk media produced by Seagate Technology, a major US hard drive manufacturer. The chief scientific goal was to relate the results of the optical characterization quantitatively to the underlying chemical structure of the overcoat. In a collaboration among Seagate, LBNL, and Cambridge University, optical and electron-based characterization were used to evaluate the chemical structure of overcoats. The sp3 fraction of the sputtered amorphous carbon films was measured quantitatively for the first time and related to the optical spectroscopy results. This work and other selected aspects of the research performed under the agreement were presented at technical meetings and published in the open literature. The chief technical goal was to design manufacturing processes for the protective carbon overcoat for use in new generations of Seagate disk drives. To this end, joint research carried out under this agreement enabled Seagate to speed development of new coatings that are currently being used in the production of disk media in Seagate's disk-media manufacturing plants in Fremont, CA.
Date: May 7, 1998
Creator: Ager III, Joel W.
Partner: UNT Libraries Government Documents Department

Amorphous Diamond Flat Panel Displays - Final Report of ER-LTR CRADA project with SI Diamond Technology

Description: The objective of this project was to determine why diamond-based films are unusually efficient electron emitters (field emission cathodes) at room temperature. Efficient cathodes based on diamond are being developed by SI Diamond Technology (SIDT) as components for bright, sunlight-readable flat panel displays. When the project started, it was known that only a small fraction (<1%) of the cathode area is active in electron emission and that the emission sites themselves are sub-micron in size. The critical challenge of this project was to develop new microcharacterization methods capable of examining known emission sites. The research team used a combination of cathode emission imaging (developed at SIDT), micro-Raman spectroscopy (LBNL), and electron microscopy and spectroscopy (National Center for Electron Microscopy, LBNL) to examine the properties of known emission sites. The most significant accomplishment of the project was the development at LBNL of a very high resolution scanning probe that, for the first time, measured simultaneously the topography and electrical characteristics of single emission sites. The increased understanding of the emission mechanism helped SIDT to develop a new cathode material, ''nano-diamond,'' which they have incorporated into their Field Emission Picture Element (FEPix) product. SIDT is developing large-format flat panel displays based on these picture elements that will be brighter and more efficient than existing outdoor displays such as Jumbotrons. The energy saving that will be realized if field emission displays are introduced commercially is in line with the energy conservation mission of DOE. The unique characterization tools developed in this project (particularly the new scanning microscopy method) are being used in ongoing BES-funded basic research.
Date: May 8, 1998
Creator: Ager III, Joel W.
Partner: UNT Libraries Government Documents Department

The Optimal Prediction method

Description: The purpose of this work is to test and show how well the numerical method called Optimal Prediction works. The method is relatively new, and only a few experiments have been performed with it. The authors first did a series of simple tests to see how the method behaves. To gain a better understanding of the method, they then reproduced one of the main experiments on Optimal Prediction, performed by Kupferman. Once they obtained the same results that Kupferman had, they changed a few parameters to see how dependent the method is on these parameters. In this paper, they present all the tests they made, the results they obtained, and what they concluded about the method. Before discussing the experiments, they explain what the Optimal Prediction method is and how it works; this is done in the first section of the paper.
Date: August 1, 1999
Creator: Burin des Roziers, Thibaut
Partner: UNT Libraries Government Documents Department

UV Spectroscopy of Type Ia Supernovae at Low- and High-Redshift

Description: In the past three years two separate programs were initiated to study the restframe UV properties of Type Ia Supernovae. The low-redshift study was carried out using several ground-based facilities coupled with HST/STIS observations. The high-redshift program is an offshoot of the CFHT Legacy Survey and uses Keck/LRIS to obtain spectra. Here we present the preliminary results from each program and their implications for current cosmology measurements.
Date: April 20, 2005
Creator: Nugent, Peter
Partner: UNT Libraries Government Documents Department

Evaluation Strategies for Bitmap Indices with Binning

Description: Bitmap indices are efficient data structures for querying read-only data with low attribute cardinalities. To improve the efficiency of the bitmap indices on attributes with high cardinalities, we present a new strategy to evaluate queries using bitmap indices. This work is motivated by a number of scientific data analysis applications where most attributes have cardinalities in the millions. On these attributes, binning is a common strategy to reduce the size of the bitmap index. In this article we analyze how binning affects the number of pages accessed during query processing, and propose an optimal way of using the bitmap indices to reduce the number of pages accessed. Compared with two basic strategies, the new algorithm reduces the query response time by up to a factor of two. On a set of 5-dimensional queries on real application data, the bitmap indices are on average 10 times faster than the projection index.
Date: June 3, 2004
Creator: Stockinger, Kurt; Wu, Kesheng & Shoshani, Arie
Partner: UNT Libraries Government Documents Department
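
A minimal sketch of the binning idea described above: rows in bins that fall entirely inside the query range are answered from the bitmaps alone, while rows in the two boundary bins must be checked against the base data. This illustration is written for this summary, not taken from the paper; class and variable names are invented, and the paper's contribution concerns precisely how to organize these candidate checks to minimize the number of pages accessed.

# Range query over a binned bitmap index with a candidate check on edge bins.
import numpy as np


class BinnedBitmapIndex:
    def __init__(self, values: np.ndarray, bin_edges: np.ndarray):
        self.values = values
        self.edges = bin_edges                       # ascending bin boundaries
        self.bitmaps = [
            (values >= lo) & (values < hi)           # one bitmap (bool array) per bin
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:])
        ]

    def range_query(self, lo: float, hi: float) -> np.ndarray:
        """Rows with lo <= value < hi."""
        hits = np.zeros(len(self.values), dtype=bool)
        candidates = np.zeros(len(self.values), dtype=bool)
        bounds = zip(self.edges[:-1], self.edges[1:])
        for (b_lo, b_hi), bm in zip(bounds, self.bitmaps):
            if lo <= b_lo and b_hi <= hi:
                hits |= bm                           # bin fully inside the query
            elif b_hi > lo and b_lo < hi:
                candidates |= bm                     # edge bin: needs checking
        # Candidate check against the base data (the expensive part).
        hits |= candidates & (self.values >= lo) & (self.values < hi)
        return hits


if __name__ == "__main__":
    data = np.random.default_rng(0).uniform(0, 100, size=1000)
    idx = BinnedBitmapIndex(data, np.linspace(0, 100, 11))   # 10 bins of width 10
    result = idx.range_query(17.5, 42.0)
    assert np.array_equal(result, (data >= 17.5) & (data < 42.0))
    print(result.sum(), "rows match")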

Solving Large-scale Eigenvalue Problems in SciDAC Applications

Description: Large-scale eigenvalue problems arise in a number of DOE applications. This paper provides an overview of the recent development of eigenvalue computation in the context of two SciDAC applications. We emphasize the importance of Krylov subspace methods and point out their limitations. We discuss the value of alternative approaches that are more amenable to the use of preconditioners, and report progress in using multi-level algebraic sub-structuring techniques to speed up eigenvalue calculations. In addition to methods for linear eigenvalue problems, we also examine new approaches to solving two types of non-linear eigenvalue problems arising from SciDAC applications.
Date: June 29, 2005
Creator: Yang, Chao
Partner: UNT Libraries Government Documents Department
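
For readers unfamiliar with the Krylov subspace methods mentioned above, the following bare-bones Lanczos iteration is a minimal sketch, not any of the solvers used in the paper: it shows how extreme eigenvalues of a symmetric matrix are approximated from a small tridiagonal projection. Production SciDAC solvers add restarting, reorthogonalization, shift-invert, and preconditioning, and the algebraic sub-structuring approach is a different method entirely.

# Bare Lanczos iteration: project A onto a Krylov subspace and take the
# eigenvalues (Ritz values) of the resulting small tridiagonal matrix.
import numpy as np


def lanczos_extreme_eigs(A, m=60, seed=0):
    """Approximate the extreme eigenvalues of a symmetric matrix A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for j in range(m):
        w = A @ q - beta * q_prev
        alpha = q @ w
        alphas.append(alpha)
        w -= alpha * q
        beta = np.linalg.norm(w)
        if j == m - 1 or beta < 1e-12:
            break
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    ritz = np.linalg.eigvalsh(T)
    return ritz[0], ritz[-1]


if __name__ == "__main__":
    A = np.diag(np.linspace(1.0, 100.0, 500))    # known spectrum for the demo
    lo, hi = lanczos_extreme_eigs(A, m=60)
    print(f"smallest ~ {lo:.3f}, largest ~ {hi:.3f}")   # approaches 1 and 100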

John von Neumann Birthday Centennial

Description: In celebration of John von Neumann's 100th birthday, a series of four lectures was presented on the evening of February 10, 2003 during the SIAM Conference on Computational Science and Engineering in San Diego. The venue was appropriate because von Neumann spent much of the later part of his life, in the 1950s, as an unofficial ambassador for computational science. He was then the only senior American scientist who had experience with the new computers (digital, electronic, and programmable) and a vision of their future importance. No doubt he would have relished the chance to attend a meeting such as this. The first speaker, William Aspray, described the ''interesting times'' during which computers were invented. His remarks were based on his history [1] of this period in von Neumann's life. We were honored to have John von Neumann's daughter, Marina von Neumann-Whitman, as our second speaker. Other accounts of von Neumann's life can be found in books by two of his colleagues [2] and [3]. Our third speaker, Peter Lax, provided both mathematical and international perspectives on John von Neumann's career. Finally, Pete Stewart spoke about von Neumann's numerical error analysis [4] in the context of later work; this talk did not lend itself to transcription, but readers may consult the historical notes in [5]. Our thanks to all the speakers for a remarkable evening. We are grateful to the DOE Applied Mathematical Sciences (AMS) program for partially supporting these lectures. Thanks are also due to SIAM and William Kolata, to our emcee, Gene Golub, to Paul Saylor for recording and editing, and to Barbara Lytle for the transcriptions. More about von Neumann's work can be learned from the recent American Mathematical Society proceedings [6].
Date: November 12, 2004
Creator: Grcar, Joseph F.
Partner: UNT Libraries Government Documents Department

A Pseudo-Random Number Generator Based on Normal Numbers

Description: In a recent paper, Richard Crandall and the present author established that each of a certain class of explicitly given real constants, uncountably infinite in number, is b-normal, where b is an integer that appears in the formula defining the constant. A b-normal constant is one where every string of m digits appears in the base-b expansion of the constant with limiting frequency b{sup -m}. This paper shows how this result can be used to fashion an efficient and effective pseudo-random number generator, which generates successive strings of binary digits from one of the constants in this class. The resulting generator, which tests slightly faster than a conventional linear congruential generator, avoids difficulties with large power-of-two data access strides that may occur when using conventional generators. It is also well suited for parallel processing: each processor can quickly and independently compute its starting value, with the collective sequence generated by all processors being the same as that generated by a single processor.
Date: December 31, 2004
Creator: Bailey, David H.
Partner: UNT Libraries Government Documents Department
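
The abstract's claim that each processor can independently compute its own starting point can be illustrated with digit extraction for a constant of the Bailey-Crandall type, alpha_{2,3} = sum_{k>=1} 1/(3^k * 2^(3^k)). The sketch below is an assumption-laden illustration, not the paper's generator: the exact constant, recurrence, and word size used in the paper may differ, and double precision limits each extracted block to a few tens of bits.

# Extract a block of binary digits of alpha_{2,3} starting at an arbitrary
# position, using modular exponentiation for the dominant terms.
def alpha23_bits(start: int, nbits: int) -> str:
    """Binary digits of alpha_{2,3} at positions start+1 .. start+nbits."""
    frac = 0.0
    k = 1
    while 3 ** k <= start:
        p = 3 ** k
        frac += pow(2, start - p, p) / p       # exact: (2^(start-3^k) mod 3^k) / 3^k
        k += 1
    for _ in range(3):                          # a few tiny tail terms, 3^k > start
        frac += 2.0 ** (start - 3 ** k) / 3 ** k
        k += 1
    frac %= 1.0
    bits = []
    for _ in range(nbits):                      # peel off binary digits of frac
        frac *= 2.0
        bit = int(frac)
        bits.append(str(bit))
        frac -= bit
    return "".join(bits)


if __name__ == "__main__":
    # Two "processors" computing disjoint blocks independently, as in the
    # parallel use case described above.
    print(alpha23_bits(1_000_000, 32))
    print(alpha23_bits(1_000_032, 32))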

Quantum transport calculations using periodic boundary conditions

Description: An efficient new method is presented to calculate quantum transport using periodic boundary conditions. This method allows the use of conventional ground-state ab initio programs without major changes. The computational effort is only a few times that of a normal ground-state calculation, which makes accurate quantum transport calculations for large systems possible.
Date: June 15, 2004
Creator: Wang, Lin-Wang
Partner: UNT Libraries Government Documents Department

Performance Modeling: Understanding the Present and Predicting the Future

Description: We present an overview of current research in performance modeling, focusing on efforts underway in the Performance Evaluation Research Center (PERC). Using some new techniques, we are able to construct performance models that can be used to project the sustained performance of large-scale scientific programs on different systems, over a range of job and system sizes. Such models can be used by vendors in system designs, by computing centers in system acquisitions, and by application scientists to improve the performance of their codes.
Date: November 30, 2005
Creator: Bailey, David H. & Snavely, Allan
Partner: UNT Libraries Government Documents Department

The NERSC Sustained System Performance (SSP) Metric

Description: Most recent plans and reports discuss only one of the four distinct purposes for which benchmarks are used. The obvious purpose is selection of a system from among its competitors, which is the main focus of this paper. This purpose is well discussed in many workshops and reports. The second use of benchmarks is validating that the selected system actually works the way it was expected to once it arrives. This purpose may be more important than the first, and it is particularly key when systems are specified and selected based on performance projections rather than actual runs on the actual hardware. The third use of benchmarks, seldom mentioned, is to assure the system performs as expected throughout its lifetime (e.g., after upgrades, changes, and regular use). Finally, benchmarks are used to guide system designs, something covered in detail in a companion paper from Berkeley's Institute for Performance Studies (BIPS).
Date: September 18, 2005
Creator: Kramer, William; Shalf, John & Strohmaier, Erich
Partner: UNT Libraries Government Documents Department

UPC Language Specifications V1.2

Description: UPC is an explicitly parallel extension to the ISO C 99 Standard. UPC follows the partitioned global address space programming model. This document is the formal specification for the UPC language syntax and semantics.
Date: May 31, 2005
Creator: Consortium, UPC
Partner: UNT Libraries Government Documents Department

Simulating mesoscopic reaction-diffusion systems using the Gillespie algorithm

Description: We examine an application of the Gillespie algorithm to simulating spatially inhomogeneous reaction-diffusion systems in mesoscopic volumes such as cells and microchambers. The method involves discretizing the chamber into elements and modeling the diffusion of chemical species by the movement of molecules between neighboring elements. These transitions are expressed in the form of a set of reactions which are added to the chemical system. The rates of these diffusion reactions are derived by comparison with a finite volume discretization of the heat equation on an unevenly spaced grid. The diffusion coefficient of each species is allowed to be inhomogeneous in space, including discontinuities. The resulting system is solved by the Gillespie algorithm using the fast direct method. We show that in an appropriate limit the method reproduces exact solutions of the heat equation for a purely diffusive system and the nonlinear reaction-rate equation describing the cubic autocatalytic reaction.
Date: December 12, 2004
Creator: Bernstein, David
Partner: UNT Libraries Government Documents Department
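
A toy version of the scheme described above, for a single species diffusing in one dimension: diffusion is encoded as hop "reactions" between neighboring elements with per-molecule propensity D/h^2, and the system is advanced with the basic direct Gillespie method. The grid, diffusion coefficient, and rates below are illustrative assumptions; the paper's method additionally handles chemical reactions, multiple species, unevenly spaced grids, spatially varying D, and uses the fast direct method.

# Stochastic simulation of pure diffusion on a 1-D grid of elements, where
# each possible hop between neighboring elements is treated as a reaction.
import numpy as np


def gillespie_diffusion_1d(counts, D, h, t_end, seed=0):
    rng = np.random.default_rng(seed)
    counts = counts.astype(float)
    n = len(counts)
    k_hop = D / h ** 2                        # per-molecule hop propensity
    t = 0.0
    while t < t_end:
        # Propensity of each hop "reaction": cell i -> i+1 and cell i -> i-1.
        right = k_hop * counts[:-1]
        left = k_hop * counts[1:]
        props = np.concatenate([right, left])
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)     # time to next event
        j = rng.choice(props.size, p=props / total)
        if j < n - 1:                         # hop right: cell j -> j + 1
            counts[j] -= 1
            counts[j + 1] += 1
        else:                                 # hop left: cell i -> i - 1
            i = j - (n - 1) + 1
            counts[i] -= 1
            counts[i - 1] += 1
    return counts


if __name__ == "__main__":
    c0 = np.zeros(20)
    c0[10] = 500                              # all molecules start in one element
    c = gillespie_diffusion_1d(c0, D=1.0, h=0.1, t_end=0.05)
    print(np.round(c).astype(int))            # spreads out like the heat equation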

Continuum mechanical and computational aspects of material behavior

Description: The focus of the work is the application of continuum mechanics to materials science, specifically to the macroscopic characterization of material behavior at small length scales. The long-term goals are a continuum-mechanical framework for the study of materials that provides a basis for general theories and leads to boundary-value problems of physical relevance, and computational methods appropriate to these problems supplemented by physically meaningful regularizations to aid in their solution. Specific studies include the following: the development of a theory of polycrystalline plasticity that incorporates free energy associated with lattice mismatch between grains; the development of a theory of geometrically necessary dislocations within the context of finite-strain plasticity; the development of a gradient theory for single-crystal plasticity with geometrically necessary dislocations; simulations of dynamical fracture using a theory that allows for the kinking and branching of cracks; computation of segregation and compaction in flowing granular materials.
Date: February 10, 2000
Creator: Fried, Eliot & Gurtin, Morton E.
Partner: UNT Libraries Government Documents Department

Parallel beam dynamics simulation of linear accelerators

Description: In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies.
Date: January 31, 2002
Creator: Qiang, Ji & Ryne, Robert D.
Partner: UNT Libraries Government Documents Department
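
To make the particle-in-cell idea concrete, here is a generic one-dimensional, periodic PIC step (deposit charge, solve Poisson, gather the field, push particles). This is a sketch of the general technique only, not IMPACT's algorithm, which is built on split-operator maps and three-dimensional space-charge solvers; all parameters below are invented for the demo.

# One step of a 1-D periodic electrostatic PIC cycle with CIC weighting and
# an FFT-based Poisson solve.
import numpy as np


def pic_step(x, v, grid_n, length, dt, q_over_m=1.0):
    h = length / grid_n
    # 1. Deposit particle charge onto the grid (linear / CIC weighting).
    rho = np.zeros(grid_n)
    cell = np.floor(x / h).astype(int) % grid_n
    frac = x / h - np.floor(x / h)
    np.add.at(rho, cell, 1.0 - frac)
    np.add.at(rho, (cell + 1) % grid_n, frac)
    rho -= rho.mean()                            # neutralizing background
    # 2. Solve the periodic Poisson equation with an FFT: -phi'' = rho.
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=h)
    rho_k = np.fft.fft(rho)
    phi_k = np.where(k != 0, rho_k / np.maximum(k ** 2, 1e-300), 0.0)
    E = np.real(np.fft.ifft(-1j * k * phi_k))    # E = -dphi/dx
    # 3. Gather the field at particle positions and push the particles.
    E_p = (1.0 - frac) * E[cell] + frac * E[(cell + 1) % grid_n]
    v = v + q_over_m * E_p * dt
    x = (x + v * dt) % length
    return x, v


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1.0, 10_000)
    v = rng.normal(0, 0.01, 10_000)
    for _ in range(100):
        x, v = pic_step(x, v, grid_n=64, length=1.0, dt=0.01)
    print("rms velocity:", v.std())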

Allinea DDT as a Parallel Debugging Alternative to Totalview

Description: Totalview, from the Etnus Corporation, is a sophisticated and feature-rich software debugger for parallel applications. As Totalview has gained in popularity and market share, its pricing has increased to the point where it is often prohibitively expensive for massively parallel supercomputers. Additionally, many of Totalview's advanced features are not used by members of the scientific computing community. For these reasons, supercomputing centers have begun searching for a basic parallel debugging tool that can serve as a viable alternative to Totalview. DDT (Distributed Debugging Tool) from Allinea Software is a relatively new parallel debugging tool which aims to provide much of the same functionality as Totalview. This review outlines the basic features and limitations of DDT to determine if it can be a reasonable substitute for Totalview. DDT was tested on the NERSC platforms Bassi, Seaborg, Jacquard and Davinci with Fortran90, C, and C++ codes using MPI and OpenMP for parallelism.
Date: March 5, 2007
Creator: Antypas, K.B.
Partner: UNT Libraries Government Documents Department

BER Science Network Requirements Workshop -- July 26-27, 2007

Description: The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In July 2007, ESnet and the Biological and Environmental Research (BER) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the BER Program Office. These included several large programs and facilities, including the Atmospheric Radiation Measurement (ARM) Program and the ARM Climate Research Facility (ACRF), Bioinformatics and Life Sciences Programs, Climate Sciences Programs, the Environmental Molecular Sciences Laboratory at PNNL, and the Joint Genome Institute (JGI). The National Center for Atmospheric Research (NCAR) also participated in the workshop and contributed a section to this report, because a large distributed data repository for climate data will be established at NERSC, ORNL, and NCAR, and this will have an effect on ESnet. Workshop participants were asked to codify their requirements in a 'case study' format, which summarizes the instruments and facilities necessary for the science and the process by which the science is done, with emphasis on the network services needed and the way in which the network is used. Participants were asked to consider three time scales in their case studies--the near term (immediately and up to 12 months in the future), the medium term (3-5 years in the future), and the long term (greater than 5 years in the future). ...
Date: February 1, 2008
Creator: Tierney, Brian L. & Dart, Eli
Partner: UNT Libraries Government Documents Department

RRS: Replica Registration Service for Data Grids

Description: Over the last few years various scientific experiments and Grid projects have developed different catalogs for keeping track of their data files. Some projects use specialized file catalogs, others use distributed replica catalogs to reference files at different locations. Due to this diversity of catalogs, it is very hard to manage files across Grid projects, or to replace one catalog with another. In this paper we introduce a new Grid service called the Replica Registration Service (RRS). It can be thought of as an abstraction of the concepts for registering files and their replicas. In addition to traditional single file registration operations, the RRS supports collective file registration requests and keeps persistent registration queues. This approach is of particular importance for large-scale usage where thousands of files are copied and registered. Moreover, the RRS supports a set of error directives that are triggered in case of registration failures. Our goal is to provide a single uniform interface for various file catalogs to support the registration of files across multiple Grid projects, and to make Grid clients oblivious to the specific catalog used.
Date: July 15, 2005
Creator: Shoshani, Arie; Sim, Alex & Stockinger, Kurt
Partner: UNT Libraries Government Documents Department
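
The abstract above describes an interface rather than an algorithm, so the following sketch is purely hypothetical: the class, method names, and error-directive values are invented here to illustrate collective registration, a persistent registration queue, and per-failure error handling. They are not taken from the RRS specification.

# Hypothetical registration-service skeleton illustrating the ideas in the
# abstract: bulk registration, a persistent queue, and error directives.
import json
from enum import Enum
from pathlib import Path


class OnError(Enum):
    ABORT = "abort"            # stop at the first failure
    CONTINUE = "continue"      # skip the failing file, register the rest
    RETRY_LATER = "retry"      # leave the failing entry queued for retry


class ReplicaRegistrationService:
    def __init__(self, catalog, queue_file="rrs_queue.json"):
        self.catalog = catalog                     # any backend file catalog
        self.queue_file = Path(queue_file)

    def register_collection(self, entries, on_error=OnError.RETRY_LATER):
        """entries: iterable of (logical_name, replica_url) pairs."""
        pending = list(entries)
        self._persist(pending)                     # survive client/service crashes
        remaining = []
        for lfn, url in pending:
            try:
                self.catalog.register(lfn, url)
            except Exception:
                if on_error is OnError.ABORT:
                    raise
                if on_error is OnError.RETRY_LATER:
                    remaining.append((lfn, url))
        self._persist(remaining)
        return remaining                           # still-unregistered entries

    def _persist(self, entries):
        self.queue_file.write_text(json.dumps(entries))


if __name__ == "__main__":
    class DictCatalog:                             # trivial in-memory stand-in
        def __init__(self):
            self.entries = {}

        def register(self, lfn, url):
            self.entries.setdefault(lfn, []).append(url)

    rrs = ReplicaRegistrationService(DictCatalog())
    left_over = rrs.register_collection(
        [("lfn:run42/file%04d" % i, "gsiftp://host/path/%d" % i) for i in range(3)]
    )
    print("unregistered:", left_over)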

Securing Resources in Collaborative Environments: A Peer-to-peerApproach

Description: We have developed a security model that facilitates control of resources by autonomous peers who act on behalf of collaborating users. This model allows a gradual build-up of trust. It enables secure interactions among users that do not necessarily know each other and allows them to build trust over the course of their collaboration. This paper describes various aspects of our security model and describes an architecture that implements this model to provide security in pure peer-to-peer environments.
Date: September 19, 2005
Creator: Berket, Karlo; Essiari, Abdelilah & Thompson, Mary R.
Partner: UNT Libraries Government Documents Department