59 Matching Results

Search Results

Adaptive mesh refinement in Titanium

Description: In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package that applies the Adaptive Mesh Refinement (AMR) methodology to numerical partial differential equations. Chombo takes a library approach to parallel programming (C++ and Fortran with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. We study the performance of our implementation and compare it with that of Chombo in solving Poisson's equation on two grid configurations from a real application. We also provide line-of-code counts for both implementations.
Date: January 21, 2005
Creator: Colella, Phillip & Wen, Tong
Partner: UNT Libraries Government Documents Department
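
Neither paper's code appears in this listing. As a point of reference for the kind of kernel both implementations spend their time in, the sketch below is a minimal single-grid Jacobi relaxation for Poisson's equation in Java; it is illustrative only, since Chombo and the Titanium port operate on adaptive mesh hierarchies in parallel.

    // Minimal single-grid Jacobi relaxation for Poisson's equation
    // (del^2 u = f) with fixed (Dirichlet) boundaries. Illustrative
    // sketch only; the paper's codes solve this on AMR hierarchies.
    public class JacobiPoisson {
        public static void relax(double[][] u, double[][] f, double h, int sweeps) {
            int n = u.length;
            double[][] next = new double[n][n];
            for (int s = 0; s < sweeps; s++) {
                for (int i = 1; i < n - 1; i++)
                    for (int j = 1; j < n - 1; j++)
                        // Standard 5-point stencil update.
                        next[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1] - h * h * f[i][j]);
                // Copy the interior back; boundary values stay fixed.
                for (int i = 1; i < n - 1; i++)
                    System.arraycopy(next[i], 1, u[i], 1, n - 2);
            }
        }
    }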

Moving the Hazard Prediction and Assessment Capability to a Distributed, Portable Architecture

Description: The Hazard Prediction and Assessment Capability (HPAC) has been re-engineered from a Windows application with tight binding between computation and a graphical user interface (GUI) to a new distributed object architecture. The key goals of this new architecture are platform portability, extensibility, deployment flexibility, client-server operations, easy integration with other systems, and support for a new map-based GUI. Selection of Java as the development and runtime environment is the major factor in achieving each of the goals, platform portability in particular. Portability is further enforced by allowing only Java components in the client. Extensibility is achieved via Java's dynamic binding and class loading capabilities and a design-by-interface approach. HPAC supports deployment on a standalone host, as a heavy client in client-server mode with data stored on the client but calculations performed on the server host, and as a thin client with data and calculations on the server host. The principal architectural element supporting deployment flexibility is the use of Universal Resource Locators (URLs) for all file references. Java WebStart™ is used for thin client deployment. Although there were many choices for the object distribution mechanism, the Common Object Request Broker Architecture (CORBA) was chosen to support HPAC client-server operation. HPAC complies with version 2.0 of the CORBA standard and does not assume support for pass-by-value method arguments. Execution in standalone mode is expedited by having most server objects run in the same process as client objects, thereby bypassing CORBA object transport. HPAC provides four levels for access by other tools and systems: a Windows library providing transport and dispersion (T&D) calculations and output generation, detailed and more abstract sets of CORBA services, and reusable Java components.
Date: September 5, 2002
Creator: Lee, RW
Partner: UNT Libraries Government Documents Department
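
The abstract's main portability device, URLs for all file references, is simple to illustrate. A minimal sketch, assuming nothing about HPAC's actual classes: one code path serves both standalone (file:) and client-server (http:) deployments because only the URL changes.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class UrlResourceDemo {
        // Reads a text resource identified only by a URL. The same code
        // works for file: and http: schemes, so standalone and thin-client
        // deployments differ only in configuration, not code.
        static String readAll(String url) throws Exception {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(url).openStream()))) {
                StringBuilder sb = new StringBuilder();
                for (String line; (line = in.readLine()) != null; )
                    sb.append(line).append('\n');
                return sb.toString();
            }
        }
    }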

New local potential useful for genome annotation and 3D modeling

Description: A new potential energy function representing the conformational preferences of sequentially local regions of a protein backbone is presented. This potential is derived from secondary structure probabilities such as those produced by neural network-based prediction methods. The potential is applied to the problem of remote homolog identification, in combination with a distance dependent inter-residue potential and position-based scoring matrices. This fold recognition jury is implemented in a Java application called JThread. These methods are benchmarked on several test sets, including one released entirely after development and parameterization of JThread. In benchmark tests to identify known folds structurally similar (but not identical) to the native structure of a sequence, JThread performs significantly better than PSI-BLAST, with 10 percent more structures correctly identified as the most likely structural match in a fold library, and 20 percent more structures correctly narrowed down to a set of five possible candidates. JThread also significantly improves the average sequence alignment accuracy, from 53 percent to 62 percent of residues correctly aligned. Reliable fold assignments and alignments are identified, making the method useful for genome annotation. JThread is applied to predicted open reading frames (ORFs) from the genomes of Mycoplasma genitalium and Drosophila melanogaster, identifying 20 new structural annotations in the former and 801 in the latter.
Date: July 17, 2003
Creator: Chandonia, John-Marc & Cohen, Fred E.
Partner: UNT Libraries Government Documents Department

An evaluation of Java's I/O capabilities for high-performance computing.

Description: Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages of Java, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O--many of which are not obvious at first glance--and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.
Date: November 10, 2000
Creator: Dickens, P. M. & Thakur, R.
Partner: UNT Libraries Government Documents Department
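
The abstract does not enumerate the approaches examined, and the paper itself predates java.nio; but the contrast it motivates, per-byte stream I/O versus bulk buffer-based I/O, is now idiomatic in Java. A sketch of the bulk style:

    import java.io.FileInputStream;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class BulkRead {
        // Reads a file through a FileChannel into a large direct buffer,
        // amortizing system-call and copy overhead across 1 MiB chunks.
        // Returns a trivial checksum so the read is observable.
        static long checksum(String path) throws Exception {
            try (FileChannel ch = new FileInputStream(path).getChannel()) {
                ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);
                long sum = 0;
                while (ch.read(buf) > 0) {
                    buf.flip();
                    while (buf.hasRemaining()) sum += buf.get();
                    buf.clear();
                }
                return sum;
            }
        }
    }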

The Feasibility of Multicasting in RMI

Description: With the growth of the Internet and networking technologies, there is a need for simple, powerful, easily maintained distributed applications. These kinds of applications can benefit greatly from distributed computing concepts. Despite its powerful mechanisms, Jini has yet to be accepted in mainstream Java development. Until that happens, we need to find better Remote Method Invocation (RMI) solutions. This paper examines the feasibility of implementing multicasting in RMI. Multicast capability can be added to RMI using a Jini-like technique. Support for multicast over the unicast reference layer is also studied, and a piece of code explaining how it can be done is included.
Date: May 2003
Creator: Ujjinihavildar, Vinay
Partner: UNT Libraries
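
The thesis's own code is not reproduced in this listing. For orientation, the JDK primitive any such multicast layer builds on is java.net.MulticastSocket; the group address and port below are arbitrary examples.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastReceiverDemo {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1"); // example group
            try (MulticastSocket sock = new MulticastSocket(4446)) { // example port
                sock.joinGroup(group);
                byte[] buf = new byte[1024];
                DatagramPacket pkt = new DatagramPacket(buf, buf.length);
                sock.receive(pkt); // blocks until a datagram sent to the group arrives
                System.out.println(new String(pkt.getData(), 0, pkt.getLength()));
                sock.leaveGroup(group);
            }
        }
    }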

Grid-based asynchronous migration of execution context in Java virtual machines

Description: Previous research efforts for building thread migration systems have concentrated on the development of frameworks dealing with a small local environment controlled by a single user. Computational Grids provide the opportunity to utilize a large-scale environment that spans organizational boundaries. Using this class of large-scale computational resources as part of a thread migration system poses a significant challenge not previously addressed by this community. In this paper the authors present a framework that integrates Grid services to enhance the functionality of a thread migration system. To accommodate future Grid services, the design of the framework is both flexible and extensible. Currently, the thread migration system contains Grid services for authentication, registration, lookup, and automatic software installation. In the context of distributed applications executed on a Grid-based infrastructure, the asynchronous migration of an execution context can help solve problems such as remote execution, load balancing, and the development of mobile agents. The prototype is based on the migration of Java threads, allowing asynchronous and heterogeneous migration of the execution context of the running code.
Date: June 15, 2000
Creator: von Laszewski, G.; Shudo, K. & Muraoka, Y.
Partner: UNT Libraries Government Documents Department

General Index to Experiment Station Record Volumes 01-12, 1889-1901 and to Experiment Station Bulletin Number 2

Description: A topical, alphabetically arranged index to volumes 1-12 including experiment station records, publications reviewed, and foreign publications. It has a 'Consolidated Table of Contents' which lists all editorial notes and publications of the experiment stations and Department of Agriculture from the referenced volumes.
Date: 1903
Creator: United States. Office of Experiment Stations.
Partner: UNT Libraries Government Documents Department

CLARA: A Contemporary Approach to Physics Data Processing

Description: In traditional physics data processing (PDP) systems, data resides in static locations and is accessed by analysis applications. In comparison, CLARA (CLAS12 Reconstruction and Analysis framework) is an environment where data processing algorithms filter continuously flowing data. In CLARA's domain of loosely coupled services, data is not stored, but rather flows from one service to another, mutating constantly along the way. Agents performing event processing can then subscribe to particular data/events at any stage of the data transformation, and make intricate decisions (e.g. particle ID) by correlating events from multiple, parallel data streams and/or services. This paper presents a PDP application development framework based on service-oriented and event-driven architectures. The system allows users to design (Java, C++, and Python languages are supported) and deploy data processing services, as well as dynamically compose PDP applications from available services. The PDP service bus provides a layer on top of a distributed pub-sub middleware implementation, which allows complex service composition and integration without writing code. Examples of service creation and deployment, along with the CLAS12 track reconstruction application design, will be presented.
Date: December 1, 2011
Creator: Gyurjyan, V.; Abbott, D.; Carbonneau, J.; Gilfoyle, G.; Heddle, D.; Heyes, G.; Paul, S.; Timmer, C.; Weygand, D. & Wolin, E.
Partner: UNT Libraries Government Documents Department
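
CLARA's real service API is not shown in the abstract; the sketch below is only a generic publish-subscribe skeleton of the pattern it describes: a service subscribes to events at one stage of the stream, transforms them, and republishes for downstream subscribers. All names here are hypothetical.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Generic pub-sub skeleton; CLARA's real service bus is more elaborate.
    public class MiniBus {
        private final Map<String, List<Consumer<Object>>> subs = new ConcurrentHashMap<>();

        public void subscribe(String topic, Consumer<Object> handler) {
            subs.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        public void publish(String topic, Object event) {
            subs.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
        }

        public static void main(String[] args) {
            MiniBus bus = new MiniBus();
            // A "tracking" service consumes raw hits and republishes tracks;
            // a downstream monitor subscribes to the transformed stream.
            bus.subscribe("raw-hits", hits -> bus.publish("tracks", "tracks(" + hits + ")"));
            bus.subscribe("tracks", t -> System.out.println(t));
            bus.publish("raw-hits", "event-42");
        }
    }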

A specialized framework for data retrieval Web applications

Description: Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.
Date: July 12, 2004
Creator: Nogiec, Jerzy; Trombly-Freytag, Kelley & Walbridge, Dana
Partner: UNT Libraries Government Documents Department
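
The framework's own classes are not shown here; the filter mechanism the abstract credits for its base functionality is the standard servlet one, sketched below with hypothetical names for a cross-cutting concern such as request timing.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Standard servlet filter: cross-cutting logic runs before and after
    // every request without touching the MVC controllers. Illustrative
    // sketch, not the framework's actual code.
    public class RequestTimingFilter implements Filter {
        public void init(FilterConfig cfg) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            long start = System.currentTimeMillis();
            chain.doFilter(req, resp); // next filter, then the controller
            System.out.printf("%s took %d ms%n",
                    ((HttpServletRequest) req).getRequestURI(),
                    System.currentTimeMillis() - start);
        }
    }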

Data acquisition and analysis for the Fermilab Collider RunII

Description: Operating and improving the understanding of the Fermilab Accelerator Complex for the colliding beam experiments requires advanced software methods and tools. The Shot Data Acquisition and Analysis (SDA) system has been developed to fulfill this need. The SDA takes a standard set of critical data at relevant stages during the complex series of beam manipulations leading to √s ≈ 2 TeV collisions. Data is stored in a relational database and served to programs and users via Web-based tools. Summary tables are systematically generated during and after a store. Written entirely in Java, SDA supports both interactive tools and application interfaces used for in-depth analysis. In this talk, we present the architecture and describe some of our analysis tools. We also present some results on recent Tevatron performance as illustrations of the capabilities of SDA.
Date: July 7, 2004
Creator: Lebrun, Paul L. G. et al.
Partner: UNT Libraries Government Documents Department

Phylo-VISTA: An interactive visualization tool for multiple DNA sequence alignments

Description: Motivation: The power of multi-sequence comparison for biological discovery is well established, and sequence data from a growing list of organisms is becoming available. Thus, a need exists for computational strategies to visually compare multiple aligned sequences to support conservation analysis across various species. To be efficient, these visualization algorithms require the ability to universally handle a wide range of evolutionary distances while taking phylogeny into account. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing the similarity of DNA sequences among multiple species while considering their phylogenetic relationships. Features include a broad spectrum of resolution parameters for examining the alignment and the ability to easily compare any subtree of sequences within a complete alignment dataset. Phylo-VISTA uses VISTA concepts that have been successfully applied previously to a wide range of comparative genomics data analysis problems. Availability: Phylo-VISTA is an interactive Java applet available for download at http://graphics.cs.ucdavis.edu/~nyshah/Phylo-VISTA. It is also available online at http://www-gsd.lbl.gov/phylovista and is integrated with the global alignment program LAGAN at http://lagan.stanford.edu. Contact: phylovista@lbl.gov
Date: April 25, 2003
Creator: Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.; Brudno, Michael; Batzoglou, Serafim; Bethel, E. Wes et al.
Partner: UNT Libraries Government Documents Department

New results on the resistivity structure of Merapi Volcano (Indonesia), derived from 3D restricted inversion of long-offset transient electromagnetic data

Description: Three long-offset transient electromagnetic (LOTEM) surveys were carried out at the active volcano Merapi in Central Java (Indonesia) during the years 1998, 2000, and 2001. The measurements focused on the general resistivity structure of the volcanic edifice at depths of 0.5-2 km and the further investigation of a southside anomaly. The measurements were insufficient for a full 3D inversion scheme, which could enable the imaging of finely discretized resistivity distributions. Therefore, a stable, damped least-squares joint-inversion approach is used to optimize 3D models with a limited number of parameters. The models feature the realistic simulation of topography, a layered background structure, and additional coarse 3D blocks representing conductivity anomalies. Twenty-eight LOTEM transients, comprising both horizontal and vertical components of the magnetic induction time derivative, were analyzed. In view of the few unknowns, we were able to achieve reasonable data fits. The inversion results indicate an upwelling conductor below the summit, suggesting hydrothermal activity in the central volcanic complex. A shallow conductor due to a magma-filled chamber, at depths down to 1 km below the summit, suggested by earlier seismic studies, is not indicated by the inversion results. In conjunction with an anomalous-density model derived from a recent gravity study, our inversion results provide information about the southern geological structure resulting from a major sector collapse during the Middle Merapi period. The density model allows assessment of a porosity range and thus an estimated vertical salinity profile to explain the high conductivities on a larger scale, extending beyond the foothills of Merapi.
Date: June 14, 2006
Creator: Commer, Michael; Helwig, Stefan L.; Hordt, Andreas; Scholl, Carsten & Tezkan, Bulent
Partner: UNT Libraries Government Documents Department

Reconstruction of a 4D Particle Distribution Using Underdetermined Phase-Space Data

Description: A well defined 4D distribution that describes the transverse spatial coordinates (x,y) and momenta (x',y') of the particles that make up an intense ion beam is of great value to theorists in the field of particle beam physics. If such a distribution truthfully captures the characteristics of the actual beam, it can be used to initialize an extensive simulation, and can yield insight into the processes that affect beam quality. Creating a proper representative distribution of particles is a challenge because the problem is, in general, quite underdetermined. Data is collected through a pair of "optical slit" diagnostics which provide two 3D distributions, f(x,y,x') and f(x,y,y'); the challenge is to coalesce these into a full 4D distribution f(x,y,x',y'). Further difficulties are introduced because the data is collected at different longitudinal planes and must be "remapped" to a common plane, taking into account the convergence or divergence of the beam as well as any off-centering. This challenge was met by developing a suitable algorithm and implementing it as a "plug-in" for the popular scientific image analysis program ImageJ, written entirely in the Java programming language. The algorithm accomplishes the desired remapping and synthesizes a 4D particle distribution using Monte-Carlo techniques. Preliminary results show that this reconstructed distribution is consistent with actual data that was gathered from the same experiment using a different diagnostic. Also, "forward" particle-in-cell (PIC) simulations that use the reconstructed distribution match actual data gathered downstream in the experiment. Both these results give us some indication that the reconstruction is being done correctly. In addition to the multi-particle synthesis, the plug-in allows for the easy loading of digital data and the output of various plots that are useful to both experimenters and theorists. It also provides a framework by which its applicability can be extended to other types of ...
Date: August 10, 2005
Creator: Rostamizadeh, Afshin
Partner: UNT Libraries Government Documents Department
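
The plug-in itself is not reproduced in this listing. The core Monte-Carlo step is presumably along the lines of the sketch below: pick a transverse position (x, y) from the measured spatial intensity, then draw x' and y' independently from the two measured 3D distributions at that position. Treating x' and y' as conditionally independent given (x, y) is precisely the extra assumption an underdetermined reconstruction has to supply; all array layouts and names here are hypothetical.

    import java.util.Random;

    // Sketch of synthesizing 4D samples (x, y, x', y') from two 3D
    // measurements f(x,y,x') and f(x,y,y'). weight[i][j] is the beam
    // intensity in spatial cell (i, j); xp[i][j] / yp[i][j] hold the
    // slope values measured in that cell (assumed non-empty wherever
    // weight[i][j] > 0).
    public class Reconstruct4D {
        static double[][] synthesize(double[][] weight, double[][][] xp,
                                     double[][][] yp, int n, Random rng) {
            int nx = weight.length, ny = weight[0].length;
            // Cumulative distribution over cells, proportional to intensity.
            double[] cdf = new double[nx * ny];
            double total = 0;
            for (int i = 0; i < nx; i++)
                for (int j = 0; j < ny; j++)
                    cdf[i * ny + j] = (total += weight[i][j]);

            double[][] out = new double[n][4];
            for (int k = 0; k < n; k++) {
                double r = rng.nextDouble() * total;
                int c = 0;
                while (cdf[c] < r) c++;  // linear search; fine for a sketch
                int i = c / ny, j = c % ny;
                out[k][0] = i;           // cell indices stand in for x, y;
                out[k][1] = j;           // scale to physical units as needed
                out[k][2] = xp[i][j][rng.nextInt(xp[i][j].length)]; // draw x'
                out[k][3] = yp[i][j][rng.nextInt(yp[i][j].length)]; // draw y'
            }
            return out;
        }
    }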

The CDMS II data acquisition system

Description: The Data Acquisition System for the CDMS II dark matter experiment was designed and built when the experiment moved to its new underground installation at the Soudan Lab. The combination of remote operation and increased data load necessitated a completely new design. Elements of the original LabView system remained as stand-alone diagnostic programs, but the main data processing moved to a VME-based system with custom electronics for signal conditioning, trigger formation and buffering. The data rate was increased 100-fold and the automated cryogenic system was linked to the data acquisition. A modular server framework with associated user interfaces was implemented in Java to allow control and monitoring of the entire experiment remotely.
Date: January 1, 2011
Creator: Bauer, D.A. (Fermilab); Burke, S. (UC, Santa Barbara); Cooley, J. (Southern Methodist U.) et al.
Partner: UNT Libraries Government Documents Department

Grid Logging: Best Practices Guide

Description: The purpose of this document is to help developers of Grid middleware and application software generate log files that will be useful to Grid administrators, users, developers, and Grid middleware itself. Currently, most generated log files are useful only to the author of the program. Good logging practices are instrumental to performance analysis, problem diagnosis, and security auditing tasks such as incident tracing and damage assessment. This document does not discuss the issue of a logging API. It is assumed that a standard logging API such as syslog (C), log4j (Java), or logger (Python) is being used; a custom logging API or even printf could also be used. The key point is that the logs must contain the required information in the required format. At a high level of abstraction, the best practices for Grid logging are: (1) consistently structured, typed log events; (2) a standard high-resolution timestamp; (3) use of logging levels and categories to separate logs by detail and purpose; (4) consistent use of global and local identifiers; and (5) use of a regular, newline-delimited ASCII text format. The rest of this document describes each of these recommendations in detail.
Date: April 1, 2008
Creator: Tierney, Brian L. & Gunter, Dan
Partner: UNT Libraries Government Documents Department
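
Concretely, a log event following these five practices is a single newline-terminated ASCII line of typed key=value pairs. A minimal hedged sketch using log4j, the Java API the guide mentions; the event names, keys, and the GridTransfer class are illustrative inventions, and the high-resolution ISO 8601 timestamp and level would come from a pattern layout configured elsewhere.

    import org.apache.log4j.Logger;

    public class GridTransfer {
        private static final Logger log = Logger.getLogger(GridTransfer.class);

        public void startTransfer(String jobId, String file, long bytes) {
            // One newline-delimited ASCII event built from typed key=value
            // pairs. The global identifier (job.id) lets administrators
            // correlate this event with logs from other Grid components.
            log.info("event=transfer.start job.id=" + jobId
                    + " file=" + file + " size.bytes=" + bytes);
        }
    }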

SERVER DEVELOPMENT FOR NSLS-II PHYSICS APPLICATIONS AND PERFORMANCE ANALYSIS

Description: The beam commissioning software framework of the NSLS-II project adopts a client/server-based architecture to replace the more traditional monolithic high-level application approach. The server software under development is available via an open source SourceForge project named epics-pvdata, which consists of the modules pvData, pvAccess, pvIOC, and pvService. Two services that already exist in the pvService module are itemFinder and gather. Each service uses pvData to store in-memory transient data, pvAccess to transfer data over the network, and pvIOC as the service engine. Performance benchmarks for pvAccess and for both the gather and itemFinder services are presented in this paper, along with a performance comparison between pvAccess and Channel Access. For an ultra-low-emittance synchrotron radiation light source like NSLS-II, the control system requirements, especially for beam control, are tight. To control and manipulate the beam effectively, a use-case study and a theoretical evaluation have been performed. The analysis shows that model-based control is indispensable for beam commissioning and routine operation. However, there are many challenges, such as how to re-use a design model for on-line model-based control, and how to combine numerical methods for modeling a realistic lattice with analytical techniques for analysis of its properties. To meet these requirements and challenges, an adequate system architecture for the beam commissioning and operation software framework is critical. The existing traditional approaches are self-consistent and monolithic. Some of them have adopted the concept of a middle layer to separate low-level hardware processing from numerical algorithm computing, physics modelling, data manipulation and plotting, and error handling. However, none of the existing approaches satisfies the requirements. A new design has been proposed by introducing service-oriented architecture technology, and client interface development is underway. The design ...
Date: March 28, 2011
Creator: Shen, G. & Kraimer, M.
Partner: UNT Libraries Government Documents Department

TRACKING ACCELERATOR SETTINGS.

Description: Recording setting changes within an accelerator facility provides information that can be used to answer questions about when, why, and how changes were made to some accelerator system. This can be very useful during normal operations, but can also aid with security concerns and in detecting unusual software behavior. The Set History System (SHS) is a new client-server system developed at the Collider-Accelerator Department of Brookhaven National Laboratory to provide these capabilities. The SHS has been operational for over two years and currently stores about 100K settings per day into a commercial database management system. The SHS system consists of a server written in Java, client tools written in both Java and C++, and a web interface for querying the database of setting changes. The design of the SHS focuses on performance, portability, and a minimal impact on database resources. In this paper, we present an overview of the system design along with benchmark results showing the performance and reliability of the SHS over the last year.
Date: October 15, 2007
Creator: D'Ottavio, T. & Fu, W.
Partner: UNT Libraries Government Documents Department
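
The SHS schema is not given in the abstract. As a sketch of the kind of record a setting-history server has to persist, here is a plain JDBC insert; the table and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.Timestamp;

    // Sketch of persisting one setting change with plain JDBC. The
    // set_history table and its columns are hypothetical; the abstract
    // does not describe the real SHS schema.
    public class SetHistoryDao {
        private final Connection conn;

        public SetHistoryDao(Connection conn) { this.conn = conn; }

        public void record(String device, String property, String oldValue,
                           String newValue, String setBy) throws Exception {
            String sql = "INSERT INTO set_history "
                    + "(set_time, device, property, old_value, new_value, set_by) "
                    + "VALUES (?, ?, ?, ?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                ps.setString(2, device);
                ps.setString(3, property);
                ps.setString(4, oldValue);
                ps.setString(5, newValue);
                ps.setString(6, setBy);
                ps.executeUpdate();
            }
        }
    }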

DOECGF 2010 Site Report

Description: The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support includes answering questions about the tool, providing classes on how to use the tool, and performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects including the development of visualization techniques for large scale data exploration that are funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls. The visualization production systems include NFS servers to provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions ...
Date: April 12, 2010
Creator: Springmeyer, R. R.; Brugger, E. & Cook, R.
Partner: UNT Libraries Government Documents Department

Complete calculation of evaluated Maxwellian-averaged cross sections and their uncertainties for s-process nucleosynthesis

Description: The present contribution represents a significant improvement of our previous calculation of Maxwellian-averaged cross sections and astrophysical reaction rates. The addition of newly-evaluated neutron reaction libraries, such as ROSFOND and the Low-Fidelity Covariance Project, and improvements in data processing techniques allowed us to extend the calculation to the entire range of s-process nuclei, calculate Maxwellian-averaged cross section uncertainties for the first time, and provide additional insights on all currently available neutron-induced reaction data. Nuclear reaction calculations using ENDF libraries and current Java technologies will be discussed and new results will be presented.
Date: July 19, 2010
Creator: Pritychenko, B.
Partner: UNT Libraries Government Documents Department
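
For reference, the quantity being evaluated is the standard Maxwellian average of the energy-dependent cross section sigma(E) over a Maxwell-Boltzmann distribution at thermal energy kT, which in LaTeX reads:

    \langle \sigma \rangle_{kT}
      = \frac{2}{\sqrt{\pi}} \, \frac{1}{(kT)^{2}}
        \int_{0}^{\infty} \sigma(E) \, E \, e^{-E/kT} \, dE

In practice the integral is evaluated numerically from pointwise evaluated cross sections, and computing its uncertainty requires the covariance data that libraries such as the Low-Fidelity Covariance Project provide.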

A fast continuous magnetic field measurement system based on digital signal processors

Description: In order to study dynamic effects in accelerator magnets, such as the decay of the magnetic field during the dwell at injection and the rapid so-called "snapback" during the first few seconds of the resumption of the energy ramp, a fast continuous harmonics measurement system was required. A new magnetic field measurement system, based on the use of digital signal processors (DSP) and Analog to Digital (A/D) converters, was developed and prototyped at Fermilab. This system uses Pentek 6102 16 bit A/D converters and the Pentek 4288 DSP board with the SHARC ADSP-2106 family digital signal processor. It was designed to acquire multiple channels of data with a wide dynamic range of input signals, which are typically generated by a rotating coil probe. Data acquisition is performed under an RTOS, whereas processing and visualization are performed on a host computer. Firmware code was developed for the DSP to perform fast continuous readout of the A/D FIFO memory and integration over specified intervals, synchronized to the probe's rotation in the magnetic field. C, C++ and Java code was written to control the data acquisition devices and to process a continuous stream of data. The paper summarizes the characteristics of the system and presents the results of initial tests and measurements.
Date: September 1, 2005
Creator: Velev, G.V.; Carcagno, R.; DiMarco, J.; Kotelnikov, S.; Lamm, M.; Makulski, A. et al.
Partner: UNT Libraries Government Documents Department

Babel Fortran 2003 Binding for Structured Data Types

Description: Babel is a tool aimed at the high-performance computing community that addresses the need for mixing programming languages (Java, Python, C, C++, Fortran 90, FORTRAN 77) in order to leverage the specific benefits of those languages. Scientific codes often rely on structured data types (structs, derived data types) to encapsulate data, and Babel had been lacking in this type of support until recently. We present a new language binding that focuses on the interoperability of C/C++ with Fortran 2003. The new binding builds on the existing Fortran 90 infrastructure by using the iso_c_binding module defined in the Fortran 2003 standard as the basis for C/C++ interoperability. We present the technical approach for the new binding and discuss our initial experiences in applying the binding in FACETS (Framework Application for Core-Edge Transport Simulations) to integrate C++ with legacy Fortran codes.
Date: May 2, 2008
Creator: Muszala, S; Epperly, T & Wang, N
Partner: UNT Libraries Government Documents Department

Prototyping Faithful Execution in a Java virtual machine.

Description: This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, "Principles of Faithful Execution in the Implementation of Trusted Objects" (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
Date: September 1, 2003
Creator: Tarman, Thomas David; Campbell, Philip LaRoche & Pierson, Lyndon George
Partner: UNT Libraries Government Documents Department
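
The JavaFE internals are in the companion report, not in this listing. The general shape of a class loader that applies a cryptographic transform to bytecode before defining a class looks like the sketch below; the XOR "decrypt" is a deliberately trivial placeholder, not the scheme JavaFE uses.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Sketch of a class loader that cryptographically transforms class
    // bytes before defining them, in the spirit of (but not taken from)
    // the JavaFE design. A real implementation would use an authenticated
    // cipher and verify integrity before execution.
    public class CryptoClassLoader extends ClassLoader {
        private final byte[] key;

        public CryptoClassLoader(byte[] key) { this.key = key; }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                byte[] enc = Files.readAllBytes(
                        Path.of(name.replace('.', '/') + ".class.enc"));
                byte[] plain = decrypt(enc); // transform before the JVM sees it
                return defineClass(name, plain, 0, plain.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }

        // Placeholder transform (XOR stream).
        private byte[] decrypt(byte[] in) {
            byte[] out = new byte[in.length];
            for (int i = 0; i < in.length; i++)
                out[i] = (byte) (in[i] ^ key[i % key.length]);
            return out;
        }
    }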