
LONG TERM FILE MIGRATION - PART II: FILE REPLACEMENT ALGORITHMS

Description: The steady increase in the power and complexity of modern computer systems has encouraged the implementation of automatic file migration systems, which move files dynamically between mass storage devices and disk in response to user reference patterns. Using information describing thirteen months of text editor data set file references (analyzed in detail in the first part of this paper), we develop and evaluate algorithms for the selection of files to be moved from disk to mass storage. We find that algorithms based on both the file size and the time since the file was last used work well. The best realizable algorithms tested condition on the empirical distribution of the times between file references. Acceptable results are also obtained by selecting for replacement the file whose size times time since last reference is maximal. Comparisons are made with a number of standard algorithms developed for paging, such as Working Set. Sufficient information (parameter values, fitted equations) is provided that our algorithms may be easily implemented on other systems.
Date: October 1, 1978
Creator: Smith, Alan Jay
Partner: UNT Libraries Government Documents Department
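The space-time product rule in the abstract above (replace the file whose size times time since last reference is maximal) can be sketched in a few lines. This is a minimal illustration of the policy, not the paper's implementation; the file names and sizes are invented:

```python
import time

def pick_victim(files, now=None):
    """Return the file whose size times time-since-last-reference is
    maximal -- the 'space-time product' replacement rule described above.
    `files` maps name -> (size_bytes, last_ref_timestamp)."""
    if now is None:
        now = time.time()
    return max(files, key=lambda f: files[f][0] * (now - files[f][1]))

# Toy example: a big, cold file should be chosen over small or recently
# used ones.
files = {
    "small_hot.txt": (1_000,   990.0),   # small, recently referenced
    "big_hot.dat":   (900_000, 995.0),   # big but just referenced
    "big_cold.dat":  (500_000, 100.0),   # big and long unreferenced
}
print(pick_victim(files, now=1000.0))  # -> big_cold.dat
```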

GEOMETRY AND ELECTRONIC STRUCTURE OF (CO)3NiCH2. A MODEL TRANSITION METAL CARBENE

Description: The first application of nonempirical molecular electronic structure theory to a realistic transition metal carbene complex is reported. The system chosen was (CO)₃NiCH₂, methylene(tricarbonyl)nickel(0). All studies were carried out at the self-consistent-field (SCF) level. A large and flexibly contracted basis set was chosen, labeled Ni(15s 11p 6d/11s 8p 3d); C,O(9s 5p/4s 2p); H(5s/3s). In addition, the importance of methylene carbon d functions was investigated. The critical predicted equilibrium geometrical parameters were R[Ni-C(methylene)] = 1.83 Å and θ(HCH) = 108°. The sixfold barrier to rotation about the Ni-C(methylene) axis is small, ~0.2 kcal. The electronic structure of (CO)₃NiCH₂ is discussed and compared with those of the "naked" complex NiCH₂ and the stable Ni(CO)₄ molecule.
Date: April 1, 1980
Creator: Spangler, Dale; Wendoloski, John J.; Dupuis, Michel; Chen, Maynard M.L. & Schaefer III, Henry F.
Partner: UNT Libraries Government Documents Department

ON THE USE OF PRESSURE BROADENING DATA TO ASSESS THE ACCURACY OF CO-He INTERACTION POTENTIALS

Description: The purpose of this Note is to consider the agreement of a self-consistent-field plus configuration-interaction (SCF-CI) surface with pressure broadening data. Since pressure broadening has traditionally been used to obtain information about intermolecular forces, it is of particular interest to inquire whether this surface and an alternative electron gas potential predict any differences in pressure broadening which might provide a means of differentiating between them on the basis of experimental data. The cross sections from the SCF-CI potential would appear to be in marginally better agreement with experiment than those from the electron gas potential.
Date: June 1, 1980
Creator: Green, Sheldon & Thomas, Lowell D.
Partner: UNT Libraries Government Documents Department

IMAGES, IMAGES, IMAGES

Description: The role of images of information (charts, diagrams, maps, and symbols) for effective presentation of facts and concepts is expanding dramatically because of advances in computer graphics technology, increasingly hetero-lingual, hetero-cultural world target populations of information providers, the urgent need to convey more efficiently vast amounts of information, the broadening population of (non-expert) computer users, the decrease of available time for reading texts and for decision making, and the general level of literacy. A coalition of visual performance experts, human engineering specialists, computer scientists, and graphic designers/artists is required to resolve human factors aspects of images of information. The need for, nature of, and benefits of interdisciplinary effort are discussed. The results of an interdisciplinary collaboration are demonstrated in a product for visualizing complex information about global energy interdependence. An invited panel will respond to the presentation.
Date: July 1, 1980
Creator: Marcus, A.
Partner: UNT Libraries Government Documents Department

RATMAC PRIMER

Description: The language RATMAC is a direct descendant of one of the most successful structured FORTRAN languages, rational FORTRAN, RATFOR. RATMAC has all of the characteristics of RATFOR, but is augmented by a powerful recursive macro processor which is extremely useful in generating transportable FORTRAN programs. A macro is a collection of programming steps which are associated with a keyword. This keyword uniquely identifies the macro, and whenever it appears in a RATMAC program it is replaced by the collection of steps. This primer covers the language's control and decision structures, macros, file inclusion, symbolic constants, and error messages.
Date: October 1, 1980
Creator: Munn, R.J.; Stewart, J.M.; Norden, A.P. & Pagoaga, M. Katherine
Partner: UNT Libraries Government Documents Department
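The keyword-replacement behavior described above (a macro keyword is replaced by its collection of steps wherever it appears, and expansion is recursive) can be illustrated with a toy expander. This is a hypothetical sketch of the general idea, not RATMAC's actual syntax or processor:

```python
def expand(text, macros, depth=0):
    """Repeatedly replace macro keywords with their bodies.  Expansion is
    recursive: a macro body may itself contain macro keywords, as in the
    RATMAC processor described above.  (Toy whitespace-token sketch.)"""
    if depth > 20:                        # guard against runaway recursion
        raise RecursionError("macro expansion too deep")
    out = []
    for word in text.split():
        if word in macros:
            out.append(expand(macros[word], macros, depth + 1))
        else:
            out.append(word)
    return " ".join(out)

macros = {
    "SWAP":   "temp = a ; a = b ; b = temp",
    "ROTATE": "SWAP ; c = a",             # macros may invoke other macros
}
print(expand("begin ROTATE end", macros))
# -> begin temp = a ; a = b ; b = temp ; c = a end
```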

A geometric level set model for ultrasounds analysis

Description: We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.
Date: October 1, 1999
Creator: Sarti, A. & Malladi, R.
Partner: UNT Libraries Government Documents Department
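The edge-preserving geometric smoothing at the core of level set schemes like the one described above can be sketched with mean-curvature motion on a discrete grid. This is a toy explicit discretization under assumed unit grid spacing, not the authors' numerical scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def curvature_step(phi, dt=0.1, eps=1e-8):
    """One explicit step of mean-curvature motion,
        phi_t = (phi_xx phi_y^2 - 2 phi_x phi_y phi_xy + phi_yy phi_x^2)
                / (phi_x^2 + phi_y^2),
    which smooths along level lines while preserving edges."""
    px, py = np.gradient(phi)
    pxx = np.gradient(px, axis=0)
    pxy = np.gradient(px, axis=1)
    pyy = np.gradient(py, axis=1)
    num = pxx * py**2 - 2.0 * px * py * pxy + pyy * px**2
    return phi + dt * num / (px**2 + py**2 + eps)

# Smooth a noisy 'image': total variation should drop as the flow
# shrinks the small, noisy level sets.
phi = rng.normal(size=(64, 64))
tv = lambda u: np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
before = tv(phi)
for _ in range(20):
    phi = curvature_step(phi)
print(tv(phi) < before)
```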

Software Design for Particles in Incompressible Flow, Non-Subcycled Case

Description: To implement an AMR incompressible Navier-Stokes with particles algorithm, we have decided to use a non-subcycled algorithm to simplify the implementation of the particle drag forcing term. This requires a fairly broad redesign of the software from what was presented in [1], since we will no longer be using the AMR/AMRLevel base classes to manage the AMR hierarchy. The new classes map onto the functionality of the classes in the original design in a fairly straightforward way, as illustrated in Table 1. The new PAmrNS class takes on the functionality of the ParticleAMRNS class in the earlier implementation, along with the functionality of the AMR and AMRLevel classes in the Chombo AMRTimeDependent library. The new AmrCCProjector class replaces the original CCProjector class, while the new AMRParticleProjector class replaces the original ParticleProjector class. A basic diagram of the class relationships between the AMRINS-particles classes is depicted in Figure 1. The PAmrNS class will manage the AMR hierarchy and the non-subcycled advance. The non-subcycled advance is much simpler than the subcycled case, both in terms of algorithmic complexity (no need for synchronization projections, etc.) and in terms of software implementation. The AMRParticleProjector will do the particle projection originally implemented in the ParticleProjector class on an AMR hierarchy, including all image particle effects. The rest of the implementation (DragParticle, etc.) will be the same as in the original software design. Since much of the functionality and internal storage in the CCProjector class in the original AMRINS code is devoted to subcycling-related functionality, the AmrCCProjector is a stripped-down version of the CCProjector which contains only the functionality needed to do the multilevel cell-centered and face-centered projections.
Date: February 8, 2005
Creator: Martin, Daniel & Colella, Phil
Partner: UNT Libraries Government Documents Department

Assessment of Applying the PMaC Prediction Framework to NERSC-5 SSP Benchmarks

Description: NERSC procurement depends on application benchmarks, in particular the NERSC SSP. Machine vendors are asked to run SSP benchmarks at various scales to enable NERSC to assess system performance. However, it is often the case that the vendor cannot run the benchmarks at large concurrency, as it is impractical to have that much hardware available. Additionally, there may be difficulties in porting the benchmarks to the hardware. The Performance Modeling and Characterization Lab (PMaC) at the San Diego Supercomputer Center (SDSC) has developed a framework to predict the performance of codes on large parallel machines. The goal of this work was to apply the PMaC prediction framework to the NERSC-5 SSP benchmark applications and ultimately assess the accuracy of the predictions. Other tasks included identifying assumptions and simplifications in the process, determining the ease of use, and measuring the resources required to obtain predictions.
Date: September 30, 2006
Creator: Keen, Noel
Partner: UNT Libraries Government Documents Department

K-corrections and spectral templates of Type Ia supernovae

Description: With the advent of large dedicated Type Ia supernova (SN Ia) surveys, K-corrections of SNe Ia and their uncertainties have become especially important in the determination of cosmological parameters. While K-corrections are largely driven by SN Ia broadband colors, it is shown here that the diversity in spectral features of SNe Ia can also be important. For an individual observation, the statistical errors from the inhomogeneity in spectral features range from 0.01 (where the observed and rest-frame filters are aligned) to 0.04 (where the observed and rest-frame filters are misaligned). To minimize the systematic errors caused by an assumed SN Ia spectral energy distribution (SED), we outline a prescription for deriving a mean spectral template time series that incorporates a large and heterogeneous sample of observed spectra. We then remove the effects of broadband colors and measure the remaining uncertainties in the K-corrections associated with the diversity in spectral features. Finally, we present a template spectroscopic sequence near maximum light for further improvement on the K-correction estimate. A library of ~600 observed spectra of ~100 SNe Ia from heterogeneous sources is used for the analysis.
Date: March 20, 2007
Creator: Nugent, Peter E.; Hsiao, E.Y.; Conley, A.; Howell, D.A.; Sullivan, M.; Pritchet, C.J. et al.
Partner: UNT Libraries Government Documents Department

COMPASS, the COMmunity Petascale project for Accelerator Science and Simulation, a broad computational accelerator physics initiative

Description: Accelerators are the largest and most costly scientific instruments of the Department of Energy, with uses across a broad range of science, including colliders for particle physics and nuclear science and light sources and neutron sources for materials studies. COMPASS, the Community Petascale Project for Accelerator Science and Simulation, is a broad, four-office (HEP, NP, BES, ASCR) effort to develop computational tools for the prediction and performance enhancement of accelerators. The tools being developed can be used to predict the dynamics of beams in the presence of optical elements and space charge forces, to calculate the electromagnetic modes and wake fields of cavities, to model the cooling induced by comoving beams, and to simulate the acceleration of beams by intense fields in plasmas generated by beams or lasers. In SciDAC-1, the computational tools had multiple successes in predicting the dynamics of beams and beam generation. In SciDAC-2 these tools will be petascale enabled to allow the inclusion of an unprecedented level of physics for detailed prediction.
Date: July 16, 2007
Creator: Cary, J.R.; Spentzouris, P.; Amundson, J.; McInnes, L.; Borland, M.; Mustapha, B. et al.
Partner: UNT Libraries Government Documents Department

Parallel Access of Out-Of-Core Dense Extendible Arrays

Description: Datasets used in scientific and engineering applications are often modeled as dense multi-dimensional arrays. For very large datasets, the corresponding array models are typically stored out-of-core as array files. The array elements are mapped onto linear consecutive locations that correspond to the linear ordering of the multi-dimensional indices. Two conventional mappings used are the row-major order and the column-major order of multi-dimensional arrays. Such conventional mappings of dense array files severely limit the performance of applications and the extendibility of the dataset. First, an array file that is organized in, say, row-major order causes applications that subsequently access the data in column-major order to have abysmal performance. Second, any subsequent expansion of the array file is limited to only one dimension. Expansions of such out-of-core conventional arrays along arbitrary dimensions require storage reorganization that can be very expensive. We present a solution for storing out-of-core dense extendible arrays that resolves the two limitations. The method uses a mapping function F*(), together with information maintained in axial vectors, to compute the linear address of an extendible array element when passed its k-dimensional index. We also give the inverse function, F*^-1(), for deriving the k-dimensional index when given the linear address. We show how the mapping function, in combination with MPI-IO and a parallel file system, allows for the growth of the extendible array without reorganization and with no significant performance degradation of applications accessing elements in any desired order. We give methods for reading and writing sub-arrays into and out of parallel applications that run on a cluster of workstations. The axial vectors are replicated and maintained in each node that accesses sub-array elements.
Date: July 26, 2007
Creator: Otoo, Ekow J & Rotem, Doron
Partner: UNT Libraries Government Documents Department
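For orientation, the conventional row-major mapping that the abstract's F*() improves upon, and its inverse, can be written out directly. The paper's extendible mapping itself (with its per-extension axial vectors) is not reproduced here; this is only the fixed-shape baseline:

```python
def linearize(index, shape):
    """Conventional row-major mapping of a k-dimensional index to a
    linear address -- the baseline that forces reorganization when the
    array is extended along an arbitrary dimension."""
    addr = 0
    for i, n in zip(index, shape):
        assert 0 <= i < n
        addr = addr * n + i
    return addr

def delinearize(addr, shape):
    """Inverse mapping: linear address back to the k-dimensional index
    (the role played by F*^-1() in the extendible scheme)."""
    index = []
    for n in reversed(shape):
        index.append(addr % n)
        addr //= n
    return tuple(reversed(index))

shape = (4, 5, 6)
print(linearize((2, 3, 4), shape))  # -> 2*30 + 3*6 + 4 = 82
print(delinearize(82, shape))       # -> (2, 3, 4)
```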

Effects of d-electrons in pseudopotential screened-exchange density functional calculations

Description: We report a theoretical study on the role of shallow d states in the screened-exchange local density approximation (sX-LDA) band structure of binary semiconductor systems. We found that inaccurate pseudo-wavefunctions can lead to (1) an overestimation of the screened-exchange interaction between the localized d states and the delocalized higher energy s and p states and (2) an underestimation of the screened-exchange interaction between the d states. The resulting sX-LDA band structures have substantially smaller band gaps compared with experiments. We correct the pseudo-wavefunctions of d states by including the semicore s and p states of the same shell in the valence states. The correction of pseudo-wavefunctions yields band gaps and d state binding energies in good agreement with experiments and with full-potential linearized augmented plane wave sX-LDA calculations. Compared with the quasiparticle GW method, our sX-LDA results show not only similar quality on the band gaps but also much better d state binding energies. Combined with its capability for ground state structure calculation, sX-LDA is expected to be a valuable theoretical tool for II-VI and III-V (especially III-N) bulk semiconductor and nanostructure studies.
Date: September 12, 2007
Creator: Lee, Byounghak; Canning, Andrew & Wang, Lin-Wang
Partner: UNT Libraries Government Documents Department

The IceCube Data Acquisition Software: Lessons Learned during Distributed, Collaborative, Multi-Disciplined Software Development.

Description: In this experiential paper we report on lessons learned during the development of the data acquisition software for the IceCube project - specifically, how to effectively address the unique challenges presented by a distributed, collaborative, multi-institutional, multi-disciplined project such as this. While development progress in software projects is often described solely in terms of technical issues, our experience indicates that non- and quasi-technical interactions play a substantial role in the effectiveness of large software development efforts. These include: selection and management of multiple software development methodologies, the effective use of various collaborative communication tools, project management structure and roles, and the impact and apparent importance of these elements when viewed through the differing perspectives of hardware, software, scientific and project office roles. Even in areas clearly technical in nature, success is still influenced by non-technical issues that can escape close attention. In particular, we describe our experiences with software requirements specification, development methodologies and communication tools. We make observations on what tools and techniques have and have not been effective in this geographically dispersed (including the South Pole) collaboration and offer suggestions on how similarly structured future projects may build upon our experiences.
Date: September 21, 2007
Creator: Beattie, Keith S.; Day, Christopher; Glowacki, Dave; Hanson, Kael; Jacobsen, John et al.
Partner: UNT Libraries Government Documents Department

Optimization Strategies for the Vulnerability Analysis of the Electric Power Grid

Description: Identifying small groups of lines, whose removal would cause a severe blackout, is critical for the secure operation of the electric power grid. We show how power grid vulnerability analysis can be studied as a mixed integer nonlinear programming (MINLP) problem. Our analysis reveals a special structure in the formulation that can be exploited to avoid nonlinearity and approximate the original problem as a pure combinatorial problem. The key new observation behind our analysis is the correspondence between the Jacobian matrix (a representation of the feasibility boundary of the equations that describe the flow of power in the network) and the Laplacian matrix in spectral graph theory (a representation of the graph of the power grid). The reduced combinatorial problem is known as the network inhibition problem, for which we present a mixed integer linear programming formulation. Our experiments on benchmark power grids show that the reduced combinatorial model provides an accurate approximation and enables vulnerability analyses of real-sized problems with more than 10,000 power lines.
Date: November 13, 2007
Creator: Pinar, A.; Meza, J.; Donde, V. & Lesieutre, B.
Partner: UNT Libraries Government Documents Department
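The graph Laplacian that the analysis above relates to the power-flow Jacobian is easy to construct for a small example. The sketch below is illustrative only (a hypothetical 4-bus ring, far smaller than the paper's grids); it shows the spectral connectivity test that makes the Laplacian useful for vulnerability analysis:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes:
    the spectral-graph-theory object discussed above."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return L

# A tiny 4-bus ring grid (hypothetical): buses 0-1-2-3-0.
L = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
eigvals = np.sort(np.linalg.eigvalsh(L))
# The second-smallest eigenvalue (algebraic connectivity) is positive
# iff the grid graph is connected; it drops to zero when line removals
# split the network.
print(round(eigvals[1], 6))  # -> 2.0 for the 4-cycle
```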

Visualization of Scalar Adaptive Mesh Refinement Data

Description: Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
Date: December 6, 2007
Creator: VACET; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J. et al.
Partner: UNT Libraries Government Documents Department

A three-level BDDC algorithm for Mortar discretizations

Description: In this paper, a three-level BDDC algorithm is developed for the solution of large sparse algebraic linear systems arising from the mortar discretization of elliptic boundary value problems. The mortar discretization is considered on geometrically non-conforming subdomain partitions. In two-level BDDC algorithms, the coarse problem needs to be solved exactly; however, its size grows with the number of subdomains. To overcome this limitation, the three-level algorithm solves the coarse problem inexactly while maintaining a good rate of convergence. This is an extension of previous work on three-level BDDC algorithms for standard finite element discretizations. Estimates of the condition numbers are provided for the three-level BDDC method and numerical experiments are also discussed.
Date: December 9, 2007
Creator: Kim, H. & Tu, X.
Partner: UNT Libraries Government Documents Department

Solving partial differential equations on irregular domains with moving interfaces, with applications to superconformal electrodeposition in semiconductor manufacturing

Description: We present a numerical algorithm for solving partial differential equations on irregular domains with moving interfaces. Instead of the typical approach of solving in a larger rectangular domain, our approach performs most calculations only in the desired domain. To do so efficiently, we have developed a one-sided multigrid method to solve the corresponding large sparse linear systems. Our focus is on the simulation of the electrodeposition process in semiconductor manufacturing in both two and three dimensions. Our goal is to track the position of the interface between the metal and the electrolyte as the features are filled and to determine which initial configurations and physical parameters lead to superfilling. We begin by motivating the set of equations which model the electrodeposition process. Building on existing models for superconformal electrodeposition, we develop a model which naturally arises from a conservation law form of surface additive evolution. We then introduce several numerical algorithms, including a conservative material transport level set method and our multigrid method for one-sided diffusion equations. We then analyze the accuracy of our numerical methods. Finally, we compare our result with experiment over a wide range of physical parameters.
Date: December 10, 2007
Creator: Sethian, J.A. & Shan, Y.
Partner: UNT Libraries Government Documents Department

Monte Carlo without chains

Description: A sampling method for spin systems is presented. The spin lattice is written as the union of a nested sequence of sublattices, all but the last with conditionally independent spins, which are sampled in succession using their marginals. The marginals are computed concurrently by a fast algorithm; errors in the evaluation of the marginals are offset by weights. There are no Markov chains and each sample is independent of the previous ones; the cost of a sample is proportional to the number of spins (but the number of samples needed for good statistics may grow with array size). The examples include the Edwards-Anderson spin glass in three dimensions.
Date: December 12, 2007
Creator: Chorin, Alexandre J.
Partner: UNT Libraries Government Documents Department
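The chain-free idea above, independent samples whose weights offset errors in approximate marginals, is an instance of self-normalized importance sampling. The sketch below illustrates that mechanism on a single spin with an assumed crude marginal; it is not the paper's nested-sublattice algorithm, which applies the same weighting to whole lattices:

```python
import random, math

random.seed(0)

# Target: one +/-1 spin with P(s) proportional to exp(beta*s).  We draw
# independent samples from an *approximate* marginal q and offset the
# error with weights w = p/q, the role weights play in the chain-free
# sampler described above.
beta = 0.8
p = {+1: math.exp(beta), -1: math.exp(-beta)}
Z = p[+1] + p[-1]
q = {+1: 0.5, -1: 0.5}                  # crude approximate marginal

num, den = 0.0, 0.0
for _ in range(200_000):                # each sample is independent
    s = random.choice((+1, -1))
    w = p[s] / Z / q[s]                 # weight offsets the error in q
    num += w * s
    den += w

exact = math.tanh(beta)                 # exact <s> = tanh(beta)
print(abs(num / den - exact) < 0.05)    # -> True
```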

Synchronization in complex networks

Description: Synchronization processes in populations of locally interacting elements are the focus of intense research in physical, biological, chemical, technological and social systems. The many efforts devoted to understanding synchronization phenomena in natural systems now take advantage of the recent theory of complex networks. In this review, we report the advances in the comprehension of synchronization phenomena when oscillating elements are constrained to interact in a complex network topology. We also give an overview of the new emergent features arising from the interplay between the structure and the function of the underlying pattern of connections. Extensive numerical work as well as analytical approaches to the problem are presented. Finally, we review several applications of synchronization in complex networks to different disciplines: biological systems and neuroscience, engineering and computer science, and economics and the social sciences.
Date: December 12, 2007
Creator: Arenas, A.; Diaz-Guilera, A.; Moreno, Y.; Zhou, C. & Kurths, J.
Partner: UNT Libraries Government Documents Department
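The canonical model in the synchronization literature surveyed above is the Kuramoto system of coupled phase oscillators on a network. The sketch below simulates it on an all-to-all network with illustrative, invented parameters; above the critical coupling the order parameter r approaches 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Kuramoto oscillators coupled through an adjacency matrix A:
#   dtheta_i/dt = omega_i + (K/n) * sum_j A_ij * sin(theta_j - theta_i)
n, K, dt, steps = 20, 2.0, 0.01, 5000
A = np.ones((n, n)) - np.eye(n)         # all-to-all, for simplicity
omega = rng.normal(0.0, 0.1, n)         # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)

def order_parameter(theta):
    """r in [0, 1]: r = 1 means full phase synchronization."""
    return abs(np.exp(1j * theta).mean())

for _ in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + (K / n) * coupling)

print(order_parameter(theta) > 0.9)     # strong coupling -> near sync
```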

Object Classification at the Nearby Supernova Factory

Description: We present the results of applying new object classification techniques to the supernova search of the Nearby Supernova Factory. In comparison to simple threshold cuts, more sophisticated methods such as boosted decision trees, random forests, and support vector machines provide dramatically better object discrimination: we reduced the number of nonsupernova candidates by a factor of 10 while increasing our supernova identification efficiency. Methods such as these will be crucial for maintaining a reasonable false positive rate in the automated transient alert pipelines of upcoming large optical surveys.
Date: December 21, 2007
Creator: Aragon, Cecilia R.; Bailey, Stephen; Romano, Raquel; Thomas, Rollin C.; Weaver, B. A. et al.
Partner: UNT Libraries Government Documents Department

Domain evolution and polarization of continuously graded ferroelectric films

Description: A thermodynamic analysis of graded ferroelectric films demonstrates that in the equilibrium state the films are subdivided into a single-domain band and a polydomain band which consists of wedge-shaped domains. Polarization under an external electrostatic field proceeds through inter-band boundary movement due to growth or shrinkage of the wedge domains. It is shown how the domain structure and evolution are determined by the principal characteristics of the film: the distribution of the spontaneous polarization and the dielectric constant. Graded films exhibit a sharp increase of polarization with the field for weak fields, with a drop of the dielectric constant as the field increases. A general approach to finding the dependence of the displacement and the wedge-domain shape on the field is presented, together with analytical solutions for the p⁴ Landau-Devonshire and parabolic potentials.
Date: January 1, 2008
Creator: Roytburd, A. & Roytburd, V.
Partner: UNT Libraries Government Documents Department

Optimal information transmission in organizations: search and congestion

Description: We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective specialized solvers by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or centralized), whereas it is largely homogeneous (or decentralized) for high arrival rates. These observations are in line with a common transformation experienced by information-intensive organizations as their work flow has risen in recent years.
Date: January 1, 2008
Creator: Arenas, A.; Cabrales, A.; Danon, L.; Diaz-Guilera, A.; Guimera, R. & Vega-Redondo, F.
Partner: UNT Libraries Government Documents Department
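The threshold-of-collapse behavior described above appears already in the simplest congestion model, an M/M/1 queue, where the expected time in system W = 1/(mu - lam) stays finite below the service capacity mu and diverges as the arrival rate lam approaches it. This toy calculation only illustrates that threshold; the paper's network model is far richer:

```python
# M/M/1 queue: service rate mu, arrival rate lam < mu.
# Expected time in system W = 1/(mu - lam) blows up as lam -> mu,
# the analogue of the network's collapse threshold.
mu = 1.0
delays = {lam: 1.0 / (mu - lam) for lam in (0.5, 0.9, 0.99)}
for lam, w in delays.items():
    print(f"arrival rate {lam}: average delay {w:.1f}")
```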

Phase patterns of coupled oscillators with application to wireless communication

Description: Here we study the plausibility of a phase-oscillator dynamical model for TDMA in wireless communication networks. We show that emerging patterns of phase-locking states between oscillators can eventually oscillate in a round-robin schedule, in a similar way to models of pulse-coupled oscillators designed for this purpose. The results open the door to new communication protocols in continuously interacting networks of wireless communication devices.
Date: January 2, 2008
Creator: Arenas, A.
Partner: UNT Libraries Government Documents Department

Network Communication as a Service-Oriented Capability

Description: In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as "services" -- they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service -- it is provided, in general, as a "best effort" capability with no guarantees and only statistical predictability. In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science -- e.g., the multi-disciplinary simulation of next generation climate modeling and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) -- networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures. In this paper we describe ESnet's approach to each of these -- an approach that is part of an international community effort to have intra-distributed system communication be based on a service-oriented capability.
Date: January 8, 2008
Creator: Johnston, William; Metzger, Joe; Collins, Michael; Burrescia, Joseph; Dart, Eli et al.
Partner: UNT Libraries Government Documents Department