Search Results

LDRD final report : leveraging multi-way linkages on heterogeneous data.

Description: This report summarizes the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' LDRD project, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that scientifically meaningful factorizations can be recovered even with large amounts of missing data (50% or more). The project involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.
Date: September 1, 2010
Creator: Dunlavy, Daniel M. & Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
Partner: UNT Libraries Government Documents Department
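
The abstract above does not reproduce CPOPT itself. As a minimal sketch of optimization-based CP fitting with missing data, the following uses a third-order tensor, a binary weight tensor marking observed entries, and plain gradient descent on the masked least-squares objective; the function names, step size, and the use of simple gradient descent (rather than the solvers actually used in CPOPT) are illustrative assumptions.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Reconstruct a 3-way tensor from CP factor matrices A, B, C."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def fit_cp_missing(X, W, rank, steps=2000, lr=0.01, seed=0):
    """Fit a rank-`rank` CP model to X, ignoring entries where W == 0.

    Minimizes f(A,B,C) = 0.5 * || W * (X - [[A,B,C]]) ||_F^2 by gradient descent.
    """
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1
    for _ in range(steps):
        R = W * (cp_reconstruct(A, B, C) - X)   # residual on observed entries only
        # Gradients of f with respect to each factor matrix.
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A -= lr * gA
        B -= lr * gB
        C -= lr * gC
    return A, B, C

# Usage: recover factors from a tensor with roughly 50% of entries missing.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (10, 10, 10))
X = cp_reconstruct(A0, B0, C0)
W = (rng.random(X.shape) < 0.5).astype(float)   # observed-entry mask
A, B, C = fit_cp_missing(X, W, rank=2)
err = np.linalg.norm(W * (X - cp_reconstruct(A, B, C))) / np.linalg.norm(W * X)
print(f"relative fit error on observed entries: {err:.3f}")
```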

VISCOSITY OF AQUEOUS SODIUM CHLORIDE SOLUTIONS FROM 0 - 150°C

Description: A critical evaluation of data on the viscosity of aqueous sodium chloride solutions is presented. The literature was screened through October 1977, and a databank of evaluated data was established. Viscosity values were converted when necessary to degrees centigrade, centipoise, and molal concentration. The data were correlated with the aid of an empirical equation to facilitate interpolation and computer calculations. The result of the evaluation includes a table containing smoothed values for the viscosity of NaCl solutions to 150°C.
Date: October 1, 1977
Creator: Ozbek, H.; Fair, J.A. & Phillips, S.L.
Partner: UNT Libraries Government Documents Department
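
The report's empirical equation and evaluated databank are not given in the abstract. The sketch below only illustrates the kind of correlate-then-interpolate workflow described, fitting an assumed polynomial form for the logarithm of viscosity in temperature and molality; the functional form, coefficients, and synthetic stand-in data are illustrative assumptions, not the report's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (T in degC, molality m, viscosity eta in cP);
# the report's evaluated databank is not reproduced in the abstract.
T = rng.uniform(0.0, 150.0, 40)
m = rng.uniform(0.1, 5.0, 40)
eta = np.exp(0.1 - 0.008 * T + 0.09 * m) + rng.normal(0.0, 0.005, 40)

# Assumed correlation form (illustrative, not the report's equation):
#   ln(eta) = c0 + c1*T + c2*T^2 + c3*m + c4*m*T
X = np.column_stack([np.ones_like(T), T, T**2, m, m * T])
coef, *_ = np.linalg.lstsq(X, np.log(eta), rcond=None)

def eta_smooth(T, m):
    """Smoothed viscosity (cP) from the fitted correlation, for interpolation."""
    return float(np.exp(coef @ np.array([1.0, T, T**2, m, m * T])))

print(f"eta(75 degC, 2 molal) ~ {eta_smooth(75.0, 2.0):.3f} cP")
```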

Visualization Tools for Adaptive Mesh Refinement Data

Description: Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Date: May 9, 2007
Creator: Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian et al.
Partner: UNT Libraries Government Documents Department

Possible effects of competition on electricity consumers in the Pacific Northwest

Description: In part, the impetus for restructuring the U.S. electricity industry stems from the large regional disparities in electricity prices. Indeed, industry reforms are moving most rapidly in high-cost states, such as California and those in the Northeast. Legislators, regulators, and many others in states that enjoy low electricity prices, on the other hand, ask whether increased competition will benefit consumers in their states. This report quantifies the effects of increased competition on electricity consumers and producers in two regions, the Pacific Northwest and California. California's generating costs are roughly double those of the Northwest. We use a new strategic-planning model called Oak Ridge Competitive Electricity Dispatch (ORCED) to conduct these analyses. Specifically, we analyzed four cases: a pre-competition base case intended to represent conditions as they might exist under current regulation in the year 2000, a post-competition case in which customer loads and load shapes respond to real-time electricity pricing, a sensitivity case in which natural-gas prices are 20% higher than in the base case, and a sensitivity case in which the hydroelectric output in the Northwest is 20% less than in the base case. The ORCED analyses suggest that, absent regulatory intervention, retail competition would increase profits for producers in the Northwest and lower prices for consumers in California at the expense of consumers in the Northwest and producers in California. However, state regulators may be able to capture some or all of the increased profits and use them to lower electricity prices in the low-cost region. Perhaps the most straightforward way to allocate the costs and benefits to retail customers is through development of transition-cost charges or credits. With this option, the consumers in both regions can benefit from competition. The magnitude and even direction of bulk-power trading between regions depends strongly on the amount of hydroelectric power ...
Date: January 1, 1998
Creator: Hadley, S. & Hirst, E.
Partner: UNT Libraries Government Documents Department

On the computation of CMBR anisotropies from simulations of topological defects

Description: Techniques for computing the CMBR anisotropy from simulations of topological defects are discussed with an eye to getting as much information from a simulation as possible. Here we consider the practical details of which sums and multiplications to do and how many terms there are.
Date: May 1, 1997
Creator: Stebbins, A. & Dodelson, S.
Partner: UNT Libraries Government Documents Department
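
The sums and multiplications the abstract refers to are not reproduced there. For orientation, the standard spherical-harmonic bookkeeping underlying any CMBR anisotropy computation is the decomposition below; these are the generic relations, not the paper's defect-specific estimator.

```latex
\frac{\Delta T}{T}(\hat{n}) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\hat{n}),
\qquad
C_\ell = \frac{1}{2\ell+1} \sum_{m=-\ell}^{\ell} \left\langle |a_{\ell m}|^2 \right\rangle
```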

Empirical model for shell-corrected level densities

Description: An empirical model for calculating level densities of closed- and near-closed-shell nuclei has been developed and tested. This method is based on the calculation of shell-plus-pairing corrections for each relevant nuclide. A new version of the ALICE code is used to extract these corrections from the Myers-Swiatecki mass formula and to apply them to the calculation of effective excitations in level densities. The corrections are applied in a backshifted fashion to assure correct threshold dependence. We compare our calculated results with experimental data for the production of 56Ni and 88Y to test shell corrections near the f7/2 closure and the N=50 neutron shell. We also compare our results with those using pure Fermi gas (plus pairing) level densities, and with the more computationally intensive model of Kataria and Ramamurthy.
Date: April 29, 1997
Creator: Ross, M.A. & Blann, M.
Partner: UNT Libraries Government Documents Department
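
The abstract gives no formulas. For reference, a standard backshifted Fermi gas level density of the kind the comparison refers to is shown below, with the pairing correction Δ and a shell correction δ_shell subtracted from the excitation energy E to form the effective excitation U; the exact parameterization used in the ALICE code may differ.

```latex
U = E - \Delta - \delta_{\mathrm{shell}}, \qquad
\rho(U) \approx \frac{\sqrt{\pi}}{12\, a^{1/4}\, U^{5/4}}\; e^{2\sqrt{aU}}
```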

Dose refinement: ARAC's role

Description: The Atmospheric Release Advisory Capability (ARAC), located at the Lawrence Livermore National Laboratory, has been involved since the late 1970s in assessing consequences from nuclear and other hazardous material releases into the atmosphere. ARAC's primary role has been emergency response. However, after the emergency phase, there is still a significant role for dispersion modeling. This work usually involves refining the source term and, hence, the dose to the affected populations as additional information becomes available in the form of source term estimates (release rates, mix of material, and release geometry) and any measurements from passage of the plume and deposition on the ground. Many of the ARAC responses have been documented elsewhere [1]. Some of the more notable radiological releases for which ARAC has participated in the post-emergency phase are the 1979 Three Mile Island nuclear power plant (NPP) accident outside Harrisburg, PA, the 1986 Chernobyl NPP accident in the Ukraine, and the 1996 Japan Tokai nuclear processing plant explosion. ARAC has also done post-emergency phase analyses for the 1978 Russian satellite COSMOS 954 reentry and subsequent partial burnup of its onboard nuclear reactor, which deposited radioactive materials on the ground in Canada; the 1986 uranium hexafluoride spill in Gore, OK; the 1993 Russian Tomsk-7 nuclear waste tank explosion; and lesser releases of mostly tritium. In addition, ARAC has played a key role in the contingency planning for possible accidental releases during the launch of spacecraft with radioisotope thermoelectric generators (RTGs) on board (i.e., Galileo, Ulysses, Mars Pathfinder, and Cassini), and routinely exercises with the Federal Radiological Monitoring and Assessment Center (FRMAC) in preparation for offsite consequences of radiological releases from NPPs and nuclear weapon accidents or incidents. Several accident post-emergency phase assessments are discussed in this paper in order to illustrate ARAC's role in dose refinement. A brief description of the ...
Date: June 1, 1998
Creator: Baskett, R L; Ellis, J S & Sullivan, T J
Partner: UNT Libraries Government Documents Department

Fields and First Order Perturbation Effects in Two-Dimensional Conductor Dominated Magnets

Description: General expressions are given for the field and its expansion coefficients produced by a two dimensional conductor structure surrounded by iron with a circular inside boundary. Saturation effects are described in terms of the tangential field at that boundary. The effects of the following types of perturbations are discussed: displacement, rotation and error excitation of a conductor, change of conductor shape, and modification of the inside contour of the iron. A design criterion is given to minimize the error fields associated with a displacement of the iron shell relative to the conductor structure. Expressions for the force and torque acting on a conductor are derived both for the unperturbed and perturbed magnet. Formulae are presented that allow convenient and fast evaluation of pertinent quantities with a computer when the structure is too complicated for hand computations.
Date: July 1, 1969
Creator: Halbach, K.
Partner: UNT Libraries Government Documents Department
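
The report's expansion coefficients are not reproduced in the abstract. The generic two-dimensional complex multipole expansion against which such coefficient formulas are written is shown below, where B_n and A_n are the normal and skew coefficients; normalization and reference-radius conventions vary.

```latex
B_y(x, y) + i\, B_x(x, y) = \sum_{n=1}^{\infty} C_n\, (x + i y)^{\, n-1},
\qquad C_n = B_n + i\, A_n
```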

An Introduction to Algebraic Multigrid

Description: Algebraic multigrid (AMG) solves linear systems based on multigrid principles, but in a way that depends only on the coefficients in the underlying matrix. The author begins with a basic introduction to AMG methods, and then describes some more recent advances and theoretical developments.
Date: April 25, 2006
Creator: Falgout, R D
Partner: UNT Libraries Government Documents Department
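
As a companion to the introduction above, here is a minimal algebraic two-grid cycle in Python: weighted Jacobi smoothing plus a Galerkin coarse-grid correction through a supplied prolongation P. Constructing P automatically from the matrix entries is the heart of AMG and is exactly what this sketch leaves out; the aggregation-based P in the usage example is an illustrative stand-in.

```python
import numpy as np

def jacobi(A, x, b, sweeps=2, omega=2.0 / 3.0):
    """Weighted Jacobi smoother: damps the oscillatory part of the error."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x + omega * Dinv * (b - A @ x)
    return x

def two_grid(A, b, P, x, sweeps=2):
    """One algebraic two-grid cycle: smooth, coarse-grid correct, smooth.

    P is an n-by-nc prolongation (interpolation) matrix; an AMG method
    would build P from the entries of A, which this sketch does not do.
    """
    x = jacobi(A, x, b, sweeps)                # pre-smoothing
    r = b - A @ x                              # fine-grid residual
    Ac = P.T @ A @ P                           # Galerkin coarse operator
    ec = np.linalg.solve(Ac, P.T @ r)          # solve the coarse error equation
    x = x + P @ ec                             # coarse-grid correction
    return jacobi(A, x, b, sweeps)             # post-smoothing

# Usage: 1D Poisson matrix with piecewise-constant aggregation for P.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.kron(np.eye(n // 2), np.ones((2, 1)))   # aggregate pairs of points
b = np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, P, x)
print("residual norm:", np.linalg.norm(b - A @ x))
```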

LQCD Phase 1 Runs with P4RHMC

Description: These results represent the first set of runs with the p4rhmc code at 10 β values, with 2000-7000 trajectories each. This initial run sequence spanned roughly two weeks in late January and early February 2007. To manage the submission of dependent jobs: subSet.pl submits a set of dependent jobs for a single run; rmSet.pl removes a set of dependent jobs in reverse order of submission; and statSet.pl runs the pstat command and prints parsed output along with directory contents. The results of running the statSet.pl command are printed for three different times during the startup of the next sequence of runs using the milc code.
Date: February 13, 2007
Creator: Soltz, R & Gupta, R
Partner: UNT Libraries Government Documents Department

Science Prospects And Benefits with Exascale Computing

Description: Scientific computation has come into its own as a mature technology in all fields of science. Never before have we been able to accurately anticipate, analyze, and plan for complex events that have not yet occurred, from the operation of a reactor running at 100 million degrees centigrade to the changing climate a century down the road. Combined with the more traditional approaches of theory and experiment, scientific computation provides a profound tool for insight and solution as we look at complex systems containing billions of components. Nevertheless, it cannot yet do all we would like. Much of scientific computation's potential remains untapped in areas such as materials science, Earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security, because the scientific challenges are far too enormous and complex for the computational resources at hand. Many of these challenges are of immediate global importance. These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace. Planned petascale systems (capable of a petaflop, or 10^15 floating point operations per second) in the next 3 years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) in the next decade will provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Exascale computers, with a processing capability similar to that of the human brain, will enable the unraveling of longstanding scientific mysteries and present new opportunities. Table ES.1 summarizes these scientific opportunities, their key application areas, and the goals and associated benefits that would result from solutions afforded by exascale computing.
Date: December 1, 2007
Creator: Kothe, Douglas B
Partner: UNT Libraries Government Documents Department

Shifted power method for computing tensor eigenpairs.

Description: Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^{m-1} = λx subject to ||x|| = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SS-HOPM), which we show is guaranteed to converge to a tensor eigenpair. SS-HOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
Date: October 1, 2010
Creator: Mayo, Jackson R. & Kolda, Tamara Gibson
Partner: UNT Libraries Government Documents Department
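
The SS-HOPM iteration named in the abstract above is simple enough to sketch. Below is a minimal order-3 version with a fixed positive shift; the shift value, tolerance, and test tensor are illustrative, and the paper's treatment of negative shifts, shift selection, and complex eigenpairs is not shown.

```python
import numpy as np

def tensor_apply(A, x):
    """Compute the vector A x^{m-1} for a symmetric order-3 tensor A."""
    return np.einsum('ijk,j,k->i', A, x, x)

def ss_hopm(A, alpha, iters=500, tol=1e-10, seed=0):
    """Shifted symmetric higher-order power method (order 3, positive shift).

    Iterates x <- normalize(A x^{m-1} + alpha x); for alpha above a
    tensor-dependent bound the iteration is monotone and converges to a
    tensor eigenpair (lambda, x) with A x^{m-1} = lambda x, ||x|| = 1.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ tensor_apply(A, x)
    for _ in range(iters):
        y = tensor_apply(A, x) + alpha * x
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ tensor_apply(A, x_new)
        if abs(lam_new - lam) < tol:
            break
        x, lam = x_new, lam_new
    return lam, x

# Usage on a random tensor, symmetrized over all index orderings.
rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4, 4))
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
A = sum(np.transpose(T, p) for p in perms) / 6.0
lam, x = ss_hopm(A, alpha=6.0)   # alpha chosen generously for this example
print("eigenvalue:", lam,
      " residual:", np.linalg.norm(tensor_apply(A, x) - lam * x))
```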

Fixing convergence of Gaussian belief propagation

Description: Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges, it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
Date: January 1, 2009
Creator: Johnson, Jason K; Bickson, Danny & Dolev, Danny
Partner: UNT Libraries Government Documents Department
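
The double-loop construction is the paper's contribution and is not reproduced here. As background, below is a minimal sketch of plain synchronous GaBP for Ax = b using the standard precision/mean message updates; for general A this iteration is precisely what may fail to converge, which is the problem the paper addresses.

```python
import numpy as np

def gabp(A, b, iters=200, tol=1e-10):
    """Plain synchronous Gaussian belief propagation for A x = b.

    P[i, j] and mu[i, j] are the precision and mean of the message i -> j.
    Convergence is not guaranteed for general A; forcing it is what the
    paper's double-loop scheme adds (not implemented in this sketch).
    """
    n = len(b)
    P = np.zeros((n, n))      # message precisions
    mu = np.zeros((n, n))     # message means
    Pii = np.diag(A).copy()   # node prior precisions
    mii = b / Pii             # node prior means
    x = mii.copy()
    for _ in range(iters):
        P_new = np.zeros_like(P)
        mu_new = np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # Combine node prior with all messages into i except from j.
                Pij = Pii[i] + P[:, i].sum() - P[j, i]
                muij = (Pii[i] * mii[i] + P[:, i] @ mu[:, i]
                        - P[j, i] * mu[j, i]) / Pij
                P_new[i, j] = -A[i, j] ** 2 / Pij
                mu_new[i, j] = -A[i, j] * muij / P_new[i, j]
        P, mu = P_new, mu_new
        Pi = Pii + P.sum(axis=0)
        x_new = (Pii * mii + np.einsum('ki,ki->i', P, mu)) / Pi
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage on a diagonally dominant (tree-structured) system, where GaBP converges.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print("GaBP:", gabp(A, b), " exact:", np.linalg.solve(A, b))
```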

State Compensation: A No-cost Scheme for Scalable Failure Recovery in Tree-based Overlay Networks

Description: Tree-based overlay networks (TBONs) have become important for scalable data multicast and aggregation. This infrastructure's generality has led to widespread usage in large-scale and widely distributed environments, environments in which reliability must be addressed. This paper presents state compensation, a novel reliability concept for TBON environments that avoids explicit state replication (such as checkpoints) for failure recovery by leveraging general properties of TBON computations that allow computational state from non-failed processes to compensate for state lost from failed ones. In this paper, we present our state compensation mechanisms, prove sufficient properties of distributed computations that make these mechanisms feasible, and show how to derive computation-specific recovery primitives from these properties. We also present a case study of the recovery process. The result is a general TBON recovery model that requires no additional storage, network, or computational resources during normal operation.
Date: July 11, 2006
Creator: Arnold, D C & Miller, B P
Partner: UNT Libraries Government Documents Department

Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

Description: We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.
Date: May 15, 2006
Creator: Mayhall, D J; Stein, W & Gronberg, J B
Partner: UNT Libraries Government Documents Department

LDRD 149045 final report distinguishing documents.

Description: This LDRD 149045 final report describes work that Sandians Scott A. Mitchell, Randall Laviolette, Shawn Martin, Warren Davis, Cindy Philips, and Danny Dunlavy performed in 2010. Prof. Afra Zomorodian provided insight. This was a small late-start LDRD. Several other ongoing efforts were leveraged, including the Networks Grand Challenge LDRD and the Computational Topology CSRF project; some of the leveraged work is described here. We proposed a sentence mining technique that exploited both the distribution and the order of parts of speech (POS) in sentences in English language documents. The ultimate goal was to be able to discover 'call-to-action' framing documents hidden within a corpus of mostly expository documents, even if the documents were all on the same topic and used the same vocabulary. Using POS was novel. We also took a novel approach to analyzing POS. We used the hypothesis that English follows a dynamical system and that the POS are trajectories from one state to another. We analyzed the sequences of POS using support vector machines and the cycles of POS using computational homology. We discovered that the POS were a very weak signal and did not support our hypothesis well. Our original goal appeared to be unobtainable with our original approach. We turned our attention to studying an aspect of a more traditional approach to distinguishing documents. Latent Dirichlet Allocation (LDA) turns documents into bags of words and then into mixture-model points. A distance function is used to cluster groups of points to discover relatedness between documents. We performed a geometric and algebraic analysis of the most popular distance functions and made some significant and surprising discoveries, described in a separate technical report.
Date: September 1, 2010
Creator: Mitchell, Scott A.
Partner: UNT Libraries Government Documents Department
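
The distance-function analysis lives in the separate technical report. The sketch below only illustrates the pipeline the abstract describes (documents to bags of words to mixture-model points, then pairwise distances), using scikit-learn's LDA and the Hellinger distance as one representative choice; the toy corpus and the distance choice are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus (illustrative only).
docs = [
    "act now to secure funding for the network project",
    "the network project was funded through two programs",
    "topology of part of speech cycles in english text",
    "support vector machines classify part of speech sequences",
]

# Documents -> bags of words -> topic mixture points.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # rows are mixture-model points

def hellinger(p, q):
    """One representative distance between topic mixtures."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

D = np.array([[hellinger(p, q) for q in theta] for p in theta])
print(np.round(D, 3))   # pairwise distances used to cluster related documents
```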