
Optimizing parallel reduction operations

Description: A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as an array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use, and maximum size influence the final implementation. This paper (1) defines reduction syntax and compares it with traditional concurrent methods; (2) defines classes of reduction operations; (3) develops an analysis of the classes for optimized concurrency; (4) incorporates reductions into Sisal 1.2 and Sisal 90; and (5) evaluates the performance and size of the implementations.
Date: June 1, 1995
Creator: Denton, S.M.
Partner: UNT Libraries Government Documents Department
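
A note on why associativity is the key mathematical property here: an associative reduction can be regrouped into independent partial folds whose results are combined afterwards, which is what allows a compiler to parallelize it. A minimal Python sketch of that shape (not Sisal, and not from the paper; the chunking strategy and names are illustrative, and Python's GIL means no real speedup is claimed here):

    # Sketch: fold each chunk independently (parallelizable), then combine
    # the partial results. Any regrouping is legal only because op is
    # assumed associative, e.g. +, max, or a user-defined monoid operation.
    from concurrent.futures import ThreadPoolExecutor
    from functools import reduce
    from operator import add

    def parallel_reduce(op, values, chunks=8):
        n = max(1, len(values) // chunks)
        parts = [values[i:i + n] for i in range(0, len(values), n)]
        with ThreadPoolExecutor() as pool:
            partials = pool.map(lambda part: reduce(op, part), parts)
        return reduce(op, partials)      # combine the per-chunk results

    print(parallel_reduce(add, list(range(1_000_000))))   # 499999500000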

The picosecond dynamics of electron-hole pairs in graded and homogeneous CdSₓSe₁₋ₓ semiconductors

Description: Wavelength and composition dependence of the time-resolved luminescence were examined. Effects of macroscopic composition gradient and microscopic alloy disorder on e⁻-h⁺ pair dynamics were probed. Materials with both increasing and decreasing S content with distance from the surface were examined, where 0 ≤ x ≤ 1 over the full range. In these graded materials, the band gap energy also varies with position. The graded semiconductor luminescence shows strong wavelength dependence, showing diffusion in both band gap and concentration gradients. A bottleneck in the diffusion is attributed to localization occurring primarily in the materials with greatest alloy disorder, i.e., around CdS₀.₅Se₀.₅. Homogeneous materials were studied for x = 0, 0.25, 0.50, 0.75, 1; the time-resolved luminescence depends strongly on the composition. The mixed compositions have longer decay constants than CdS and CdSe. Observed lifetimes agree with a picture of localized states induced by the alloy disorder. For a given homogeneous crystal, no wavelength dependence of the time decays was observed. Picosecond luminescence upconversion spectroscopy was used to study further the dependence of the luminescence on composition. Large nonexponential character in the decay functions was observed in the alloys; this long time tail can be attributed to a broad distribution of relaxation times as modeled by the Kohlrausch exponential.
Date: May 1, 1995
Creator: Hane, J.K.
Partner: UNT Libraries Government Documents Department
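
The "Kohlrausch exponential" mentioned at the end is the stretched-exponential decay I(t) = I0*exp[-(t/tau)**beta] with 0 < beta <= 1; the smaller beta is, the longer the nonexponential tail. A small numeric illustration (all parameter values invented):

    import math

    def kohlrausch(t, i0, tau, beta):
        """Stretched-exponential decay I(t) = i0 * exp(-(t/tau)**beta)."""
        return i0 * math.exp(-((t / tau) ** beta))

    # Invented parameters: compare a pure exponential (beta = 1) with a
    # stretched one (beta = 0.5) at increasingly long times.
    for t in (10.0, 100.0, 1000.0):
        print(t, kohlrausch(t, 1.0, 50.0, 1.0), kohlrausch(t, 1.0, 50.0, 0.5))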

Optimization of the Mini-Flo flow cytometer

Description: A new method of collecting light scattering from a liquid flow cytometer has been proposed; this apparatus is named the Mini-Flo flow cytometer. The Mini-Flo uses a high numerical aperture collector immersed in the flow stream. The collector consists of a conically tipped fiber optic pipe and terminating optical detector. This study was performed to improve the signal/noise ratio and optimize the Mini-Flo's performance for HIV blood detection applications. Experiments were performed to gauge the effects of Raman scattering, lens/filter fluorescence, and fiber optic fluorescence on the Mini-Flo's performance and signal/noise ratio. Results indicated that the fiber optic was a major source of fluorescence noise and that reducing its length from 33 cm to 10 cm increased the signal/noise ratio from 8 to 75. Therefore, one of the key issues in optimizing the Mini-Flo's performance is a redesign of the holding structure such that the fiber optic length is minimized. Further improvements of the Mini-Flo's performance can be achieved by studying the polish of the fiber optic, the flow over the fiber optic's conical tip, and the optimal particle rates.
Date: June 1, 1996
Creator: Venkatesh, M.
Partner: UNT Libraries Government Documents Department

Nonequilibrium flows with smooth particle applied mechanics

Description: Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B spline, Lucy, and Cusp. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Bénard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
Date: July 1, 1995
Creator: Kum, O.
Partner: UNT Libraries Government Documents Department
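
Of the three weighting functions compared, the Lucy kernel has a simple closed form; here is a hedged 1D sketch of the basic smooth-particle density estimate it plugs into (the 5/(4h) prefactor normalizes the kernel to integrate to one in 1D; the particle setup is an invented toy):

    def lucy_1d(r, h):
        """Lucy weighting function in 1D, zero beyond one smoothing length."""
        q = abs(r) / h
        return (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3 if q < 1.0 else 0.0

    def density(x, positions, mass, h):
        """SPH density estimate: rho(x) = sum_j m_j * W(x - x_j, h)."""
        return sum(mass * lucy_1d(x - xj, h) for xj in positions)

    # Unit-mass particles at unit spacing should give rho close to 1.
    pts = [float(i) for i in range(-10, 11)]
    print(density(0.0, pts, mass=1.0, h=3.0))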

Characterization and refinement of carbide coating formation rates and dissolution kinetics in the Ta-C system

Description: The interaction between carbide coating formation rates and dissolution kinetics in the tantalum-carbon system was investigated. The research was driven by the need to characterize carbide coating formation rates, a characterization required to engineer an optimum processing scheme for fabricating the ultracorrosion-resistant composite, carbon-saturated tantalum. A packed-bed carburization process was successfully engineered and employed. The packed-bed carburization process produced consistent, predictable, and repeatable carbide coatings. A digital imaging analysis measurement process for accurate and consistent measurement of carbide coating thicknesses was developed. A process for removing the chemically stable and extremely hard tantalum-carbide coatings was also developed in this work.
Date: October 1, 1996
Creator: Rodriguez, P.J.
Partner: UNT Libraries Government Documents Department

Coupled elastic-plastic thermomechanically assisted diffusion: Theory development, numerical implementation, and application

Description: A fully coupled thermomechanical diffusion theory describing the thermally and mechanically assisted mass transport of dilute mobile constituents in an elastic solid is extended to include the effects of elastic-plastic deformation. Using the principles of modern continuum mechanics and classical plasticity theory, balance laws and constitutive equations are derived for a continuum composed of an immobile, but deformable, parent material and a dilute mobile constituent. The resulting equations are cast into a finite element formulation for incorporation into a finite element code. This code serves as a tool for modeling thermomechanically assisted phenomena in elastic-plastic solids. A number of simplified problems for which analytical solutions can be derived are used to benchmark the theory and finite element code. Potential uses of the numerical implementation of the theory are demonstrated using two problems. Specifically, tritium diffusion in a titanium alloy and hydrogen diffusion in a multiphase stainless steel are examined.
Date: December 1, 1995
Creator: Weinacht, D.J.
Partner: UNT Libraries Government Documents Department
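
The fully coupled finite element formulation is not reproduced here, but the transport idea at its core, a Fickian flux augmented by a stress-driven drift on the mobile constituent, can be sketched with a much simpler explicit 1D finite-difference model. Every symbol and value below (D, V, the linear stress ramp) is an illustrative assumption, not taken from the report:

    # Sketch: J = -D*dc/dx + (D*V/(R*T)) * c * dsigma/dx, with a
    # conservative update and no-flux ends; the constituent drifts
    # toward the high-stress end of the bar. All values invented.
    D, V, R, T = 1e-2, 2.0, 8.314, 300.0        # invented material constants
    n, dx, dt = 101, 1.0, 1.0
    c = [1.0] * n                               # uniform initial concentration
    sigma = [10.0 * i * dx for i in range(n)]   # invented tensile stress ramp

    for _ in range(5000):
        flux = [0.0] * (n + 1)                  # faces; ends stay 0 (no flux)
        for i in range(n - 1):
            dcdx = (c[i + 1] - c[i]) / dx
            dsdx = (sigma[i + 1] - sigma[i]) / dx
            cmid = 0.5 * (c[i] + c[i + 1])
            flux[i + 1] = -D * dcdx + D * V / (R * T) * cmid * dsdx
        for i in range(n):
            c[i] -= dt * (flux[i + 1] - flux[i]) / dx
    print(min(c), max(c))                       # depletion low, pile-up high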

Three dimensional simulations of space charge dominated heavy ion beams with applications to inertial fusion energy

Description: Heavy ion fusion requires injection, transport and acceleration of high current beams. Detailed simulation of such beams requires fully self-consistent space charge fields and three dimensions. WARP3D, developed for this purpose, is a particle-in-cell plasma simulation code optimized to work within the framework of an accelerator's lattice of accelerating, focusing, and bending elements. The code has been used to study several test problems and for simulations and design of experiments. Two applications are drift compression experiments on the MBE-4 facility at LBL and design of the electrostatic quadrupole injector for the proposed ILSE facility. With aggressive drift compression on MBE-4, anomalous emittance growth was observed. Simulations carried out to examine possible causes showed that essentially all of the emittance growth is a result of external forces on the beam and not of internal beam space-charge fields. The dominant external forces are the dodecapole component of the focusing fields, the image forces on the surrounding pipe and conductors, and the octopole fields that result from the structure of the quadrupole focusing elements. The goal of the design of the electrostatic quadrupole injector is to produce a beam of as low emittance as possible. The simulations show that the dominant effects that increase the emittance are the nonlinear octopole fields and the energy effect (fields in the axial direction that are off-axis). Injectors were designed that minimize the beam envelope in order to reduce the effect of the nonlinear fields. Alterations to the quadrupole structure that reduce the nonlinear fields further were examined. Comparisons with a scaled experiment resulted in very good agreement.
Date: November 1, 1994
Creator: Grote, D.P.
Partner: UNT Libraries Government Documents Department
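
WARP3D itself is a large accelerator code, but the particle-in-cell step it is built around can be shown in miniature: each particle's charge is shared between neighboring grid points with linear ("cloud-in-cell") weights before the self-consistent field solve. A hedged 1D sketch with invented data:

    nx, dx = 16, 1.0
    rho = [0.0] * (nx + 1)              # charge density at grid nodes
    particles = [(3.25, 1.0), (3.75, 1.0), (10.5, -2.0)]   # (position, charge)

    for x, q in particles:
        i = int(x / dx)                 # nearest grid node to the left
        w = x / dx - i                  # fractional distance to the right node
        rho[i] += q * (1.0 - w) / dx    # linear weighting to both neighbors
        rho[i + 1] += q * w / dx

    print(rho)   # a Poisson solve on rho would then give the space charge field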

New techniques for the scientific visualization of three-dimensional multi-variate and vector fields

Description: Volume rendering allows us to represent a density cloud with ideal properties (single scattering, no self-shadowing, etc.). Scientific visualization utilizes this technique by mapping an abstract variable or property in a computer simulation to a synthetic density cloud. This thesis extends volume rendering from its limitation of isotropic density clouds to anisotropic and/or noisy density clouds. Design aspects of these techniques are discussed that aid in the comprehension of scientific information. Anisotropic volume rendering is used to represent vector-based quantities in scientific visualization. Velocity and vorticity in a fluid flow, electric and magnetic waves in an electromagnetic simulation, and blood flow within the body are examples of vector-based information within a computer simulation or gathered from instrumentation. Understanding these fields can be crucial to understanding the overall physics or physiology. Three techniques for representing three-dimensional vector fields are presented: Line Bundles, Textured Splats and Hair Splats. These techniques are aimed at providing a high-level (qualitative) overview of the flows, offering the user a substantial amount of information with a single image or animation. Non-homogeneous volume rendering is used to represent multiple variables. Computer simulations can typically have over thirty variables, which describe properties whose understanding is useful to the scientist. Trying to understand each of these separately can be time consuming. Trying to understand any cause and effect relationships between different variables can be impossible. NoiseSplats is introduced to represent two or more properties in a single volume rendering of the data. This technique is also aimed at providing a qualitative overview of the flows.
Date: October 1, 1995
Creator: Crawfis, R.A.
Partner: UNT Libraries Government Documents Department
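
The splatting techniques named here all reduce, per ray, to compositing splat contributions in depth order with the standard "over" operator; a minimal sketch of that accumulation (sample values invented, and this is not code from the thesis):

    def composite(samples):
        """Back-to-front 'over' compositing of (color, opacity) samples."""
        color, alpha = 0.0, 0.0
        for c, a in samples:                    # ordered far to near
            color = c * a + (1.0 - a) * color
            alpha = a + (1.0 - a) * alpha
        return color, alpha

    print(composite([(0.9, 0.3), (0.2, 0.5), (0.7, 0.1)]))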

A numerical model of hydro-thermo-mechanical coupling in a fractured rock mass

Description: Coupled hydro-thermo-mechanical codes with the ability to model fractured materials are used for predicting groundwater flow behavior in fractured aquifers containing thermal sources. The potential applications of such a code include the analysis of groundwater behavior within a geothermal reservoir. The capability of modeling hydro-thermo systems with a dual porosity, fracture flow model has been previously developed in the finite element code FEHM. FEHM has been modified to include stress coupling with the dual porosity feature. FEHM has been further developed to implicitly couple the dependence of fracture hydraulic conductivity on effective stress within two-dimensional, saturated aquifers containing fracture systems. The cubic law for flow between parallel plates was used to model fracture permeability. The Barton-Bandis relationship was used to determine the fracture aperture within the cubic law. The code used a Newton-Raphson iteration to implicitly solve for six unknowns at each node. Results from a model of heat flow from a reservoir to the moving fluid in a single fracture compared well with analytic results. Results of a model showing the increase in fracture flow due to a single fracture opening under fluid pressure compared well with analytic results. A hot dry rock geothermal reservoir was modeled with realistic time steps, indicating that the modified FEHM code successfully models coupled flow problems with no convergence problems.
Date: June 1, 1996
Creator: Bower, K.M.
Partner: UNT Libraries Government Documents Department
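
The cubic law referenced here treats a fracture as laminar flow between smooth parallel plates, so transmissivity grows with the cube of the aperture; doubling the aperture multiplies the flow by eight. A quick numeric illustration under invented values (the Barton-Bandis aperture model itself is not reproduced):

    def cubic_law_flow(b, w, L, dp, mu):
        """Q = (w * b**3 / (12 * mu)) * (dp / L) for a parallel-plate fracture."""
        return w * b ** 3 / (12.0 * mu) * (dp / L)

    mu = 1.0e-3                      # roughly water viscosity, Pa*s
    for b in (1e-4, 2e-4, 4e-4):     # aperture in meters; each doubling -> 8x Q
        print(b, cubic_law_flow(b, w=1.0, L=10.0, dp=1.0e5, mu=mu))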

An Analytic Tool to Investigate the Effect of Binder on the Sensitivity of HMX-Based Plastic Bonded Explosives in the Skid Test

Description: This project will develop an analytical tool to calculate the performance of HMX-based PBXs in the skid test. The skid test is used as a means to measure sensitivity for large charges in handling situations. Each series of skid tests requires dozens of drops of large billets. It is proposed that the reaction (or lack of one) of PBXs in the skid test is governed by the mechanical properties of the binder. If true, one might be able to develop an analytical tool to estimate skid-test behavior for new PBX formulations. Others over the past 50 years have tried to develop similar models. This project will research and summarize the works of others and couple the work of three of them into an analytical tool that can be run on a PC to calculate the drop height of HMX-based PBXs. Detonation due to dropping a billet is argued to be a dynamic thermal event. To avoid detonation, the heat created by friction at impact must be conducted into the charge or the target faster than the chemical kinetics can create additional energy. The methodology will involve numerically solving the Frank-Kamenetskii equation in one dimension. The analytical problem needs to be bounded in terms of how much heat is introduced to the billet and for how long. Assuming an inelastic collision with no rebound, the billet will be in contact with the target for a short duration determined by the equations of motion. For the purposes of the calculations, it will be assumed that if a detonation is to occur, it will transpire within that time. The surface temperature will be raised according to the friction created, using the equations of motion of dropping the billet on a rigid surface. The study will connect the works of Charles Anderson, Alan Randolph, Larry ...
Date: February 1, 2005
Creator: Hayden, D.W.
Partner: UNT Libraries Government Documents Department
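
In nondimensional slab form the Frank-Kamenetskii equation reads theta_t = theta_xx + delta*exp(theta); the classical critical value is delta ~ 0.878 when lengths are scaled by the slab half-width, which corresponds to delta ~ 3.5 for the unit-width slab below. Under the critical value a steady profile exists; above it the temperature runs away, the analogue of the billet detonating. A hedged explicit finite-difference sketch (grid, time step, and the runaway cutoff are illustrative choices):

    import math

    def max_theta(delta, n=51, dt=1e-4, t_end=5.0):
        """Explicit FD for theta_t = theta_xx + delta*exp(theta), walls at 0."""
        dx = 1.0 / (n - 1)
        th = [0.0] * n
        for _ in range(int(t_end / dt)):
            new = th[:]
            for i in range(1, n - 1):
                lap = (th[i - 1] - 2.0 * th[i] + th[i + 1]) / dx ** 2
                new[i] = th[i] + dt * (lap + delta * math.exp(th[i]))
            th = new
            if max(th) > 10.0:          # treat unbounded growth as ignition
                return float("inf")
        return max(th)

    print(max_theta(2.0))   # subcritical: settles to a steady profile
    print(max_theta(5.0))   # supercritical: thermal runaway (inf)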

Effects of Various Blowout Panel Configurations on the Structural Response of Los Alamos National Laboratory Building 16-340 to Internal Explosions

Description: The risk of accidental detonation is present whenever any type of high explosives processing activity is performed. These activities are typically carried out indoors to protect processing equipment from the weather and to hide possibly secret processes from view. Often, highly strengthened reinforced concrete buildings are employed to house these activities. These buildings may incorporate several design features, including the use of lightweight frangible blowout panels, to help mitigate blast effects. These panels are used to construct walls that are durable enough to withstand the weather, but are of minimal weight to provide overpressure relief by quickly moving outwards and creating a vent area during an accidental explosion. In this study the behavior of blowout panels under various blast loading conditions was examined. External loadings from explosions occurring in nearby rooms were of primary interest. Several reinforcement systems were designed to help blowout panels resist failure from external blast loads while still allowing them to function as vents when subjected to internal explosions. The reinforcements were studied using two analytical techniques, yield-line analysis and modal analysis, and the hydrocode AUTODYN. A blowout panel reinforcement design was created that could prevent panels from being blown inward by external explosions. This design was found to increase the internal loading of the building by 20%, as compared with nonreinforced panels. Nonreinforced panels were found to increase the structural loads by 80% when compared to an open wall at the panel location.
Date: September 2005
Creator: Wilke, Jason P.; Pohs, Keith G. & Plumlee, Deidré A.
Partner: UNT Libraries Government Documents Department

The interaction of intense subpicosecond laser pulses with underdense plasmas

Description: Laser-plasma interactions have been of interest for many years, not only from a basic physics standpoint, but also for their relevance to numerous applications. Advances in laser technology in recent years have resulted in compact laser systems capable of generating subpicosecond (psec), 10¹⁶ W/cm² laser pulses. These lasers have provided a new regime in which to study laser-plasma interactions, a regime characterized by L_plasma ≥ 2L_Rayleigh > cτ. The goal of this dissertation is to experimentally characterize the interaction of a short pulse, high intensity laser with an underdense plasma (n₀ ≤ 0.05 n_cr). Specifically, the parametric instability known as stimulated Raman scatter (SRS) is investigated to determine its behavior when driven by a short, intense laser pulse. Both the forward Raman scatter instability and the backscattered Raman instability are studied. The coupled partial differential equations which describe the growth of SRS are reviewed and solved for typical experimental laser and plasma parameters. This solution shows the growth of the waves (electron plasma and scattered light) generated via stimulated Raman scatter. The dispersion relation is also derived and solved for experimentally accessible parameters. The solution of the dispersion relation is used to predict where (in k-space) and at what frequency (in ω-space) the instability will grow. Both the nonrelativistic and relativistic regimes of the instability are considered.
Date: May 11, 1995
Creator: Coverdale, C.A.
Partner: UNT Libraries Government Documents Department
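
For reference, the frequency and wavenumber matching conditions that underlie SRS, together with the dispersion relation each daughter wave must satisfy (textbook plasma physics results, not quoted from the dissertation), can be written as:

    % Pump (0) decays into scattered light (s) plus an electron plasma wave (ek).
    \begin{align}
      \omega_0 &= \omega_s + \omega_{ek}, \qquad k_0 = k_s + k_{ek}, \\
      \omega_{0,s}^2 &= \omega_{pe}^2 + c^2 k_{0,s}^2 \quad \text{(light waves)}, \\
      \omega_{ek}^2 &= \omega_{pe}^2 + 3 k_{ek}^2 v_{te}^2 \quad \text{(Bohm-Gross)}.
    \end{align}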

Interactive graphical tools for three-dimensional mesh redistribution

Description: Three-dimensional meshes modeling nonlinear problems such as sheet metal forming, metal forging, heat transfer during welding, the propagation of microwaves through gases, and automobile crashes require highly refined meshes in local areas to accurately represent areas of high curvature, stress, and strain. These locally refined areas develop late in the simulation and/or move during the course of the simulation, thus making it difficult to predict their exact location. This thesis is a systematic study of new tools scientists can use with redistribution algorithms to enhance the solution results and reduce the time to build, solve, and analyze nonlinear finite element problems. Participatory design techniques including Contextual Inquiry and Design were used to study and analyze the process of solving such problems. This study and analysis led to the in-depth understanding of the types of interactions performed by FEM scientists. Based on this understanding, a prototype tool was designed to support these interactions. Scientists participated in evaluating the design as well as the implementation of the prototype tool. The study, analysis, prototype tool design, and the results of the evaluation of the prototype tool are described in this thesis.
Date: March 1, 1996
Creator: Dobbs, L.A.
Partner: UNT Libraries Government Documents Department

Large-eddy simulation of turbulent flow using the finite element method

Description: The equations of motion describing turbulent flows (in both the low and high Reynolds-number regimes) are well established. However, present day computers cannot meet the enormous computational requirement for numerically solving the governing equations for common engineering flows in the high Reynolds number turbulent regime. The characteristic that makes turbulent, high Reynolds number flows difficult to simulate is the extreme range of time and space scales of motion. Most current engineering calculations are performed using semi-empirical equations, developed in terms of the flow mean (average) properties. These turbulence "models" (semi-empirical/analytical approximations) do not explicitly account for the eddy structures, and thus the temporal and spatial flow fluctuations are not resolved. In these averaging approaches, it is necessary to approximate all the turbulent structures using semi-empirical relations, and as a result, the turbulence models must be tailored for specific flow conditions and geometries, with parameters obtained (usually) from physical experiments. The motivation for this research is the development of a finite element turbulence modeling approach which will ultimately be used to predict the wind flow around buildings. Accurate turbulence models of building flow are needed to predict the dispersion of airborne pollutants. The building flow turbulence models used today are not capable of predicting the three-dimensional separating and reattaching flows without the manipulation of many empirical parameters. These empirical parameters must be set from experimental data, and they may vary unpredictably with building geometry, building orientation, and upstream flow conditions.
Date: February 15, 1995
Creator: McCallen, R. C.
Partner: UNT Libraries Government Documents Department

On the implementation of error handling in dynamic interfaces to scientific codes

Description: With the advent of powerful workstations with windowing systems, the scientific community has become interested in user-friendly interfaces as a means of promoting the distribution of scientific codes to colleagues. Distributing scientific codes to a wider audience can, however, be problematic because scientists, who are familiar with the problem being addressed but not aware of necessary operational details, are encouraged to use the codes. A friendlier environment that not only guides user inputs, but also helps catch errors, is needed. This thesis presents a dynamic graphical user interface (GUI) creation system with user-controlled support for error detection and handling. The system checks a series of constraints defining a valid input set whenever the state of the system changes and notifies the user when an error has occurred. A naive checking scheme was implemented that checks every constraint every time the system changes. However, this method examines many constraints whose values have not changed. Therefore, a minimum evaluation scheme that only checks those constraints that may have been violated was implemented. The system was implemented in a prototype, and user testing was used to determine whether it was a success. Users examined both the GUI creation system and the end-user environment. The users found both to be easy to use and efficient enough for practical use. Moreover, they concluded that the system would promote distribution.
Date: November 1, 1993
Creator: Solomon, C.J.
Partner: UNT Libraries Government Documents Department
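
The difference between the naive scheme and the minimum evaluation scheme comes down to indexing constraints by the inputs they read, so that a state change re-checks only the constraints that could have been violated. A hedged Python sketch of that indexing (class and field names invented):

    from collections import defaultdict

    class ConstraintChecker:
        """Re-evaluate only the constraints that depend on a changed field."""
        def __init__(self):
            self.by_field = defaultdict(list)     # field name -> constraints

        def add(self, fields, predicate, message):
            for f in fields:
                self.by_field[f].append((predicate, message))

        def on_change(self, field, state):
            return [msg for pred, msg in self.by_field[field]
                    if not pred(state)]

    checker = ConstraintChecker()
    checker.add(("tmin", "tmax"), lambda s: s["tmin"] < s["tmax"],
                "tmin must be less than tmax")
    state = {"tmin": 5.0, "tmax": 1.0}
    print(checker.on_change("tmax", state))   # only affected constraints run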

Efficient biased random bit generation for parallel processing

Description: A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models Burgers' equation ρ_t + ρρ_x = νρ_xx in one dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a one-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
Date: September 28, 1994
Creator: Slone, D.M.
Partner: UNT Libraries Government Documents Department
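
One standard way to produce biased random bits in bulk, plausibly the kind of optimization at issue here though not quoted from the thesis, is to combine whole words of fair bits with AND/OR so that every bit position independently comes up 1 with probability k/2^n, costing n fair words per biased word:

    import random

    def biased_word(k, n, bits=64):
        """Return `bits` independent bits, each 1 with probability k / 2**n.
        Scanning k from its least significant bit builds up the binary
        expansion of the probability: OR halves the distance to 1,
        AND halves the distance to 0."""
        acc = 0
        for j in range(n):
            r = random.getrandbits(bits)          # one word of fair bits
            acc = (r | acc) if (k >> j) & 1 else (r & acc)
        return acc

    # Empirical check for p = 5/16 = 0.3125:
    ones = sum(bin(biased_word(5, 4)).count("1") for _ in range(10_000))
    print(ones / (10_000 * 64))                   # ~0.3125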

High order harmonic generation in rare gases

Description: The process of high order harmonic generation in atomic gases has shown great promise as a method of generating extremely short wavelength radiation, extending far into the extreme ultraviolet (XUV). The process is conceptually simple. A very intense laser pulse (I ~ 10¹³-10¹⁴ W/cm²) is focused into a dense (~10¹⁷ particles/cm³) atomic medium, causing the atoms to become polarized. These atomic dipoles are then coherently driven by the laser field and begin to radiate at odd harmonics of the laser field. This dissertation is a study of both the physical mechanism of harmonic generation as well as its development as a source of coherent XUV radiation. Recently, a semiclassical theory has been proposed which provides a simple, intuitive description of harmonic generation. In this picture the process is treated in two steps. The atom ionizes via tunneling, after which its classical motion in the laser field is studied. Electron trajectories which return to the vicinity of the nucleus may recombine and emit a harmonic photon, while those which do not return will ionize. An experiment was performed to test the validity of this model wherein the trajectory of the electron as it orbits the nucleus or ion core is perturbed by driving the process with elliptically, rather than linearly, polarized laser radiation. The semiclassical theory predicts a rapid turn-off of harmonic production as the ellipticity of the driving field is increased. This decrease in harmonic production is observed experimentally, and a simple quantum mechanical theory is used to model the data. The second major focus of this work was on development of the harmonic "source". A series of experiments were performed examining the spatial profiles of the harmonics. The quality of the spatial profile is crucial if the harmonics are to be used as the source ...
Date: May 1, 1994
Creator: Budil, K.S.
Partner: UNT Libraries Government Documents Department
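
The semiclassical picture can be made concrete: the electron is born at rest at the ion at some phase of the field E(t) = E0*cos(wt) and then moves classically, and only certain birth phases give trajectories that return to the core. A hedged sketch in illustrative units (the sample phases and integration settings are invented for the demo):

    import math

    E0, w = 1.0, 1.0                       # illustrative field units

    def return_time(phi0_deg, t_max=20.0, dt=1e-4):
        """Time after birth at which the electron re-crosses x = 0, if ever."""
        phi0 = math.radians(phi0_deg)
        t, x, v = phi0 / w, 0.0, 0.0
        went_negative = False
        for _ in range(int(t_max / dt)):
            v -= E0 * math.cos(w * t) * dt   # a = -E for charge -1, unit mass
            x += v * dt
            t += dt
            if x < 0.0:
                went_negative = True
            elif went_negative:              # came back to the ion: may emit
                return t - phi0 / w
        return None                          # drifted away without returning

    for deg in (17, 45, 80, 100):            # birth phase after the field peak
        print(deg, return_time(deg))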

Presentation of dynamically overlapping auditory messages in user interfaces

Description: This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by the set of heuristic algorithms actually existed in test subjects.
Date: September 1, 1997
Creator: Papp, A.L.
Partner: UNT Libraries Government Documents Department
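
The heuristic scheduling idea, scoring candidate schedules and keeping the one with the lowest penalty, can be sketched with a purely temporal-overlap penalty; the dissertation's actual heuristics are perceptual, and the messages, weights, and candidate times below are all invented:

    from itertools import product

    messages = [("alert", 1.0), ("mail", 0.8), ("status", 0.5)]  # (name, duration)

    def penalty(starts):
        """Overlapped seconds are penalized; total latency breaks ties."""
        score = 0.0
        for (n1, d1), s1 in zip(messages, starts):
            for (n2, d2), s2 in zip(messages, starts):
                if n1 < n2:              # count each pair once
                    overlap = max(0.0, min(s1 + d1, s2 + d2) - max(s1, s2))
                    score += 10.0 * overlap
        return score + max(s + d for (_, d), s in zip(messages, starts))

    candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
    best = min(product(candidates, repeat=len(messages)), key=penalty)
    print(best, penalty(best))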

A Reduced Grid Method for a Parallel Global Ocean General Circulation Model

Description: A limitation of many explicit finite-difference global climate models is the timestep restriction caused by the decrease in cell size associated with the convergence of meridians near the poles. A computational grid in which the number of cells in the longitudinal direction is reduced toward high-latitudes, keeping the longitudinal width of the resulting cells as uniform as possible and increasing the allowable timestep, is applied to a three-dimensional primitive equation ocean-climate model. This "reduced" grid consists of subgrids which interact at interfaces along their northern and southern boundaries, where the resolution changes by a factor of three. Algorithms are developed to extend the finite difference techniques to this interface, focusing on the conservation required to perform long time integrations, while preserving the staggered spatial arrangement of variables and the numerics used on subgrids. The reduced grid eliminates the common alternative of filtering high-frequency modes from the solution at high-latitudes to allow a larger timestep and reduces execution time per model step by roughly 20 percent. The reduced grid model is implemented for parallel computer architectures with two-dimensional domain decomposition and message passing, with speedup results comparable to those of the original model. Both idealized and realistic model runs are presented to show the effect of the interface numerics on the model solution. First, a rectangular, mid-latitude, flat-bottomed basin with vertical walls at the boundaries is driven only by surface wind stress to compare three resolutions of the standard grid to reduced grid cases which use various interface conditions. Next, a similar basin with wind stress, heat, and fresh water forcing is used to compare the results of a reduced grid with those of a standard grid result while exercising the full set of model equations. Finally, global model runs, with topography, forcing, and physical parameters similar to those used for ...
Date: December 1, 1999
Creator: Wickett, M.E.
Partner: UNT Libraries Government Documents Department
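
At a 3:1 reduced-grid interface, conservation requires the three fine-cell face fluxes feeding one coarse cell to sum exactly to the coarse flux; a minimal sketch of that bookkeeping (flux values invented):

    fine_flux = [1.0, 2.0, 3.0, 0.5, 1.5, 2.5]    # per-face fluxes, fine side

    def coarsen_fluxes(fluxes, ratio=3):
        """Sum each group of `ratio` fine-face fluxes into one coarse flux."""
        return [sum(fluxes[i:i + ratio]) for i in range(0, len(fluxes), ratio)]

    coarse_flux = coarsen_fluxes(fine_flux)
    print(coarse_flux)                            # [6.0, 4.5]
    print(sum(coarse_flux) == sum(fine_flux))     # conservation holds: True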

Reflectivity of plasmas created by high-intensity, ultra-short laser pulses

Description: Experiments were performed to characterize the creation and evolution of high-temperature (T_e ~ 100 eV), high-density (n_e > 10²² cm⁻³) plasmas created with intense (~10¹²-10¹⁶ W/cm²), ultra-short (130 fs) laser pulses. The principal diagnostic was plasma reflectivity at optical wavelengths (614 nm). An array of target materials (Al, Au, Si, SiO₂) with widely differing electronic properties tested plasma behavior over a large set of initial states. Time-integrated plasma reflectivity was measured as a function of laser intensity. Space- and time-resolved reflectivity, transmission, and scatter were measured with a spatial resolution of ~3 µm and a temporal resolution of 130 fs. An amplified, mode-locked dye laser system was designed to produce ~3.5 mJ, ~130 fs laser pulses to create and nonintrusively probe the plasmas. Laser prepulse was carefully controlled to suppress preionization and give unambiguous, high-density plasma results. In metals (Al and Au), it is shown analytically that linear and nonlinear inverse bremsstrahlung absorption, resonance absorption, and vacuum heating explain time-integrated reflectivity at intensities near 10¹⁶ W/cm². In the insulator, SiO₂, a non-equilibrium plasma reflectivity model using tunneling ionization, Helmholtz equations, and Drude conductivity agrees with time-integrated reflectivity measurements. Moreover, a comparison of ionization and Saha equilibration rates shows that plasma formed by intense, ultra-short pulses can exist with a transient, non-equilibrium distribution of ionization states. All targets are shown to approach a common reflectivity at intensities ~10¹⁶ W/cm², indicating a material-independent state insensitive to atomic or solid-state details.
Date: June 1, 1994
Creator: Gold, D.M.
Partner: UNT Libraries Government Documents Department
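
The Drude piece of the reflectivity model can be sketched directly: a Drude dielectric function feeds the normal-incidence Fresnel formula, and reflectivity rises sharply once the plasma frequency exceeds the probe frequency. All values below are invented, and this is only the Drude-Fresnel step, not the full Helmholtz treatment described in the abstract:

    import cmath

    def drude_reflectivity(w, wp, nu):
        """eps(w) = 1 - wp**2 / (w*(w + i*nu)); R = |(1-n)/(1+n)|**2."""
        eps = 1.0 - wp ** 2 / (w * (w + 1j * nu))
        n = cmath.sqrt(eps)
        return abs((1.0 - n) / (1.0 + n)) ** 2

    w = 1.0                        # probe frequency, arbitrary units
    for wp in (0.5, 2.0, 5.0):     # below vs. above the critical density
        print(wp, drude_reflectivity(w, wp, nu=0.3))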

Predicting the transport properties of sedimentary rocks from microstructure

Description: Understanding transport properties of sedimentary rocks, including permeability, relative permeability, and electrical conductivity, is of great importance for petroleum engineering, waste isolation, environmental restoration, and other applications. These transport properties are controlled to a great extent by the pore structure. How pore geometry, topology, and the physics and chemistry of mineral-fluid and fluid-fluid interactions affect the flow of fluids through consolidated and partially consolidated porous media is investigated analytically and experimentally. Hydraulic and electrical conductivity of sedimentary rocks are predicted from the microscopic geometry of the pore space. Cross-sectional areas and perimeters of individual pores are estimated from two-dimensional scanning electron microscope (SEM) photomicrographs of rock sections. Results using Berea, Boise, Massilon, and Saint-Gilles sandstones show close agreement between the predicted and measured permeabilities. Good to fair agreement is found in the case of electrical conductivity. In particular, good agreement is found for a poorly cemented rock such as Saint-Gilles sandstone, whereas the agreement is not very good for well-cemented rocks. The possible reasons for this are investigated. The surface conductance contribution of clay minerals to the overall electrical conductivity is assessed. The effect of partial hydrocarbon saturation on overall rock conductivity, and on the Archie saturation exponent, is discussed. The region of validity of the well-known Kozeny-Carman permeability formulae for consolidated porous media and their relationship to the microscopic spatial variations of channel dimensions are established. It is found that the permeabilities predicted by the Kozeny-Carman equations are valid within a factor of three of the observed values.
Date: January 1, 1995
Creator: Schlueter, E.M.
Partner: UNT Libraries Government Documents Department
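
One way the measured pore areas and perimeters can feed a Kozeny-Carman-style estimate is through the hydraulic radius r_h = area/perimeter; a hedged sketch with invented pore data and shape factor (the thesis's actual estimator may well differ):

    pores = [(120.0, 48.0), (300.0, 75.0), (80.0, 40.0)]  # (area, perimeter), um
    phi, c = 0.2, 2.5              # invented porosity and Kozeny shape factor

    r_h = sum(a / p for a, p in pores) / len(pores)       # mean hydraulic radius
    k = phi * r_h ** 2 / c                                # permeability in um**2
    print(r_h, k)                  # 1 um**2 is roughly 1 darcy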

Evaluation of the ⁴I₁₁/₂ terminal level lifetime for several neodymium-doped laser crystals and glasses

Description: All models of lasing action require knowledge of the physical parameters involved, many of which can be measured or estimated. The value of the terminal level lifetime is an important parameter in modeling many high power laser systems, since the terminal level lifetime can have a substantial impact on the extraction efficiency of the system. However, the values of the terminal level lifetimes for a number of important laser materials such as Nd:YAG and Nd:YLF are not well known. The terminal level lifetime, a measure of the time it takes for the population to drain out of the terminal (lower) lasing level, has values that can range from picoseconds to microseconds depending on the host medium, thus making it difficult to construct one definitive experiment for all materials. Until recently, many of the direct measurements of the terminal level lifetime employed complex energy extraction or gain recovery methods coupled with a numerical model, which often resulted in large uncertainties in the measured lifetimes. In this report we demonstrate a novel and more accurate approach which employs a pump-probe technique to measure the terminal level lifetime of 16 neodymium-doped materials. An alternative yet indirect method, which is based on the "Energy Gap Law," is to measure the nonradiative lifetime of another transition which has the same energy gap as the transition of the terminal level lifetime. Employing this simpler approach, we measured the lifetime for 30 neodymium-doped materials. We show for the first time a direct comparison between the two methods and determine that the indirect method can be used to infer the terminal level lifetime within a factor of two for most neodymium-doped glasses and crystals.
Date: April 25, 1995
Creator: Bibeau, C.
Partner: UNT Libraries Government Documents Department

Field investigation of keyblock stability

Description: Discontinuities in a rock mass can intersect an excavation surface to form discrete blocks (keyblocks) which can be unstable. This engineering problem is divided into two parts: block identification, and evaluation of block stability. One stable keyblock and thirteen fallen keyblocks were observed in field investigations at the Nevada Test Site. Nine blocks were measured in detail sufficient to allow back-analysis of their stability. Measurements included block geometry, and discontinuity roughness and compressive strength. Back-analysis correctly predicted stability or failure in all but two cases. These two exceptions involved situations that violated the stress assumptions of the stability calculations. Keyblock faces correlated well with known joint set orientations. The effect of tunnel orientation on keyblock frequency was apparent. Back-analysis of physical models successfully predicted block pullout force for two-dimensional models of unit thickness. Two-dimensional (2D) and three-dimensional (3D) analytic models for the stability of simple pyramidal keyblocks were examined. Calculated stability is greater for 3D analyses than for 2D analyses. Calculated keyblock stability increases with larger in situ stress magnitudes, larger lateral stress ratios, and larger shear strengths. Discontinuity stiffness controls block displacement more strongly than it does stability itself. Large keyblocks are less stable than small ones, and stability increases as blocks become more slender. Rock mass temperature decreases reduce the confining stress magnitudes and can lead to failure. The pattern of stresses affecting each block face explains conceptually the occurrence of pyramidal keyblocks that are truncated near their apex.
Date: April 1, 1985
Creator: Yow, J.L. Jr.
Partner: UNT Libraries Government Documents Department
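
The back-analysis described rests on limit equilibrium: a keyblock is stable while the resisting forces on its faces exceed the driving component of its weight. A hedged single-plane sliding sketch (all geometry and strength values invented; real keyblocks involve several faces and the in situ stress state discussed above):

    import math

    def factor_of_safety(weight, dip_deg, friction_deg, cohesion, area):
        """FS = resisting / driving for sliding on one plane; FS < 1 falls."""
        dip, phi = math.radians(dip_deg), math.radians(friction_deg)
        driving = weight * math.sin(dip)       # down-dip weight component
        normal = weight * math.cos(dip)        # clamps the sliding plane
        resisting = cohesion * area + normal * math.tan(phi)
        return resisting / driving

    print(factor_of_safety(weight=500.0, dip_deg=55.0, friction_deg=35.0,
                           cohesion=0.0, area=2.0))   # ~0.49: block falls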