Search Results

open access

Deposit Summary

Description: Deposit summary of $150.00 made on December 28, 2001.
Date: January 11, 2002
Partner: UNT Libraries Special Collections
open access

Reconciliation Report

Description: Reconciliation report with an ending account balance of $1,193.54 reconciled for the period ending on January 11, 2002.
Date: January 11, 2002
Partner: UNT Libraries Special Collections
open access

Time Reversal Signal Processing in Communications - A Feasibility Study

Description: A typical communications channel is subjected to a variety of signal distortions, including multipath, that corrupt the information being transmitted and reduce the effective channel capacity. The mitigation of the multipath interference component is an ongoing concern for communication systems operating in complex environments such as might be experienced inside buildings, urban environments, and hilly or heavily wooded areas. Communications between mobile units and distributed sensors, so important to national security, are dependent upon flawless conveyance of information in complex environments. The reduction of this multipath corruption necessitates better channel equalization, i.e., the removal of channel distortion to extract the transmitted information. But the current state of the art in channel equalization either requires a priori knowledge of the channel or the use of a known training sequence and adaptive filtering. If the "assumed" model within the equalization processor does not at least capture the dominant characteristics of the channel, then the received information may still be highly distorted and possibly useless. Also, the processing required for classical equalization is demanding in computational resources. To remedy this situation, many techniques have been investigated to replace classical equalization. Such a technique, the subject of this feasibility study, is Time Reversal Signal Processing (TRSP). Multipath is particularly insidious and a major factor in the deterioration of communication channels. Unlike most other characteristics that corrupt a communications channel, the detrimental effects of multipath cannot be overcome by merely increasing the transmitted power.
Although the power in a signal diminishes as a function of the distance between the transmitter and receiver, multipath further degrades a signal by creating destructive interference that results in a loss of received power in a very localized area, a loss often referred to as fading. Furthermore, multipath can reduce the effectiveness of a channel by increasing inter-symbol interference. Here, …
Date: January 30, 2002
Creator: Meyer, A W; Candy, J V & Poggio, A J
Partner: UNT Libraries Government Documents Department
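The matched-filter idea behind time-reversal processing can be sketched numerically: if the transmitter pre-filters the symbol stream with the time-reversed channel response, the physical channel then acts as its own matched filter and concentrates the multipath energy into a single dominant peak. A minimal sketch with a hypothetical three-tap multipath channel (none of these tap values or lengths come from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipath channel impulse response: a direct path plus two echoes.
h = np.zeros(40)
h[0], h[7], h[19] = 1.0, 0.6, 0.35

# Time-reversal pre-filtering: convolve the symbols with the time-reversed
# channel estimate before transmission, so the physical channel performs the
# matched filtering (an autocorrelation) on the way to the receiver.
symbols = rng.choice([-1.0, 1.0], size=200)
precoded = np.convolve(symbols, h[::-1])   # transmitter side
received = np.convolve(precoded, h)        # physical multipath channel

# Effective end-to-end channel = autocorrelation of h; its central peak
# dominates the residual multipath sidelobes.
g = np.convolve(h[::-1], h)
peak = np.max(np.abs(g))
sidelobe = np.max(np.abs(np.delete(g, np.argmax(np.abs(g)))))
print(peak, sidelobe)
```

The peak-to-sidelobe ratio is what makes the received symbols easier to detect without an explicit equalizer; in a real system the channel estimate itself would have to be measured, which is part of what the feasibility study addresses.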
open access

Exchange-Coupling in Magnetic Nanoparticles to Enhance Magnetostrictive Properties

Description: Spark erosion is a versatile and economical method for producing particles of virtually any type of material that has a nominal conductivity: particles can be prepared in sizes ranging from a few nm to tens of µm. The purpose of this feasibility study was to demonstrate the capability of making spherical particles of specific magnetic materials. We chose (Tb,Dy)Fe₂ (Terfenol-D) due to its potential use as the magnetostrictive component in magneto-elastomer composites. We also chose to work with pure Ni as a model system. Improvements in the properties of magneto-elastomer composites have broad applications in the areas of sensor development, enhanced actuators, and damping systems.
Date: January 31, 2002
Creator: Radousky, H; McElfresh, M; Berkowitz, A & Carman, G P
Partner: UNT Libraries Government Documents Department
open access

LDRD Final Report - 01-FS-004

Description: This report describes the results from an experimental program to investigate the feasibility of laser produced MeV protons as a diagnostic of electric fields or shock compressed materials. The experimental campaign was very successful, and has led to substantial advances in the characterization and optimization of proton sources from ultra-intense laser-solid interaction. This is a subject of the highest scientific interest [1] and is highly relevant to developing its use as a possible NIF implosion diagnostic and other applications relevant to stockpile stewardship.
Date: January 23, 2002
Creator: Mackinnon, A. J.
Partner: UNT Libraries Government Documents Department
open access

Declaration Patent for the Invention of Device for Pulling Halyard

Description: The device for stretching a halyard consists of a frame with coupler and clamping rollers mounted on it in pairs, a drive for rotating the coupler rollers, and a clamping device with a clamping spring. The clamping device is distinguished in that the clamping rollers are mounted in a separate movable bracket, which is connected to the frame by a hinge. A releasing spring is inserted between the frame and the movable bracket. The clamping device is equipped with a movable holder for the clamping rollers, kinematically connected with the coupler rollers by means of cardan joints. This assures rotation of the movable bracket over the frame and synchronous rotation of the coupler and clamping rollers in different directions. All rollers are connected with the rotation drive via an electromagnetic sleeve. A linear drive for spring pressing is mounted between the movable bracket and the clamping spring. The end of the releasing spring is connected with the electromagnet rotor.
Date: January 31, 2002
Creator: Anuprienko, G. E.; Karpachov, Y. A.; Rowland, M. S.; Savenko, Y. M. & Smith, C. F.
Partner: UNT Libraries Government Documents Department
open access

White Paper on Institutional Capability Computing Requirements

Description: This paper documents the need for a rapid, order-of-magnitude increase in the computing infrastructure provided to scientists working in the unclassified domains at Lawrence Livermore National Laboratory. This proposed increase could be viewed as a step in a broader strategy linking hardware evolution to applications development that would take LLNL unclassified computational science to a position of distinction, if not preeminence, by 2006. We believe that it is possible for LLNL institutional scientists to gain access late this year to a new system with a capacity roughly 80% to 200% that of the 12-TF/s (twelve trillion floating-point operations per second) ASCI White system for a cost that is an order of magnitude lower than the White system. This platform could be used for first-class science-of-scale computing and for the development of aggressive, strategically chosen applications that can challenge the near PF/s (petaflop/s, a thousand trillion floating-point operations per second) scale systems ASCI is working to bring to the LLNL unclassified environment in 2005. As the distilled scientific requirements data presented in this document indicate, great computational science is being done at LLNL--the breadth of accomplishment is amazing. The computational efforts make it clear what a unique national treasure this Laboratory has become. While the projects cover a wide and varied application space, they share three elements--they represent truly great science, they have broad impact on the Laboratory's major technical programs, and they depend critically on big computers.
Date: January 29, 2002
Creator: Kissel, Lynn; McCoy, Michael G. & Seager, Mark K.
Partner: UNT Libraries Government Documents Department
open access

Carbon Nanotube Atomic Force Microscopy for Proteomics and Biological Forensics

Description: The Human Genome Project was focused on mapping the complete genome. Yet, understanding the structure and function of the proteins expressed by the genome is the real end game. But there are approximately 100,000 proteins in the human body, and the atomic structure has been determined for less than 1% of them. Given the current rate at which structures are being solved, it will take more than one hundred years to complete this task. The rate-limiting step in protein structure determination is the growth of high-quality single crystals for X-ray diffraction. Synthesis of the protein stock solution as well as X-ray diffraction and analysis can now often be done in a matter of weeks, but developing a recipe for crystallization can take years and, especially in the case of membrane proteins, is often completely unsuccessful. Consequently, techniques that can either help to elucidate the factors controlling macromolecular crystallization, increase the amount of structural information obtained from crystallized macromolecules, or eliminate the need for crystallization altogether are of enormous importance. In addition, potential applications for those techniques extend well beyond the challenges of proteomics. The global spread of modern technology has brought with it an increasing threat from biological agents such as viruses. As a result, developing techniques for identifying and understanding the operation of such agents is becoming a major area of forensic research for DOE. Prior to this project, we showed that we can use in situ atomic force microscopy (AFM) to image the surfaces of growing macromolecular crystals with molecular resolution (1-5). In addition to providing unprecedented information about macromolecular nucleation, growth, and defect structure, these results allowed us to obtain low-resolution phase information for a number of macromolecules, providing structural information that was not obtainable from X-ray diffraction (3).
For some virus systems, we have shown that …
Date: January 1, 2002
Creator: Noy, A.; De Yoreo, J. J. & Malkin, A. J.
Partner: UNT Libraries Government Documents Department
open access

Effects of Introduced Materials in the Drift Scale Test

Description: Water samples previously acquired from superheated (>140 °C) zones within hydrological test boreholes of the Drift Scale Test (DST) show relatively high fluoride concentrations (5-66 ppm) and low pH (3.1-3.5) values. In these high temperature regions of the rock, water is present as superheated vapor only; liquid water for sampling purposes is obtained during the sampling process by cooling. Based on data collected to date, it is evident that the source of the fluoride and low pH is introduced man-made materials (Teflon™ and/or Viton™ fluoroelastomer) used in the test. The test materials may contribute fluoride either by degassing hydrogen fluoride (HF) directly to produce trace concentrations of HF gas (~0.1 ppm) in the high temperature steam, or by leaching fluoride in the sampling tubes after condensation of the superheated steam. HF gas is known to be released from Viton™ at high temperatures (DuPont Dow Elastomers L.L.C., Elkton, MD, personal communication), and the sample water compositions indicate near-stoichiometric balance of hydrogen ion and fluoride ion, indicating dissolution of HF gas into the aqueous phase. These conclusions are based on a series of water samples collected to determine if the source of the fluoride is the degradation of materials originally installed to facilitate measurements. Analyses of these water samples show that the source of the fluoride is the introduced materials, that is, the Viton™ packers used to isolate test zones and/or Teflon™ tubing used to draw water and steam from the test zones. In particular, water samples collected from borehole (BH) 72 at high temperatures (~170 °C) prior to introduction of any Viton™ or Teflon™ show pH values (4.8 to 5.5) and fluoride concentrations well below 1 ppm over a period of six months.
These characteristics are typical of condensing DST steam that contains only some dissolved carbon dioxide generated by …
Date: January 11, 2002
Creator: DeLoach, L & Jones, RL
Partner: UNT Libraries Government Documents Department
open access

Engineering Titanium for Improved Biological Response

Description: The human body and its aggressive environment challenge the survival of implanted foreign materials. Formidable biocompatibility issues arise from biological, chemical, electrical, and tribological origins. The body's electrolytic solution provides the first point of contact with any kind of implant, and is responsible for transport, healing, integration, or attack. Therefore, determining how to successfully control the integration of a biomaterial should begin with an analysis of the early interfacial dynamics involved. In this setting, a complicated feedback system of solution chemistry, pH, ions, and solubility exists. The introduction of a fixation device instantly confounds this system. The body is exposed to a range of voltages, and wear can bring about significant shifts in potentials across an implant. In the environment of a new implant, the solution pH becomes acidic, ionic concentrations shift, cathodic currents can lead to corrosion, and oxygen levels can be depleted; all of these impact the ability of the implant to retain its protective oxide layer and to present a stable interface for the formation of a biolayer. Titanium has been used in orthopedic and maxillofacial surgery for many years due to its reputation for biocompatibility and its ability to osseointegrate. Osseointegration is defined as a direct structural and functional connection between ordered, living bone and the surface of a load-carrying implant. Branemark discovered this phenomenon in the 1960s while examining titanium juxtaposed to bone. The mechanism by which titanium and its passivating oxide encourage osseosynthetic activity remains unknown. However, in general terms, the oxide film serves two purposes: first, to provide a kinetic barrier that prevents titanium from corroding, and second, to provide a substrate that allows the constituents of bone (calcium phosphate crystals, cells, proteins, and collagen) to bond to it.
We believe that the electrochemical environment dictates the titanium dioxide surface atomic structure and the …
Date: January 23, 2002
Creator: Orme, C; Bearinger, J; Dimasi, E & Gilbert, J
Partner: UNT Libraries Government Documents Department
open access

A Laboratory Approach Relating Complex Resistivity Observations to Flow and Transport in Saturated and Unsaturated Hydrologic Regimes

Description: Subsurface imaging technology, such as electric resistance tomography (ERT), is rapidly improving as a means for characterizing some soil properties of the near-surface hydrologic regime. While this information can be potentially useful in developing hydrologic models of the subsurface that are required for contaminant transport investigations, an image alone of the subsurface soil regime gives little or no information about how the site will respond to groundwater flow or contaminant transport. In fact, there is some question that tomographic imaging of soils alone can even provide meaningful values of hydraulic properties, such as the permeability structure, which is critical to estimates of contaminant transport at a site. The main objective of this feasibility study was to initiate research on electrical imaging not just as a way to characterize the soil structure by mapping different soil types at a site but as a means of obtaining quantitative information about how a site will respond hydrologically to an infiltration event. To this end, a scaled system of electrode arrays was constructed that simulates the subsurface electrode distribution used at the LLNL Vadose Zone Observatory (VZO) where subsurface imaging of infiltration events has been investigated for several years. The electrode system was immersed in a 10,000-gallon tank to evaluate the fundamental relationship between ERT images and targets of a given volume that approximate infiltration-induced conductivity anomalies. With LDRD funds we have explored what can be initially learned about porous flow and transport using two important electrical imaging methods--electric resistance tomography (ERT) and electric impedance tomography (EIT). These tomographic methods involve passing currents (DC or AC) between two electrodes within or between electrode arrays while measuring the electric potential at the remaining electrodes. 
With the aid of a computer-based numerical inversion scheme, the potentials are used to solve for the electrical conductivity distribution in …
Date: January 31, 2002
Creator: Martins, S A; Daily, W D & Ramirez, A L
Partner: UNT Libraries Government Documents Department
open access

Adaptive Tracking of Atmospheric Releases

Description: When dangerous chemical or biological releases occur in the atmosphere, emergency responders and decision makers must assess exposure rates to the affected population, establish evacuation routes, and allocate medical resources. We have been working to improve the scientific basis for making such decisions. We believe that future rapid response teams, from LLNL and other centers of expertise, will use a variety of atmospheric sensors and atmospheric computer models to predict and characterize the movement of chemical or biological releases in urban environments, and that LLNL is likely to contribute expertise in this area. A key advance will be to merge the information and capabilities of computer models with real-time atmospheric data from sensors. The resulting product will dynamically interpolate and extrapolate the raw sensor data into a coordinated "picture" or interpretation of the developing flow scenario. The scientific focus of the project was the exploration and development of algorithms to fuse lidar data (which measure wind speed much as a police radar measures vehicle speed) and a dispersion model into a single system. Our goal was to provide the scientific foundation for a combined lidar/model approach capable of accurately tracking the evolution of atmospheric releases on distance scales of about 20 km. The fundamental idea is to create feedbacks, so that lidar data can be used for wind field inputs into a dispersion model, which would, in turn, guide lidar data acquisition by directing more intensive scanning to regions where more data are key to improving the modeling. We created a database of synthetic lidar data that can be used to test algorithms relating to a combined lidar/dispersion model. We obtained the data, which represent nocturnal atmospheric drainage flows in the Salt Lake City Basin, from calculations on the LLNL ASCI White supercomputer with a computational fluid dynamics model running …
Date: January 31, 2002
Creator: Larson, D & Calhoun, R
Partner: UNT Libraries Government Documents Department
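The feedback loop this abstract describes (use lidar winds to correct the model, then direct further scanning to wherever more data would help most) can be sketched as a toy one-dimensional assimilation step. The grid, the error values, and the scalar Kalman-style gain below are illustrative assumptions, not details from the report:

```python
import numpy as np

# Toy 1-D wind profile on a 10-point grid (all values illustrative).
truth = np.linspace(2.0, 6.0, 10)                      # "true" wind speed, m/s
model = truth + np.array([0.8, -0.6, 0.1, 0.9, -0.2,
                          0.4, -0.7, 0.3, -0.5, 0.6])  # model with errors
var = np.full(10, 0.25)                                # model error variance

rmse0 = float(np.sqrt(np.mean((model - truth) ** 2)))

# Feedback loop: point the lidar at the cell with the largest uncertainty,
# nudge the model toward that observation, mark the cell as constrained, repeat.
obs_var = 0.05 ** 2
for _ in range(5):
    i = int(np.argmax(var))             # scan where more data helps most
    obs = truth[i]                      # idealized noise-free lidar return
    gain = var[i] / (var[i] + obs_var)  # scalar Kalman-style gain
    model[i] += gain * (obs - model[i])
    var[i] *= 1.0 - gain                # scanned cell is now well constrained

rmse = float(np.sqrt(np.mean((model - truth) ** 2)))
print(rmse0, rmse)   # error shrinks as scanned cells are corrected
```

A real system would replace the toy gain with the dispersion model's own error statistics and fold the corrected winds back into the transport calculation, which is the coupling the project investigated.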
open access

Novel Processing of 81-mm Cu Shaped Charge Liners

Description: A seven-step procedure was developed for producing shaped charge liner blanks by back extrusion at liquid nitrogen temperatures. Starting with a 38.1-mm diameter, 101.6-mm long cylinder at 77K, three forging steps with a flat-top die are required to produce the solid cone while maintaining low temperature. The solid cone is forged in four individual back extrusions at 77K to produce the rough liner blank. This procedure is capable of being run in batch processes to improve the time efficiency.
Date: January 16, 2002
Creator: Schwartz, A & Korzekwa, D
Partner: UNT Libraries Government Documents Department
open access

Macro-Scale Reactive Flow Model for High-Explosive Detonation in Support of ASCI Weapon Safety Milepost

Description: Explosive grain-scale simulations are not practical for weapon safety simulations. Indeed, for nearly ideal explosives with reaction zones of order 500 µm, even reactive flow models are not practical for weapon safety simulations. By design, reactive flow models must resolve the reaction zone, which implies computational cells with dimension of order 50 µm for such explosives. The desired result for a simulation in which the reaction zone is not resolved is that the explosive behaves as an ideal one. The pressure at the shock front rises to the Chapman-Jouguet (CJ) pressure with a reaction zone dimension that is like that of a shock propagating in an unreactive medium, on the order of a few computational cells. It should propagate with the detonation velocity that is determined by the equation of state of the products. In the past, this was achieved in one-dimensional simulations with "beta-burn", a method in which the extent of conversion to final product is proportional to the approach of the specific volume in the shock front to the specific volume of the CJ state. One drawback with this method is that there is a relatively long build-up to steady detonation, typically 50 to 100 computational cells. The need for relatively coarsely zoned simulations in two dimensions led to "program-burn", by which the time to detonation can be determined by a simple ray-tracing algorithm when there are no barriers or shadows. Complications arise in two and three dimensions to the extent that some calculations of the lighting time in complex geometry can give incorrect results. We sought to develop a model based on reactive flow that might meet the needs of the Weapon Safety Simulation milepost. Important features of the model are: (1) That it be useable with any equation of state description of the …
Date: January 3, 2002
Creator: Reaugh, J E
Partner: UNT Libraries Government Documents Department
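The beta-burn rule described in the abstract, conversion proportional to the approach of the shocked specific volume to the CJ specific volume, can be read as a reaction-progress variable λ = (v0 − v)/(v0 − v_CJ) clamped to [0, 1]. A sketch under that reading (the symbol names and numbers are illustrative, not from the report):

```python
def beta_burn_fraction(v, v0, v_cj):
    """Reaction progress lambda for the beta-burn rule described above:
    conversion proportional to how far the shocked specific volume v has
    moved from the initial volume v0 toward the CJ specific volume v_cj.
    Variable names and the clamping convention are illustrative."""
    lam = (v0 - v) / (v0 - v_cj)
    return min(max(lam, 0.0), 1.0)

# Unshocked material has not reacted; at the CJ specific volume it is
# fully converted to products (values chosen for exact arithmetic).
print(beta_burn_fraction(1.00, 1.00, 0.50))  # 0.0
print(beta_burn_fraction(0.75, 1.00, 0.50))  # 0.5
print(beta_burn_fraction(0.50, 1.00, 0.50))  # 1.0
```

This simple dependence on shock compression is what produces the slow build-up to steady detonation the abstract mentions: the burn rate only responds to the local volume change, with no separate ignition or growth terms.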
open access

Chimeric Proteins to Detect DNA Damage and Mismatches

Description: The goal of this project was to develop chimeric proteins composed of a DNA mismatch or damage binding protein and a nuclease, as well as methods to detect DNA mismatches and damage. We accomplished this through protein engineering based on using polymerase chain reactions (PCRs) to create chimeras with novel functions for damage and mismatch detection. This project addressed fundamental questions relating to disease susceptibility and radiation-induced damage in cells. It also supported and enhanced LLNL's competency in the emerging field of proteomics. In nature, DNA is constantly being subjected to damaging agents such as exposure to ultraviolet (UV) radiation and various environmental and dietary carcinogens. If DNA damage is not repaired, however, mutations in DNA result that can eventually manifest as cancer and other diseases. In addition to damage-induced DNA mutations, single nucleotide polymorphisms (SNPs), which are variations in the genetic sequence between individuals, may predispose some to disease. As a result of the Human Genome Project, the integrity of a person's DNA can now be monitored. Therefore, methods to detect DNA damage, mutations, and SNPs are useful not only in basic research but also in the health and biotechnology industries. Current methods of detection often use radioactive labeling and rely on expensive instrumentation that is not readily available in many research settings. Our methods to detect DNA damage and mismatches employ simple gel electrophoresis and flow cytometry, thereby alleviating the need for radioactive labeling and expensive equipment. In FY2001, we explored SNP detection by developing methods based on the ability of the chimeric proteins to detect mismatches. Using multiplex assays with flow cytometry and fluorescent beads to which the DNA substrates were attached, we showed that several of the chimeras possess greater affinity for damaged and mismatched DNA than for native DNA.
This affinity was demonstrated in assays …
Date: January 14, 2002
Creator: McCutchen-Maloney, S.; Malfatti, M. & Robbins, K. M.
Partner: UNT Libraries Government Documents Department
open access

Unclassified Computing Capability: User Responses to a Multiprogrammatic and Institutional Computing Questionnaire

Description: We are experimenting with a new computing model to be applied to a new computer dedicated to that model. Several LLNL science teams now have computational requirements, evidenced by the mature scientific applications that have been developed over the past five-plus years, that far exceed the capability of the institution's computing resources. Thus, there is increased demand for dedicated, powerful parallel computational systems. Computation can, in the coming year, potentially field a capability system that is low cost because it will be based on a model that employs open source software and because it will use PC (IA32-P4) hardware. This incurs significant computer science risk regarding stability and system features but also presents great opportunity. We believe the risks can be managed, but the existence of risk cannot be ignored. In order to justify the budget for this system, we need to make the case that it serves science and, through serving science, serves the institution. That is the point of the meeting and the White Paper that we are proposing to prepare. The questions are listed and the responses received are in this report.
Date: January 29, 2002
Creator: McCoy, M & Kissel, L
Partner: UNT Libraries Government Documents Department
open access

Strobe Light Deterrent Efficacy Test and Fish Behavior Determination at Grand Coulee Dam Third Powerplant Forebay

Description: This report describes the work conducted during the first year of a long-term study to assess the efficacy of a prototype strobe light system in eliciting a negative phototactic response in kokanee and rainbow trout. The strobe light system is being evaluated as a means to prevent entrainment (and subsequent loss) of fish at the entrance to the forebay adjacent to the third powerplant at Grand Coulee Dam. Pacific Northwest National Laboratory and the Colville Confederated Tribes are collaborating on the three-year study being conducted for the Bonneville Power Administration and the Northwest Power Planning Council.
Date: January 29, 2002
Creator: Simmons, Mary Ann; Johnson, Robert L.; McKinstry, Craig A.; Anglea, Steven M.; Simmons, Carver S.; Thorsten, Susan L. et al.
Partner: UNT Libraries Government Documents Department
open access

Laser Science and Technology Program Update 2001

Description: The Laser Science and Technology (LS&T) Program's mission is to develop advanced solid-state lasers, optics, materials technologies, and applications to solve problems and create new capabilities of importance to the Nation and the Laboratory. A top, near-term priority is to provide technical support to the National Ignition Facility (NIF) to ensure activation success. LS&T provides the NIF Programs with core competencies and supports its economic viability. The primary objectives of LS&T activities in fiscal year (FY) 2001 have been threefold: (1) to support deployment of hardware and to enhance lasers and optics performance for NIF, (2) to develop advanced solid-state laser systems and optical components for the Department of Energy (DOE) and the Department of Defense (DoD), and (3) to invent, develop, and deliver improved concepts and hardware for other government agencies and U.S. industry. Special efforts have also been devoted to building and maintaining our capabilities in three technology areas: high-power solid-state lasers, high-power optical materials, and applications of advanced lasers.
Date: January 1, 2002
Creator: Chen, H L & Hackel, L A
Partner: UNT Libraries Government Documents Department
open access

Spheromak Impedance and Current Amplification

Description: It is shown that high current amplification can be achieved only by injecting helicity on the timescale for reconnection, τ_REC, which determines the effective impedance of the spheromak. An approximate equation for current amplification is: dI_TOR²/dt ≈ I²/τ_REC − I_TOR²/τ_CLOSED, where I is the gun current, I_TOR is the spheromak toroidal current, and τ_CLOSED is the ohmic decay time of the spheromak. Achieving high current amplification, I_TOR >> I, requires τ_REC << τ_CLOSED. For resistive reconnection, this requires reconnection in a cold zone feeding helicity into a hot zone. Here we propose an impedance model based on these ideas in a form that can be implemented in the Corsica-based helicity transport code. The most important feature of the model is the possibility that τ_REC actually increases as the spheromak temperature increases, perhaps accounting for the "voltage sag" observed in some experiments, and a tendency toward a constant ratio of field to current, B ∝ I, or I_TOR ≈ I. Program implications are discussed.
Date: January 31, 2002
Creator: Fowler, T K; Hua, D D & Stallard, B W
Partner: UNT Libraries Government Documents Department
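The steady state implied by the abstract's current-amplification equation can be checked with a few lines of numerical integration: setting the time derivative to zero gives I_TOR/I = sqrt(τ_CLOSED/τ_REC), so amplification is large only when τ_REC << τ_CLOSED, as the abstract states. A sketch with illustrative parameter values (not taken from the report):

```python
import math

# Illustrative parameters: gun current I, reconnection time tau_rec, and
# ohmic decay time tau_closed, chosen so that tau_rec << tau_closed.
I, tau_rec, tau_closed = 1.0, 1e-4, 1e-2

# Forward-Euler integration of  d(I_tor^2)/dt = I^2/tau_rec - I_tor^2/tau_closed
y, dt = 0.0, 1e-6                 # y holds I_tor^2
for _ in range(200_000):          # integrate to ~20 decay times
    y += dt * (I**2 / tau_rec - y / tau_closed)

amplification = math.sqrt(y) / I
# Steady state gives I_tor/I = sqrt(tau_closed/tau_rec) = 10 for these values.
print(amplification)
```

The same balance also shows why a τ_REC that grows with temperature would sag the effective gun voltage: the helicity injection term I²/τ_REC weakens relative to the ohmic loss term.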
open access

Tank Leak Experiment at the Mock Tank Site, 200 East Area: Electrical Resistance Tomography-Preliminary Results

Description: Electrical resistance measurements were used to monitor several releases of brine from the Mock Tank Test site at the 200 East Area. Three different methods were used to analyze the data: (1) a simple average of the raw data was used as an indicator of the presence/absence of a leak, (2) tomography of the region beneath the tank using data from steel-cased boreholes, and (3) tomography of the region beneath the tank using data from vertical electrode arrays. Each of these methods was able to detect the presence of what appeared to be conductive plumes forming beneath the tank. The results suggest the following: (1) The minimum detectable leak volume is on the order of a few hundred gallons. (2) A procedure involving the use of reciprocal data can be used to evaluate the reliability of the results and minimize the potential for false-positive and false-negative conclusions. (3) The dry wells may be used as long electrodes to obtain 2D images of the plume under the tank. (4) 3D electrical resistance tomography (ERT) images provide information that can be used to determine the released volume, the speed and direction of plume movement, the regions of the soil that are being contaminated, and the approximate location of the hole in the tank. (5) It may be possible to map pre-existing plumes when no pre-spill data exist. (6) A "quick look" calculation performed in the field can reliably detect the occurrence of a leak.
Date: January 18, 2002
Creator: Ramirez, A; Daily, W & Binley, A
Partner: UNT Libraries Government Documents Department
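The reciprocal-data check mentioned in item (2) rests on the reciprocity theorem: interchanging the current-injection electrode pair with the potential-measurement pair should return the same transfer resistance, so the normal/reciprocal mismatch serves as a per-measurement quality metric. A minimal sketch of that screening step (the resistance values and the 5% threshold are illustrative assumptions, not from the report):

```python
# Reciprocity screening for ERT data. By reciprocity, swapping the current
# pair (A,B) with the potential pair (M,N) should yield the same transfer
# resistance; a large mismatch flags an unreliable measurement.

def reciprocal_error(r_normal, r_reciprocal):
    """Relative mismatch between a measurement and its reciprocal."""
    avg = 0.5 * (abs(r_normal) + abs(r_reciprocal))
    return abs(r_normal - r_reciprocal) / avg

# Hypothetical transfer resistances in ohms: (normal, reciprocal) pairs.
measurements = [(10.2, 10.25), (3.40, 3.38), (7.1, 9.6)]

THRESHOLD = 0.05   # keep data whose reciprocal error is under 5%
kept = [pair for pair in measurements if reciprocal_error(*pair) < THRESHOLD]
print(len(kept))   # the third pair fails the screen
```

Screening data this way before inversion is what limits false-positive and false-negative leak calls: measurements that fail reciprocity never reach the tomographic reconstruction.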
open access

Metrology of Non-Rigid Objects

Description: Dimensional characterization of non-rigid parts presents many challenges. For example, when a non-rigid part is mounted in an inspection apparatus, the effects of fixturing constraints are significant. If the part is not used in normal service with the same load conditions as during inspection, the dimensional characteristics will deviate from reported values. Further, the solution of designing specialized fixturing to duplicate "as-installed" conditions does not fully resolve the problem because each inspection requires its own methodology. The goal of this project is to formulate the research problem and propose a method of assessing the dimensional characteristics of non-rigid parts. The measured dimension of a rigid component is traceable at some level of confidence to a single source (NIST in the USA). Hence the measurement of one component of an assembly can be related to the measurement of another component of that assembly. There is no generalized analog to this pedigreed process for dimensionally characterizing non-rigid bodies. For example, a measurement made on a sheet-metal automobile fender is heavily influenced by how it is held during the measurement, making it difficult to determine how well that fender will assemble to the rest of the (non-rigid) car body. This problem is often overcome for specific manufacturing problems by constructing rigid fixtures that over-constrain the non-rigid parts to be assembled and then performing the dimensional measurement of the contour of each component to check whether each meets specification. Note that such inspection measurements will yield only an approximation to the assembled shape, which is a function of both the geometry and the compliance of the component parts of the assembly. As a result, non-rigid components are more difficult to specify and inspect and therefore are more difficult to purchase from outside vendors compared to rigid components.
The problems are compounded as the requirements …
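The dependence of the assembled shape on both geometry and compliance can be illustrated with a minimal sketch. The following one-degree-of-freedom spring model is not from the report; the function name, numbers, and stiffness values are illustrative assumptions only.

```python
# Hedged sketch: two joined non-rigid parts modeled as linear springs with
# free-state deviations d1, d2 (mm) from nominal and stiffnesses k1, k2 (N/mm).
# Force balance k1*(x - d1) + k2*(x - d2) = 0 gives the shared deviation x.

def assembled_deviation(d1, k1, d2, k2):
    """Equilibrium deviation of two joined compliant parts."""
    return (k1 * d1 + k2 * d2) / (k1 + k2)

# A floppy panel (low stiffness) joined to a stiff frame: the assembly ends
# up close to the frame's free-state shape, not the panel's, so a free-state
# measurement of the panel alone predicts the assembly poorly.
x = assembled_deviation(d1=2.0, k1=10.0, d2=0.1, k2=1000.0)
print(round(x, 3))  # deviation dominated by the stiffer part
```

With equal stiffnesses the assembly simply averages the two free-state deviations, which is why contour inspection in an over-constraining fixture is only an approximation to the assembled shape.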
Date: January 1, 2002
Creator: Blaedel, K; Swift, D; Claudet, A; Kasper, E & Patterson, S
Partner: UNT Libraries Government Documents Department
open access

Development of Tritium AMS for Biomedical Sciences Research

Description: Tritium ({sup 3}H) is a radioisotope that is extensively utilized in biological research. Normally in the biological sciences, {sup 3}H is quantified by liquid scintillation counting. For the most sensitive measurements, liquid scintillation counting requires large samples and counting times of several hours. In contrast, preliminary studies at LLNL's Center for Accelerator Mass Spectrometry have demonstrated that Accelerator Mass Spectrometry (AMS) can be used to quantify {sup 3}H in milligram-sized biological samples with a 100- to 1000-fold improvement in detection limits compared to scintillation counting. This increased sensitivity is expected to have great impact in the biological research community. However, before {sup 3}H AMS can be used routinely and successfully, two areas of concern needed to be addressed: (1) sample preparation methods needed to be refined and standardized, and (2) smaller and simpler AMS instrumentation needed to be developed. To address these concerns, the specific aims of this project were to: (1) characterize a small dedicated {sup 3}H AMS spectrometer, (2) develop routine and robust biological sample preparation methods, and (3) with the aid of our external collaborations, demonstrate the application of {sup 3}H AMS in the biomedical sciences. Towards these goals, the {sup 3}H AMS instrument was installed and optimized to enhance performance. The sample preparation methodology was established for standard materials (water and tributyrin) and for biological samples. A number of biological and environmental studies that require {sup 3}H AMS were undertaken with university collaborators, and our optimized analysis methods were employed to measure samples from these projects.
Date: January 1, 2002
Creator: Dingley, K H & Chiarappa-Zucca, M L
Partner: UNT Libraries Government Documents Department
open access

Development of Direct and Optical Polarized Nuclear Magnetic Resonance (NMR) Methods for Characterization and Engineering of Mesophased Molecular Structures

Description: The development of NMR methods for the characterization of structure and dynamics in mesophase composite systems was originally proposed in this LDRD. Mesophase systems are organic/inorganic hybrid materials whose size and motional properties span the definition of liquids and solids, such as highly viscous gels or colloidal suspensions. They are often composite, ill-defined, macromolecular structures that prove difficult to characterize. Mesophase materials are of broad scientific and programmatic interest and include composite load-bearing foams, aerogels, optical coatings, silicate oligomers, porous heterogeneous catalysts, and nanostructured materials such as semiconductor quantum dot superlattices. Since mesophase materials and precursors generally lack long-range order, they have proven difficult to characterize beyond local, short-range order. NMR methods are optimal for such a task since NMR observables are sensitive to wide ranges of length (0-30 {angstrom}) and time (10{sup -9}-10{sup 0} sec) scales. We have developed a suite of NMR methods to measure local, intermediate, and long-range structure in a series of mesophase systems and have constructed correlations between NMR observables and molecular size, topology, and network structure. The goal of this research was the development of a strong LLNL capability in the characterization of mesophase materials by NMR spectroscopy that will lead to a capability in rational synthesis of such materials and a fundamental understanding of their structure-property relationships. We demonstrate our progress towards attaining this goal by presenting NMR results on four mesophase model systems.
Date: January 29, 2002
Creator: Maxwell, R; Baumann, T & Taylor, B
Partner: UNT Libraries Government Documents Department
open access

Proceedings of the 3rd US-Japan Workshop on Plasma Polarization Spectroscopy

Description: The third US-Japan Workshop on Plasma Polarization Spectroscopy was held at the Lawrence Livermore National Laboratory in Livermore, California, on June 18-21, 2001. The talks presented at this workshop are summarized in these proceedings. The papers cover both experimental investigation and applications of plasma polarization spectroscopy as well as the theoretical foundation and formalisms to understand and describe the polarization phenomena. The papers give an overview of the history of plasma polarization spectroscopy, derive the formal aspects of polarization spectroscopy, including the effects of electric and magnetic fields, discuss spectra perturbed by intense microwave fields, charge exchange, and dielectronic recombination, and present calculations of various collisional excitation and ionization cross sections and the modeling of plasma polarization spectroscopy phenomena. Experimental results are given from the WT-3 tokamak, the MST reverse field pinch, the Large Helical Device, the GAMMA 10 mirror machine, the Nevada Terawatt Facility, the Livermore EBIT-II electron beam ion trap, and beam-foil spectroscopy. In addition, results were presented from studies of several laser-produced plasma experiments and new instrumental techniques were demonstrated.
Date: January 2, 2002
Creator: Beiersdorfer, P & Fujimoto, T
Partner: UNT Libraries Government Documents Department