UNT Libraries Government Documents Department - 1,130 Matching Results

The ^2H(e,e'p)n Reaction at High Four-Momentum Transfer
This dissertation presents the highest four-momentum transfer (Q^2) quasielastic (x_Bj = 1) results from Experiment E01-020, which systematically explored the ^2H(e,e'p)n reaction ("electro-disintegration" of the deuteron) at three different four-momentum transfers, Q^2 = 0.8, 2.1, and 3.5 GeV^2, and missing momenta P_miss = 0, 100, 200, 300, 400, and 500 MeV/c, including separations of the longitudinal-transverse interference response function, R_LT, and extraction of the longitudinal-transverse asymmetry, A_LT. This systematic approach will help in understanding the reaction mechanism and the deuteron structure down to the short-range part of the nucleon-nucleon interaction, which is one of the fundamental missions of nuclear physics. By studying the very short distance structure of the deuteron, one may also determine whether, or to what extent, the description of nuclei in terms of nucleon/meson degrees of freedom must be supplemented by inclusion of explicit quark effects. The unique combination of energy, current, duty factor, and control of systematics in Hall A made Jefferson Lab the only facility in the world where these systematic studies of the deuteron can be undertaken. This is especially true for understanding the short-range structure of the deuteron, where high energies and high luminosity/duty factor are needed. All these features of Jefferson Lab allow us to examine large missing momenta (short range scales) at kinematics where the effects of final state interactions (FSI), meson exchange currents (MEC), and isobar currents (IC) are minimal, making the extraction of the deuteron structure less model-dependent. Jefferson Lab also provides the kinematical flexibility to perform the separation of R_LT over a broad range of missing momenta and momentum transfers. Experiment E01-020 used the standard Hall A equipment in coincidence configuration in addition to the cryogenic target system. The low and middle Q^2 kinematics were completed in June 2002 and the …
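For reference, the response-function decomposition that defines R_LT and A_LT (standard (e,e'p) notation; not quoted from the abstract) is

\[
\frac{d^5\sigma}{d\omega\,d\Omega_e\,d\Omega_p} = K\,\sigma_{\mathrm{Mott}}\left[v_L R_L + v_T R_T + v_{LT} R_{LT}\cos\phi + v_{TT} R_{TT}\cos 2\phi\right],
\qquad
A_{LT} = \frac{\sigma(\phi=0) - \sigma(\phi=\pi)}{\sigma(\phi=0) + \sigma(\phi=\pi)},
\]

where \(\phi\) is the angle between the electron scattering plane and the reaction plane; measuring at \(\phi = 0\) and \(\phi = \pi\) isolates the \(R_{LT}\cos\phi\) term.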
Absorption and emission properties of photonic crystals and metamaterials
We study the emission and absorption properties of photonic crystals and metamaterials using COMSOL Multiphysics and Ansoft HFSS as simulation tools. We calculate the emission properties of metallic designs using the Drude model, and the results illustrate that an appropriate termination of the surface of the metallic structure can significantly increase the absorption and therefore the thermal emissivity. We investigate the spontaneous emission rate modifications that occur for emitters inside two-dimensional photonic crystals and find isotropic and directional emission at different frequencies, as expected.
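The step from increased absorption to increased thermal emissivity rests on Kirchhoff's law of thermal radiation (standard background, not stated in the abstract): for a body in thermal equilibrium, the spectral directional emissivity equals the spectral directional absorptivity,

\[
\epsilon(\omega,\theta) = \alpha(\omega,\theta),
\]

so any surface termination that raises \(\alpha\) at a given frequency and angle raises the emissivity there by the same amount.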
Accelerator Mass Spectrometry Measurements of Plutonium in Sediment and Seawater from the Marshall Islands
During the summer of 2000, I was given the opportunity to work for about three months as a technical trainee at Lawrence Livermore National Laboratory, or LLNL as I will refer to it hereafter. The University of California runs this Department of Energy laboratory, which is located 70 km east of San Francisco, in the small city of Livermore. This master's thesis in Radioecology is based on the work I did there. LLNL, the second U.S. facility for the development of nuclear weapons, was built in Livermore in the early 1950s (Los Alamos in New Mexico was the first). It has since become a 'science center' for a number of areas such as magnetic and laser fusion energy, non-nuclear energy, biomedicine, and environmental science. The Laboratory's mission has changed over the years to meet new national needs. The following two statements were found on the LLNL homepage (http://www.llnl.gov) on 2001-03-05, where information about the laboratory and its scientific projects can also be found: 'Our primary mission is to ensure that the nation's nuclear weapons remain safe, secure, and reliable and to prevent the spread and use of nuclear weapons worldwide.' 'Our goal is to apply the best science and technology to enhance the security and well-being of the nation and to make the world a safer place.' The Marshall Islands Dose Assessment and Radioecology group at the Health and Ecological Assessments division employed me, and I also worked to some extent with the Center for Accelerator Mass Spectrometry (CAMS) group. The work I did at LLNL can be divided into two parts. In the first part, plutonium (Pu) measurements in sediments from the Rongelap Atoll in the Marshall Islands were made using Accelerator Mass Spectrometry (AMS). The method for measuring these kinds of samples is …
Accelerator systems and instrumentation for the NuMI neutrino beam
No Description Available.
An Adaptive Landscape Classification Procedure using Geoinformatics and Artificial Neural Networks
The Adaptive Landscape Classification Procedure (ALCP), which links the advanced geospatial analysis capabilities of Geographic Information Systems (GIS) with Artificial Neural Networks (ANNs), particularly Self-Organizing Maps (SOMs), is proposed as a method for establishing and reducing complex data relationships. Its adaptive and evolutionary capability is evaluated for situations where varying types of data can be combined to address different prediction and/or management needs such as hydrologic response, water quality, aquatic habitat, groundwater recharge, land use, instrumentation placement, and forecast scenarios. The research presented here documents favorable results of a procedure that aims to be a powerful and flexible spatial data classifier, one that fuses the strengths of geoinformatics with the intelligence of SOMs to provide data patterns and spatial information for environmental managers and researchers. This research shows how evaluation and analysis of spatial and/or temporal patterns in the landscape can provide insight into complex ecological, hydrological, climatic, and other natural and anthropogenic-influenced processes. Environmental management and research within heterogeneous watersheds provide challenges for consistent evaluation and understanding of system functions. For instance, watersheds over a range of scales are likely to exhibit varying levels of diversity in their characteristics of climate, hydrology, physiography, ecology, and anthropogenic influence. Furthermore, it has become evident that understanding and analyzing these diverse systems can be difficult not only because of varying natural characteristics, but also because of the availability, quality, and variability of spatial and temporal data. Developments in geospatial technologies, however, are providing a wide range of relevant data, in many cases at high temporal and spatial resolution. Such data resources can take the form of high-dimensional data arrays, which can be difficult to fully use. Establishing relationships among high-dimensional datasets through neurocomputing-based patterning methods can help 1) resolve large volumes of data into a meaningful form; …
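As background on the SOM component (the standard Kohonen update rule, assumed here rather than quoted from the thesis), each input vector \(\mathbf{x}(t)\) pulls the map weights toward itself according to

\[
\mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \eta(t)\, h_{c,i}(t)\,\bigl[\mathbf{x}(t) - \mathbf{w}_i(t)\bigr],
\]

where \(c\) is the best-matching unit for \(\mathbf{x}(t)\), \(h_{c,i}\) is a neighborhood kernel that shrinks over training, and \(\eta(t)\) is a decaying learning rate; this is what lets the SOM compress high-dimensional landscape data onto a low-dimensional map of patterns.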
Adsorbate structures and catalytic reactions studied in the torr pressure range by scanning tunneling microscopy
High-pressure, high-temperature scanning tunneling microscopy (HPHTSTM) was used to study adsorbate structures and reactions on single crystal model catalytic systems. Studies of the automobile catalytic converter reaction [CO + NO {yields} 1/2 N{sub 2} + CO{sub 2}] on Rh(111) and ethylene hydrogenation [C{sub 2}H{sub 4} + H{sub 2} {yields} C{sub 2}H{sub 6}] on Rh(111) and Pt(111) provided information on adsorbate structures in equilibrium with high-pressure gas and on the relationship of atomic and molecular mobility to chemistry. STM studies of NO on Rh(111) showed that adsorbed NO forms two high-pressure structures, with the phase transformation from the (2 x 2) structure to the (3 x 3) structure occurring at 0.03 Torr. The (3 x 3) structure only exists when the surface is in equilibrium with the gas phase. The heat of adsorption of this new structure was determined by measuring the pressures and temperatures at which the (2 x 2) and (3 x 3) structures coexisted. The energy barrier between the two structures was calculated by observing the time necessary for the phase transformation to take place. High-pressure STM studies of the coadsorption of CO and NO on Rh(111) showed that CO and NO form a mixed (2 x 2) structure at low NO partial pressures. By comparing surface and gas compositions, the adsorption energy difference between top-site CO and NO was calculated. Occasionally there is exchange between top-site CO and NO, for which we have described a mechanism. At high NO partial pressures, NO segregates into islands, where the phase transformation to the (3 x 3) structure occurs. The reaction of CO and NO on Rh(111) was monitored by mass spectrometry (MS) and HPHTSTM. From MS studies the apparent activation energy of the catalytic converter reaction was calculated and compared to theory. STM showed that under high-temperature reaction conditions, …
Advanced Branching Control and Characterization of Inorganic Semiconducting Nanocrystals
The ability to finely tune the size and shape of inorganic semiconducting nanocrystals is an area of great interest: the more control one has, the more applications become possible. The first two basic shapes developed in nanocrystals were the sphere and the anisotropic nanorod. The II-VI materials being used, such as cadmium selenide (CdSe) and cadmium telluride (CdTe), exhibit polytypism, which allows them to form in either the hexagonally packed wurtzite or cubically packed zinc blende crystalline phase. The nanorods are wurtzite, with the length of the rod growing along the c-axis. As the rod grows, stacking faults may form, which are layers of zinc blende in the otherwise wurtzite crystal. Using this polytypism, though, the first generation of branched crystals was developed in the form of the CdTe tetrapod. This is a nanocrystal that nucleates in the zinc blende form, creating a tetrahedral core on which four wurtzite arms are grown. This structure opened up the possibility of even more complex shapes and applications. This dissertation investigates the advancement of branching control and a further understanding of the material's polytypism in the form of the stacking faults in nanorods.
Advanced pulse-shape analysis and implementation of gamma-ray tracking in a position-sensitive coaxial HPGe detector
No Description Available.
Aerosol Property Comparison Within and Above the ABL at the ARM Program SGP Site
This thesis determines what, if any, measurements of aerosol properties made at the Earth's surface are representative of those within the entire air column. Data come from the Atmospheric Radiation Measurement site at the Southern Great Plains, the only location in the world where ground-based and in situ airborne measurements are routinely made. Flight legs during the one-year period beginning March 2000 were categorized as either within or above the atmospheric boundary layer (ABL) by use of an objective mixing height determination technique. Correlations between aerosol properties measured at the surface and those within and above the ABL were computed. Aerosol extensive and intensive properties measured at the surface were found to be representative of values within the ABL, but not of those within the free atmosphere.
Air Pollutant Penetration Through Airflow Leaks Into Buildings
The penetration of ambient air pollutants into the indoor environment is of concern owing to several factors: (1) epidemiological studies have shown a strong association between ambient fine particulate pollution and elevated risk of human mortality; (2) people spend most of their time in indoor environments; and (3) most information about air pollutant concentrations is only available from routine ambient monitoring networks. A good understanding of ambient air pollutant transport from source to receptor requires knowledge of pollutant penetration across building envelopes. It is therefore essential to gain insight into particle penetration in infiltrating air and the factors that affect it, in order to assess human exposure more accurately and to help prevent adverse human health effects from ambient particulate pollution. In this dissertation, the understanding of air pollutant infiltration across leaks in the building envelope was advanced through modeling predictions as well as experimental investigations. The modeling analyses quantified the extent of airborne particle and reactive gas (e.g., ozone) penetration through building cracks and wall cavities using engineering analysis that incorporates existing information on building leakage characteristics, knowledge of pollutant transport processes, and pollutant-surface interactions. Particle penetration is primarily governed by particle diameter and by the smallest dimension of the building cracks. Particles of 0.1-1 {micro}m are predicted to have the highest penetration efficiency, nearly unity for crack heights of 0.25 mm or higher, assuming a pressure differential of 4 Pa or greater and a flow path length of 3 cm or less. Supermicron and ultrafine (less than 0.1 {micro}m) particles are readily deposited on crack surfaces by means of gravitational settling and Brownian diffusion, respectively. The fraction of ozone penetrating through building leaks could vary widely, depending significantly on its reactivity with the adjacent surfaces, in addition to the crack geometry and pressure difference. Infiltrating …
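As an illustrative sketch of the governing physics (an idealized parallel-plate crack with plug flow, assumed here rather than taken from the dissertation), the penetration fraction against gravitational settling in a horizontal crack of height \(d\), flow length \(z\), and mean air speed \(u\) is

\[
p_g = \max\!\left(0,\; 1 - \frac{v_s\, z}{u\, d}\right),
\]

where \(v_s\) is the particle settling velocity; an analogous expression with the diffusive deposition velocity governs ultrafine particles, and the overall penetration factor is approximately the product of the mechanism-by-mechanism fractions.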
Ambient and elevated temperature fracture and cyclic-fatigue properties in a series of Al-containing silicon carbides
A series of in situ toughened, Al-, B-, and C-containing silicon carbide ceramics (ABC-SiC) has been examined, with Al contents varying from 3 to 7 wt%. With increasing Al additions, the grain morphology in the as-processed microstructures varied from elongated to bimodal to equiaxed, with a change in the nature of the grain-boundary film from amorphous to partially crystalline to fully crystalline. Fracture toughness and cyclic fatigue tests on these microstructures revealed that although the 7 wt% Al containing material (7ABC) was extremely brittle, the 3 and particularly 5 wt% Al materials (3ABC and 5ABC, respectively) displayed excellent crack-growth resistance at both ambient (25 C) and elevated (1300 C) temperatures. Indeed, no evidence of creep damage, in the form of grain-boundary cavitation, was seen at temperatures of 1300 C or below. The enhanced toughness of the higher Al-containing materials was associated with extensive crack bridging from both interlocking grains (in 3ABC) and uncracked ligaments (in 5ABC); in contrast, the 7ABC SiC showed no such bridging, concomitant with a marked reduction in the volume fraction of elongated grains. Mechanistically, cyclic fatigue-crack growth in 3ABC and 5ABC SiC involved the progressive degradation of such bridging ligaments in the crack wake, with the difference in the degree of elastic vs. frictional bridging affecting the slope, i.e., the Paris law exponent, of the crack-growth curve. In addition, an investigation of fracture resistance in non-transforming ceramics toughened by a grain bridging mechanism is presented using linear elastic fracture mechanics (LEFM). Linear superposition theorems are used for the superposition of crack opening displacements (CODs), as well as stress intensity factors, resulting from the external tractions and the internal compressive bridging stresses. Specifically, weight functions are used to relate the CODs, stress intensity factors, external tractions, and the bridging stress. Expressions are derived for apparent material resistance, the bridging …
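For reference, the Paris law mentioned here (standard form, not quoted from the abstract) relates the crack growth increment per load cycle to the stress-intensity range \(\Delta K\):

\[
\frac{da}{dN} = C\,(\Delta K)^m,
\]

so a change in the degree of elastic vs. frictional bridging shows up directly as a change in the exponent \(m\).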
Amorphous and nanocrystalline phase formation in highly-driven Al-based binary alloys
Remarkable advances have been made since rapid solidification was first introduced to the field of materials science and technology. New types of materials, such as amorphous alloys and nanostructured materials, have been developed as a result of rapid solidification techniques. While these advances are, in many respects, groundbreaking, much remains to be discerned concerning the fundamental relationships that exist between a liquid and a rapidly solidified solid. The scope of the current dissertation involves an extensive set of experimental, analytical, and computational studies designed to increase the overall understanding of the morphological selection, phase competition, and structural hierarchy that occur under far-from-equilibrium conditions. High pressure gas atomization and Cu-block melt-spinning are the two rapid solidification techniques applied in this study. The research is mainly focused on the Al-Si and Al-Sm alloy systems. Silicon and samarium produce different, yet favorable, systems for exploration when alloyed with aluminum under far-from-equilibrium conditions. One of the main differences comes from the positions of their respective T{sub 0} curves, which makes Al-Si a good candidate for solubility extension while the plunging T{sub 0} line in Al-Sm promotes glass formation. The rapidly solidified gas-atomized Al-Si powders within a composition range of 15 to 50 wt% Si are examined using scanning and transmission electron microscopy. The non-equilibrium partitioning and morphological selection observed by examining powders at different size classes are described via a microstructure map. The interface velocities and the amount of undercooling present in the powders are estimated from measured eutectic spacings based on the Jackson-Hunt (JH) and Trivedi-Magnin-Kurz (TMK) models, which permits a direct comparison of theoretical predictions. For an average particle size of 10 {micro}m with a Peclet number of {approx}0.2, JH and TMK deviate from each other. This deviation indicates an adiabatic-type solidification path in which heat of fusion is reabsorbed. It …
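As background (the classical Jackson-Hunt extremum result, assumed here rather than quoted), the eutectic spacing \(\lambda\) and growth velocity \(V\) satisfy

\[
\lambda^2 V = \text{const},
\]

so measured spacings yield velocity estimates; the TMK treatment relaxes the low-Peclet-number assumptions behind this relation, which is why the two models diverge at the rapid-solidification conditions probed here.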
An Analysis Framework Addressing the Scale and Legibility of Large Scientific Data Sets
Much of the previous work in the large data visualization area has focused solely on handling the scale of the data. This task is clearly a great challenge and necessary, but it is not sufficient. Applying standard visualization techniques to large scale data sets often creates complicated pictures in which meaningful trends are lost. A second challenge, then, is to also provide algorithms that simplify what an analyst must understand, using either visual or quantitative means. This challenge can be summarized as improving the legibility or reducing the complexity of massive data sets. Fully meeting both of these challenges is the work of many, many PhD dissertations. In this dissertation, we describe some new techniques to address both the scale and legibility challenges, in the hope of contributing to the larger solution. Beyond simultaneously addressing both scale and legibility, we add the requirement that the solutions considered fit well within an interoperable framework for diverse algorithms, because a large suite of algorithms is often necessary to fully understand complex data sets. For scale, we present a general architecture for handling large data, as well as details of a contract-based system for integrating advanced optimizations into a data flow network design. We also describe techniques for volume rendering and performing comparisons at the extreme scale. For legibility, we present several techniques. Most noteworthy are equivalence class functions, a technique to drive visualizations using statistical methods, and line-scan based techniques for characterizing shape.
Analysis of Bs flavor oscillations at CDF
The search for and study of flavor oscillations in the neutral B{sub s}-{bar B}{sub s} meson system is an experimentally challenging task. It constitutes a flagship analysis of the Tevatron physics program. In this dissertation, they develop an analysis of the time-dependent B{sub s} flavor oscillations using data collected with the CDF detector. The data samples are formed of both fully and partially reconstructed B meson decays: B{sub s} {yields} D{sub s}{pi}({pi}{pi}) and B{sub s} {yields} D{sub s}lv. A likelihood fitting framework is implemented, and appropriate models and techniques are developed for describing the mass, proper decay time, and flavor tagging characteristics of the data samples. The analysis is extended to samples of B{sup +} and B{sup 0} mesons, which are further used for algorithm calibration and method validation. The B meson lifetimes are extracted. The measurement of the B{sup 0} oscillation frequency yields {Delta}m{sub d} = 0.522 {+-} 0.017 ps{sup -1}. The search for B{sub s} oscillations is performed using an amplitude method based on a frequency scanning procedure. Applying a combination of lepton and jet charge flavor tagging algorithms, with a total tagging power {epsilon}D{sup 2} of 1.6%, to a data sample of 355 pb{sup -1}, a sensitivity of 13.0 ps{sup -1} is achieved. They develop a preliminary same side kaon tagging algorithm, which is found to provide a superior tagging power of about 4.0% for the B{sub s} meson species. A study of the dilution systematic uncertainties is not reported. From its application as is to the B{sub s} samples, the sensitivity is significantly increased to about 18 ps{sup -1} and a hint of a signal is seen at about 17.5 ps{sup -1}. They demonstrate that the extension of the analysis to the increasing data samples with the inclusion of the same side tagging algorithm is capable of providing …
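For reference, the amplitude method referred to here (the standard Moser-Roussarie formulation, not quoted from the thesis) fits, at each test frequency \(\omega\), an amplitude \(\mathcal{A}\) in the tagged proper-time distribution,

\[
P_{\pm}(t) \propto \Gamma e^{-\Gamma t}\left[1 \pm \mathcal{A}\,\mathcal{D}\cos(\omega t)\right],
\]

where \(\mathcal{D}\) is the tagging dilution; \(\mathcal{A}\) is consistent with 1 at the true \(\Delta m_s\) and with 0 elsewhere, and the sensitivity is the highest frequency at which \(\mathcal{A} = 1\) could still be excluded at 95% confidence.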
Analysis of Gd5(Si2Ge2) Microstructure and Phase Transition
With the recent discovery of the giant magnetocaloric effect and the beginning of extensive research on the properties of Gd{sub 5}(Si{sub x}Ge{sub 1-x}){sub 4}, a necessity has developed for a better understanding of the microstructure and crystal structure of this family of rare earth compounds with startling phenomenological properties. The aim of this research is to characterize the microstructure of Gd{sub 5}(Si{sub x}Ge{sub 1-x}){sub 4} with x {approx_equal} 2 and its phase change by using both transmission and scanning electron microscopes. A brief history of past work on Gd{sub 5}(Si{sub x}Ge{sub 1-x}){sub 4} is necessary to understand this research in its proper context.
Analysis of Hypothetical Promoter Domains of DKFZp564A1164, NPHS1 and HSPOX1 Genes
For this study, a high-throughput method for identifying and testing regulatory elements was examined. In addition, the validity of promoters predicted by FirstEF was tested. It was found that by combining computer-based promoter and first exon predictions from FirstEF (Davuluri et al., 2001) with PCR-based cloning to generate luciferase reporter constructs, and by testing reporter activity in cultured mammalian cells plated in a 96-well format, one could identify promoter activity in a relatively high-throughput manner. The data generated in this study suggest that FirstEF predictions are sometimes incorrect. Therefore, having a strategy for defining which FirstEF-predicted promoters to test first may accelerate the process. Promoters predicted at a confirmed transcription start site for a gene, at a possible alternate transcription start site, or in a region of conserved sequence would be the best initial candidates, while promoters predicted in gene desert regions may not be as easy to confirm. The luciferase assay lent itself very well to the high-throughput search; however, the subcloning did not always go smoothly. The numerous steps that this traditional subcloning method requires were time consuming and increased the opportunities for errors. A faster method that skips many of the traditional subcloning steps, such as the Creator{trademark} system by Clontech, is currently being investigated by our lab. The development and testing of substantially larger enhancer/silencer regulatory elements may not be possible at this time using these high-throughput methods. These regulatory elements are generally GC-rich, making them more difficult to amplify by PCR and to subclone. Additionally, confirming upstream untranslated first exons was not possible within this time scale using the SMART RACE protocol. It will be necessary to further explore the limitations within these procedures in order to confirm these and future regulatory elements. Alterations and modifications to these …
Analysis of Protein-RNA and Protein-Peptide Interactions in Equine Infectious Anemia
Macromolecular interactions are essential for virtually all cellular functions, including signal transduction processes, metabolic processes, regulation of gene expression, and immune responses. This dissertation focuses on the characterization of two important macromolecular interactions involved in the relationship between Equine Infectious Anemia Virus (EIAV) and its host cell in the horse: (1) the interaction between the EIAV Rev protein and its binding site, the Rev-responsive element (RRE), and (2) interactions between equine MHC class I molecules and epitope peptides derived from EIAV proteins. EIAV, one of the most divergent members of the lentivirus family, has a single-stranded RNA genome and carries several regulatory and structural proteins within its viral particle. Rev is an essential EIAV-encoded regulatory protein that interacts with the viral RRE, a specific binding site in the viral mRNA. Using a combination of experimental and computational methods, the interactions between EIAV Rev and the RRE were characterized in detail. EIAV Rev was shown to have a bipartite RNA binding domain containing two arginine-rich motifs (ARMs). The RRE secondary structure was determined, and specific structural motifs that act as cis-regulatory elements for the EIAV Rev-RRE interaction were identified. Interestingly, a structural motif located in the high affinity Rev binding site is well conserved in several diverse lentiviral genomes, including HIV-1. Macromolecular interactions involved in the immune response of the horse to EIAV infection were investigated by analyzing complexes between MHC class I proteins and epitope peptides derived from the EIAV Rev, Env, and Gag proteins. Computational modeling results provided a mechanistic explanation for the experimental finding that a single amino acid change in the peptide binding domain of the equine MHC class I molecule differentially affects the recognition of specific epitopes by EIAV-specific CTL. Together, the findings in this dissertation provide novel insights into the strategy used by EIAV to replicate itself, …
Analysis of the charmed semileptonic decay D+ ---> rho0 mu+ nu
The search for the fundamental constituents of matter has been pursued and studied since the dawn of civilization. As early as the fourth century BCE, Democritus, expanding the teachings of Leucippus, proposed small, indivisible entities called atoms, interacting with each other to form the Universe. Democritus was convinced of this by observing the environment around him. He observed, for example, how a collection of tiny grains of sand can make up smooth beaches. Today, following the lead set by Democritus more than 2500 years ago, at the heart of particle physics is the hypothesis that everything we can observe in the Universe is made of a small number of fundamental particles interacting with each other. In contrast to Democritus, for the last hundred years we have been able to perform experiments that probe deeper and deeper into matter in the search for the fundamental particles of nature. Today's knowledge is encapsulated in the Standard Model of particle physics, a model describing the fundamental particles and their interactions. It is within this model that the work in this thesis is presented. This work attempts to add to the understanding of the Standard Model by measuring the relative branching fraction of the charmed semileptonic decay D{sup +} {yields} {rho}{sup 0}{mu}{sup +}{nu} with respect to D{sup +} {yields} {bar K}*{sup 0} {mu}{sup +}{nu}. Many theoretical models that describe hadronic interactions predict the value of this relative branching fraction, but only a handful of experiments have been able to measure it with any precision. By making a precise measurement of this relative branching fraction, theorists can distinguish between viable models as well as refine existing ones. In this thesis we present the measurement of the branching fraction ratio of the Cabibbo-suppressed semileptonic decay mode D{sup +} {yields} {rho}{sup 0}{mu}{sup +}{nu} with respect to …
Analysis of the Semileptonic Decay D0 --> anti-K0 pi- mu+ nu
This thesis describes the analysis of the semileptonic decay D{sup 0} {yields} {bar K}{sup 0} {pi}{sup -} {mu}{sup +}{nu} using FOCUS data. FOCUS is a fixed target experiment at Fermilab that studies the physics of the charm quark. Particles containing charm are produced by photon-gluon fusion from the collision of a photon beam on a BeO target. The experiment is characterized by excellent vertex resolution and particle identification. The spectrometer consists of three systems for track reconstruction (two silicon systems and one multiwire proportional chamber system) and two magnets of opposite polarity. The polarity of the magnets is such that the e{sup +}e{sup -} pairs produced in the target (which constitute the main background) travel through a central opening in the detectors without interacting. Particle momentum is measured from the deflection angle in the magnets. Three multicell Cerenkov counters are used for charged particle identification (for e, {pi}, K, and p). Two different tracking systems located after several interaction lengths of shielding material are used for muon identification. The energy of neutral pions and electrons is measured in two electromagnetic calorimeters, while a hadron calorimeter is used for measuring the neutron energy. During the last four years the FOCUS collaboration has provided results on several charm topics: CP violation, D{sup 0}-{bar D}{sup 0} mixing, rare and forbidden decays, precision measurements of semileptonic decays, baryon and meson lifetimes, fully hadronic baryon and meson branching ratios, charm spectroscopy, Dalitz analyses of resonant structures, charm anti-charm production, QCD studies involving double charm particles, and pentaquarks. Semileptonic decays, besides having a clear signature for experiments, provide crucial information for theoretical studies. These decays carry information on the weak coupling of quarks, since they can be used for measuring Cabibbo-Kobayashi-Maskawa matrix elements. Although the decay occurs through the weak interaction, QCD effects due to quark …
An Analytic Tool to Investigate the Effect of Binder on the Sensitivity of HMX-Based Plastic Bonded Explosives in the Skid Test
This project will develop an analytical tool to calculate the performance of HMX-based PBXs in the skid test. The skid test is used as a means to measure sensitivity for large charges in handling situations. Each series of skid tests requires dozens of drops of large billets. It is proposed that the reaction (or lack of one) of PBXs in the skid test is governed by the mechanical properties of the binder. If true, one might be able to develop an analytical tool to estimate skid test behavior for new PBX formulations. Others over the past 50 years have tried to develop similar models. This project will research and summarize the works of others and couple the work of three of them into an analytical tool that can be run on a PC to calculate the drop height of HMX-based PBXs. Detonation due to dropping a billet is argued to be a dynamic thermal event. To avoid detonation, the heat created due to friction at impact must be conducted into the charge or the target faster than the chemical kinetics can create additional energy. The methodology will involve numerically solving the Frank-Kamenetskii equation in one dimension. The analytical problem needs to be bounded in terms of how much heat is introduced to the billet and for how long. Assuming an inelastic collision with no rebound, the billet will be in contact with the target for a short duration determined by the equations of motion. For the purposes of the calculations, it will be assumed that if a detonation is to occur, it will transpire within that time. The surface temperature will be raised according to the friction created, using the equations of motion of dropping the billet on a rigid surface. The study will connect the works of Charles Anderson, Alan Randolph, Larry …
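For reference, the equation referred to here (the standard reactive heat equation of Frank-Kamenetskii thermal-explosion theory, with zeroth-order Arrhenius kinetics; not quoted from the abstract) reads, in one dimension,

\[
\rho c \frac{\partial T}{\partial t} = k \frac{\partial^2 T}{\partial x^2} + \rho Q A\, e^{-E/RT},
\]

where \(\rho\), \(c\), and \(k\) are the density, heat capacity, and thermal conductivity, \(Q\) is the heat of reaction, \(A\) the rate prefactor, and \(E\) the activation energy; a drop leads to detonation when the Arrhenius source term outruns conduction during the contact time.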
Analytical Chemistry at the Interface Between Materials Science and Biology
No Description Available.
Angular correlations in beauty production at the Tevatron at sqrt(s) = 1.96 TeV
Measurements of the b quark production cross section at the Tevatron and at HERA in the final decades of the 20th century consistently yielded higher values than predicted by Next-to-Leading Order (NLO) QCD. This discrepancy has led to large efforts by theorists to improve theoretical calculations of the cross sections and simulations of b quark production. As a result, the difference between theory and experiment has been much reduced. New measurements are needed to test the developments in the calculations and in event simulation. In this thesis, a measurement of angular correlations between b jets produced in the same event is presented. The angular separation between two b jets is directly sensitive to higher order contributions. In addition, the measurement does not depend strongly on fragmentation models or on the experimental luminosity and efficiency, which lead to large uncertainties in measurements of the inclusive cross section. At the Tevatron, b{bar b} quark pairs are predominantly produced through the strong interaction. In leading order QCD, the b quarks are produced back to back in phase space. Next-to-leading order contributions involving a third particle in the final state allow production of b pairs that are very close together in phase space. The leading order and NLO contributions can be separated into three different processes: flavour creation, gluon splitting, and flavour excitation. While the separation based on Feynman diagrams is ambiguous and the three processes are not each separately gauge invariant in NLO QCD, the distinction can be made explicitly in terms of event generators using LO matrix elements. Direct production of a b{bar b} quark pair in the hard scatter interaction is known as flavour creation. The quarks emerge nearly back to back in azimuth. In gluon splitting processes, a gluon is produced in the hard scatter interaction. The …
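The correlation variable implied by this discussion (standard definition, assumed here) is the azimuthal opening angle between the two b jets,

\[
\Delta\phi = \min\bigl(|\phi_{b_1} - \phi_{b_2}|,\; 2\pi - |\phi_{b_1} - \phi_{b_2}|\bigr) \in [0,\pi],
\]

with flavour creation peaking near \(\Delta\phi = \pi\) and gluon splitting populating small \(\Delta\phi\).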
Angular-momentum-dominated electron beams and flat-beam generation
In the absence of external forces, if the dynamics within an electron beam is dominated by its angular momentum rather than by other effects such as random thermal motion or the self Coulomb-repulsive force (i.e., the space-charge force), the beam is said to be angular-momentum-dominated. Such a beam can be applied directly to the field of electron cooling of heavy ions, or it can be manipulated into an electron beam with a large transverse emittance ratio, i.e., a flat beam. A flat beam is of interest for high-energy electron-positron colliders and accelerator-based light sources. An angular-momentum-dominated beam is generated at the Fermilab/NICADD Photoinjector Laboratory (FNPL) and is accelerated to an energy of 16 MeV. The properties of such a beam are investigated systematically in experiment. The experimental results are in very good agreement with analytical expectations and simulation results. This lays a good foundation for the transformation of an angular-momentum-dominated beam into a flat beam. The round-to-flat beam transformer is composed of three skew quadrupoles. Based on a good knowledge of the angular-momentum-dominated beam, the quadrupoles are set to the proper strengths in order to apply a total torque which removes the angular momentum, resulting in a flat beam. For a bunch charge around 0.5 nC, an emittance ratio of 100 {+-} 5 was measured, with the smaller normalized root-mean-square emittance around 0.4 mm-mrad. Effects limiting the flat-beam emittance ratio are investigated, such as chromatic effects in the round-to-flat beam transformer, asymmetry in the initial angular-momentum-dominated beam, and space-charge effects. The most important limiting factor turns out to be the uncorrelated emittance growth caused by space charge when the beam energy is low, for example, in the rf gun area. As a result of such emittance growth prior to the round-to-flat beam transformer, the emittance ratio achievable in simulation decreases from orders of thousands to …
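A compact way to state the transformation (a standard result for round-to-flat transforms of magnetized beams, assumed here rather than quoted from the thesis) is that the skew-quadrupole channel maps the uncorrelated emittance \(\varepsilon_u\) and the angular-momentum contribution \(\mathcal{L}\) into the flat-beam emittances

\[
\varepsilon_{\pm} = \sqrt{\varepsilon_u^2 + \mathcal{L}^2} \pm \mathcal{L}, \qquad \varepsilon_+\varepsilon_- = \varepsilon_u^2,
\]

so the achievable ratio \(\varepsilon_+/\varepsilon_-\) scales like \((2\mathcal{L}/\varepsilon_u)^2\) for \(\mathcal{L} \gg \varepsilon_u\), which is why uncorrelated emittance growth from space charge at low energy is the dominant limit.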
Anisotropy in CdSe quantum rods
The size-dependent optical and electronic properties of semiconductor nanocrystals have drawn much attention in the past decade and are very well understood for spherical particles. The advent of synthetic methods to make rod-like CdSe nanocrystals with the wurtzite structure has offered a new opportunity to study these properties as functions of shape. This dissertation includes three main parts: the synthesis of CdSe nanorods with tightly controlled widths and lengths, their optical and dielectric properties, and their large-scale assembly, all of which are either directly or indirectly caused by the uniaxial crystallographic structure of wurtzite CdSe. The hexagonal wurtzite structure is believed to be the primary reason for the growth of CdSe nanorods. It manifests itself in the kinetic stabilization of rod-like particles over spherical ones in the presence of phosphonic acids. By varying the composition of the surfactant mixture used for synthesis, we have achieved tight control of the widths and lengths of the nanorods. The synthesis of monodisperse CdSe nanorods enables us to systematically study their size-dependent properties. For example, room temperature single particle fluorescence spectroscopy has shown that nanorods emit linearly polarized photoluminescence. Theoretical calculations have shown that this is due to the crossing between the two highest occupied electronic levels with increasing aspect ratio. We also measured the permanent electric dipole moment of the nanorods with the transient electric birefringence technique. Experimental results on nanorods of different sizes show that the dipole moment is linear in the particle volume, indicating that it originates from the non-centrosymmetric hexagonal lattice. The elongation of the nanocrystals also results in anisotropic inter-particle interactions. One consequence is the formation of liquid crystalline phases when the nanorods are dispersed in a solvent at high enough concentration. The preparation of stable liquid crystalline solutions of CdSe nanorods …
t anti-t production cross section measurement using soft electron tagging in p anti-p collisions at s**(1/2) = 1.96 TeV
We measure the production cross section of t{bar t} events in p{bar p} collisions at {radical}s = 1.96 TeV. The data were collected by the CDF experiment in Run 2 of the Tevatron accelerator at the Fermi National Accelerator Laboratory between 2002 and 2007; 1.7 fb{sup -1} of data was recorded during this period. We reconstruct t{bar t} events in the lepton+jets channel, in which one W boson - resulting from the decay of the top quark pair - decays leptonically and the other hadronically. The dominant background to this process is the production of W bosons in association with multiple jets. To distinguish t{bar t} from background, we identify soft electrons from the semileptonic decay of heavy flavor jets produced in t{bar t} events. We measure a cross section of {sigma}{sub t{bar t}} = 7.8 {+-} 2.4(stat) {+-} 1.6(syst) {+-} 0.5(lumi) pb.
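Schematically (the standard counting-experiment relation, not quoted from the thesis), the cross section follows from

\[
\sigma_{t\bar t} = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\epsilon \int \mathcal{L}\, dt},
\]

where \(\epsilon\) is the total acceptance times efficiency (including the soft-electron tag) and \(\int \mathcal{L}\, dt = 1.7\ \mathrm{fb}^{-1}\); the three quoted uncertainties correspond to the event-count statistics, the systematics on \(\epsilon\) and \(N_{\mathrm{bkg}}\), and the luminosity measurement.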
Antiproton Structure Function in P-Pbar Diffractive Interactions at Sqrt(s) = 1.96 TeV
No Description Available.
Application of optimal prediction to molecular dynamics
Optimal prediction is a general system reduction technique for large sets of differential equations. In this method, which was devised by Chorin, Hald, Kast, Kupferman, and Levy, a projection operator formalism is used to construct a smaller system of equations governing the dynamics of a subset of the original degrees of freedom. This reduced system consists of an effective Hamiltonian dynamics, augmented by an integral memory term and a random noise term. Molecular dynamics is a method for simulating large systems of interacting fluid particles. In this thesis, I construct a formalism for applying optimal prediction to molecular dynamics, producing reduced systems from which the properties of the original system can be recovered. These reduced systems require significantly less computational time than the original system. I initially consider first-order optimal prediction, in which the memory and noise terms are neglected. I construct a pair approximation to the renormalized potential, and ignore three-particle and higher interactions. This produces a reduced system that correctly reproduces static properties of the original system, such as energy and pressure, at low-to-moderate densities. However, it fails to capture dynamical quantities, such as autocorrelation functions. I next derive a short-memory approximation, in which the memory term is represented as a linear frictional force with configuration-dependent coefficients. This allows the use of a Fokker-Planck equation to show that, in this regime, the noise is {delta}-correlated in time. This linear friction model reproduces not only the static properties of the original system, but also the autocorrelation functions of dynamical variables.
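Schematically (the standard Mori-Zwanzig form on which optimal prediction is built; notation assumed here), the resolved variables \(\hat{x}\) evolve as

\[
\frac{d\hat{x}}{dt} = \bar{R}(\hat{x}) + \int_0^t K\bigl(\hat{x}(t-s), s\bigr)\, ds + F(t),
\]

where \(\bar{R}\) is the conditionally averaged right-hand side (the effective Hamiltonian dynamics), \(K\) is the memory kernel, and \(F\) is the noise. First-order optimal prediction keeps only \(\bar{R}\); the short-memory approximation replaces the integral with a linear friction term, which is what makes the Fokker-Planck argument for {delta}-correlated noise available.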
The Application of Single Particle Aerosol Mass Spectrometry for the Detection and Identification of High Explosives and Chemical Warfare Agents
Single Particle Aerosol Mass Spectrometry (SPAMS) was evaluated as a real-time detection technique for single particles of high explosives. Dual-polarity time-of-flight mass spectra were obtained for samples of 2,4,6-trinitrotoluene (TNT), 1,3,5-trinitro-1,3,5-triazinane (RDX), and pentaerythritol tetranitrate (PETN); peaks indicative of each compound were identified. The composite explosives Comp B, Semtex 1A, and Semtex 1H were also analyzed, and peaks due to the explosive components of each sample were present in each spectrum. Mass spectral variability with laser fluence is discussed. The ability of the SPAMS system to identify explosive components in a single complex explosive particle ({approx}1 pg) without the need for consumables is demonstrated. SPAMS was also applied to the detection of Chemical Warfare Agent (CWA) simulants in the liquid and vapor phases. Liquid simulants for sarin, cyclosarin, tabun, and VX were analyzed; peaks indicative of each simulant were identified. Vapor phase CWA simulants were adsorbed onto alumina, silica, Zeolite, activated carbon, and metal powders, which were directly analyzed using SPAMS. The use of metal powders as adsorbent materials was especially useful in the analysis of triethyl phosphate (TEP), a VX simulant, which was undetectable using SPAMS in the liquid phase. The capability of SPAMS to detect high explosives and CWA simulants using one set of operational conditions is established.
Application of the Scenario Planning Process - a Case Study: The Technical Information Department at the Lawrence Livermore National Laboratory
When the field of modern publishing was on a collision course with telecommunications, publishing organizations had to come up to speed in fields that were, heretofore, completely foreign and technologically forbidding to them. For generations, the technology of publishing centered on offset lithography, typesetting, and photography--fields that saw evolutionary and incremental change from the time of Gutenberg. But publishing now includes making information available over the World Wide Web--Internet publishing--with its ever-accelerating rate of technological change and dependence on computers and networks. Clearly, we need a methodology to help anyone in the field of Internet publishing plan for the future, and there is a well-known, well-tested technique for just this purpose--Scenario Planning. Scenario Planning is an excellent tool to help organizations make better decisions in the present, based on what they identify as possible and plausible scenarios of the future. Never was decision making more difficult or more crucial than during the years of this study, 1996-1999. This thesis takes the position that, by applying Scenario Planning, the Technical Information Department at LLNL, a large government laboratory (and organizations similar to it), could be confident that moving into the telecommunications business of Internet publishing stood a very good chance of success.
Applications of a single-molecule detection in early disease diagnosis and enzymatic reaction study
Various single-molecule techniques were utilized for ultra-sensitive early diagnosis of viral DNA and antigens and for basic mechanistic study of enzymatic reactions. DNA of human papillomavirus (HPV) served as the screening target in a flow system. Alexa Fluor 532 (AF532)-labeled single-stranded DNA probes were hybridized to the target HPV-16 DNA in solution. The individual hybridized molecules were imaged with an intensified charge-coupled device (ICCD) in two ways. In the single-color mode, target molecules were detected via fluorescence from hybridized probes only. This system could detect HPV-16 DNA in the presence of human genomic DNA down to 0.7 copy/cell and had a linear dynamic range of over 6 orders of magnitude. In the dual-color mode, fluorescence resonance energy transfer (FRET) was employed to achieve a zero false-positive count. We also showed that DNA extracts from Pap test specimens did not interfere with the system. A surface-based method was used to improve the throughput of the flow system. HPV-16 DNA was hybridized to probes on a glass surface and detected with a total internal reflection fluorescence (TIRF) microscope. In the single-probe mode, the whole genome and target DNA were fluorescently labeled before hybridization, and the detection limit is similar to that of the flow system. In the dual-probe mode, a second probe was introduced. The linear dynamic range covers 1.44-7000 copies/cell, which is typical of early infection to near-cancer stages. The dual-probe method was tested with a crudely prepared sample. Even with reduced hybridization efficiency caused by the interference of cellular materials, we were still able to differentiate infected cells from healthy cells. Detection and quantification of viral antigen with a novel single-molecule immunosorbent assay (SMISA) was achieved. Antigen from human immunodeficiency virus type 1 (HIV-1) was chosen as the target in this study. The target was sandwiched between a monoclonal capture antibody and …
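For background (the standard Förster relation, not stated in the abstract), the FRET efficiency falls off steeply with donor-acceptor separation \(r\),

\[
E = \frac{R_0^6}{R_0^6 + r^6},
\]

with a Förster radius \(R_0\) of only a few nanometers; energy transfer therefore occurs only when both probes are hybridized to the same target molecule, which is what drives the false-positive count to zero in the dual-color mode.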
Aspects of the SrO-CuO-TiO2 Ternary System Related to the Deposition of SrTiO3 and Copper-Doped SrTiO3 Thin-Film Buffer Layers
YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} (YBCO) coated conductors are promising materials for large-scale superconductivity applications. One version of a YBCO coated conductor is based on ion beam assisted deposition (IBAD) of magnesium oxide (MgO) onto polycrystalline metal substrates. SrTiO{sub 3} (STO) is often deposited by physical vapor deposition (PVD) methods as a buffer layer between the YBCO and IBAD MgO due to its chemical stability and lattice mismatch of only {approx}1.5% with YBCO. In this work, some aspects of the stability of STO with respect to copper (Cu), and chemical solution deposition (CSD) of STO on IBAD MgO templates, were examined. Solubility limits of Cu in STO were established by processing Cu-doped STO powders with conventional bulk preparation techniques. The maximum solubility of Cu in STO was {approx}1%, as determined by transmission electron microscopy (TEM) and Rietveld refinements of x-ray diffraction (XRD) data. XRD analysis, performed in collaboration with NIST on powder compositions along the STO/SrCuO{sub 2} tie line, did not identify any ternary phases. SrCu{sub 0.10}Ti{sub 0.90}O{sub y} buffer layers were prepared by pulsed laser deposition (PLD) and CSD on IBAD MgO flexible metallic textured tapes. TEM analysis of a {approx}100 nm thick SrCu{sub 0.10}Ti{sub 0.90}O{sub y} buffer layer deposited by PLD showed a smooth Cu-doped STO/MgO interface. A {approx}600 nm thick YBCO film, deposited onto the SrCu{sub 0.10}Ti{sub 0.90}O{sub y} buffer by PLD, exhibited a T{sub c} of 87 K and a critical current density (J{sub c}) of {approx}1 MA/cm{sup 2}. The STO and Cu-doped STO thin films made by CSD were {approx}30 nm thick. The in-plane alignment (FWHM) after deposition of the STO improved by {approx}1{sup o}, while it degraded by {approx}2{sup o} with the SrCu{sub 0.05}TiO{sub y} buffer. YBCO was deposited by PLD on the STO and SrCu{sub 0.05}TiO{sub y} buffers. The in-plane alignment (FWHM) of the YBCO with …
Aspherical supernovae
Although we know that many supernovae are aspherical, the exact nature of their geometry is undetermined. Because all the supernovae we observe are too distant to be resolved, the ejecta structure can't be directly imaged, and asymmetry must be inferred from signatures in the spectral features and polarization of the supernova light. The empirical interpretation of this data, however, is rather limited--to learn more about the detailed supernova geometry, theoretical modeling must be undertaken. One expects the geometry to be closely tied to the explosion mechanism and the progenitor star system, both of which are still under debate. Studying the 3-dimensional structure of supernovae should therefore provide new breakthroughs in our understanding. The goal of this thesis is to advance new techniques for calculating radiative transfer in 3-dimensional expanding atmospheres, and to use them to study the flux and polarization signatures of aspherical supernovae. We develop a 3-D Monte Carlo transfer code and use it to directly fit recent spectropolarimetric observations, as well as to calculate the observable properties of detailed multi-dimensional hydrodynamical explosion simulations. While previous theoretical efforts have been restricted to ellipsoidal models, we study several more complicated configurations that are tied to specific physical scenarios. We explore clumpy and toroidal geometries in fitting the spectropolarimetry of the Type Ia supernova SN 2001el. We then calculate the observable consequences of a supernova that has been rendered asymmetric by crashing into a nearby companion star. Finally, we fit the spectrum of a peculiar and extraordinarily luminous Type Ic supernova. The results are brought to bear on three broader astrophysical questions: (1) What are the progenitors and the explosion processes of Type Ia supernovae? (2) What effect does asymmetry have on the observational diversity of Type Ia supernovae, and hence their use in cosmology? (3) And what are some of …
ATCOM: Automatically Tuned Collective Communication System for SMP Clusters
Conventional implementations of collective communications are based on point-to-point communications, and their optimizations have focused on the efficiency of those communication algorithms. However, point-to-point communications are not the optimal choice for modern computing clusters of SMPs due to their two-level communication structure. In recent years, a few research efforts have investigated efficient collective communications for SMP clusters. This dissertation is focused on platform-independent algorithms and implementations in this area. There are two main approaches to implementing efficient collective communications for clusters of SMPs: using shared memory operations for intra-node communications, and overlapping inter-node/intra-node communications. The former fully utilizes the hardware-based shared memory of an SMP, and the latter takes advantage of the inherent hierarchy of the communications within a cluster of SMPs. Previous studies focused on clusters of SMPs from certain vendors, and the methods they proposed are not portable to other systems. Because the performance optimization issue is very complicated and the development process is very time consuming, self-tuning, platform-independent implementations are highly desirable. As proven in this dissertation, such an implementation can significantly outperform other point-to-point based portable implementations and some platform-specific implementations. The dissertation describes in detail the architecture of the platform-independent implementation. There are four system components: shared memory-based collective communications, overlapping mechanisms for inter-node and intra-node communications, a prediction-based tuning module, and a micro-benchmark based tuning module. Each component is carefully designed with the goal of automatic tuning in mind.
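As an illustration of the two-level approach described above - inter-node communication among one leader per node, intra-node communication within each SMP - the following sketch implements a hierarchical broadcast. It is written against the modern MPI-3 API (MPI_Comm_split_type) purely for illustration and is not ATCOM's own implementation; it assumes the root of the broadcast is world rank 0.

```c
#include <mpi.h>

/* Two-level broadcast for a cluster of SMPs: broadcast among node leaders
 * first (inter-node), then within each node (intra-node), where the MPI
 * library can use shared memory. Assumes the root is world rank 0. */
void two_level_bcast(void *buf, int count, MPI_Datatype type, MPI_Comm world)
{
    int world_rank, node_rank;
    MPI_Comm node_comm, leader_comm;

    MPI_Comm_rank(world, &world_rank);

    /* Group ranks that share a node; intra-node traffic inside this
     * communicator can go through shared memory. */
    MPI_Comm_split_type(world, MPI_COMM_TYPE_SHARED, world_rank,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* One leader per node (local rank 0) joins the inter-node communicator;
     * all other ranks pass MPI_UNDEFINED and get MPI_COMM_NULL. */
    MPI_Comm_split(world, node_rank == 0 ? 0 : MPI_UNDEFINED,
                   world_rank, &leader_comm);

    /* Step 1: inter-node broadcast among the leaders only. */
    if (leader_comm != MPI_COMM_NULL) {
        MPI_Bcast(buf, count, type, 0, leader_comm);
        MPI_Comm_free(&leader_comm);
    }

    /* Step 2: each leader re-broadcasts within its own node. */
    MPI_Bcast(buf, count, type, 0, node_comm);
    MPI_Comm_free(&node_comm);
}
```

In a production version the two communicators would be created once and cached, and the overlapping and tuning modules described above would choose among several such schedules based on measured message-size thresholds.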
Atmospheric electron neutrinos in the MINOS far detector
Neutrinos produced as a result of cosmic-ray interactions in the earth's atmosphere offer a powerful probe into the nature of this three-membered family of low-mass, weakly-interacting particles. Ten years ago, the Super-Kamiokande Experiment confirmed earlier indications that neutrinos undergo lepton-flavor oscillations during propagation, proving that they are massive, contrary to previous Standard Model assumptions. The Soudan Underground Laboratory, located in northern Minnesota, was host to the Soudan 2 Experiment, which made important contributions to atmospheric neutrino research. This same lab has more recently been host to the MINOS far detector, a neutrino detector which serves as the downstream element of an accelerator-based long-baseline neutrino-oscillation experiment. This thesis has examined 418.5 live days of atmospheric neutrino data (a fiducial exposure of 4.18 kton-years) collected in the MINOS far detector prior to the activation of the NuMI neutrino beam, with a specific emphasis on the investigation of electron-type neutrino interactions. Atmospheric neutrino interaction candidates have been selected and separated into showering or track-like events. The showering sample consists of 89 observed events, while the track-like sample consists of 112 observed events. Based on the Bartol atmospheric neutrino flux model of Barr et al. plus a Monte Carlo (MC) simulation of interactions in the MINOS detector, the expected yields of showering and track-like events in the absence of neutrino oscillations are 88.0 {+-} 1.0 and 149.1 {+-} 1.0 respectively (where the uncertainties reflect only the limited MC statistics). Major systematic uncertainties, especially those associated with the flux model, are cancelled by forming a double ratio of these observed and expected yields: R{sup data}{sub trk/shw}/R{sup MC}{sub trk/shw} = 0.74{sup +0.12}{sub -0.10}(stat.) {+-} 0.04 (syst.). This double ratio should be equal to unity in the absence of oscillations, and the value above disfavors null oscillation with 96.0% confidence. In addition, the showering sample can …
An atmospheric muon neutrino disappearance measurement with the MINOS far detector
It is now widely accepted that the Standard Model assumption of massless neutrinos is wrong, due primarily to the observation of solar and atmospheric neutrino flavor oscillations by a small number of convincing experiments. The MINOS Far Detector, capable of observing both the outgoing lepton and associated showering products of a neutrino interaction, provides an excellent opportunity to independently search for an oscillation signature in atmospheric neutrinos. To this end, a MINOS data set from an 883 live day, 13.1 kt-yr exposure collected between July, 2003 and April, 2007 has been analyzed. 105 candidate charged current muon neutrino interactions were observed, with 120.5 {+-} 1.3 (statistical error only) expected in the absence of oscillation. A maximum likelihood analysis of the observed log(L/E) spectrum shows that the null oscillation hypothesis is excluded at over 96% confidence and that the best fit oscillation parameters are sin{sup 2} 2{theta}{sub 23} = 0.95{sub -0.32} and {Delta}m{sub 23}{sup 2} = 0.93{sub -0.44}{sup +3.94} x 10{sup -3} eV{sup 2}. This measurement of oscillation parameters is consistent with the best fit values from the Super-Kamiokande experiment at 68% confidence.
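For reference, the two-flavor survival probability underlying these fits (standard form, not quoted from the thesis) is

\[
P(\nu_\mu \to \nu_\mu) = 1 - \sin^2 2\theta_{23}\, \sin^2\!\left(\frac{1.27\,\Delta m^2_{23}\,[\mathrm{eV}^2]\, L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\]

which is why these atmospheric analyses work in log(L/E): the oscillatory suppression appears at a characteristic L/E fixed by \(\Delta m^2\), with depth set by \(\sin^2 2\theta\).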
Atmospheric Neutrino Induced Muons in the MINOS Far Detector
The Main Injector Neutrino Oscillation Search (MINOS) is a long baseline neutrino oscillation experiment. The MINOS Far Detector, located in the Soudan Underground Laboratory in Soudan, MN, has been collecting data since August 2003. The scope of this dissertation involves identifying the atmospheric neutrino induced muons that are created by neutrinos interacting with the rock surrounding the detector cavern, performing a neutrino oscillation search by measuring the oscillation parameter values of {Delta}m{sub 23}{sup 2} and sin{sup 2} 2{theta}{sub 23}, and searching for CPT violation by measuring the charge ratio for the atmospheric neutrino induced muons. A series of selection cuts is applied to the data set in order to extract the neutrino induced muons. As a result, a total of 148 candidate events are selected. The oscillation search is performed by measuring the low to high muon momentum ratio in the data sample and comparing it to the same ratio in the Monte Carlo simulation in the absence of neutrino oscillation. The measured double ratios for the 'all events' (A) and high resolution (HR) samples are R{sub A} = R{sub low/high}{sup data}/R{sub low/high}{sup MC} = 0.60{sub -0.10}{sup +0.11}(stat) {+-} 0.08(syst) and R{sub HR} = R{sub low/high}{sup data}/R{sub low/high}{sup MC} = 0.58{sub -0.11}{sup +0.14}(stat) {+-} 0.05(syst), respectively. Both event samples show a significant deviation from unity, giving a strong indication of neutrino oscillation. A combined momentum and zenith angle oscillation fit is performed using the method of maximum log-likelihood with a grid search in the parameter space of {Delta}m{sup 2} and sin{sup 2} 2{theta}. The best fit point for both event samples occurs at {Delta}m{sub 23}{sup 2} = 1.3 x 10{sup -3} eV{sup 2} and sin{sup 2} 2{theta}{sub 23} = 1. This result is compatible with previous measurements from the Super-Kamiokande and Soudan 2 experiments. The MINOS Far …
Atmospheric neutrino observations in the MINOS far detector
This thesis presents the results of atmospheric neutrino observations from a 12.23 kt-yr exposure of the 5.42 kt MINOS Far Detector between 1 August 2003 and 1 March 2006. The separation of atmospheric neutrino events from the large background of cosmic muon events is discussed. A total of 277 candidate contained-vertex ν_μ/ν̄_μ CC data events are observed, with an expectation of 354.4 ± 47.4 events in the absence of neutrino oscillations. A total of 182 events have clearly identified directions: 77 are identified as upward-going and 105 as downward-going. The ratio between the measured and expected up/down ratio is R^data_u/d / R^MC_u/d = 0.72^{+0.13}_{-0.11} (stat.) ± 0.04 (syst.), which is 2.1σ away from the expectation for no oscillations. A total of 167 data events have clearly identified charge: 112 are identified as ν_μ events and 55 as ν̄_μ events. This is the largest sample of charge-separated contained-vertex atmospheric neutrino interactions observed to date. The ratio between the measured and expected ν̄_μ/ν_μ ratio is R^data_ν̄ν / R^MC_ν̄ν = 0.93^{+0.19}_{-0.15} (stat.) ± 0.12 (syst.), consistent with ν_μ and ν̄_μ having the same oscillation parameters. Bayesian methods were used to generate a log(L/E) value for each event, and a maximum likelihood analysis is used to determine the allowed regions for the oscillation parameters Δm^2_32 and sin^2 2θ_23. The likelihood function uses the uncertainty in log(L/E) to bin events in order to extract as much information from the data as possible. This fit rejects the null oscillation hypothesis at the 98% confidence level. A fit to independent ν_μ and ν̄_μ oscillation assuming maximal mixing for both is also …
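For counting ratios such as the up/down ratio above, the statistical uncertainty follows from first-order Poisson error propagation, σ_R/R = sqrt(1/N_num + 1/N_den). A minimal sketch using the counts from the abstract (systematic effects and the MC normalization are ignored, so this reproduces only the raw data ratio, not the quoted double ratio):

```python
import math

def ratio_with_error(n_num, n_den):
    """Ratio of two Poisson counts with first-order error propagation."""
    r = n_num / n_den
    return r, r * math.sqrt(1.0 / n_num + 1.0 / n_den)

r_ud, err = ratio_with_error(77, 105)   # upward-going / downward-going
print(f"up/down = {r_ud:.2f} +- {err:.2f}")   # 0.73 +- 0.11
```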
Atmospheric Neutrinos in the MINOS Far Detector
The phenomenon of flavour oscillations of neutrinos created in the atmosphere was first reported by the Super-Kamiokande collaboration in 1998 and has since been confirmed by Soudan 2 and MACRO. The MINOS Far Detector is the first magnetized neutrino detector able to study atmospheric neutrino oscillations. Although it was designed to detect neutrinos from the NuMI beam, it provides a unique opportunity to measure the oscillation parameters for neutrinos and anti-neutrinos independently. The MINOS Far Detector was completed in August 2003 and has since collected 2.52 kton-years of atmospheric data. Atmospheric neutrino interactions contained within the volume of the detector are separated from the dominant background of cosmic-ray muons. Thirty-seven events are selected, with an estimated background contamination of less than 10%. Using the detector's magnetic field, 17 neutrino events and 6 anti-neutrino events are identified; 14 events have ambiguous charge. The neutrino oscillation parameters for ν_μ and ν̄_μ are studied using a maximum likelihood analysis. The measurement does not place constraining limits on the neutrino oscillation parameters, owing to the limited statistics of the data set analysed. However, this thesis represents the first observation of charge-separated atmospheric neutrino interactions. It also details the techniques developed to perform atmospheric neutrino analyses in the MINOS Far Detector.
Attainment of Electron Beam Suitable for Medium Energy Electron Cooling
No Description Available.
Authenticated group Diffie-Hellman key exchange: theory and practice
Authenticated two-party Diffie-Hellman key exchange allows two principals A and B, communicating over a public network and each holding a pair of matching public/private keys, to agree on a session key. Protocols designed to deal with this problem ensure A (resp. B) that no principal other than B (resp. A) can learn any information about this value. These protocols additionally often ensure A and B that their respective partner has actually computed the shared secret value. A natural extension of this cryptographic protocol problem is to consider a pool of principals agreeing on a session key. Over the years several papers have extended two-party Diffie-Hellman key exchange to the multi-party setting, but no formal treatment was carried out until recently. In light of recent developments in the formalization of authenticated two-party Diffie-Hellman key exchange, this thesis lays authenticated group Diffie-Hellman key exchange on firmer foundations.
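As background, the unauthenticated two-party exchange underlying all of these protocols fits in a few lines; a toy sketch with an artificially small group (real deployments use standardized groups of 2048 bits or more, plus the authentication layer discussed above):

```python
import secrets

# Toy public parameters -- for illustration only, not secure.
p = 4294967291          # a small prime (2**32 - 5)
g = 2                   # public base

a = secrets.randbelow(p - 2) + 1     # A's secret exponent
b = secrets.randbelow(p - 2) + 1     # B's secret exponent

msg_from_A = pow(g, a, p)            # A -> B: g^a mod p
msg_from_B = pow(g, b, p)            # B -> A: g^b mod p

key_at_A = pow(msg_from_B, a, p)     # A computes (g^b)^a mod p
key_at_B = pow(msg_from_A, b, p)     # B computes (g^a)^b mod p
assert key_at_A == key_at_B          # both hold the session key g^(ab) mod p
```

An eavesdropper sees only g^a and g^b; authentication is what prevents an active attacker from substituting its own values, which is precisely the problem the protocols above address.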
Automated High Throughput Protein Crystallization Screening at Nanoliter Scale and Protein Structural Study on Lactate Dehydrogenase
The purposes of our research were: (1) to develop an economical, easy-to-use, automated, high-throughput system for large-scale protein crystallization screening; (2) to develop a new protein crystallization method with high screening efficiency, low protein consumption, and complete compatibility with the high-throughput screening system; and (3) to determine the structure of lactate dehydrogenase complexed with NADH by X-ray protein crystallography to study its inherent structural properties. First, we demonstrated that large-scale protein crystallization screening can be performed in a high-throughput manner at low cost and with easy operation. The overall system integrates liquid dispensing, crystallization, and detection, and serves as a complete solution to protein crystallization screening. The system can dispense protein and multiple different precipitants at nanoliter scale and in parallel. A new detection scheme, native fluorescence, has been developed in this system to form a two-detector system with a visible-light detector for detecting protein crystallization screening results. This detection scheme is capable of eliminating common false positives by distinguishing protein crystals from inorganic crystals in a high-throughput and non-destructive manner. The entire system, from liquid dispensing and crystallization to crystal detection, is essentially parallel, high-throughput, and compatible with automation. The system was successfully demonstrated by lysozyme crystallization screening. Second, we developed a new crystallization method with high screening efficiency, low protein consumption, and compatibility with automation and high throughput. In this crystallization method, a gas-permeable membrane is employed to achieve the gentle evaporation required by protein crystallization. Protein consumption is significantly reduced, to nanoliter scale for each condition, which permits exploring more conditions in a phase diagram for a given amount of protein. In addition, the evaporation rate can be controlled or adjusted during the crystallization process to favor either nucleation or growth, optimizing the crystallization process. The protein crystals obtained …
B Flavor Tagging Calibration and Search for B(s) Oscillations in Semileptonic Decays with the CDF Detector at Fermilab
In this thesis we present a search for oscillations of B^0_s mesons using semileptonic B^0_s → D_s^- ℓ^+ ν decays. Data were collected with the upgraded Collider Detector at Fermilab (CDF II) from events produced in collisions of 980 GeV protons and antiprotons accelerated in the Tevatron ring; the total proton-antiproton center-of-mass energy is 1.96 TeV. The Tevatron is currently the world's only source of B^0_s mesons, to be joined by the Large Hadron Collider at CERN after 2007. We establish a lower limit on the B^0_s oscillation frequency Δm_s > 7.7 ps^-1 at 95% confidence level. We also present a multivariate tagging algorithm that identifies semileptonic B → μX decays of the other B mesons in the event. Using this muon tagging algorithm, as well as opposite-side electron and jet-charge tagging algorithms, we infer the B^0_s flavor at production. The tagging algorithms are calibrated using high-statistics samples of B^0 and B^+ semileptonic decays, B^0/+ → Dℓν. The oscillation frequency in semileptonic B^0 → Dℓν decays is measured to be Δm_d = (0.501 ± 0.029 (stat.) ± 0.017 (syst.)) ps^-1.
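The oscillation signal in a tagged analysis of this kind is the time-dependent mixing asymmetry, diluted by imperfect flavor tagging: A(t) = D cos(Δm t), with dilution D = 1 − 2w for mistag probability w. A schematic sketch (the mistag value is illustrative, not CDF's):

```python
import math

def mixing_asymmetry(t_ps, dm_inv_ps, mistag):
    """Tagged mixing asymmetry A(t) = (1 - 2w) * cos(dm * t)."""
    return (1.0 - 2.0 * mistag) * math.cos(dm_inv_ps * t_ps)

dm_d = 0.501   # measured B0 oscillation frequency in ps^-1 (from above)
w = 0.30       # illustrative mistag probability

for t in (0.0, 2.0, 4.0, 6.0):   # proper decay time in ps
    print(f"t = {t:.1f} ps -> A = {mixing_asymmetry(t, dm_d, w):+.3f}")
```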
The b Quark Fragmentation Function, From LEP to TeVatron
The b quark fragmentation distribution has been measured using data registered by the DELPHI experiment at the Z pole in the years 1994-1995. The measurement made use of 176,000 inclusively reconstructed B meson candidates. The errors of this measurement are dominated by systematic effects, the principal ones being related to the energy calibration. The distribution has been established in a nine-bin histogram; its mean value has been found to be ⟨x_E⟩ = 0.704 ± 0.001 (stat.) ± 0.008 (syst.). Using this measurement, and other available analyses of the b-quark fragmentation distribution in e^+e^- collisions, the non-perturbative QCD component of the distribution has been extracted independently of any hadronic physics modeling. This component depends only on the way the perturbative QCD component has been defined. When the perturbative QCD component is taken from a parton shower Monte Carlo, the non-perturbative QCD component is rather similar to those obtained from the Lund or Bowler models. When the perturbative QCD component is the result of an analytic NLL computation, the non-perturbative QCD component has to be extended into a non-physical region and thus cannot be described by any hadronic modeling. In the two examples studied here to characterize these situations, the extracted non-perturbative QCD distribution has the same shape, being simply translated to higher x values in the second approach, illustrating the ability of the analytic perturbative QCD approach to account for softer gluon radiation than a parton shower generator. Using all the available analyses of the b-quark fragmentation distribution in e^+e^- collisions, together with the DELPHI result presented in this thesis, a combined world-average b fragmentation distribution has been obtained; its mean value has been found to be ⟨x_E⟩ = 0.714 ± 0.002. An analysis of the B …
B Quark Tagging and Cross-Section Measurement in Quark Pair Production at D0
No Description Available.
B-tagging and the search for neutral supersymmetric Higgs bosons at D0
A search for neutral supersymmetric Higgs bosons, along with work on improving the b-tagging and trigger capabilities of the D0 detector during Run II of the Fermilab Tevatron collider, is presented. Searches for evidence of the Higgs sector, in the Standard Model (SM) and in supersymmetric extensions of the SM, are a high priority for the D0 collaboration, and b-tagging and good triggers are vital components of these searches. The development and commissioning of the first triggers at D0 to use b-tagging is outlined, along with the development of a new secondary-vertex b-tagging tool for use in the Level 3 trigger. Upgrades to the Level 3 trigger hit-finding code, which have led to significant improvements in the quality and efficiency of the tracking code, and by extension the b-tagging tools, are also presented. An offline neural network (NN) b-tagging tool was developed, trained on Monte Carlo, and extensively tested and measured on data. The new tool significantly improves b-tagging performance at D0: for a fixed fake rate, relative improvements in signal efficiency range from ≈40% to ≈15%, and for a fixed signal efficiency, fake rates are typically reduced to between a quarter and a third of their former value. Finally, three versions of the search for neutral supersymmetric Higgs bosons are presented. The latest version of the analysis makes use of almost 1 fb^-1 of data, the new NN b-tagger, and the new b-tagging triggers, and has set one of the world's best limits on the supersymmetric parameter tan β in the mass range 90 to 150 GeV.
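The efficiency/fake-rate trade-off quoted above is conventionally mapped out by scanning a cut on the NN output; a generic sketch (the scores are hypothetical placeholders, not the D0 tagger's actual output):

```python
def tagging_performance(signal_scores, background_scores, cut):
    """Signal efficiency and fake rate for a given cut on the NN output.
    Scores are assumed to lie in [0, 1], higher meaning more b-like."""
    eff = sum(s > cut for s in signal_scores) / len(signal_scores)
    fake = sum(s > cut for s in background_scores) / len(background_scores)
    return eff, fake

# Scanning `cut` from 0 to 1 traces out the working points: a tighter
# cut lowers the fake rate at some cost in signal efficiency.
```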
B to (rho/omega) gamma at BaBar
This document describes measurements of the branching fractions and isospin violation of the radiative electroweak penguin decays B → (ρ/ω)γ, performed with the BABAR detector at the asymmetric-energy e^+e^- PEP-II collider. Together with the previously measured branching fractions of the decays B → K*γ, the ratio of CKM matrix elements |V_td/V_ts| is extracted and the length of the far side of the unitarity triangle is determined.
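The |V_td/V_ts| extraction referred to here conventionally uses the ratio of branching fractions, schematically as below (ζ is the form-factor ratio and ΔR parametrizes SU(3)-breaking corrections; shown for orientation, not as the exact expression used in this work):

```latex
\frac{\mathcal{B}(B \to \rho\gamma)}{\mathcal{B}(B \to K^{*}\gamma)}
  = \left|\frac{V_{td}}{V_{ts}}\right|^{2}
    \left(\frac{1 - m_{\rho}^{2}/m_{B}^{2}}{1 - m_{K^{*}}^{2}/m_{B}^{2}}\right)^{3}
    \zeta^{2}\,(1 + \Delta R)
```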
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases), and the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets the algorithm took significantly longer to execute. One reason could be the way in which the random samples are generated: in the present study, a random sample is a solution in itself, which requires assignment of tasks to stations, and the time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, as the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated; the total idle time of the samples could be used as an alternative performance index. The ULINO method is known to use a combination of bounds to arrive at good solutions, and this approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks; in industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could matter for industries in which the cost associated with creating a new station is not uniform; for such industries, the results obtained with the present approach will not be of much value. Labor costs, task incompletion …
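The method the study applies can be summarized in a generic skeleton: keep a most-promising region, partition it, sample each subregion (and the surrounding region) randomly, and move or backtrack according to the best sampled performance index. A schematic Python sketch, under the assumptions that the problem-specific `partition`, `sample`, and `cost` callables are supplied, that `partition` returns the surrounding region last, and that backtracking is simplified to restarting from the full feasible region:

```python
def nested_partitions(feasible_region, partition, sample, cost,
                      n_samples=10, max_iter=100):
    """Skeleton of the Nested Partitions method.
    partition(region) -> list of subregions, surrounding region last
    sample(region)    -> one random feasible solution from the region
    cost(solution)    -> performance index (here: number of stations)"""
    best = sample(feasible_region)
    current = feasible_region
    for _ in range(max_iter):
        regions = partition(current)
        # Best sampled solution from each subregion and the surrounding region
        winners = [min((sample(r) for _ in range(n_samples)), key=cost)
                   for r in regions]
        i_best = min(range(len(regions)), key=lambda i: cost(winners[i]))
        if cost(winners[i_best]) < cost(best):
            best = winners[i_best]
        # Backtrack if the surrounding region wins; otherwise descend
        # into the most promising subregion.
        current = (feasible_region if i_best == len(regions) - 1
                   else regions[i_best])
    return best
```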
Band anticrossing effects in highly mismatched semiconductor alloys
The first five chapters of this thesis focus on studies of band anticrossing (BAC) effects in highly electronegativity-mismatched semiconductor alloys. The concept of bandgap bowing has been used to describe the deviation of the alloy bandgap from a linear interpolation. Bowing parameters as large as 2.5 eV (for ZnSTe) and close to zero (for AlGaAs and ZnSSe) have been observed experimentally. Recent advances in thin-film deposition techniques have allowed the growth of semiconductor alloys composed of significantly different constituents with ever-improving crystalline quality (e.g., GaAs_{1-x}N_x and GaP_{1-x}N_x with x ≲ 0.05). These alloys exhibit many novel and interesting properties including, in particular, a giant bandgap bowing (bowing parameters > 14 eV). A band anticrossing model has been developed to explain these properties. The model shows that the predominant bowing mechanism in these systems is driven by the anticrossing interaction between the localized level associated with the minority component and the band states of the host. In this thesis I discuss my studies of the BAC effects in these highly mismatched semiconductors. It will be shown that the results of the physically intuitive BAC model can be derived from the Hamiltonian of the many-impurity Anderson model. The band restructuring caused by the BAC interaction is responsible for a series of experimental observations such as a large bandgap reduction, an enhancement of the electron effective mass, and a decrease in the pressure coefficient of the fundamental gap energy. Results of further experimental investigations of the optical properties of quantum wells based on these materials will also be presented. It will be shown that the BAC interaction occurs not only between localized states and conduction band states at the Brillouin zone center, but also exists over all of k-space. Finally, taking ZnSTe and ZnSeTe as examples, …
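The two-level BAC model referred to above gives the restructured conduction subbands as the eigenvalues of a 2×2 Hamiltonian coupling the extended host band E_M(k) to the localized level E_N; in its standard form (x is the alloy fraction of the minority component and C_NM the coupling parameter):

```latex
E_{\pm}(k) = \frac{1}{2}\left[ E_N + E_M(k)
  \pm \sqrt{\bigl(E_N - E_M(k)\bigr)^{2} + 4\,x\,C_{NM}^{2}} \right]
```

The downward-shifted E_− branch accounts for the large bandgap reduction, and its flattening near the anticrossing accounts for the enhanced electron effective mass noted above.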
Baryon stopping and hadronic spectra in Pb-Pb collisions at 158 GeV/nucleon
Baryon stopping and particle production in Pb+Pb collisions at 158 GeV/nucleon are studied as a function of collision centrality using new proton, antiproton, charged kaon, and charged pion production data measured with the NA49 experiment at the CERN Super Proton Synchrotron (SPS). Stopping, which is measured by the shift in rapidity of net protons or baryons from the initial beam rapidity, increases in more central collisions, as expected from a geometrical picture of the collisions. The stopping data are quantitatively compared to models incorporating various mechanisms for stopping. In general, microscopic transport calculations which incorporate current theoretical models of baryon stopping, or which use phenomenological extrapolations from simpler systems, overestimate the dependence of stopping on centrality. The yield of produced pions scales approximately with the number of nucleons participating in the collision; a small increase in yield beyond this scaling, accompanied by a small suppression in the yield of the fastest pions, reflects the variation of stopping with centrality. Consistent with observations from central collisions of light and heavy nuclei at the SPS, the transverse momentum distributions of all particles become harder with increasing centrality, an effect most pronounced for the heaviest particles. This hardening is discussed in terms of multiple scattering of the incident nucleons of one colliding nucleus as they traverse the other nucleus, and in terms of rescattering within the system of produced particles.
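The stopping observable referred to here is conventionally quantified as the mean rapidity loss of net baryons relative to the beam rapidity, written for the measured net-baryon distribution dN/dy (shown for orientation):

```latex
\langle \delta y \rangle = y_{\mathrm{beam}}
  - \frac{1}{N_{\mathrm{net}}} \int y \, \frac{dN_{\mathrm{net}}}{dy} \, dy
```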
Bayesian based design of real-time sensor systems for high-risk indoor contaminants
The sudden release of toxic contaminants that reach indoor spaces can be hazardous to building occupants. To respond effectively, the contaminant release must be quickly detected and characterized to determine unobserved parameters, such as release location and strength. Characterizing the release requires solving an inverse problem. Designing a robust real-time sensor system that solves the inverse problem is challenging because the fate and transport of contaminants is complex, sensor information is limited and imperfect, and real-time estimation is computationally constrained. This dissertation uses a system-level approach, based on a Bayes Monte Carlo framework, to develop sensor-system design concepts and methods. I describe three investigations that explore complex relationships among sensors, network architecture, interpretation algorithms, and system performance. The investigations use data obtained from tracer gas experiments conducted in a real building. First, the influence of individual sensor characteristics on sensor-system performance for binary-type contaminant sensors is analyzed. Performance tradeoffs among sensor accuracy, threshold level, and response time are identified; these attributes could not be inferred without a system-level analysis. For example, more accurate but slower sensors are found to outperform less accurate but faster sensors. Secondly, I investigate how sensor-system performance can be understood in terms of contaminant transport processes and the model representation used to solve the inverse problem. The determination of release location and mass is shown to be related to, and constrained by, transport and mixing time scales. These time scales explain performance differences among different sensor networks; for example, the effect of longer sensor response times is comparably smaller for releases with longer mixing time scales. The third investigation explores how fusing information from heterogeneous sensors may improve sensor-system performance and offset the need for more contaminant sensors. Physics- and algorithm-based frameworks are presented for selecting and fusing information from non-contaminant sensors. The frameworks are demonstrated with door-position sensors, which are found to …
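The Bayes Monte Carlo framework described above can be sketched generically: draw prior samples of the unknown release parameters, predict the sensor readings for each with a transport model, and weight each sample by the likelihood of the actual observations. A minimal sketch assuming a Gaussian error model; `transport_model` and the other names are hypothetical stand-ins, not the dissertation's actual code:

```python
import math

def bayes_monte_carlo(prior_samples, transport_model, observations, sigma):
    """Posterior weights for sampled release parameters.
    prior_samples   -- candidate parameter vectors (e.g., location, strength)
    transport_model -- maps parameters to predicted sensor readings
    observations    -- actual sensor readings
    sigma           -- assumed Gaussian measurement error"""
    weights = []
    for theta in prior_samples:
        predicted = transport_model(theta)
        loglike = sum(-0.5 * ((o - q) / sigma) ** 2
                      for o, q in zip(observations, predicted))
        weights.append(math.exp(loglike))
    total = sum(weights)
    return [w / total for w in weights]   # normalized posterior probabilities
```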