486 Matching Results


Passive Detection of Narrowband Sources Using a Sensor Array

Description: In this report we derive a model for a highly scattering medium, implemented as a set of MATLAB functions. This model is used to analyze an approach for using time-reversal to enhance the detection of a single frequency source in a highly scattering medium. The basic approach is to apply the singular value decomposition to the multistatic response matrix for a time-reversal array system. We then use the array in a purely passive mode, measuring the response to the presence of a source. The measured response is projected onto the singular vectors, creating a time-reversal pseudo-spectrum. We can then apply standard detection techniques to the pseudo-spectrum to determine the presence of a source. If the source is close to a particular scatterer in the medium, then we would expect an enhancement of the inner product between the array response to the source and the singular vector associated with that scatterer. In this note we begin by deriving the Foldy-Lax model of a highly scattering medium, calculate both the field emitted by the source and the multistatic response matrix of a time-reversal array system in the medium, then describe the initial analysis approach.
Date: October 24, 2007
Creator: Chambers, D H; Candy, J V & Guidry, B L
Partner: UNT Libraries Government Documents Department
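The projection step described in the abstract above (SVD of the multistatic response matrix, then projection of the measured response onto the singular vectors) can be sketched in a few lines of NumPy. The snippet below uses a toy rank-2 Born-approximation medium, not the report's Foldy-Lax model; the function name and the random Green's vectors are illustrative only.

```python
import numpy as np

def tr_pseudo_spectrum(K, g):
    """Project a measured array response g onto the left singular
    vectors of the multistatic response matrix K, giving one
    pseudo-spectrum value per singular vector."""
    U, s, Vh = np.linalg.svd(K)
    g = g / np.linalg.norm(g)              # normalize the measured response
    return np.abs(U.conj().T @ g)          # |<u_i, g>| for each singular vector

# Toy medium: a rank-2 Born-model response for an 8-element array,
# one strong and one weak point scatterer (random Green's vectors).
rng = np.random.default_rng(0)
a = rng.standard_normal(8) + 1j * rng.standard_normal(8)
b = rng.standard_normal(8) + 1j * rng.standard_normal(8)
K = np.outer(a, a) + 0.3 * np.outer(b, b)  # symmetric by reciprocity

spec = tr_pseudo_spectrum(K, a)            # source co-located with scatterer 1
```

A source co-located with the strong scatterer concentrates the pseudo-spectrum in the leading singular vector, which is exactly the enhancement the passive detection scheme looks for.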

Horizons and plane waves: A review

Description: We review the attempts to construct black hole/string solutions in asymptotically plane wave spacetimes. First, we demonstrate that geometries admitting a covariantly constant null Killing vector cannot admit event horizons, which implies that pp-waves can't describe black holes. However, relaxing the symmetry requirements allows us to generate solutions which do possess regular event horizons while retaining the requisite asymptotic properties. In particular, we present two solution generating techniques and use them to construct asymptotically plane wave black string/brane geometries.
Date: November 6, 2003
Creator: Hubeny, Veronika E. & Rangamani, Mukund
Partner: UNT Libraries Government Documents Department
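For orientation (standard Brinkmann-form conventions, not reproduced from the paper), the pp-wave geometry and the covariantly constant null Killing vector referred to above are:

```latex
ds^2 = -2\,du\,dv - H(u,x^i)\,du^2 + \sum_i (dx^i)^2 ,
\qquad \ell = \partial_v , \qquad \ell^\mu \ell_\mu = 0 , \qquad \nabla_\mu \ell_\nu = 0 .
```

Since $g_{v\mu}$ is constant and the metric is $v$-independent, every $\Gamma^u_{\mu\nu}$ vanishes and $\ell$ is covariantly constant. The review's no-horizon argument applies to exactly this class; relaxing $\ell$ from covariantly constant to merely null Killing is what opens the door to black string/brane solutions with plane wave asymptotics.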

BASIS AND DIMENSION IN ABSTRACT MODULE THEORY

Description: A classical linear vector space is a unitary Δ-module, where Δ is a division ring. Properties of linear spaces are given. The approach used is module-theoretic: a sequence of progressively stronger impositions on the module structure of a unitary R-module M is studied, without regard to the nature of the operator ring R. (W.D.M.)
Date: August 1, 1959
Creator: Chichester, R. Jr.
Partner: UNT Libraries Government Documents Department

Beam diagnostics via model independent analysis of the turn-by-turn BPM data

Description: Model independent analysis (MIA) can be used to obtain all the eigenmodes contained in turn-by-turn BPM data. Not only can the synchrotron tune and betatron tune be obtained from the fast Fourier transform (FFT) of the temporal eigenvector of the corresponding mode, but an error mode, such as one caused by a gain difference between BPMs, can also be observed in both its temporal and spatial eigenvectors. MIA can thus be applied as a diagnostic tool for the Booster.
Date: August 11, 2004
Creator: Yang, Xi
Partner: UNT Libraries Government Documents Department
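The abstract's recipe — arrange the turn-by-turn readings into a matrix, take its SVD, and FFT a temporal eigenvector — is easy to demonstrate on synthetic data. The snippet below fabricates a single betatron mode seen by 12 BPMs and recovers its tune; all numbers are illustrative, not Booster parameters.

```python
import numpy as np

# Synthetic turn-by-turn data: one betatron oscillation (tune 0.31)
# seen by 12 BPMs at different phases, plus measurement noise.
rng = np.random.default_rng(1)
turns, nbpm, tune = 1024, 12, 0.31
phases = np.linspace(0.0, 2 * np.pi, nbpm, endpoint=False)
t = np.arange(turns)[:, None]
B = np.cos(2 * np.pi * tune * t + phases) + 0.05 * rng.standard_normal((turns, nbpm))

B -= B.mean(axis=0)                        # remove the closed-orbit (DC) offset
U, s, Vh = np.linalg.svd(B, full_matrices=False)
f = np.fft.rfftfreq(turns)                 # fractional tune axis, 0 .. 0.5
amp = np.abs(np.fft.rfft(U[:, 0]))         # spectrum of the dominant temporal eigenvector
measured_tune = f[np.argmax(amp[1:]) + 1]  # skip the DC bin
```

The columns of U are the temporal eigenvectors; the spatial eigenvectors (rows of Vh) carry the BPM-by-BPM pattern in which a gain error would show up.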

Marmousi-2: An Updated Model for the Investigation of AVO in Structurally Complex Areas

Description: We have created an elastic version of the IFP Marmousi model for use in AVO analysis in the presence of complex structure. The model is larger than its predecessor, includes larger offsets, lies in deeper water, includes surface-streamer, multicomponent OBC, and VSP acquisition, and contains more hydrocarbons. In addition to AVO analysis, we believe these data will be suitable for calibrating emerging technologies, including converted-wave tomography and vector seismic processing.
Date: April 30, 2002
Creator: Martin, G.S.; Marfurt, K.J. & Larsen, S.
Partner: UNT Libraries Government Documents Department

Book Review Geostatistical Analysis of Compositional Data

Description: Compositional data are represented as vector variables with individual vector components ranging between zero and a positive maximum value representing a constant sum constraint, usually unity (or 100 percent). The earth sciences are flooded with spatial distributions of compositional data, such as concentrations of major ion constituents in natural waters (e.g. mole, mass, or volume fractions), mineral percentages, ore grades, or proportions of mutually exclusive categories (e.g. a water-oil-rock system). While geostatistical techniques have become popular in earth science applications since the 1970s, very little attention has been paid to the unique mathematical properties of geostatistical formulations involving compositional variables. The book 'Geostatistical Analysis of Compositional Data' by Vera Pawlowsky-Glahn and Ricardo Olea (Oxford University Press, 2004), unlike any previous book on geostatistics, directly confronts the mathematical difficulties inherent to applying geostatistics to compositional variables. The book rigorously justifies itself with prodigious referencing to previous work addressing nonsensical ranges of estimated values and error, spurious correlation, and singular cross-covariance matrices.
Date: March 26, 2007
Creator: Carle, S F
Partner: UNT Libraries Government Documents Department
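The mathematical difficulties the review alludes to (the constant-sum constraint, spurious correlation) are conventionally sidestepped by working in log-ratio coordinates, the Aitchison-school approach that Pawlowsky-Glahn and Olea build on. A minimal sketch of the centered log-ratio (clr) transform, with function names of my choosing:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform (Aitchison): maps a composition on
    the constant-sum simplex to unconstrained coordinates that sum to
    zero, where ordinary (geo)statistical machinery applies."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.log(x).mean())           # geometric mean of the parts
    return np.log(x / g)

def clr_inv(z):
    """Inverse transform: exponentiate and renormalize to the simplex."""
    y = np.exp(np.asarray(z, dtype=float))
    return y / y.sum()

comp = np.array([0.70, 0.20, 0.10])        # e.g. a three-part composition
z = clr(comp)
back = clr_inv(z)
```

clr coordinates are free of the constant-sum constraint, so covariances and variograms computed on them avoid the singular cross-covariance matrices and spurious correlations that arise with raw proportions.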

Subspace Detectors: Theory

Description: Broadband subspace detectors are introduced for seismological applications that require the detection of repetitive sources that produce similar, yet significantly variable seismic signals. Like correlation detectors, of which they are a generalization, subspace detectors often permit remarkably sensitive detection of small events. The subspace detector derives its name from the fact that it projects a sliding window of data drawn from a continuous stream onto a vector signal subspace spanning the collection of signals expected to be generated by a particular source. Empirical procedures are presented for designing subspaces from clusters of events characterizing a source. Furthermore, a solution is presented for the problem of selecting the dimension of the subspace to maximize the probability of detecting repetitive events at a fixed false alarm rate. An example illustrates subspace design and detection using events in the 2002 San Ramon, California earthquake swarm.
Date: July 11, 2006
Creator: Harris, D B
Partner: UNT Libraries Government Documents Department
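The sliding-window projection that gives the subspace detector its name can be sketched directly. The snippet below builds a rank-2 subspace from two noisy templates and scans a synthetic stream; the function names and the simple energy-capture statistic are illustrative simplifications of the detector in the report.

```python
import numpy as np

def design_subspace(templates, d):
    """Orthonormal basis for a rank-d signal subspace: stack the
    template waveforms as columns and keep the top-d left singular
    vectors."""
    U, s, Vh = np.linalg.svd(np.column_stack(templates), full_matrices=False)
    return U[:, :d]

def detection_statistic(U, stream, n):
    """Slide an n-sample window along the stream; the statistic is the
    fraction of window energy captured by the subspace (0 to 1)."""
    out = np.empty(len(stream) - n + 1)
    for i in range(len(out)):
        win = stream[i:i + n]
        out[i] = np.sum((U.T @ win) ** 2) / np.sum(win ** 2)
    return out

# Toy repetitive source: two noisy copies of a damped sinusoid design
# the subspace; one more repeat is buried in a noise-only stream.
rng = np.random.default_rng(2)
n = 200
sig = np.exp(-np.arange(n) / 60.0) * np.sin(0.2 * np.arange(n))
U = design_subspace([sig + 0.1 * rng.standard_normal(n) for _ in range(2)], d=2)

stream = 0.1 * rng.standard_normal(2000)
stream[800:800 + n] += sig                 # hide one repeat at sample 800
stat = detection_statistic(U, stream, n)
```

The statistic peaks where the buried repeat sits and stays near the noise floor elsewhere; thresholding it at a level set by the desired false alarm rate gives the detector.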

Training SVMs without offset

Description: We develop, analyze, and test a training algorithm for support vector machine classifiers without offset. Key features of this algorithm are a new stopping criterion and a set of working set selection strategies that, although inexpensive, do not lead to substantially more iterations than the optimal working set selection strategy. For these working set strategies, we establish convergence rates that coincide with the best known rates for SVMs with offset. We further conduct various experiments that investigate both the run-time behavior and the iterations performed by the new training algorithm. It turns out that the new algorithm needs fewer iterations and less run time than standard training algorithms for SVMs with offset.
Date: January 1, 2009
Creator: Steinwart, Ingo; Hush, Don & Scovel, Clint
Partner: UNT Libraries Government Documents Department
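The key simplification of the offset-free formulation is that the dual problem loses its equality constraint, so each dual variable can be updated independently in closed form. The following is a minimal dual coordinate descent sketch of that idea (in the style of LIBLINEAR-type solvers); it does not reproduce the paper's working-set selection strategies or stopping criterion.

```python
import numpy as np

def train_svm_no_offset(X, y, C=1.0, epochs=50):
    """Dual coordinate descent for a linear SVM *without* offset
    (hinge loss, decision rule sign(w.x)). With no offset there is no
    equality constraint in the dual, so each alpha_i gets a closed-form
    clipped update on its own."""
    n = X.shape[0]
    alpha = np.zeros(n)
    w = np.zeros(X.shape[1])
    q = np.einsum('ij,ij->i', X, X)        # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * (w @ X[i]) - 1.0    # coordinate gradient of the dual
            a_new = min(max(alpha[i] - g / q[i], 0.0), C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w

# Without an offset the separating hyperplane passes through the
# origin, so the toy data is mirrored to be separable through 0.
rng = np.random.default_rng(3)
Xp = rng.standard_normal((100, 2)) + 1.5
X = np.vstack([Xp, -Xp])
y = np.hstack([np.ones(100), -np.ones(100)])
w = train_svm_no_offset(X, y)
acc = float(np.mean(np.sign(X @ w) == y))
```

Note the design consequence visible in the toy data: with no offset term the decision boundary is forced through the origin, which is what the paper's analysis trades against the simpler, unconstrained dual.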

The use of bulk states to accelerate the band edge state calculation of a semiconductor quantum dot

Description: We present a new technique to accelerate the convergence of the folded spectrum method in empirical pseudopotential band edge state calculations for colloidal quantum dots. We use bulk band states of the constituent materials of the quantum dot to construct initial vectors and a preconditioner, and apply these to accelerate the convergence of the folded spectrum method for the interior states at the top of the valence band and the bottom of the conduction band. For large CdSe quantum dots, the number of iteration steps until convergence decreases by about a factor of 4 compared to previous calculations.
Date: May 10, 2006
Creator: Vomel, Christof; Tomov, Stanimire Z.; Wang, Lin-Wang; Marques,Osni A. & Dongarra, Jack J.
Partner: UNT Libraries Government Documents Department
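The folded spectrum method being accelerated here turns an interior-eigenvalue problem into a ground-state problem: the eigenstate of H whose eigenvalue lies closest to a reference energy is the lowest eigenstate of (H − ε_ref)². A dense toy demonstration follows; the bulk-state initial guesses and preconditioner that are the paper's contribution, and the large pseudopotential Hamiltonian of a real calculation, are not reproduced.

```python
import numpy as np

# Folded spectrum in miniature: fold the spectrum of H about a
# reference energy e_ref, then find the ground state of the folded
# operator (H - e_ref)^2 -- it is the eigenstate of H closest to e_ref.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2.0                        # random symmetric "Hamiltonian"

e_ref = 0.0                                # reference energy inside the spectrum
M = H - e_ref * np.eye(50)
w_f, v_f = np.linalg.eigh(M @ M)           # folded operator
psi = v_f[:, 0]                            # its ground state

w, v = np.linalg.eigh(H)                   # direct diagonalization, for checking
closest = v[:, np.argmin(np.abs(w - e_ref))]
overlap = abs(psi @ closest)               # should be close to 1
```

Because the target state is the folded operator's ground state, standard iterative minimizers apply, and good initial vectors (here, the bulk band states) directly reduce the iteration count.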

Construction of a Cloning Vector Based upon a Rhizobium Plasmid Origin of Replication and its Application to Genetic Engineering of Rhizobium Strains

Description: Rhizobia are Gram-negative, rod-shaped, soil bacteria with the ability to fix atmospheric nitrogen into ammonia as symbiont bacteroids within nodules of leguminous plant roots. Here, resident Rhizobium plasmids were studied as possible sources of components for the construction of a cloning vector for Rhizobium species.
Date: May 1992
Creator: Jeong, Pyengsoo
Partner: UNT Libraries

Evaluation of leading scalar and vector architectures for scientific computations

Description: The growing gap between sustained and peak performance for scientific applications is a well-known problem in high performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This project examines the performance of the cacheless vector Earth Simulator (ES) and compares it to the cache-based superscalar IBM Power3 system. Results demonstrate that the ES is significantly faster than the Power3 architecture, highlighting the tremendous potential advantage of the ES for numerical simulation. However, vectorization of a particle-in-cell application (GTC) greatly increased the memory footprint, preventing loop-level parallelism and limiting scalability potential.
Date: April 20, 2004
Creator: Simon, Horst D.; Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Ethier, Stephane & Shalf, John
Partner: UNT Libraries Government Documents Department

Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations

Description: The growing gap between sustained and peak performance for scientific applications is a well-known problem in high-end computing. The recent development of parallel vector systems offers the potential to bridge this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor and the cache-based IBM Power3/4 superscalar architectures across a number of scientific computing areas. First, we present the performance of a microbenchmark suite that examines low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Results demonstrate that the SX-6 achieves high performance on a large fraction of our applications and often significantly outperforms the cache-based architectures. However, certain applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively.
Date: May 1, 2003
Creator: Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Skinner, David; Ethier, Stephane et al.
Partner: UNT Libraries Government Documents Department

Modal Analysis Using the Singular Value Decomposition

Description: Many methods exist for identifying modal parameters from experimental transfer function measurements. For frequency domain calculations, rational fraction polynomials have become the method of choice, although it generally requires the user to identify frequency bands of interest along with the number of modes in each band. This process can be tedious, especially for systems with a large number of modes, and it assumes the user can accurately assess the number of modes present in each band from frequency response plots of the transfer functions. When the modal density is high, better results can be obtained by using the singular value decomposition to help separate the modes before the modal identification process begins. In a typical calculation, the transfer function data for a single frequency is arranged in matrix form with each column representing a different drive point. The matrix is input to the singular value decomposition algorithm and left- and right-singular vectors and a diagonal singular value matrix are computed. The calculation is repeated at each analysis frequency and the resulting data is used to identify the modal parameters. In the optimal situation, the singular value decomposition will completely separate the modes from each other, so that a single transfer function is produced for each mode with no residual effects. A graphical method has been developed to simplify the process of identifying the modes, yielding a relatively simple method for computing mode shapes and resonance frequencies from experimental data.
Date: February 5, 2004
Creator: Fahnline, JB; Campbell, RL & Hambric, SA
Partner: UNT Libraries Government Documents Department
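The per-frequency SVD described above can be illustrated on synthetic transfer functions: stacking response points against drive points and tracking the leading singular value produces peaks at the resonances, which is the idea behind mode-indicator functions. All system parameters below are invented for the demonstration.

```python
import numpy as np

# SVD-based mode separation on synthetic transfer functions: a 2-mode
# system measured at 4 response points and driven at 2 drive points.
# At each frequency the (responses x drives) FRF matrix is decomposed;
# the leading singular value peaks at each resonance.
freqs = np.linspace(0.1, 3.0, 600)
modes = [(1.0, 0.02, np.array([1.0, 0.8, 0.3, -0.5])),   # (wn, zeta, mode shape)
         (2.2, 0.02, np.array([0.4, -0.6, 1.0, 0.7]))]
drives = [0, 2]                                          # drive-point indices

s1 = np.empty(len(freqs))
for k, w in enumerate(freqs):
    H = np.zeros((4, len(drives)), dtype=complex)
    for wn, zeta, phi in modes:
        for col, d in enumerate(drives):
            H[:, col] += phi * phi[d] / (wn**2 - w**2 + 2j * zeta * wn * w)
    s1[k] = np.linalg.svd(H, compute_uv=False)[0]

# The two strongest local maxima of the leading singular value sit at
# the two resonance frequencies.
peaks = [k for k in range(1, len(freqs) - 1) if s1[k - 1] < s1[k] > s1[k + 1]]
top = sorted(sorted(peaks, key=lambda k: s1[k])[-2:])
pk = [freqs[k] for k in top]
```

At each peak, the corresponding left singular vector approximates the mode shape, which is how the decomposition separates closely spaced modes before the modal identification step begins.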

Library Resources for BAC End Sequencing. Final Technical Report

Description: Studies directed towards the specific aims outlined for this research award are summarized. The RPCI-11 human BAC library has been expanded by the addition of 6.9-fold genomic coverage. This segment was generated from an MboI partial digest of the same anonymous donor DNA used for the rest of the library. A new cloning vector, pTARBAC1, has been constructed and used in the construction of RPCI-11 segment 5. This new cloning vector provides a new strategy for identifying targeted genomic regions and will greatly facilitate large-scale analysis for positional cloning. A new male C57BL/6J mouse BAC library, RPCI-23, has also been constructed; it contains 576 plates (approx. 210,000 clones) and represents approximately 11-fold coverage of the mouse genome.
Date: October 1, 2000
Creator: Jong, Pieter J. de
Partner: UNT Libraries Government Documents Department

Direct observation of imprinted antiferromagnetic vortex state in CoO/Fe/Ag(001) disks

Description: In magnetic thin films, a magnetic vortex is a state in which the magnetization vector curls around the center of a confined structure. A vortex state in a thin-film disk, for example, is a topological object characterized by the vortex polarity and the winding number. In ferromagnetic (FM) disks, these parameters govern many fundamental properties of the vortex such as its gyroscopic rotation, polarity reversal, core motion, and vortex-pair excitation. In antiferromagnetic (AFM) disks, however, although there has been indirect evidence of the vortex state through observations of the induced FM-ordered spins in the AFM disk, the AFM vortex itself has never been observed directly in experiment. By fabricating single-crystalline NiO/Fe/Ag(001) and CoO/Fe/Ag(001) disks and using X-ray Magnetic Linear Dichroism (XMLD), we demonstrate direct observation of the vortex state in an AFM disk of an AFM/FM bilayer system. We observe that there are two types of AFM vortices, one of which has no analog in FM structures. Finally, we show that a frozen AFM vortex can bias an FM vortex at low temperature.
Date: December 21, 2010
Creator: Wu, J.; Carlton, D.; Park, J. S.; Meng, Y.; Arenholz, E.; Doran, A. et al.
Partner: UNT Libraries Government Documents Department

Terminator Detection by Support Vector Machine Utilizing a Stochastic Context-Free Grammar

Description: A 2-stage detector was designed to find rho-independent transcription terminators in the Escherichia coli genome. The detector includes a Stochastic Context-Free Grammar (SCFG) component and a Support Vector Machine (SVM) component. To find terminators, the SCFG searches the intergenic regions of nucleotide sequence for local matches to a terminator grammar that was designed and trained utilizing examples of known terminators. The grammar selects sequences that are the best candidates for terminators and assigns them a prefix, stem-loop, suffix structure using the Cocke-Younger-Kasami (CYK) algorithm, modified to incorporate energy effects of base pairing. The parameters from this inferred structure are passed to the SVM classifier, which distinguishes terminators from non-terminators that score high according to the terminator grammar. The SVM was trained with negative examples drawn from intergenic sequences that include both featureless and RNA gene regions (which were assigned prefix, stem-loop, suffix structure by the SCFG), so that it successfully distinguishes terminators from either of these. The classifier was found to be 96.4% successful during testing.
Date: December 30, 2006
Creator: Francis-Lyon, Patricia; Cristianini, Nello & Holbrook, Stephen
Partner: UNT Libraries Government Documents Department
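As a toy stand-in for the SCFG stage's prefix/stem-loop/suffix assignment, the snippet below scores candidate stem lengths by simple base-pair counting. The real system uses an energy-aware CYK parse of a trained grammar; the pair set, function name, and scoring rule here are mine.

```python
# Watson-Crick pairs plus the GT (RNA: GU) wobble -- a crude stand-in
# for the paper's energy-aware base-pairing model.
PAIR = {('A', 'T'), ('T', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'T'), ('T', 'G')}

def best_stem(seq, min_stem=3, min_loop=3):
    """For a candidate region, try every stem length k (pairing the
    first k bases against the last k, read inward) and keep the stem
    with the most pairs. Returns (pairing_fraction, stem_length)."""
    best = (0.0, 0)
    for k in range(min_stem, (len(seq) - min_loop) // 2 + 1):
        left, right = seq[:k], seq[-k:]
        pairs = sum((left[i], right[k - 1 - i]) in PAIR for i in range(k))
        if pairs > best[0] * best[1]:
            best = (pairs / k, k)
    return best

# A clean hairpin: 6-bp stem, 4-base loop ("GGCCTT" is the reverse
# complement of "AAGGCC").
frac, k = best_stem("AAGGCC" + "TCTT" + "GGCCTT")
```

Quantities like the winning stem length and pairing fraction are the kind of structure-derived parameters that the SVM stage of such a detector would consume.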

The HIBEAM Manual

Description: HIBEAM is a 2-1/2D particle-in-cell (PIC) simulation code developed in the late 1990s in the Heavy-Ion Fusion research program at Lawrence Berkeley National Laboratory. The major purpose of HIBEAM is to simulate the transverse (i.e., X-Y) dynamics of a space-charge-dominated, non-relativistic heavy-ion beam being transported in a static accelerator focusing lattice. HIBEAM has been used to study beam combining systems, effective dynamic apertures in electrostatic quadrupole lattices, and emittance growth due to transverse misalignments. At present, HIBEAM runs on the CRAY vector machines (C90 and J90s) at NERSC, although it would be relatively simple to port the code to UNIX workstations so long as IMSL math routines were available.
Date: February 1, 2000
Creator: Fawley, William M.
Partner: UNT Libraries Government Documents Department

A texture-based framework for improving CFD data visualization in a virtual environment

Description: In the field of computational fluid dynamics (CFD) accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, approximately 10,000 nodes or cells, or very fine, with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are properties that are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.
Date: May 1, 2005
Creator: Bivins, Gerrick O'Ron
Partner: UNT Libraries Government Documents Department

A texture-based framework for improving CFD data visualization in a virtual environment

Description: In the field of computational fluid dynamics (CFD) accurate representations of fluid phenomena can be simulated but require large amounts of data to represent the flow domain. Most datasets generated from a CFD simulation can be coarse, approximately 10,000 nodes or cells, or very fine, with node counts on the order of 1,000,000. A typical dataset solution can also contain multiple solutions for each node, pertaining to various properties of the flow at a particular node. Scalar properties such as density, temperature, pressure, and velocity magnitude are properties that are typically calculated and stored in a dataset solution. Solutions are not limited to just scalar properties. Vector quantities, such as velocity, are also often calculated and stored for a CFD simulation. Accessing all of this data efficiently during runtime is a key problem for visualization in an interactive application. Understanding simulation solutions requires a post-processing tool to convert the data into something more meaningful. Ideally, the application would present an interactive visual representation of the numerical data for any dataset that was simulated while maintaining the accuracy of the calculated solution. Most CFD applications currently sacrifice interactivity for accuracy, yielding highly detailed flow descriptions but limiting interaction for investigating the field.
Date: May 5, 2005
Creator: Bivins, Gerrick O'Ron
Partner: UNT Libraries Government Documents Department