2,492 Matching Results

Search Results


Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

Description: Over the last few decades, electronic textual information has become the world's largest and most important information source. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs, and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve, and combine existing individual approaches into an overall framework that supports topological analysis of high-dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high-dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high-dimensional information space to both two-dimensional (2-D) and three-dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to recently introduced methods and apply it to complex document and patent collections.
Date: July 19, 2010
Creator: Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven; Heyer, Gerhard; Koch, Steffen; Ertl, Thomas et al.
Partner: UNT Libraries Government Documents Department
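
To make the tf-idf weighting mentioned in the abstract above concrete, here is a minimal Python sketch that builds a tf-idf matrix from a toy tokenized corpus; the smoothing and normalization choices are common defaults and are assumptions, not necessarily those used by the authors.

    import math
    from collections import Counter

    def tfidf_matrix(docs):
        """Build a dense tf-idf matrix (one row per document) for a tokenized corpus."""
        vocab = sorted({term for doc in docs for term in doc})
        n_docs = len(docs)
        # document frequency: number of documents containing each term
        df = {t: sum(1 for doc in docs if t in doc) for t in vocab}
        # idf with a common "+1" smoothing; other variants exist
        idf = {t: math.log(n_docs / df[t]) + 1.0 for t in vocab}
        rows = []
        for doc in docs:
            counts = Counter(doc)
            rows.append([counts[t] / len(doc) * idf[t] for t in vocab])  # tf * idf
        return vocab, rows

    docs = [["topology", "document", "cloud"],
            ["document", "term", "weighting", "document"]]
    vocab, matrix = tfidf_matrix(docs)

Each document then becomes a point in a space with one axis per vocabulary term, which is the kind of high-dimensional point cloud the framework projects to 2-D and 3-D.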

Long-Range Corrected Hybrid Density Functionals with Damped Atom-Atom Dispersion Corrections

Description: We report re-optimization of a recently proposed long-range corrected (LC) hybrid density functional [J.-D. Chai and M. Head-Gordon, J. Chem. Phys. 128, 084106 (2008)] to include empirical atom-atom dispersion corrections. The resulting functional, ωB97X-D, yields satisfactory accuracy for thermochemistry, kinetics, and non-covalent interactions. Tests show that for non-covalent systems, ωB97X-D shows slight improvement over other empirical dispersion-corrected density functionals, while for covalent systems and kinetics it performs noticeably better. Relative to our previous functionals, such as ωB97X, the new functional is significantly superior for non-bonded interactions and very similar in performance for bonded interactions.
Date: June 14, 2008
Creator: Chai, Jeng-Da & Head-Gordon, Martin
Partner: UNT Libraries Government Documents Department
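
The "-D" in ωB97X-D denotes an empirical, damped atom-atom dispersion correction added to the functional. The sketch below evaluates a generic damped -C6/R^6 sum of the kind used in such schemes; the damping form, combining rule, and parameters are illustrative assumptions, not the published ωB97X-D values.

    import itertools

    def dispersion_energy(coords, c6, r_vdw, a=6.0):
        """Generic damped -C6/R^6 atom-atom dispersion sum (parameters are placeholders)."""
        energy = 0.0
        for i, j in itertools.combinations(range(len(coords)), 2):
            r = sum((coords[i][k] - coords[j][k]) ** 2 for k in range(3)) ** 0.5
            c6_ij = (c6[i] * c6[j]) ** 0.5          # geometric-mean combining rule (assumption)
            r_ref = r_vdw[i] + r_vdw[j]             # sum of van der Waals radii
            damping = 1.0 / (1.0 + a * (r / r_ref) ** -12)  # suppresses the term at short range
            energy -= c6_ij / r ** 6 * damping
        return energy

    # two-atom toy system with made-up coefficients (atomic units)
    e_disp = dispersion_energy([(0.0, 0.0, 0.0), (0.0, 0.0, 7.0)],
                               c6=[40.0, 40.0], r_vdw=[3.0, 3.0])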

Modeling Renewable Penetration Using a Network Economic Model

Description: This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible, and models can be built and modified with little effort. This paper reports an experiment designing a system with photovoltaic generators and base-load and peaking fossil generators. The level of PV penetration as a function of its price, along with the capacities of the fossil generators, was determined using the network approach and using an exact, analytic approach. The two methods are found to agree very closely in terms of the optimal capacities and to be nearly identical in terms of annual system costs.
Date: March 6, 2001
Creator: Lamont, A.
Partner: UNT Libraries Government Documents Department
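
As a loose illustration of how prices and quantities can be exchanged until a system balances, the toy single-market loop below adjusts a price against excess demand; it is only a sketch of the general idea, not the paper's network model or its solution algorithm.

    def clear_market(demand, supply, price=1.0, step=0.01, tol=1e-6, max_iter=100000):
        """Adjust price until the quantity demanded matches the quantity supplied."""
        for _ in range(max_iter):
            gap = demand(price) - supply(price)   # excess demand at the current price
            if abs(gap) < tol:
                break
            price += step * gap                   # raise price when demand exceeds supply
        return price

    # toy linear demand and supply curves (illustrative only); the equilibrium price is 15
    p_star = clear_market(demand=lambda p: 100 - 2 * p, supply=lambda p: 10 + 4 * p)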

Interfacial Widths of Conjugated Polymer Bilayers

Description: The interfaces of conjugated polyelectrolyte (CPE)/poly[2-methoxy-5-(2′-ethylhexyloxy)-p-phenylene vinylene] (MEH-PPV) bilayers cast from differential solvents are shown by resonant soft X-ray reflectivity (RSoXR) to be very smooth and sharp. The chemical interdiffusion due to casting is limited to less than 0.6 nm, and the interface created is thus nearly 'molecularly' sharp. These results demonstrate for the first time and with high precision that the nonpolar MEH-PPV layer is not much disturbed by casting the CPE layer from a polar solvent. A baseline is established for understanding the role of interfacial structure in determining the performance of CPE-based polymer light-emitting diodes. More broadly, we anticipate further applications of RSoXR as an important tool in achieving a deeper understanding of other multilayer organic optoelectronic devices, including multilayer photovoltaic devices.
Date: August 13, 2009
Creator: Garcia, Andres; Yan, Hongping et al. (NCSU; UC Berkeley; UCSB; Advanced Light Source)
Partner: UNT Libraries Government Documents Department

Evaluation of flow capture techniques for measuring HVAC grille airflows

Description: This paper discusses the accuracy of commercially available flow hoods for residential applications. Results of laboratory and field tests indicate these hoods can be inadequate to measure airflows in residential systems, and there can be large measurement discrepancies between different flow hoods. The errors are due to poor calibrations, sensitivity of the hoods to grille airflow non-uniformities, and flow changes from added flow resistance. It is possible to obtain reasonable results using some flow hoods if the field tests are carefully done, the grilles are appropriate, and grille location does not restrict flow hood placement. We also evaluated several simple flow capture techniques for measuring grille airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics. These simple techniques can be as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, agencies such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow capture techniques.
Date: November 1, 2002
Creator: Walker, Iain S. & Wray, Craig P.
Partner: UNT Libraries Government Documents Department

Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

Description: In the present work we derive and study a nonlinear elliptic PDE arising from the problem of estimating the sound speed inside the Earth. The physical setting allows us to pose only a Cauchy problem, so the problem is ill-posed. However, we are still able to solve it numerically on a long enough time interval to be of practical use. We use two approaches. The first is a finite-difference time-marching scheme inspired by the Lax-Friedrichs method; its key features are the Lax-Friedrichs averaging and the wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms that damp the high harmonics (truncation of the Chebyshev series plays the same role in the spectral method), and (4) the need to compute the solution only for a short interval of time. We test our numerical schemes on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth compared with conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
Date: January 1, 2009
Creator: Cameron, M.K.; Fomel, S.B. & Sethian, J.A.
Partner: UNT Libraries Government Documents Department
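
For readers unfamiliar with the Lax-Friedrichs ingredients named above (neighbor averaging and a wide stencil), the sketch below applies the classical scheme to the simple 1-D advection equation u_t + a u_x = 0; it illustrates the averaging and its damping of high harmonics, not the authors' actual PDE, stencil, or boundary treatment.

    import numpy as np

    def lax_friedrichs_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=100):
        """Classical Lax-Friedrichs update for u_t + a*u_x = 0 with periodic boundaries."""
        u = np.asarray(u0, dtype=float).copy()
        for _ in range(steps):
            u_plus = np.roll(u, -1)    # u_{i+1}
            u_minus = np.roll(u, 1)    # u_{i-1}
            # the neighbor average adds numerical dissipation that damps high harmonics
            u = 0.5 * (u_plus + u_minus) - a * dt / (2.0 * dx) * (u_plus - u_minus)
        return u

    x = np.linspace(0.0, 1.0, 100, endpoint=False)
    u_final = lax_friedrichs_advection(np.sin(2 * np.pi * x), steps=50)  # CFL number 0.5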

Experimental and model-based study of the robustness of line-edge roughness metric extraction in the presence of noise

Description: As critical dimensions shrink, line edge and width roughness (LER and LWR) become of increasing concern. Crucial to the goal of reducing LER is its accurate characterization. LER has traditionally been represented as a single rms value. More recently the use of power spectral density (PSD), height-height correlation (HHCF), and σ versus length plots has been proposed in order to extract the additional spatial descriptors of correlation length and roughness exponent. Here we perform a modeling-based noise-sensitivity study on the extraction of spatial descriptors from line-edge data as well as an experimental study of the robustness of these various descriptors using a large dataset of recent extreme-ultraviolet exposure data. The results show that in the presence of noise and in the large dataset limit, the PSD method provides higher accuracy in the extraction of the roughness exponent, whereas the HHCF method provides higher accuracy for the correlation length. On the other hand, when considering precision, the HHCF method is superior for both metrics.
Date: June 1, 2007
Creator: Naulleau, Patrick P. & Cain, Jason P.
Partner: UNT Libraries Government Documents Department
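
The two spatial descriptors compared above can be extracted from a measured edge profile in a few lines. The sketch below uses standard textbook definitions of the one-sided power spectral density and the height-height correlation function; normalizations and detrending choices are assumptions and may differ from the authors' implementation.

    import numpy as np

    def edge_psd(z, dx):
        """One-sided power spectral density of an edge profile z sampled every dx."""
        z = np.asarray(z, dtype=float) - np.mean(z)
        spectrum = np.fft.rfft(z)
        psd = (np.abs(spectrum) ** 2) * dx / len(z)
        freqs = np.fft.rfftfreq(len(z), d=dx)
        return freqs, psd

    def edge_hhcf(z, dx, max_lag=None):
        """Height-height correlation H(r) = <(z(x + r) - z(x))^2> for lags r = m*dx."""
        z = np.asarray(z, dtype=float)
        max_lag = max_lag or len(z) // 2
        lags = np.arange(1, max_lag)
        hhcf = np.array([np.mean((z[m:] - z[:-m]) ** 2) for m in lags])
        return lags * dx, hhcf

The roughness exponent is then read from the high-frequency fall-off of the PSD, and the correlation length from the knee where the HHCF saturates.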

Fourth-Order Method for Numerical Integration of Age- and Size-Structured Population Models

Description: In many applications of age- and size-structured population models, there is an interest in obtaining good approximations of total population numbers rather than of their densities. Therefore, it is reasonable in such cases to solve numerically not the PDE model equations themselves, but rather their integral equivalents. For this purpose quadrature formulae are used in place of the integrals. Because quadratures can be designed with any order of accuracy, one can obtain numerical approximations of the solutions with very fast convergence. In this article, we present a general framework and a specific example of a fourth-order method based on composite Newton-Cotes quadratures for a size-structured population model.
Date: January 8, 2008
Creator: Iannelli, M; Kostova, T & Milner, F A
Partner: UNT Libraries Government Documents Department
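
The core idea above, replacing the model's integrals with a quadrature whose order sets the convergence rate, can be illustrated with a composite Simpson rule, a fourth-order Newton-Cotes formula. The toy size density below is an assumption used only to exercise the quadrature; the authors' method couples such formulas to the model's time stepping.

    import numpy as np

    def composite_simpson(f, a, b, n):
        """Composite Simpson quadrature (fourth-order accurate); n subintervals, n even."""
        if n % 2:
            raise ValueError("n must be even")
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

    # total population = integral of a size density u(x) over all sizes (toy density exp(-x) * x^2)
    total = composite_simpson(lambda x: np.exp(-x) * x ** 2, 0.0, 20.0, 200)
    # the exact value of the toy integral is 2 (up to the truncated tail), so `total` is close to 2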

FY06 Engineering Research and Technology Report

Description: This report summarizes the core research, development, and technology accomplishments in Lawrence Livermore National Laboratory's Engineering Directorate for FY2006. These efforts exemplify Engineering's more than 50-year history of developing and applying the technologies needed to support the Laboratory's national security missions. A partner in every major program and project at the Laboratory throughout its existence, Engineering has prepared for this role with a skilled workforce and technical resources developed through both internal and external venues. These accomplishments embody Engineering's mission: "Enable program success today and ensure the Laboratory's vitality tomorrow". Engineering's investment in technologies is carried out primarily through two internal programs: the Laboratory Directed Research and Development (LDRD) program and the technology base, or "Tech Base", program. LDRD is the vehicle for creating technologies and competencies that are cutting-edge, or require discovery-class research to be fully understood. Tech Base is used to prepare those technologies to be more broadly applicable to a variety of Laboratory needs. The term commonly used for Tech Base projects is "reduction to practice". Thus, LDRD reports have a strong research emphasis, while Tech Base reports document discipline-oriented, core competency activities. This report combines the LDRD and Tech Base summaries into one volume, organized into six thematic technical areas: Engineering Modeling and Simulation; Measurement Technologies; Micro/Nano-Devices and Structures; Precision Engineering; Engineering Systems for Knowledge and Inference; and Energy Manipulation.
Date: January 22, 2007
Creator: Minichino, C; Alves, S W; Anderson, A T; Bennett, C V; Brown, C G; Brown, W D et al.
Partner: UNT Libraries Government Documents Department

Coupling Through Tortuous Path Narrow Slot Apertures into Complex Cavities

Description: A hybrid FEM/MoM model has been implemented to compute the coupling of fields into a cavity through narrow slot apertures having depth. The model utilizes the slot model of Warne and Chen [23]-[29] which takes into account the depth of the slot, wall losses, and inhomogeneous dielectrics in the slot region. The cavity interior is modeled with the mixed-order, covariant-projection hexahedral elements of Crowley [32]. Results are given showing the accuracy and generality of the method for modeling geometrically complex slot-cavity combinations.
Date: July 26, 1999
Creator: Jedlicka, Russell P.; Castillo, Steven P. & Warne, Larry K.
Partner: UNT Libraries Government Documents Department

A comparison of drive mechanisms for precision motion controlled stages

Description: This abstract presents a comparison of two drive mechanisms, a Rohlix® drive and a polymer nut drive, for precision motion controlled stages. A single-axis long-range stage with a 50 mm traverse, combined with a short-range stage with a 16 µm traverse at an operational bandwidth of 2.2 kHz, was developed to evaluate the performance of the drives. The polymer nut and Rohlix® drives showed 4 nm RMS and 7 nm RMS positioning capabilities respectively, over traverses of 5 mm at a maximum velocity of 0.15 mm·s⁻¹, with the short-range stage operating at a 2.2 kHz bandwidth. Further results will be presented in the subsequent sections.
Date: March 22, 2006
Creator: Buice, E S; Yang, H; Otten, D; Smith, S T; Hocken, R J; Trumper, D L et al.
Partner: UNT Libraries Government Documents Department

Experimental Mathematics and Mathematical Physics

Description: One of the most effective techniques of experimental mathematics is to compute mathematical entities such as integrals, series or limits to high precision, then attempt to recognize the resulting numerical values. Recently these techniques have been applied with great success to problems in mathematical physics. Notable among these applications are the identification of some key multi-dimensional integrals that arise in Ising theory, quantum field theory and in magnetic spin theory.
Date: June 26, 2009
Creator: Bailey, David H.; Borwein, Jonathan M.; Broadhurst, David & Zudilin, Wadim
Partner: UNT Libraries Government Documents Department
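
The compute-then-recognize workflow described above can be reproduced with the mpmath library; the integral below is only a simple stand-in for the multi-dimensional Ising-type integrals the authors treat, and the call to identify is a heuristic that may return nothing for harder constants.

    from mpmath import mp, quad, identify

    mp.dps = 50                                       # work with 50 significant digits
    value = quad(lambda x: 1 / (1 + x ** 2), [0, 1])  # equals arctan(1) = pi/4
    print(value)
    print(identify(value, ['pi']))                    # tries to match the number to a closed form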

Experimental computation with oscillatory integrals

Description: In a previous study, one of the present authors, together with D. Borwein and I. Leonard [8], examined the asymptotic behavior of the p-norm of the sinc function, sinc(x) = (sin x)/x, and along the way looked at closed forms for integer values of p. In this study we address these integrals with the tools of experimental mathematics, namely by computing their numerical values to high precision, both as a challenge in itself and in an attempt to recognize the numerical values as closed-form constants. With this approach, we are able to reproduce several of the results of [8] and to find new results, both numeric and analytic, that go beyond the previous study.
Date: June 26, 2009
Creator: Bailey, David H. & Borwein, Jonathan M.
Partner: UNT Libraries Government Documents Department
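
As a small illustration of the computations described above, mpmath's oscillatory quadrature can evaluate sinc integrals to high precision; the closed forms in the comments are standard results quoted only as a check, and the precision setting is an arbitrary choice.

    from mpmath import mp, quadosc, sinc, pi, inf

    mp.dps = 30                                    # 30-digit working precision

    def sinc_power_integral(p):
        """Integral of sinc(x)^p over the whole real line via oscillatory quadrature."""
        return 2 * quadosc(lambda x: sinc(x) ** p, [0, inf], zeros=lambda n: pi * n)

    print(sinc_power_integral(2))   # known closed form: pi
    print(sinc_power_integral(3))   # known closed form: 3*pi/4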

Z' Bosons, the NuTeV Anomaly, and the Higgs Boson Mass

Description: Fits to the precision electroweak data that include the NuTeV measurement are considered in family-universal, anomaly-free U(1) extensions of the Standard Model. In data sets from which the hadronic asymmetries are excluded, some of the Z′ models can double the predicted value of the Higgs boson mass, from ≈60 to ≈120 GeV, removing the tension with the LEP II lower bound, while also modestly improving the χ² confidence level. The effect of the Z′ models on both m_H and the χ² confidence level is increased when the NuTeV measurement is included in the fit. Both the original NuTeV data and a revised estimate by the PDG are considered.
Date: March 3, 2009
Creator: Chanowitz, Michael S
Partner: UNT Libraries Government Documents Department

Automated coregistration of MTI spectral bands.

Description: In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct 'dead reckoning' approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to 'tweak' the precision of the band-to-band registration using correlations in the imagery itself.
Date: January 1, 2002
Creator: Theiler, J. P. (James P.); Galbraith, A. E. (Amy E.); Pope, P. A. (Paul A.); Ramsey, K. A. (Keri A.) & Szymanski, J. J. (John J.)
Partner: UNT Libraries Government Documents Department
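
The "tweak" step mentioned at the end, estimating residual band-to-band offsets from correlations in the imagery itself, is commonly done with phase correlation. The NumPy sketch below shows that generic technique for integer-pixel shifts; it is an assumed stand-in, not the MTI production code, and sub-pixel refinement is omitted.

    import numpy as np

    def phase_correlation_shift(band_a, band_b):
        """Estimate the integer (row, col) offset between two overlapping image bands."""
        fa = np.fft.fft2(band_a)
        fb = np.fft.fft2(band_b)
        cross_power = fa * np.conj(fb)
        cross_power /= np.abs(cross_power) + 1e-12      # keep only the phase information
        correlation = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(correlation), correlation.shape)
        # shifts past the half-image point wrap around to negative offsets
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))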

Advanced Lost Foam Casting Technology - Phase V

Description: Previous research, conducted under DOE Contracts DE-FC07-89ID12869, DE-FC07-93ID12230 and DE-FC07-95ID113358 made significant advances in understanding the Lost Foam Casting (LFC) Process and clearly identified areas where additional developments were needed to improve the process and make it more functional in industrial environments. The current project focused on eight tasks listed as follows: Task 1--Computational Model for the Process and Data Base to Support the Model; Task 2--Casting Dimensional Accuracy; Task 3--Pattern Production; Task 4--Improved Pattern Materials; Task 5--Coating Control; Task 6--In-Plant Case Studies; Task 7--Energy and the Environmental Data; and Task 8--Technology Transfer. This report summarizes the work done on all tasks in the period of October 1, 1999 through September 30, 2004. The results obtained in each task and subtask are summarized in this Executive Summary and details are provided in subsequent sections of the report.
Date: April 29, 2004
Creator: Sun, Wanliang; Littleton, Harry E. & Bates, Charles E.
Partner: UNT Libraries Government Documents Department

Bounding CKM Mixing with a Fourth Family

Description: CKM mixing between third-family quarks and a possible fourth family is constrained by global fits to the precision electroweak data. The dominant constraint is from nondecoupling oblique corrections rather than the vertex correction to Z → b̄b used in previous analyses. The possibility of large mixing suggested by some recent analyses of FCNC processes is excluded, but 3-4 mixing of the same order as the Cabibbo mixing of the first two families is allowed.
Date: April 22, 2009
Creator: Chanowitz, Michael S.
Partner: UNT Libraries Government Documents Department

Genome Sequence Databases (Overview): Sequencing and Assembly

Description: From the time its role in heredity was discovered, DNA has been generating interest among scientists from different fields of knowledge: physicists have studied the three-dimensional structure of the DNA molecule, biologists have tried to decode the secrets of life hidden within these long molecules, and technologists have invented and improved methods of DNA analysis. The analysis of the nucleotide sequence of DNA occupies a special place among the methods developed. Thanks to the variety of sequencing technologies available, the process of decoding the sequence of genomic DNA (whole-genome sequencing) has become robust and inexpensive. Meanwhile, the assembly of whole-genome sequences remains a challenging task. In addition to the need to assemble millions of DNA fragments of different lengths (from 35 bp (Solexa) to 800 bp (Sanger)), great interest in the analysis of microbial communities (metagenomes) of different complexities raises new problems and pushes new requirements for sequence assembly tools to the forefront. The genome assembly process can be divided into two steps: draft assembly and assembly improvement (finishing). Although automatically performed (draft) assembly is capable of covering up to 98% of the genome, in most cases it still contains incorrectly assembled reads. The error rate of the consensus sequence produced at this stage is about 1 error per 2,000 bp. A finished genome represents an assembly of much higher accuracy (with no gaps or incorrectly assembled regions) and quality (≈1 error per 10,000 bp), validated through a number of computational and laboratory experiments.
Date: January 1, 2009
Creator: Lapidus, Alla L.
Partner: UNT Libraries Government Documents Department
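
To make the assembly step concrete, here is a toy suffix-prefix overlap merge of the kind a greedy assembler performs on two reads; real assemblers must handle sequencing errors, reverse complements, repeats, and millions of reads, none of which this sketch attempts.

    def merge_reads(left, right, min_overlap=3):
        """Merge two reads when a suffix of `left` exactly matches a prefix of `right`."""
        for overlap in range(min(len(left), len(right)), min_overlap - 1, -1):
            if left[-overlap:] == right[:overlap]:
                return left + right[overlap:]
        return None  # no sufficiently long exact overlap was found

    contig = merge_reads("ACGTTGCA", "TGCAGGT")   # suffix "TGCA" matches prefix "TGCA"
    # contig == "ACGTTGCAGGT"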

Isoelectronic trends of line strength data in the Li and Be isoelectronic sequences

Description: The decays of the 2p J = 1/2 and J = 3/2 levels of Li-like ions and of the 2s2p ¹P°₁ and ³P°₁ levels of Be-like ions can be used as simple-atom test beds for experimental lifetime measurements and for the development of accurate calculations of the transition rates. We have summarized and filtered the experimental data in order to obtain consistent data sets and isoelectronic trends that can be compared to theoretical predictions. The graphical presentation of line strength data enables direct comparison and evaluation of the merit of data along extended isoelectronic sequences. From this, the precision that is necessary for future meaningful experiments can be deduced.
Date: March 1, 2006
Creator: Trabert, E & Curtis, L J
Partner: UNT Libraries Government Documents Department

Normal-reflection image

Description: Common-angle wave-equation migration using the double-square-root equation is generally less accurate than common-shot migration because the wavefield-continuation equation for the former involves additional approximations compared with that for the latter. We present a common-angle wave-equation migration that has the same accuracy as common-shot wave-equation migration. An image obtained from common-angle migration is a four- to five-dimensional output volume for 3D cases. We propose a normal-reflection imaging condition for common-angle migration to produce a 3D output volume for 3D migration. The image is closely related to the normal-reflection coefficients at interfaces. This imaging condition will allow amplitude-preserving migration to generate an image with clear physical meaning.
Date: January 1, 2003
Creator: Huang, L. (Lian-Jie) & Fehler, Michael C.
Partner: UNT Libraries Government Documents Department

Design of Flexure-based Precision Transmission Mechanisms using Screw Theory

Description: This paper enables the synthesis of flexure-based transmission mechanisms that possess multiple decoupled inputs and outputs of any type (e.g. rotations, translations, and/or screw motions), which are linked by designer-specified transmission ratios. A comprehensive library of geometric shapes is utilized from which every feasible concept that possesses the desired transmission characteristics may be rapidly conceptualized and compared before an optimal concept is selected. These geometric shapes represent the rigorous mathematics of screw theory and uniquely link a body's desired motions to the flexible constraints that enable those motions. This paper's impact is most significant to the design of nano-positioners, microscopy stages, optical mounts, and sensors. A flexure-based microscopy stage was designed, fabricated, and tested to demonstrate the utility of the theory.
Date: February 7, 2011
Creator: Hopkins, J B & Panas, R M
Partner: UNT Libraries Government Documents Department
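
The computation at the heart of the screw-theory shapes, finding the constraint wrenches reciprocal to a set of freedom twists, reduces to a null-space problem. The NumPy sketch below uses the standard reciprocal pairing, in which a twist (ω, v) and a wrench (f, m) are reciprocal when ω·m + v·f = 0; it illustrates that idea only and is not the authors' synthesis procedure.

    import numpy as np

    def reciprocal_wrenches(twists, tol=1e-10):
        """Return a basis of wrenches [f, m] reciprocal to the given twists [omega, v]."""
        T = np.atleast_2d(np.asarray(twists, dtype=float))   # one 6-vector twist per row
        # after swapping the angular and linear blocks, reciprocity is an ordinary dot product
        swapped = np.hstack([T[:, 3:], T[:, :3]])
        _, singular_values, vt = np.linalg.svd(swapped)
        rank = int(np.sum(singular_values > tol))
        return vt[rank:]                                      # rows span the constraint space

    # a single rotational freedom about the z-axis through the origin leaves five constraints
    wrenches = reciprocal_wrenches([[0, 0, 1, 0, 0, 0]])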

Higgs Mass Constraints on a Fourth Family: Upper and Lower Limits on CKM Mixing

Description: Theoretical and experimental limits on the Higgs boson mass restrict CKM mixing of a possible fourth family beyond the constraints previously obtained from precision electroweak data alone. Existing experimental and theoretical bounds on m_H already significantly restrict the allowed parameter space. Zero CKM mixing is excluded, and mixing of order θ_Cabibbo is allowed. Upper and lower limits on 3-4 CKM mixing are exhibited as a function of m_H. We use the default inputs of the Electroweak Working Group and also explore the sensitivity of both the three- and four-family fits to alternative inputs.
Date: June 25, 2010
Creator: Chanowitz, Michael S.
Partner: UNT Libraries Government Documents Department

The BBP Algorithm for Pi

Description: The 'Bailey-Borwein-Plouffe' (BBP) algorithm for π is based on the BBP formula for π, which was discovered in 1995 and published in 1996 [3]: π = Σ_{k=0}^∞ (1/16^k) [4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)]. This formula as it stands permits π to be computed fairly rapidly to any given precision (although it is not as efficient for that purpose as some other formulas that are now known [4, pg. 108-112]). But its remarkable property is that it permits one to calculate (after a fairly simple manipulation) hexadecimal or binary digits of π beginning at an arbitrary starting position. For example, ten hexadecimal digits of π beginning at position one million can be computed in only five seconds on a 2006-era personal computer. The formula itself was found by a computer program, and almost certainly constitutes the first instance of a computer program finding a significant new formula for π. It turns out that the existence of this formula has implications for the long-standing unsolved question of whether π is normal to commonly used number bases (a real number x is said to be b-normal if every m-long string of digits in the base-b expansion appears, in the limit, with frequency b^(-m)). Extending this line of reasoning recently yielded a proof of normality for a class of explicit real numbers (although not yet including π) [4, pg. 148-156].
Date: September 17, 2006
Creator: Bailey, David H.
Partner: UNT Libraries Government Documents Department
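
The digit-extraction property described above can be demonstrated in a few lines of Python. The sketch follows the standard BBP recipe, modular exponentiation for the left tail plus a short right tail, and works in double precision, so only the first eight or so returned digits should be trusted; it is an illustration, not the author's optimized code.

    def pi_hex_digits(position, count=8):
        """Return `count` hex digits of pi starting at `position` (1-based) after the point."""
        def series(j, d):
            # left tail: modular exponentiation keeps only the fractional contribution
            total = sum(pow(16, d - k, 8 * k + j) / (8 * k + j) for k in range(d + 1))
            # right tail: a few rapidly shrinking terms
            total += sum(16.0 ** (d - k) / (8 * k + j) for k in range(d + 1, d + 15))
            return total % 1.0

        d = position - 1
        frac = (4 * series(1, d) - 2 * series(4, d) - series(5, d) - series(6, d)) % 1.0
        digits = ""
        for _ in range(count):
            frac *= 16
            digits += "0123456789ABCDEF"[int(frac)]
            frac -= int(frac)
        return digits

    print(pi_hex_digits(1))   # the hexadecimal expansion of pi begins 3.243F6A88...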

Thin Silicon MEMS Contact-Stress Sensor

Description: This thin MEMS contact-stress sensor continuously and accurately measures time-varying, solid-interface loads over tens of thousands of load cycles. The contact-stress sensor is extremely thin (150 µm) and has a linear output with an accuracy of ±1.5% FSO (full-scale output).
Date: May 28, 2010
Creator: Kotovksy, J; Tooker, A & Horsley, D
Partner: UNT Libraries Government Documents Department