Search Results: 1,615 Matching Results

DAiSES: Dynamic Adaptivity in Support of Extreme Scale. Department of Energy Project No. ER25622, Prime Contract No. DE-FG02-04ER25622. Final Report for September 15, 2004 - September 14, 2008

Description: The DAiSES project [Te04] focused on enabling conventional operating systems, in particular those running on extreme scale systems, to dynamically customize system resource management in order to offer applications the best possible environment in which to execute. Such dynamic adaptation allows operating systems to modify the execution environment in response to changes in workload behavior and system state. The main challenges of this project included determining what operating system (OS) algorithms, policies, and parameters should be adapted, when to adapt them, and how to adapt them. We addressed these challenges by using a combination of static analysis and runtime monitoring and adaptation to identify, a priori, profitable targets of adaptation and effective heuristics that can be used to dynamically trigger adaptation. Dynamic monitoring and adaptation of the OS was provided either by kernel modifications or by the use of KernInst and Kperfmon [Wm04]. Since Linux, an open source OS, was our target OS, patches submitted by kernel developers and researchers often facilitated kernel modifications. KernInst operates on unmodified commodity operating systems, i.e., Solaris and Linux; because it is fine-grained, it imposes few constraints on how the underlying OS can be modified. Dynamically adaptive functionality of operating systems, in terms of both policies and parameters, is intended to deliver the maximum attainable performance of a computational environment and meet, as well as possible, the needs of high-performance applications running on extreme scale systems while respecting system constraints. DAiSES research worked toward this goal by developing methodologies for dynamic adaptation of OS parameters and policies to manage stateful and stateless resources [Te06] and by pursuing the following two objectives: (1) development of mechanisms to dynamically sense, analyze, and adjust common performance metrics, fluctuating workload situations, and overall system environment conditions; and (2) demonstration, via Linux prototypes and experiments, of dynamic self-tuning and ...
Date: May 5, 2009
Creator: PI: Patricia J. Teller, Ph.D., University of Texas at El Paso, Department of Computer Science
Partner: UNT Libraries Government Documents Department
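
The sense-analyze-adjust loop described above can be illustrated with a minimal user-space sketch. This is an illustration only: DAiSES itself adapted the kernel via patches or KernInst, not from user space, and the metric, tunable, thresholds, and interval below are arbitrary assumptions. The sketch watches available memory in /proc/meminfo and adjusts the standard Linux vm.swappiness tunable.

import time

def read_mem_available_kb():
    # Sense: sample a workload/system-state metric from the kernel.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])
    return None

def set_swappiness(value):
    # Adjust: write a resource-management parameter (requires root).
    with open("/proc/sys/vm/swappiness", "w") as f:
        f.write(str(value))

def adapt_loop(low_kb=512 * 1024, interval_s=5):
    while True:
        avail = read_mem_available_kb()
        if avail is not None:
            # Analyze: a simple threshold heuristic standing in for the
            # project's dynamically identified adaptation triggers.
            set_swappiness(10 if avail < low_kb else 60)
        time.sleep(interval_s)

if __name__ == "__main__":
    adapt_loop()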

Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

Description: The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
Date: May 12, 2008
Creator: Institute for Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA, et al.
Partner: UNT Libraries Government Documents Department
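
As a rough illustration of evaluating the number of clusters k for expression data: the framework described above relies on user-guided clustering and visualization rather than a single automatic score, and the random matrix below is only a placeholder for real 3D expression measurements, so this is a hedged sketch of one automatic criterion.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Rows are spatial positions (or cells), columns are genes.
expression = np.random.default_rng(0).random((500, 20))   # placeholder data

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
    scores[k] = silhouette_score(expression, labels)       # higher is better

best_k = max(scores, key=scores.get)
print("silhouette-preferred number of clusters:", best_k)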

The Association between Attitudes toward Computers and Understanding of Ethical Issues Affecting Their Use

Description: This study examines the association between the attitudes of students toward computers and their knowledge of the ethical uses of computers. The focus for this research was undergraduate students in the Colleges of Arts and Sciences (Department of Computer Science), Business and Education at the University of North Texas in Denton, Texas.
Date: May 1992
Creator: Gottleber, Timothy Theodore
Partner: UNT Libraries

Final Technical Report

Description: This project is conducted under the leadership and guidance of Sandia National Laboratories as part of the DOE Office of Science FAST-OS Program. It was initiated at the California Institute of Technology on February 1, 2005. The Principal Investigator (PI), Dr. Thomas Sterling, accepted a position as Full Professor in the Department of Computer Science at Louisiana State University (LSU) on August 15, 2005, while retaining his position as Faculty Associate at the California Institute of Technology’s Center for Advanced Computing Research. To take better advantage of the resources, research staff, and students at LSU, DOE transferred the award to LSU, where research on the FAST-OS Config-OS project continues in accord with the original proposal. This brief report summarizes the accomplishments of this project during its initial phase at the California Institute of Technology (Caltech).
Date: May 29, 2007
Creator: Sterling, Thomas
Partner: UNT Libraries Government Documents Department

Toward a multi-sensor neural net approach to automatic text classification

Description: Many automatic text indexing and retrieval methods use a term-document matrix that is automatically derived from the text in question. Latent Semantic Indexing, a recent method for approximating large term-document matrices, appears to be quite useful for text information retrieval, rather than text classification. Here we outline a method that attempts to combine the strength of the LSI method with that of neural networks in addressing the problem of text classification. In doing so, we also indicate ways to improve performance by adding additional "logical sensors" to the neural network, something that is hard to do with the LSI method when employed by itself. Preliminary results are summarized, but much work remains to be done.
Date: January 26, 1996
Creator: Dasigi, V. & Mann, R.
Partner: UNT Libraries Government Documents Department
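
A minimal sketch of the LSI-plus-neural-network idea, using scikit-learn as a stand-in; the documents, labels, and component/layer sizes below are illustrative assumptions, not the authors' setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "neutron scattering experiment at the reactor facility",
    "cross sections measured for the scattering experiment",
    "reactor facility scattering results and detector counts",
    "quarterly budget memo for travel and procurement",
    "procurement request and travel budget approval memo",
    "fiscal year budget summary for the division office",
]
labels = ["physics", "physics", "physics", "admin", "admin", "admin"]

# Term-document matrix -> low-rank LSI space -> small neural network classifier.
model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["detector counts from the scattering run"]))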

Information fusion for automatic text classification

Description: Analysis and classification of free text documents encompass decision-making processes that rely on several clues derived from text and other contextual information. When using multiple clues, it is generally not known a priori how these should be integrated into a decision. An algorithmic sensor based on Latent Semantic Indexing (LSI) (a recent successful method for text retrieval rather than classification) is the primary sensor used in our work, but its utility is limited by the reference library of documents. Thus, there is an important need to complement or at least supplement this sensor. We have developed a system that uses a neural network to integrate the LSI-based sensor with other clues derived from the text. This approach allows for systematic fusion of several information sources in order to determine a combined best decision about the category to which a document belongs.
Date: August 1, 1996
Creator: Dasigi, V.; Mann, R.C. & Protopopescu, V.A.
Partner: UNT Libraries Government Documents Department
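
A hedged sketch of the fusion step: the LSI-derived features act as the primary algorithmic sensor, additional clues computed from the text are appended, and a neural network integrates them. The specific clues here (word count and digit count) are invented for illustration and are not the clues used in the report.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier

docs = [
    "audit of 1995 travel expenses totaling 12000 dollars",
    "travel expense audit findings for fiscal 1996",
    "expense report audit with 45 line items flagged",
    "abstract on neural network fusion of text sensors",
    "sensor fusion methods for document classification",
    "combining latent semantic indexing with neural networks",
]
labels = ["finance", "finance", "finance", "research", "research", "research"]

# Primary sensor: LSI projection of the term-document matrix.
X_terms = TfidfVectorizer().fit_transform(docs)
X_lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_terms)

# Additional clues derived from the text (hypothetical examples).
X_clues = np.array([[len(d.split()), sum(c.isdigit() for c in d)] for d in docs], float)

# Fuse the sensors into one feature vector and let the network decide.
X = np.hstack([X_lsi, X_clues])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))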

Knowledge-based analysis of microarray gene expression data by using support vector machines

Description: The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
Date: June 18, 2001
Creator: Grundy, William; Manuel Ares, Jr. & Haussler, David
Partner: UNT Libraries Government Documents Department
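
A small sketch of the SVM workflow the abstract describes, with different kernels standing in for different similarity functions. Random matrices replace real microarray expression profiles, and scikit-learn replaces the authors' SVM implementation.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))        # rows: genes, columns: expression measurements
y = rng.integers(0, 2, size=100)      # 1 if the gene has the known function, else 0

for kernel in ("linear", "poly", "rbf"):          # different similarity functions
    print(kernel, cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean())

# Predict functional roles for uncharacterized genes (ORFs) from expression alone.
unlabeled = rng.normal(size=(5, 30))
print(SVC(kernel="rbf").fit(X, y).predict(unlabeled))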

Graduate Automotive Technology Education (GATE) Program: Center of Automotive Technology Excellence in Advanced Hybrid Vehicle Technology at West Virginia University

Description: This report summarizes the technical and educational achievements of the Graduate Automotive Technology Education (GATE) Center at West Virginia University (WVU), which was created to emphasize Advanced Hybrid Vehicle Technology. The Center has supported the graduate studies of 17 students in the Department of Mechanical and Aerospace Engineering and the Lane Department of Computer Science and Electrical Engineering. These students have addressed topics such as hybrid modeling, construction of a hybrid sport utility vehicle (in conjunction with the FutureTruck program), a MEMS-based sensor, on-board data acquisition for hybrid design optimization, linear engine design and engine emissions. Courses have been developed in Hybrid Vehicle Design, Mobile Source Powerplants, Advanced Vehicle Propulsion, Power Electronics for Automotive Applications and Sensors for Automotive Applications, and have been responsible for 396 hours of graduate student coursework. The GATE program also enhanced the WVU participation in the U.S. Department of Energy Student Design Competitions, in particular FutureTruck and Challenge X. The GATE support for hybrid vehicle technology enhanced understanding of hybrid vehicle design and testing at WVU and encouraged the development of a research agenda in heavy-duty hybrid vehicles. As a result, WVU has now completed three programs in hybrid transit bus emissions characterization, and WVU faculty are leading the Transportation Research Board effort to define life cycle costs for hybrid transit buses. Research and enrollment records show that approximately 100 graduate students have benefited substantially from the hybrid vehicle GATE program at WVU.
Date: December 31, 2006
Creator: Clark, Nigel N.
Partner: UNT Libraries Government Documents Department

Combinatorial methods for gene recognition

Description: The major result of the project is the development of a new approach to gene recognition called the spliced alignment algorithm. They have developed an algorithm and implemented a software tool (for both IBM PC and UNIX platforms) which explores all possible exon assemblies in polynomial time and finds the multi-exon structure with the best fit to a related protein. Unlike other existing methods, the algorithm successfully performs exon assembly even in the case of short exons or exons with unusual codon usage; they also report correct assemblies for genes with more than 10 exons, provided a homologous protein is already known. On a test sample of human genes with known mammalian relatives, the average overlap between the predicted and the actual genes was 99%, which compares remarkably well with other existing methods. In addition, the algorithm reconstructed 87% of the genes with complete accuracy. The rare discrepancies between the predicted and real exon-intron structures were restricted either to extremely short initial or terminal exons or proved to be results of alternative splicing. Moreover, the algorithm performs reasonably well with non-vertebrate and even prokaryotic targets. The spliced alignment software PROCRUSTES has been in extensive use by the academic community since its announcement in August 1996 via the WWW server (www-hto.usc.edu/software/procrustes) and by biotech companies via the in-house UNIX version.
Date: October 29, 1997
Creator: Pevzner, P.A.
Partner: UNT Libraries Government Documents Department
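
The following is a drastically simplified sketch of the exon-assembly idea, under a stated assumption: real spliced alignment scores every assembly by aligning the concatenated exon sequence against a related protein, whereas here each candidate exon simply carries a precomputed fit score, which reduces the problem to chaining non-overlapping exons by dynamic programming.

def best_exon_chain(exons):
    # exons: list of (start, end, score) candidate exons with start < end.
    # Returns the non-overlapping, left-to-right chain with maximal total score.
    if not exons:
        return []
    exons = sorted(exons, key=lambda e: e[1])
    best = [0.0] * len(exons)      # best[i]: best chain score ending with exon i
    prev = [-1] * len(exons)
    for i, (start, _end, score) in enumerate(exons):
        best[i] = score
        for j in range(i):
            if exons[j][1] <= start and best[j] + score > best[i]:
                best[i] = best[j] + score
                prev[i] = j
    # Trace back the highest-scoring assembly.
    i = max(range(len(exons)), key=lambda k: best[k])
    chain = []
    while i != -1:
        chain.append(exons[i])
        i = prev[i]
    return list(reversed(chain))

candidates = [(0, 90, 12.0), (50, 160, 9.5), (200, 320, 20.0), (300, 380, 4.0)]
print(best_exon_chain(candidates))   # -> [(0, 90, 12.0), (200, 320, 20.0)]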

Technologies and tools for high-performance distributed computing. Final report

Description: In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit™, the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message-passing performance compared to its predecessor MPICH-G and is based on superior software design principles, resulting in a software base in which it was much easier to make the functional extensions and improvements we did. Using Globus services, we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations, which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications, including an award-winning, record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, present experimental results quantifying the performance improvements, and conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.
Date: May 1, 2000
Creator: Karonis, Nicholas T.
Partner: UNT Libraries Government Documents Department
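
For flavor, a minimal MPI broadcast in Python via mpi4py. This is an assumption for illustration: MPICH-G2 itself is a C implementation of MPI, and its multilevel topology-aware collectives live inside the library, so this only shows a collective operation from the application's point of view.

from mpi4py import MPI

# Run with any MPI implementation, e.g.: mpiexec -n 4 python bcast_demo.py
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Root prepares the data; the collective delivers it to every rank.
data = {"step": 0, "params": [1.0, 2.0]} if rank == 0 else None
data = comm.bcast(data, root=0)
print(f"rank {rank} received {data}")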

Scaffolding for Digital Curation Education: A One Week Unix Fundamentals Course

Description: This poster discusses scaffolding for digital curation education. The authors will teach students how to perform commands such as changing directories, moving and copying files, compressing folders, and altering permissions in the Unix environment. This will give students basic preparation for digital curation work and for the four intermediate and advanced courses in digital curation and data management offered by the iCamp Project at the University of North Texas.
Date: January 2013
Creator: Helsing, Joseph; Lewis, Paulette; Moen, William E. & Salter, Jacqueline
Partner: UNT College of Information

Computing connection coefficients of compactly supported wavelets on bounded intervals

Description: Daubechies wavelet basis functions have many properties that make them desirable as a basis for a Galerkin approach to solving PDEs: they are orthogonal, with compact support, and their connection coefficients can be computed. The method developed by Latto et al. to compute connection coefficients does not provide the correct inner product near the endpoints of a bounded interval, making the implementation of boundary conditions problematic. Moreover, the highly oscillatory nature of the wavelet basis functions makes standard numerical quadrature of integrals near the boundary impractical. The authors extend the method of Latto et al. to construct and solve a linear system of equations whose solution provides the exact computation of the integrals at the boundaries. As a consequence, they provide the correct inner product for wavelet basis functions on a bounded interval.
Date: April 1, 1997
Creator: Romine, C. H. & Peyton, B. W.
Partner: UNT Libraries Government Documents Department
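
For reference, one common form of the two-term connection coefficients (standard notation with derivative orders d_1, d_2 and translation l, not necessarily the report's own notation) is

    \Lambda^{d_1 d_2}_{l} = \int_{-\infty}^{\infty} \varphi^{(d_1)}(x)\, \varphi^{(d_2)}(x - l)\, dx ,

where \varphi is the Daubechies scaling function. On a bounded interval the integration range is truncated at the endpoint, e.g.

    \Lambda^{d_1 d_2}_{l}\Big|_{[0,\infty)} = \int_{0}^{\infty} \varphi^{(d_1)}(x)\, \varphi^{(d_2)}(x - l)\, dx ,

and it is these truncated endpoint integrals that the extended linear system described above computes exactly.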

Coefficient adaptive triangulation for strongly anisotropic problems

Description: Second order elliptic partial differential equations arise in many important applications, including flow through porous media, heat conduction, and the distribution of electrical or magnetic potential. The prototype is the Laplace problem, which in discrete form produces a coefficient matrix that is relatively easy to solve in a regular domain. However, the presence of anisotropy produces a matrix whose condition number is increased, making the resulting linear system more difficult to solve. In this work, we take the anisotropy into account in the discretization by mapping each anisotropic region into a "stretched" coordinate space in which the anisotropy is removed. The region is then uniformly triangulated, and the resulting triangulation is mapped back to the original space. The effect is to generate long slender triangles that are oriented in the direction of "preferred flow." Slender triangles are generally regarded as numerically undesirable since they tend to cause poor conditioning; however, our triangulation has the effect of producing effective isotropy, thus improving the condition number of the resulting coefficient matrix.
Date: January 1, 1996
Creator: D'Azevedo, E.F.; Romine, C.H. & Donato, J.M.
Partner: UNT Libraries Government Documents Department
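
A minimal constant-coefficient illustration of the stretched-coordinate idea (the report treats the general coefficient-adaptive case; here a and b are assumed constant anisotropic coefficients):

    -\frac{\partial}{\partial x}\!\left( a\, \frac{\partial u}{\partial x} \right)
    -\frac{\partial}{\partial y}\!\left( b\, \frac{\partial u}{\partial y} \right) = f ,
    \qquad a, b > 0 .

With the stretched coordinates \hat{x} = x/\sqrt{a} and \hat{y} = y/\sqrt{b}, the equation becomes the isotropic Laplace problem

    -\frac{\partial^2 u}{\partial \hat{x}^2} - \frac{\partial^2 u}{\partial \hat{y}^2} = f .

A uniform triangulation built in the (\hat{x}, \hat{y}) plane and mapped back to (x, y) is stretched by a factor of roughly \sqrt{a/b} in one direction, which produces the long, flow-aligned triangles described above.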

EDONIO: Extended distributed object network I/O library

Description: This report describes EDONIO (Extended Distributed Object Network I/O), an enhanced version of DONIO (Distributed Object Network I/O Library) optimized for the Intel Paragon systems using the new M-ASYNC access mode. DONIO provided fast file I/O capabilities in the Intel iPSC/860 and Paragon distributed memory parallel environments by caching a copy of the entire file in memory distributed across all processors. EDONIO is more memory efficient, caching only a subset of the disk file at a time. DONIO's high memory requirements and use of 32-bit integer indexing restricted it to files no larger than 2 Gigabytes. EDONIO overcomes this barrier by using the extended integer library routines provided by Intel's NX operating system. For certain applications, EDONIO may show a ten-fold improvement in performance over the native NX I/O routines.
Date: March 1, 1995
Creator: D'Azevedo, E.F. & Romine, C.H.
Partner: UNT Libraries Government Documents Department

Final Report: Task Force on requirements for HPC software: Guidelines for specifying HPC software, June 1, 1997 - August 31, 1999

Description: This document describes the results of a task force convened to determine what types of system software and tools were sufficiently important to warrant implementation across multiple vendors and machine types. The group included representatives from a wide range of user sites, as well as from the software development groups at vendor sites. Together, they established key software requirements, identified priorities for different types of user organizations, and formulated the requirements into language suitable for direct inclusion in procurements and requests-for-bids. The report is structured into four sections. The first discusses the formation and objectives of the task force and the processes used to arrive at consensus. Part 2 outlines the group's assumptions about how software will be specified in RFPs for parallel and clustered computers. In the next section, a tabular summary describes the requirements and the priority rank that was assigned to each. Part 4 presents the wording that is recommended for specifying each software element. Examples of how the requirements might be applied in various RFP scenarios are in the appendices, which also provide vendor estimates of the level of effort required to develop and supply each requirement. The task force was sponsored by the Parallel Tools Consortium and the National Coordination Office for Computing, Information, and Communication. It was funded by grants from the DoD HPC Modernization Program (NAVO Major Shared Resource Center), the US Department of Energy, the National Science Foundation, NASA's Ames Research Center, and the Defense Advanced Research Projects Agency.
Date: March 31, 1999
Creator: Pancake, Cherri M.
Partner: UNT Libraries Government Documents Department

Beta testing the Intel Paragon MP

Description: This report summarizes the third phase of a Cooperative Research and Development Agreement between Oak Ridge National Laboratory and Intel in evaluating a 28-node Intel Paragon MP system. An MP node consists of three 50-MHz i860XPs sharing a common bus to memory and to the mesh communications interface. The performance of the shared-memory MP node is measured and compared with other shared-memory multiprocessors. Bus contention is measured between processors and with message passing. Recent improvements in message passing and I/O are also reported.
Date: June 1, 1995
Creator: Dunigan, T.H.
Partner: UNT Libraries Government Documents Department

Performance of ATM/OC-12 on the Intel Paragon

Description: This report summarizes communication performance of GigaNet's OC-12 ATM interface for the Intel Paragon. One-way latency of 41 µs and bandwidth of 68 MB/s (full OC-12) are measured using GigaNet's AAL5 API between two Paragons. Performance is compared with Ethernet, HiPPI, and the Paragon's native message-passing facility.
Date: May 1, 1996
Creator: Dunigan, T.H.
Partner: UNT Libraries Government Documents Department

Terascale Optimal PDE Simulations (TOPS) Center

Description: Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of Sandia, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by Sandia National Laboratories for very large scale computations and, as already noted, BDDC was first developed by a Sandia scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges; the success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as access to a variety of parallel and distributed computer systems. The success in finding scalable and robust domain decomposition algorithms for very large numbers of processors and very large finite element problems is illustrated, e.g., in [24, 25, 26]. ...
Date: July 9, 2007
Creator: Widlund, Professor Olof B.
Partner: UNT Libraries Government Documents Department
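
The record above concerns FETI-DP and BDDC; as a much simpler stand-in for the domain decomposition preconditioning idea, here is a one-level overlapping additive Schwarz preconditioner for a 1D Poisson problem, used inside preconditioned conjugate gradients. This is explicitly not FETI-DP or BDDC, and the problem size and overlap are arbitrary illustration choices.

import numpy as np

def poisson_1d(n):
    # Standard 3-point finite-difference Laplacian on n interior grid points.
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def additive_schwarz(A, subdomains):
    # One-level overlapping additive Schwarz: sum of exact local solves.
    factors = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for idx, Ainv in factors:
            z[idx] += Ainv @ r[idx]
        return z
    return apply

def pcg(A, b, precond, tol=1e-10, maxit=500):
    # Preconditioned conjugate gradient iteration.
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n, overlap = 200, 8
A, b = poisson_1d(n), np.ones(n)
subdomains = [np.arange(0, n // 2 + overlap), np.arange(n // 2 - overlap, n)]
x, iters = pcg(A, b, additive_schwarz(A, subdomains))
print("converged in", iters, "preconditioned CG iterations")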

A new shared-memory programming paradigm for molecular dynamics simulations on the Intel Paragon

Description: This report describes the use of shared memory emulation with DOLIB (Distributed Object Library) to simplify parallel programming on the Intel Paragon. A molecular dynamics application is used as an example to illustrate the use of the DOLIB shared memory library. SOTON-PAR, a parallel molecular dynamics code with explicit message-passing using a Lennard-Jones 6-12 potential, is rewritten using DOLIB primitives. The resulting code has no explicit message primitives and resembles a serial code. The new code can perform dynamic load balancing and achieves better performance than the original parallel code with explicit message-passing.
Date: December 1, 1994
Creator: D'Azevedo, E.F. & Romine, C.H.
Partner: UNT Libraries Government Documents Department
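
As a point of reference for the physics in the record above, a serial sketch of the Lennard-Jones 6-12 potential and forces. The distributed-memory and DOLIB shared-memory-emulation machinery is deliberately left out, and the particle count and parameters are arbitrary assumptions.

import numpy as np

def lennard_jones(positions, epsilon=1.0, sigma=1.0):
    # Pairwise Lennard-Jones 6-12 energy and forces:
    #   V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
    n = len(positions)
    energy = 0.0
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r2 = rij @ rij
            sr6 = (sigma * sigma / r2) ** 3
            sr12 = sr6 * sr6
            energy += 4.0 * epsilon * (sr12 - sr6)
            # Force on particle i from j: 24*eps*(2*sr12 - sr6)/r^2 * rij
            fij = 24.0 * epsilon * (2.0 * sr12 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return energy, forces

positions = np.random.default_rng(0).random((10, 3)) * 3.0
energy, forces = lennard_jones(positions)
print("potential energy:", energy)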