137 Matching Results

Search Results


Gen IV Materials Handbook Functionalities and Operation (4A) Handbook Version 4.0

Description: This document describes the navigation and operation of the Gen IV Materials Handbook, including its architecture and instructions for initiating new user access. The development rationale and history of the Handbook are summarized. The major development aspects, architecture, and design principles of the Handbook are briefly introduced to provide an overview of its past evolution and future prospects. Detailed instructions are given, with examples, for navigating the constructed Handbook components and using the main functionalities. Procedures are provided in step-by-step fashion for Data Upload Managers to upload reports and data files, as well as for new users to initiate Handbook access.
Date: September 1, 2013
Creator: Ren, Weiju
Partner: UNT Libraries Government Documents Department

Establishment of Northwest Building Testbeds: Final Progress Report

Description: This document provides a short summary of a project jointly funded by the DOE Building Technologies Program and the Northwest Energy Efficiency Alliance. The report summarizes the outcomes achieved in the jointly-funded project, describes major project activities, discusses future plans for the homes and data, and provides details on project costs and schedule performance.
Date: August 1, 2012
Creator: Stiles, Dennis L.
Partner: UNT Libraries Government Documents Department

Developing a Collection Digitization Workflow for the Elm Fork Natural Heritage Museum

Description: Natural history collections house immense amounts of data, but the majority of data is only accessible by locating the collection label, which is usually attached to the physical specimen. This method of data retrieval is time consuming and can be very damaging to fragile specimens. Digitizing the collections is one way to reduce the time and potential damage related to finding the collection objects. The Elm Fork Natural Heritage Museum is a natural history museum located at the University of North Texas and contains collections of both vertebrate and invertebrate taxa, as well as plants. This project designed a collection digitization workflow for Elm Fork by working through digitizing the Benjamin B. Harris Herbarium. The collection was cataloged in Specify 6, a database program designed for natural history collection management. By working through one of the museum’s collections, the project was able to identify and address challenges related to digitizing the museum’s holdings in order to create robust workflows. The project also produced a series of documents explaining common processes in Specify and a data management plan.
Date: August 2013
Creator: Evans, Colleen R.
Partner: UNT Libraries

Attack methodology Analysis: SQL Injection Attacks and Their Applicability to Control Systems

Description: Database applications have become a core component in control systems and their associated record keeping utilities. Traditional security models attempt to secure systems by isolating core software components and concentrating security efforts against threats specific to those computers or software components. Database security within control systems follows these models by using generally independent systems that rely on one another for proper functionality. The high level of reliance between the two systems creates an expanded threat surface. To understand the scope of a threat surface, all segments of the control system, with an emphasis on entry points, must be examined. The communication link between data and decision layers is the primary attack surface for SQL injection. This paper explains what SQL injection is and why it is a significant threat to control system environments.
Date: September 1, 2005
Creator: Rolston, Bri
Partner: UNT Libraries Government Documents Department
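The attack mechanism the abstract refers to can be shown in a few lines. This is an illustrative sketch, not material from the paper; sqlite3 stands in for a control-system database, and the table, column, and values are hypothetical:

```python
# Illustrative sketch (not from the paper): string-built SQL enables
# injection, while parameterized queries bind input as data, not syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE operators (name TEXT, clearance TEXT)")
conn.execute("INSERT INTO operators VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL text, so the
# injected OR clause matches every row and leaks all records.
vulnerable = "SELECT name FROM operators WHERE name = '%s'" % user_input
print(conn.execute(vulnerable).fetchall())  # [('alice',)]

# Safe: the driver binds the value as a literal string; no rows match.
safe = "SELECT name FROM operators WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # []
```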

The Object-Oriented Database Editor

Description: Because of an interest in object-oriented database systems, designers have created systems to store and manipulate specific sets of abstract data types that belong to the real world environment they represent. Unfortunately, the advantage of these systems is also a disadvantage since no single object-oriented database system can be used for all applications. This paper describes an object-oriented database management system called the Object-oriented Database Editor (ODE) which overcomes this disadvantage by allowing designers to create and execute an object-oriented database that represents any type of environment and then to store it and simulate that environment. As conditions within the environment change, the designer can use ODE to alter that environment without loss of data. ODE provides a flexible environment for the user; it is efficient; and it can run on a personal computer.
Date: December 1989
Creator: Coats, Sidney M. (Sidney Mark)
Partner: UNT Libraries

A Netcentric Scientific Research Repository

Description: The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has been recently termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them with a Web service and registering the service with the repository. Metadata associated with Web services was enhanced to allow this feature to be included. In addition, an improved binary to text encoding scheme was developed to reduce the size overhead for sending large scientific data files via XML messages used in Web services. (3) Integrated image analysis operations with SQL. This technique allows for images to be stored and managed conveniently in a relational database. SQL supplemented with map algebra operations is used to select and perform operations on sets of images.
Access: This item is restricted to the UNT Community Members at a UNT Libraries Location.
Date: December 2006
Creator: Harrington, Brian
Partner: UNT Libraries
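The XML size overhead mentioned in contribution (2) comes from text-encoding binary data. As a baseline for comparison (the dissertation's improved scheme is not reproduced here), standard base64 inflates a binary payload by roughly one third:

```python
# Hedged baseline sketch: base64 embedding of binary data in an XML
# message, the standard approach an improved encoding would beat.
import base64

payload = bytes(range(256)) * 4  # stand-in for binary image data
encoded = base64.b64encode(payload).decode("ascii")

# base64 emits 4 text characters for every 3 input bytes (~33% larger).
overhead = len(encoded) / len(payload)
print(f"base64 overhead: {overhead:.2f}x")  # prints "base64 overhead: 1.34x"

# The encoded text can then be embedded in an XML element; the element
# name here is hypothetical.
xml_message = f"<imageData encoding='base64'>{encoded}</imageData>"
```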

Shifts of focus among dimensions of user information problems as represented during interactive information retrieval

Description: The goal of this study is to increase understanding of information problems as they are revealed in interactions among users and search intermediaries during information retrieval. Specifically, this study seeks to investigate: (a) how interaction between users and search intermediaries reveals aspects of user information problems; (b) the concept of representation with respect to information problems in interactive information retrieval; and (c) how users and search intermediaries focus on aspects of user information problems during the course of searches.
Date: May 1998
Creator: Robins, David B. (David Bruce)
Partner: UNT Libraries

Toward a Data-Type-Based Real Time Geospatial Data Stream Management System

Description: The advent of sensory and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real time queries on spatial extents. In this thesis, I investigated several fundamental research issues toward building a data-type-based real time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and the efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work established the fundamental building blocks toward a full-fledged geo-stream database management system and has potential impact in many applications such as hazard weather alerting and monitoring, traffic analysis, and environmental modeling.
Date: May 2011
Creator: Zhang, Chengyang
Partner: UNT Libraries
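The WNN (window-based nearest neighbor) operator named above can be sketched as a sliding time window over geo-tagged readings. The class and API below are hypothetical illustrations of the idea, not the thesis's constructs:

```python
# Hypothetical sketch: a window-based nearest neighbor operator keeps only
# readings inside a sliding time window and answers NN queries against them.
import math
from collections import deque

class WNN:
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.points = deque()  # (timestamp, x, y) in arrival order

    def insert(self, t, x, y):
        self.points.append((t, x, y))
        # Evict readings that have slid out of the window.
        while self.points and self.points[0][0] <= t - self.window:
            self.points.popleft()

    def nearest(self, qx, qy):
        # Linear scan for clarity; a real system would use a spatial index.
        return min(self.points,
                   key=lambda p: math.hypot(p[1] - qx, p[2] - qy),
                   default=None)

stream = WNN(window_seconds=60)
stream.insert(0, 0.0, 0.0)
stream.insert(30, 5.0, 5.0)
stream.insert(70, 1.0, 1.0)       # arrival at t=70 evicts the t=0 reading
print(stream.nearest(0.0, 0.0))   # (70, 1.0, 1.0)
```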

Key Components of a Comprehensive Visual Information System for College-Level Design Education Curriculum Analysis

Description: Electronic and computer technology have advanced and transformed graphic design. New technologies are forcing design educators to constantly monitor and update their programs, creating a need for a system to be adopted by college-level institutions to better investigate, evaluate and plan art and design curriculum. The author identifies metaphorical approaches to designing a two-part solution, which includes a Comprehensive Visual Information System (CVIS) and Three-Dimensional Virtual Database (3DVDb), which assign volumetric form to education components based on the form, structure and content of a discipline. Research and development of the conceptual design for the CVIS and 3DVDb are intended to aid in the development of an electronic media solution to be made accessible to students, faculty and administrators.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2004
Creator: Short, Scott Allen
Partner: UNT Libraries

Archive and database architecture and usability

Description: Presentation for the 2017 Symposium on Developing Infrastructure for Computational Resources on South Asian Languages. This presentation discusses the use of a database for language archiving through the example of Himalayan Tibeto-Burman languages.
Date: November 17, 2017
Creator: Caplow, Nancy J.; Khular, Sumshot & Willis Oko, Christina
Partner: UNT College of Information

Commission on Systemic Interoperability

Description: The Commission on Systemic Interoperability was authorized by the Medicare Modernization Act and established by the Secretary of Health and Human Services. Its members were appointed by the President of the United States of America and the leaders of the 108th United States Congress, and it held its first meeting on January 10, 2005.
Date: 2005
Creator: Commission on Systemic Interoperability
Partner: UNT Libraries Government Documents Department

The Tariff Analysis Project: A database and analysis platform for electricity tariffs

Description: Much of the work done in energy research involves an analysis of the costs and benefits of energy-saving technologies and other measures from the perspective of the consumer. The economic value in particular depends on the price of energy (electricity, gas, or other fuel), which varies significantly both for different types of consumers and for different regions of the country. Ideally, to provide accurate information about the economic value of energy savings, prices should be computed directly from real tariffs as defined by utility companies. A large number of utility tariffs are now available freely over the web, but the complexity and diversity of tariff structures presents a considerable barrier to using them in practice. The goal of the Tariff Analysis Project (TAP) is to collect and archive a statistically complete sample of real utility tariffs, and build a set of database and web tools that make this information relatively easy to use in cost-benefit analysis. This report presents a detailed picture of the current TAP database structure and web interface. While TAP has been designed to handle tariffs for any kind of utility service, the focus here is on electric utilities within the United States. Electricity tariffs can be very complicated, so the database structures that have been built to accommodate them are quite flexible and can be easily generalized to other commodities.
Date: May 12, 2006
Creator: Coughlin, K.; White, R.; Bolduc, C.; Fisher, D. & Rosenquist, G.
Partner: UNT Libraries Government Documents Department
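A concrete sense of why tariff structures resist simple price lookups: even a basic tiered (block) electricity tariff needs per-block arithmetic. The sketch below is illustrative only; the rates and block sizes are invented, not drawn from the TAP database:

```python
# Hedged sketch: computing a monthly bill from a tiered (block) electricity
# tariff of the general kind TAP archives. All numbers are made up.
def monthly_bill(kwh, fixed_charge, tiers):
    """tiers: list of (block_kwh, rate); block_kwh None = final open block."""
    total = fixed_charge
    remaining = kwh
    for block, rate in tiers:
        # Charge this block's rate on usage that falls within the block.
        used = remaining if block is None else min(remaining, block)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return round(total, 2)

tariff = [(500, 0.10), (500, 0.15), (None, 0.22)]  # hypothetical blocks
print(monthly_bill(1200, fixed_charge=8.00, tiers=tariff))
# 8 + 500*0.10 + 500*0.15 + 200*0.22 = 177.0
```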

A Proposed Data Fusion Architecture for Micro-Zone Analysis and Data Mining

Description: Data Fusion requires the ability to combine or “fuse” data from multiple data sources. Time Series Analysis is a data mining technique used to predict future values from a data set based upon past values. Unlike other data mining techniques, however, Time Series places special emphasis on periodicity and how seasonal and other time-based factors tend to affect trends over time. One of the difficulties encountered in developing generic time series techniques is the wide variability of the data sets available for analysis. This presents challenges all the way from the data gathering stage to results presentation. This paper presents an architecture designed and used to facilitate the collection of disparate data sets well suited to Time Series analysis as well as other predictive data mining techniques. Results show this architecture provides a flexible, dynamic framework for the capture and storage of a myriad of dissimilar data sets and can serve as a foundation from which to build a complete data fusion architecture.
Date: August 1, 2012
Creator: McCarthy, Kevin & Manic, Milos
Partner: UNT Libraries Government Documents Department
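The emphasis on periodicity described above can be made concrete with a small sketch. This is an illustrative seasonality-aware prediction, not code from the paper; the function and data are made up:

```python
# Hedged sketch: predict the next point in a periodic series from the
# mean of earlier points at the same phase of the cycle (e.g., the same
# hour of day in an hourly load series). Data are invented.
def seasonal_forecast(series, period):
    """Forecast the next value as the mean of prior same-phase values."""
    phase = len(series) % period          # phase of the next point
    same_phase = series[phase::period]    # historical values at that phase
    return sum(same_phase) / len(same_phase)

hourly_load = [10, 14, 18, 14] * 3  # three identical 4-hour cycles
print(seasonal_forecast(hourly_load, period=4))  # 10.0 (next phase is 0)
```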


Description: While current production of ethanol as a biofuel relies on starch and sugar inputs, it is anticipated that sustainable production of ethanol for biofuel use will utilize lignocellulosic feedstocks. Candidate plant species to be used for lignocellulosic ethanol production include a large number of species within the Grass, Pine and Birch plant families. For these biofuel feedstock species, there are variable amounts of genome sequence resources available, ranging from complete genome sequences (e.g. sorghum, poplar) to transcriptome data sets (e.g. switchgrass, pine). These data sets are not only dispersed in location but also disparate in content. It will be essential to leverage and improve these genomic data sets for the improvement of biofuel feedstock production. The objectives of this project were to provide computational tools and resources for data-mining genome sequence/annotation and large-scale functional genomic datasets available for biofuel feedstock species. We have created a Bioenergy Feedstock Genomics Resource that provides a web-based portal or “clearing house” for genomic data for plant species relevant to biofuel feedstock production. Sequence data from a total of 54 plant species are included in the Bioenergy Feedstock Genomics Resource, including model plant species that permit leveraging of knowledge across taxa to biofuel feedstock species. We have generated additional computational analyses of these data, including uniform annotation, to facilitate genomic approaches to improved biofuel feedstock production. These data have been centralized in the publicly available Bioenergy Feedstock Genomics Resource (http://bfgr.plantbiology.msu.edu/).
Date: May 7, 2013
Creator: Buell, Carol Robin & Childs, Kevin L
Partner: UNT Libraries Government Documents Department

Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases

Description: Many activities consist of temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of this type of application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. The use of optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction committal and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering, and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting concurrency to the extent of allowing out-of-order commits and reads. With respect to the stated concern with optimistic approaches, two of the proposed algorithms are risk-free and return to read operations only committed data-item values. Risk with the third algorithm is greatly reduced by its conservative bias. All three algorithms avoid deadlock while providing risk-free or reduced-risk operation. The performance of the algorithms is determined analytically and with experimentation.
Experiments are performed using functional database management system models that implement the proposed algorithms and the well-known Conservative Multiversion Timestamp Ordering algorithm.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2001
Creator: Tuck, Terry W.
Partner: UNT Libraries
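The timestamp-ordering family these algorithms build on can be sketched minimally. This is the textbook basic timestamp-ordering rule, not any of the dissertation's three algorithms; the names are mine:

```python
# Hedged sketch of classic (basic) timestamp ordering: each data item
# records the largest read and write timestamps that touched it, and an
# operation arriving "too late" forces its transaction to restart.
class Item:
    def __init__(self):
        self.read_ts = 0
        self.write_ts = 0

def read(item, ts):
    if ts < item.write_ts:
        return False  # a younger transaction already wrote: restart
    item.read_ts = max(item.read_ts, ts)
    return True

def write(item, ts):
    if ts < item.read_ts or ts < item.write_ts:
        return False  # a younger transaction already read or wrote: restart
    item.write_ts = ts
    return True

x = Item()
assert write(x, ts=5)       # allowed: nothing newer has touched x
assert read(x, ts=7)        # allowed: reader is younger than the writer
assert not write(x, ts=6)   # rejected: timestamp 7 already read x
```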

Novel transcriptome assembly and comparative toxicity pathway analysis in mahi-mahi (Coryphaena hippurus) embryos and larvae exposed to Deepwater Horizon oil

Description: This article establishes the de novo transcriptomic database from the embryos and larvae of mahi-mahi exposed to water accommodated fractions (HEWAFs) of two Deepwater Horizon oil types (weathered and source oil) in an effort to advance understanding of the molecular aspects involved during specific toxicity responses.
Date: March 15, 2017
Creator: Xu, Elvis Genbo; Mager, Edward M.; Grosell, Martin; Hazard, E. Starr; Hardiman, Gary & Schlenk, Daniel
Partner: UNT College of Arts and Sciences

Integrating GIS, Archeology, and the Internet.

Description: At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT’s value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.
Date: August 1, 2004
Creator: White, Sera; Pace, Brenda Ringe & Lee, Randy
Partner: UNT Libraries Government Documents Department

The Cluster Hypothesis: A Visual/Statistical Analysis

Description: By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in which inter-item similarity coefficients for full text documents are scaled into a single dimension and then rendered as a two-dimensional visualization; the clustering behavior of relevant documents within these unidimensionally scaled representations is examined via visual and statistical methods. Taken as a whole, results of both experiments lend strong though not unqualified support to the Cluster Hypothesis. 
In Experiment 1, semantically meaningful 6.6-word document surrogates systematically conform to the predictions of the Cluster Hypothesis. In Experiment 2, the majority of the unidimensionally scaled datasets exhibit a marked nonuniformity of distribution of relevant documents, further supporting the Cluster Hypothesis. Results of ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2000
Creator: Sullivan, Terry
Partner: UNT Libraries
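The interdocument similarity matrices underlying both experiments can be sketched in a few lines. This is a hedged illustration with toy documents, not the study's TREC data or its multidimensional scaling step:

```python
# Hedged sketch: building an interdocument cosine-similarity matrix from
# raw term counts, the kind of matrix studies like this feed into
# multidimensional scaling. Documents are toy examples.
import math
from collections import Counter

docs = ["stocks fall as markets tumble",
        "markets rally as stocks climb",
        "new bird species found in forest"]

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a)
    return num / (math.sqrt(sum(v * v for v in a.values())) *
                  math.sqrt(sum(v * v for v in b.values())))

vectors = [Counter(d.split()) for d in docs]
sim = [[round(cosine(u, v), 2) for v in vectors] for u in vectors]
for row in sim:
    print(row)
# The two market stories are far more similar to each other (0.6) than to
# the bird story (0.0), the pattern the Cluster Hypothesis predicts.
```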

Characteristics of Actors Involved in Social Protest: An Extension of the Social Conflict Analysis Database (SCAD)

Description: Paper accompanying a presentation for the 2015 University of North Texas (UNT) Student and Faculty Research Symposium on African Studies. This paper discusses characteristics of actors involved in social protest and an extension of the social conflict analysis database.
Date: April 11, 2015
Creator: Naughton, Kelsey Ann
Partner: UNT College of Arts and Sciences

Characteristics of Actors Involved in Social Protest: An Extension of the Social Conflict Analysis Database (SCAD) [Presentation]

Description: Presentation for the 2015 University of North Texas (UNT) Student and Faculty Research Symposium on African Studies. This presentation discusses characteristics of actors involved in social protest and an extension of the social conflict analysis database (SCAD).
Date: April 11, 2015
Creator: Naughton, Kelsey Ann
Partner: UNT College of Arts and Sciences

A Tutorial on the Construction of High-Performance Resolution/Paramodulation Systems

Description: Over the past 25 years, researchers have written numerous deduction systems based on resolution and paramodulation. Of these systems, a very few have been capable of generating and maintaining a formula database containing more than just a few thousand clauses. These few systems were used to explore mechanisms for rapidly extracting limited subsets of relevant clauses. We have written this tutorial to reflect some of the best ideas that have emerged and to cast them in a form that makes them easily accessible to students wishing to write their own high-performance systems.
Date: September 1990
Creator: Butler, R. & Overbeek, R.
Partner: UNT Libraries Government Documents Department

Users Guide and Tutorial for PC-GenoGraphics: Version 1

Description: PC-GenoGraphics is a visual database/query facility designed for reasoning with genomic data. Data are represented to reflect variously accurate notions of the location of their sites, etc., along the length of the genome. Sequence data are efficiently stored and queried via a rather versatile language so that entire sequences of organisms will be treatable as they emerge. Other classes of information, such as function descriptions, are stored in a relational form, and joint queries relating these to sequence properties are supported. All queries result in visual responses that indicate locations along the genome. The results of queries can themselves be promoted to be queryable objects against which further queries can be launched.
Date: December 1992
Creator: Hagstrom, Ray; Overbeek, Ross & Price, Morgan
Partner: UNT Libraries Government Documents Department

A Unifying Version Model for Objects and Schema in Object-Oriented Database System

Description: There have been a number of different versioning models proposed. The research in this area can be divided into two categories: object versioning and schema versioning. In this dissertation, both problem domains are considered as a single unit. This dissertation describes a unifying version model (UVM) for maintaining changes to both objects and schema. UVM handles schema versioning operations by using object versioning techniques. The result is that the UVM allows the OODBMS to be much smaller than previous systems. Also, programmers need know only one set of versioning operations; thus, reducing the learning time by half. This dissertation shows that UVM is a simple but semantically sound and powerful version model for both objects and schema.
Date: August 1997
Creator: Shin, Dongil
Partner: UNT Libraries