You limited your search to:

  Partner: UNT Libraries
  Degree Discipline: Computer Science
  Degree Level: Doctoral
  Collection: UNT Theses and Dissertations
Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases

Access: Use of this item is restricted to the UNT Community.
Date: May 2001
Creator: Tuck, Terry W.
Description: Many activities are composed of temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of this type of application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. The use of optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction committal and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting currency to the ...
Contributing Partner: UNT Libraries
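
As a point of reference for the abstract above, here is a minimal sketch of the classical single-site timestamp-ordering checks that such algorithms build on. The class and method names are illustrative assumptions; the dissertation's algorithms are distributed and additionally preserve temporal dependencies by ordering transactions on real-time start timestamps.

# Minimal sketch of classical timestamp-ordering (TO) checks on one site.
# All names are illustrative; the dissertation's algorithms extend this
# idea to distributed databases and temporal-dependency preservation.

class TransactionAborted(Exception):
    """Raised when an operation arrives too late in timestamp order."""

class TimestampOrderingScheduler:
    def __init__(self):
        self.read_ts = {}    # item -> largest timestamp that read it
        self.write_ts = {}   # item -> largest timestamp that wrote it

    def read(self, ts, item):
        # A read is too late if a younger transaction already wrote the item.
        if ts < self.write_ts.get(item, 0):
            raise TransactionAborted(f"read({item}) by ts={ts} is too late")
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)

    def write(self, ts, item):
        # A write is too late if a younger transaction already read or wrote it.
        if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
            raise TransactionAborted(f"write({item}) by ts={ts} is too late")
        self.write_ts[item] = ts

Under these checks, every accepted schedule is equivalent to the serial schedule ordered by transaction timestamps, which is the property the abstract generalizes to temporally ordered serial schedules.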
Computational Complexity of Hopfield Networks

Date: August 1998
Creator: Tseng, Hung-Li
Description: There are three main results in this dissertation: the PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; shuffle-exchange), the exponential convergence behavior of discrete Hopfield networks, and the simulation of Turing machines by discrete Hopfield networks.
Contributing Partner: UNT Libraries
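
For readers unfamiliar with the model in the abstract above, the sketch below shows asynchronous convergence of a discrete Hopfield network; with a symmetric weight matrix and zero diagonal the energy E(s) = -0.5 * s^T W s never increases, so updates reach a fixed point. This is background only, not the dissertation's PLS-completeness constructions.

import numpy as np

def hopfield_converge(W, s, max_sweeps=1000):
    """Asynchronously update states s in {-1, +1} until a fixed point.
    W must be symmetric with a zero diagonal, which guarantees the
    energy never increases, so a fixed point is eventually reached."""
    s = np.array(s, dtype=int)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            h = W[i] @ s
            new = 1 if h > 0 else -1 if h < 0 else s[i]  # keep state on ties
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:          # a full sweep with no flips: fixed point
            return s
    return s

# Example: two mutually reinforcing units settle into agreement.
W = np.array([[0, 1], [1, 0]])
print(hopfield_converge(W, [1, -1]))    # -> [-1 -1]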
Optimizing Non-pharmaceutical Interventions Using Multi-coaffiliation Networks

Date: May 2013
Creator: Loza, Olivia G.
Description: Computational modeling is of fundamental significance in mapping possible disease spread and in designing strategies for its mitigation. Conventional contact networks implement the simulation of interactions as random occurrences, presenting public health bodies with a difficult trade-off between realistic model granularity and robust design of intervention strategies. Recently, researchers have been investigating the use of agent-based models (ABMs) to embrace the complexity of real-world interactions. At the same time, theoretical approaches provide epidemiologists with general optimization models in which demographics are intrinsically simplified. The emerging study of affiliation networks and co-affiliation networks provides an alternative to this trade-off. Co-affiliation networks maintain the realism innate to ABMs while reducing the complexity of contact networks into distinctly smaller k-partite graphs, where each partition represents a dimension of the social model. This dissertation studies the optimization of intervention strategies for infectious diseases, mainly within school systems. First, concepts of synthetic populations and affiliation networks are extended to propose a modified algorithm for the synthetic reconstruction of populations. Second, the definition of multi-coaffiliation networks is presented as the main social model in which risk is quantified and evaluated, thereby obtaining vulnerability indications for each school in the system. Finally, maximization ...
Contributing Partner: UNT Libraries
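
As a toy illustration of the co-affiliation idea in the abstract above, the sketch below projects a bipartite affiliation network (members grouped by, say, classrooms) into a weighted co-affiliation graph. The data and the shared-group weighting are hypothetical; the dissertation's multi-coaffiliation networks generalize this to several partitions at once.

from collections import defaultdict
from itertools import combinations

def co_affiliation(groups):
    """groups: dict mapping an affiliation (e.g. a classroom) to its members.
    Returns a dict mapping member pairs to the number of shared affiliations,
    i.e. the weighted projection of the bipartite affiliation network."""
    weights = defaultdict(int)
    for members in groups.values():
        for u, v in combinations(sorted(members), 2):
            weights[(u, v)] += 1
    return dict(weights)

# Hypothetical school data: two classrooms and a bus route.
print(co_affiliation({
    "class_A": {"ana", "ben", "eva"},
    "class_B": {"ben", "eva"},
    "bus_7":   {"ana", "ben"},
}))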
Exon/Intron Discrimination Using the Finite Induction Pattern Matching Technique

Date: December 1997
Creator: Taylor, Pamela A., 1941-
Description: DNA sequence analysis involves precise discrimination of two of the sequence's most important components: exons and introns. Exons encode the proteins that are responsible for almost all the functions in a living organism. Introns interrupt the sequence coding for a protein and must be removed from primary RNA transcripts before translation to protein can occur. A pattern recognition technique called Finite Induction (FI) is utilized to study the language of exons and introns. FI is especially suited for analyzing and classifying large amounts of data representing sequences of interest. It requires no biological information and employs no statistical functions. Finite Induction is applied to the exon and intron components of DNA by building a collection of rules based upon what it finds in the sequences it examines. It then attempts to match the known rule patterns with new rules formed as a result of analyzing a new sequence. A high number of matches predicts a probable close relationship between the two sequences; a low number of matches signifies a large difference between the two. This research demonstrates FI to be a viable tool for measurement when known patterns are available for the formation of rule sets.
Contributing Partner: UNT Libraries
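
Finite Induction itself is not specified in the abstract above, so the sketch below substitutes a deliberately simple stand-in: it "induces" fixed-length substring rules from a known sequence and scores a new sequence by how many of its rules match. This illustrates only the rule-building-and-matching workflow the abstract describes, not the actual FI technique.

def induce_rules(sequence, n=4):
    """Stand-in rule set: all length-n substrings of a known sequence."""
    return {sequence[i:i + n] for i in range(len(sequence) - n + 1)}

def match_score(rules, sequence, n=4):
    """Fraction of the new sequence's rules that match known rules; a high
    score suggests a close relationship, a low score a large difference."""
    new_rules = induce_rules(sequence, n)
    if not new_rules:
        return 0.0
    return len(new_rules & rules) / len(new_rules)

exon_rules = induce_rules("ATGGCCATTGTAATGGGCCGC")
print(match_score(exon_rules, "ATGGCCATT"))   # related fragment: high score
print(match_score(exon_rules, "TTTTTTTTT"))   # unrelated: low score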
Toward a Data-Type-Based Real Time Geospatial Data Stream Management System

Date: May 2011
Creator: Zhang, Chengyang
Description: The advent of sensory and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real-time queries on spatial extents. In this thesis, I investigate several fundamental research issues toward building a data-type-based real-time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and the efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work establishes the fundamental building blocks toward a full-fledged geo-stream database management system and has potential impact in many ...
Contributing Partner: UNT Libraries
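
To make the WNN (window-based nearest neighbor) operator in the abstract above concrete, here is a minimal sliding-window nearest-neighbor sketch over a point geo-stream. The class name, time units, and linear scan are assumptions for illustration; the dissertation defines the operator within a multi-typed algebra and adds query optimization.

import math
from collections import deque

class WindowNN:
    """Keep only stream points from the last `window` time units and
    answer nearest-neighbor queries against that window (linear scan)."""
    def __init__(self, window):
        self.window = window
        self.points = deque()          # (timestamp, x, y) in arrival order

    def insert(self, t, x, y):
        self.points.append((t, x, y))

    def nearest(self, now, qx, qy):
        # Evict points that have slid out of the time window.
        while self.points and self.points[0][0] <= now - self.window:
            self.points.popleft()
        return min(self.points,
                   key=lambda p: math.hypot(p[1] - qx, p[2] - qy),
                   default=None)

wnn = WindowNN(window=60)
wnn.insert(0, 0.0, 0.0)
wnn.insert(30, 5.0, 5.0)
print(wnn.nearest(70, 1.0, 1.0))   # (30, 5.0, 5.0); the point at t=0 expired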
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem

Date: August 2001
Creator: Chapin, Brenton
Description: Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used for the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent, identically distributed data and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are less. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent, identically distributed data. Several variations of Move One From Front and part of the randomized algorithm Timestamp are also analyzed, as algorithms for both the middle stage of Burrows-Wheeler compression and the List Update problem, with respect to overwork, asymptotic cost, and competitive ratio. The Best x of 2x-1 family includes Move-To-Front, ...
Contributing Partner: UNT Libraries
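
Since the abstract above centers on the middle stage of Burrows-Wheeler compression, here is a minimal Move-To-Front encoder/decoder for reference. The Best x of 2x-1 family generalizes this rule (Move-To-Front is a member of the family), but its exact move criterion is not given in the abstract, so only the classical algorithm is shown.

def mtf_encode(data, alphabet):
    """Classical Move-To-Front: emit each symbol's current list position,
    then move that symbol to the front of the list."""
    table = list(alphabet)
    out = []
    for sym in data:
        i = table.index(sym)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    table = list(alphabet)
    out = []
    for i in codes:
        sym = table[i]
        out.append(sym)
        table.insert(0, table.pop(i))
    return "".join(out)

# Runs of equal symbols, typical of Burrows-Wheeler output, become runs of 0s.
codes = mtf_encode("bbbaaa", "abc")
print(codes)                          # [1, 0, 0, 1, 0, 0]
print(mtf_decode(codes, "abc"))       # "bbbaaa"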
Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing

Date: December 1997
Creator: Sen, Sanjoy Kumar
Description: The fusion of computers and communications promises to herald the age of the information superhighway over high-speed communication networks, where the ultimate goal is to enable a multitude of users, at any place, to access information from anywhere and at any time. This, in a nutshell, is the goal envisioned by Personal Communication Services (PCS) and Xerox's ubiquitous computing. Despite the remarkable growth of mobile communication users in the last few years, the radio frequency spectrum allocated by the FCC (Federal Communications Commission) to this service is still very limited, and the usable bandwidth falls far short of the expected demand, particularly in view of the emergence of next-generation wireless multimedia applications such as video-on-demand, WWW browsing, and traveler information systems. Proper management of the available spectrum is necessary not only to accommodate these high-bandwidth applications, but also to alleviate problems due to sudden explosions of traffic in so-called hot cells. In this dissertation, we have developed simple load balancing techniques to cope with the problem of tele-traffic overloads in one or more hot cells in the system. The objective is to ease the high channel demand in hot cells by ...
Contributing Partner: UNT Libraries
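
The load-balancing idea in the abstract above can be illustrated with a toy channel-borrowing heuristic: a hot cell whose demand exceeds its channel allocation borrows spare channels from the neighbor with the most slack. Everything here (the data layout, the one-channel-at-a-time rule) is a hypothetical simplification, not the dissertation's scheme.

def balance_hot_cells(channels, demand, neighbors):
    """Greedy toy heuristic: while a cell's demand exceeds its channels,
    borrow one channel from the neighboring cell with the most slack."""
    loans = []
    for cell in channels:
        while demand[cell] > channels[cell]:
            donor = max(neighbors[cell],
                        key=lambda n: channels[n] - demand[n])
            if channels[donor] - demand[donor] <= 0:
                break                      # no neighbor has spare capacity
            channels[donor] -= 1
            channels[cell] += 1
            loans.append((donor, cell))
    return loans

channels = {"hot": 2, "cool1": 5, "cool2": 4}
demand = {"hot": 4, "cool1": 2, "cool2": 3}
neighbors = {"hot": ["cool1", "cool2"], "cool1": ["hot"], "cool2": ["hot"]}
print(balance_hot_cells(channels, demand, neighbors))   # two loans from cool1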
A Unifying Version Model for Objects and Schema in Object-Oriented Database System

Date: August 1997
Creator: Shin, Dongil
Description: A number of different versioning models have been proposed. The research in this area can be divided into two categories: object versioning and schema versioning. In this dissertation, both problem domains are considered as a single unit. This dissertation describes a unifying version model (UVM) for maintaining changes to both objects and schema. UVM handles schema versioning operations by using object versioning techniques. The result is that the UVM allows the OODBMS to be much smaller than previous systems. Also, programmers need to know only one set of versioning operations, thus reducing the learning time by half. This dissertation shows that UVM is a simple but semantically sound and powerful version model for both objects and schema.
Contributing Partner: UNT Libraries
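
The unifying idea in the abstract above, treating schema versions with the same machinery as object versions, can be sketched as a single version-derivation-tree class used for both. The class design below is an assumption for illustration, not the UVM's actual data structures.

class Version:
    """One node in a derivation tree of versions. The same class versions
    an object's state and a schema definition, mirroring the UVM's idea of
    handling schema versioning with object-versioning techniques."""
    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent
        self.children = []

    def derive(self, new_value):
        child = Version(new_value, parent=self)
        self.children.append(child)
        return child

# One set of versioning operations covers both kinds of artifact.
schema_v1 = Version({"Person": ["name"]})
schema_v2 = schema_v1.derive({"Person": ["name", "email"]})
person_v1 = Version({"name": "Ada"})
person_v2 = person_v1.derive({"name": "Ada", "email": "ada@example.com"})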
Exploring Trusted Platform Module Capabilities: A Theoretical and Experimental Study

Date: May 2008
Creator: Gunupudi, Vandana
Description: Trusted platform modules (TPMs) are hardware modules bound to a computer's motherboard that are being included in many desktops and laptops. Augmenting computers with these hardware modules adds powerful functionality in distributed settings, allowing us to reason about the security of these systems in new ways. In this dissertation, I study the functionality of TPMs from a theoretical as well as an experimental perspective. On the theoretical front, I leverage various features of TPMs to construct applications like random oracles that are impossible to implement in a standard model of computation. Apart from random oracles, I construct a new cryptographic primitive which is essentially a non-interactive form of the standard cryptographic primitive of oblivious transfer. I apply this new primitive to secure mobile agent computations, where interaction between various entities is typically required to ensure security. I prove these constructions secure using standard cryptographic techniques and assumptions. To test the practicality of these constructions and their applications, I performed an experimental study, both on an actual TPM and on a software TPM simulator enhanced to reflect the timings of a real TPM. This allowed me to benchmark the performance of the applications and test ...
Contributing Partner: UNT Libraries
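
One of the constructions named in the abstract above is a random oracle. The sketch below shows only the ideal functionality, implemented by standard lazy sampling; how the dissertation realizes it with TPM hardware is not reproduced here, and the output length is an arbitrary choice.

import os

class LazyRandomOracle:
    """Ideal random oracle via lazy sampling: the first query on an input
    draws a fresh uniform answer; repeated queries answer consistently."""
    def __init__(self, out_len=32):
        self.table = {}
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]

oracle = LazyRandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")   # consistent
assert oracle.query(b"hello") != oracle.query(b"world")   # independent (w.h.p.)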
Keywords in the mist: Automated keyword extraction for very large documents and back of the book indexing.

Date: May 2008
Creator: Csomai, Andras
Description: This research addresses the problem of automatic keyphrase extraction from large documents and back-of-the-book indexing. The potential benefits of automating this process are far-reaching, from improving information retrieval in digital libraries to saving countless man-hours by helping professional indexers create back-of-the-book indexes. The dissertation introduces a new methodology to evaluate automated systems, which allows for a detailed, comparative analysis of several techniques for keyphrase extraction. We introduce and evaluate both supervised and unsupervised techniques, designed to balance the resource requirements of an automated system against the best achievable performance. Additionally, a number of novel features are proposed, including a statistical informativeness measure based on chi statistics; an encyclopedic feature that taps into the vast knowledge base of Wikipedia to establish the likelihood of a phrase referring to an informative concept; and a linguistic feature based on sophisticated semantic analysis of the text using current theories of discourse comprehension. The resulting keyphrase extraction system is shown to outperform the current state of the art in supervised keyphrase extraction by a large margin. Moreover, a fully automated back-of-the-book indexing system based on the keyphrase extraction system was shown to lead to back ...
Contributing Partner: UNT Libraries
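
The "statistical informativeness measure based on chi statistics" in the abstract above can be illustrated with the standard chi-squared score of a 2x2 contingency table relating a candidate phrase to a document. The cell naming is my assumption; the dissertation's actual feature may differ in detail.

def chi_squared(a, b, c, d):
    """Chi-squared score of a 2x2 contingency table:
        a = occurrences of the phrase in the document
        b = occurrences of the phrase in the rest of the corpus
        c = other phrase occurrences in the document
        d = other phrase occurrences in the rest of the corpus
    A high score means the phrase is unusually associated with the
    document, a hint that it is informative enough to be a keyphrase."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

# A phrase concentrated in one document scores higher than a common one.
print(chi_squared(a=12, b=3, c=500, d=5000))    # distinctive phrase
print(chi_squared(a=12, b=400, c=500, d=5000))  # common phrase, lower score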