High Performance Architecture using Speculative Threads and Dynamic Memory Management Hardware

Date: December 2007
Creator: Li, Wentong
Description: With advances in very large scale integration (VLSI) technology, hundreds of billions of transistors can be packed into a single chip. With this increased hardware budget, how to take advantage of the available hardware resources becomes an important research area. Some researchers have shifted from the control-flow von Neumann architecture back to dataflow architectures in order to explore scalable designs leading to multi-core systems with several hundred processing elements. In this dissertation, I address how the performance of modern processing systems can be improved while attempting to reduce hardware complexity and energy consumption. My research tackles both central processing unit (CPU) performance and memory subsystem performance. More specifically, I describe the design of an innovative decoupled multithreaded architecture that can be used in multi-core processor implementations. I also address how memory management functions can be off-loaded from processing pipelines to further improve system performance and eliminate the cache pollution caused by runtime management functions.
Contributing Partner: UNT Libraries
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem

Date: August 2001
Creator: Chapin, Brenton
Description: Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used for the List Update problem (a minimal Move-To-Front sketch follows this record). In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are lower. The improvements to Burrows-Wheeler compression covered in this work increase the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent identically distributed data. Several variations of Move One From Front and part of the randomized algorithm Timestamp are also analyzed, in terms of overwork, asymptotic cost, and competitive ratio, both as the middle stage of Burrows-Wheeler compression and for the List Update problem. The Best x of 2x-1 family includes Move-To-Front, ...
Contributing Partner: UNT Libraries
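As a point of reference for the Move-To-Front stage discussed in the abstract above, here is a minimal sketch of the classic Move-To-Front encoder over byte symbols. It is only the baseline that the dissertation's Best x of 2x-1 family improves on, not the new algorithms themselves.

    def move_to_front_encode(data, alphabet_size=256):
        # Recency list: the most recently seen symbol sits at index 0.
        recency = list(range(alphabet_size))
        output = []
        for symbol in data:
            index = recency.index(symbol)   # current rank of the symbol
            output.append(index)            # emit the rank
            recency.pop(index)              # move the symbol to the front
            recency.insert(0, symbol)
        return output

    # Runs of identical symbols (typical of BWT output) map to runs of zeros,
    # which the final entropy-coding stage compresses well.
    print(move_to_front_encode(b"aaabbbaaac"))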
A Highly Fault-Tolerant Distributed Database System with Replicated Data

Date: December 1994
Creator: Lin, Tsai S. (Tsai Shooumeei)
Description: Because of the high cost and impracticality of a high-connectivity network, most recent research in transaction processing has focused on distributed replicated database systems. In such a system, multiple copies of a data item are created and stored at several sites in the network, so that the system can tolerate more crash and communication failures and attain higher data availability. However, the multiple copies also introduce a global inconsistency problem, especially in a partitioned network. In this dissertation a tree quorum algorithm is proposed to solve this problem, imposing a logical tree structure along with dynamic system reconfiguration on all the copies of each data item (a small sketch of the basic tree-quorum idea follows this record). The proposed algorithm can be viewed as a dynamic voting technique which, with the help of an appropriate concurrency control algorithm, exhibits the major advantages of quorum-based replica control algorithms and of the available copies algorithm, so that a single copy is read for a read operation and a quorum of copies is written for a write operation. In addition, read and write quorums are computed dynamically and independently. As a result, expensive read operations, like those that require several copies of a data item to be read in most ...
Contributing Partner: UNT Libraries
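To illustrate the flavor of tree-structured quorums mentioned above, here is a small sketch of the classic logical-tree quorum rule (use the root if it is up, otherwise a majority of its subtrees, recursively). This is background illustration only, not the dissertation's dynamic-reconfiguration algorithm.

    class Copy:
        def __init__(self, site, children=()):
            self.site = site
            self.children = list(children)

    def tree_quorum(node, is_up):
        # Classic tree-quorum rule: the node itself forms a quorum if it is up;
        # otherwise quorums from a majority of its subtrees are combined.
        if is_up(node.site):
            return {node.site}
        sub = [q for q in (tree_quorum(c, is_up) for c in node.children) if q]
        need = len(node.children) // 2 + 1
        if node.children and len(sub) >= need:
            return set().union(*sub[:need])
        return None   # no quorum reachable under this node

    # Example: the root replica "A" is down, so a majority of its subtrees
    # supplies the quorum instead.
    tree = Copy("A", [Copy("B"), Copy("C"), Copy("D")])
    up = {"B", "C", "D"}
    print(tree_quorum(tree, lambda s: s in up))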
Hopfield Networks as an Error Correcting Technique for Speech Recognition

Access: Use of this item is restricted to the UNT Community.
Date: May 2004
Creator: Bireddy, Chakradhar
Description: I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks are used to store and retrieve patterns (a minimal store-and-recall sketch follows this record). I used this technique to store queries represented as natural language sentences and evaluated the accuracy of the technique for error correction in a spoken question-answering dialog between a computer and a user. I show that the use of an auto-associative Hopfield network helps make the speech recognition system more fault tolerant. I also examined the available encoding schemes for converting a natural language sentence into a pattern of zeros and ones that can be stored reliably in the Hopfield network, and I suggest scalable data representations that allow storing a large number of queries.
Contributing Partner: UNT Libraries
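To make the store-and-retrieve mechanism concrete, here is a minimal auto-associative Hopfield network using Hebbian weights over +1/-1 patterns. The sentence encodings and scalability techniques are the thesis's contribution and are not reproduced here; the pattern sizes and probe below are illustrative assumptions.

    import numpy as np

    def train_hopfield(patterns):
        # Hebbian rule: sum of outer products of the stored patterns,
        # with a zero diagonal (no self-connections).
        n = patterns.shape[1]
        weights = np.zeros((n, n))
        for p in patterns:
            weights += np.outer(p, p)
        np.fill_diagonal(weights, 0)
        return weights

    def recall(weights, probe, steps=10):
        # Synchronous sign updates; the state settles at (or near) the
        # stored pattern closest to the probe.
        state = probe.copy()
        for _ in range(steps):
            state = np.where(weights @ state >= 0, 1, -1)
        return state

    stored = np.array([[1, -1, 1, -1, 1, -1],
                       [1, 1, 1, -1, -1, -1]])
    W = train_hopfield(stored)
    noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one flipped bit
    print(recall(W, noisy))                  # recovers the first stored pattern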
Impact of actual interference on capacity and call admission control in a CDMA network.

Date: May 2004
Creator: Parvez, Asad
Description: An overwhelming number of models in the literature use average inter-cell interference to calculate the capacity of a Code Division Multiple Access (CDMA) network. The simplicity gained by such models comes at the cost of rendering the exact location of a user within a cell irrelevant. We calculate the actual per-user interference and analyze the effect of user distribution within a cell on the capacity of a CDMA network (a toy simulation of this effect follows this record). We show that although the capacity obtained using average interference is a good approximation to the capacity calculated using actual interference for a uniform user distribution, the deviation can be very large for non-uniform user distributions. Call admission control (CAC) algorithms are responsible for efficient management of a network's resources while guaranteeing quality of service and grade of service, i.e., accepting the maximum number of calls without affecting the quality of service of calls already present in the network. We design and implement global and local CAC algorithms and, through simulations, compare their network throughput and blocking probabilities for varying mobility scenarios. We show that even though our global CAC is better at resource management, the lack of substantial gain in network throughput and ...
Contributing Partner: UNT Libraries
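As a toy illustration of the uniform-versus-non-uniform effect described above, the sketch below compares the total other-cell interference seen at a home base station when users of a neighboring cell are placed uniformly versus clustered near the boundary facing the home cell. The cell geometry, path-loss exponent, and power-control model are simplifying assumptions, not the dissertation's model.

    import random, math

    def interference(users, home_bs=(0.0, 0.0), own_bs=(2.0, 0.0), gamma=4.0):
        # Each user is power-controlled toward its own base station, so its
        # transmit power grows as d_own**gamma; the interference it causes at
        # the home base station then scales as (d_own / d_home)**gamma.
        total = 0.0
        for (x, y) in users:
            d_own = math.dist((x, y), own_bs)
            d_home = math.dist((x, y), home_bs)
            total += (d_own / d_home) ** gamma
        return total

    def uniform_users(n):
        # Users spread uniformly over the neighboring cell (unit disk around its BS).
        users = []
        while len(users) < n:
            x, y = random.uniform(-1, 1), random.uniform(-1, 1)
            if x * x + y * y <= 1:
                users.append((x + 2.0, y))
        return users

    def edge_users(n):
        # Users clustered near the cell boundary facing the home base station.
        return [(1.1 + random.uniform(0, 0.1), random.uniform(-0.3, 0.3)) for _ in range(n)]

    random.seed(1)
    print("uniform distribution :", interference(uniform_users(50)))
    print("edge-clustered users :", interference(edge_users(50)))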
Implementation of Back Up Host in TCP/IP

Access: Use of this item is restricted to the UNT Community.
Date: December 2002
Creator: Golla, Mohan
Description: This problem-in-lieu-of-thesis considers a TCP client H1 that connects to a distant server S and downloads a file. If H1 crashes in the middle of the download, the TCP connection from H1 to S is lost; only after H1 restarts can the connection be re-established, and the file must then be downloaded again. Now consider a situation where there is a standby host H2 for the host H1. H1 and H2 monitor each other's health through heartbeat messages (as in SCTP), exchanging heartbeat chunks throughout the data transmission so that either host can detect when the other has crashed (a small heartbeat-detection sketch follows this record). H1 and H2 also transmit data to each other so that the transfer can continue when either of them crashes. If H2 detects the failure of H1, then H2 takes over, meaning that all resources assigned to H1 are reassigned to or taken over by H2. In particular, the IP addresses that were originally assigned to H1 are assigned to H2. In this scenario, movement of the TCP connection between H1 and S to a connection between H2 and S without disrupting the TCP ...
Contributing Partner: UNT Libraries
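The failover described above hinges on each host detecting the other's failure through heartbeats. Below is a small, self-contained sketch of only that detection step, using UDP datagrams between two local endpoints; the port, interval, timeout, and takeover action are illustrative assumptions, and the actual migration of the TCP connection and IP address is outside this sketch.

    import socket, threading, time

    HEARTBEAT_ADDR = ("127.0.0.1", 9999)   # assumed local test endpoint
    INTERVAL, TIMEOUT = 0.5, 2.0

    def primary(stop_after=3.0):
        # H1: send a heartbeat every INTERVAL seconds, then simulate a crash.
        time.sleep(0.2)                      # give the standby a moment to bind
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        start = time.time()
        while time.time() - start < stop_after:
            sock.sendto(b"HEARTBEAT", HEARTBEAT_ADDR)
            time.sleep(INTERVAL)
        # primary stops sending: simulated crash

    def standby():
        # H2: declare the primary dead if no heartbeat arrives within TIMEOUT.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(HEARTBEAT_ADDR)
        sock.settimeout(TIMEOUT)
        while True:
            try:
                sock.recvfrom(64)
            except socket.timeout:
                print("standby: primary missed heartbeat, taking over")
                return   # here H2 would claim H1's IP address and connection state

    threading.Thread(target=primary, daemon=True).start()
    standby()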
Implementation of Scalable Secure Multicasting

Access: Use of this item is restricted to the UNT Community.
Date: August 2002
Creator: Vellanki, Ramakrishnaprasad
Description: A large number of applications like multi-player games, video conferencing, chat groups and network management are presently based on multicast communication. As the group communication model is deployed for mainstream use, it is critical to provide security mechanisms that facilitate confidentiality, authenticity and integrity in group communications. Providing security in multicast communication requires addressing the problem of scalability in group key distribution. Scalability is a concern in group communication because of group membership dynamics: joining and leaving of members requires the distribution of a new session key to all the existing members of the group (a sketch of logical-key-hierarchy rekeying follows this record). The two approaches to key management, namely centralized and distributed, are reviewed. A hybrid solution is then provided, which represents an improved, scalable, and robust approach for a secure multicast framework. This framework is then implemented in an example application, a multicast news service.
Contributing Partner: UNT Libraries
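The scalability problem described above comes from rekeying on every join or leave. A common centralized ingredient in schemes of this kind is a logical key hierarchy, where each member holds the keys on its path to the root, so a leave only invalidates O(log n) keys. The sketch below counts the keys that must be regenerated when one member leaves a complete binary key tree; it illustrates the idea only and is not the thesis's hybrid framework.

    import math

    def keys_to_replace_on_leave(num_members, leaving_index):
        # In a binary logical key hierarchy, each member holds the keys on the
        # path from its leaf to the root. When a member leaves, every key on
        # that path is compromised and must be regenerated and redistributed,
        # so the rekey cost is O(log n) instead of O(n).
        depth = math.ceil(math.log2(max(num_members, 2)))
        path = []
        node = leaving_index
        for level in range(depth, 0, -1):
            path.append((level, node))
            node //= 2
        path.append((0, 0))            # the shared group (root) key
        return path

    replaced = keys_to_replace_on_leave(num_members=1024, leaving_index=357)
    print(len(replaced), "keys rekeyed out of", 2 * 1024 - 1, "keys in the tree")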
Improved Approximation Algorithms for Geometric Packing Problems With Experimental Evaluation

Access: Use of this item is restricted to the UNT Community.
Date: December 2003
Creator: Song, Yongqiang
Description: Geometric packing problems are NP-complete problems that arise in VLSI design. In this thesis, we present two novel algorithms using dynamic programming to compute exactly the maximum number of k x k squares of unit size that can be packed without overlap into a given n x m grid (a naive baseline for comparison follows this record). The first algorithm was implemented and ran successfully on large inputs of up to 1,000,000 nodes for different parameter values. A heuristic based on the second algorithm was also implemented. This heuristic is fast in practice, although it is not guaranteed to be optimal in theory. However, over a wide range of random data this version of the algorithm gives very good solutions very quickly and runs on problems of up to 100,000,000 nodes in a grid, with different ranges for the variables. This version of the algorithm is clearly superior to the first algorithm and has proven very efficient in practice.
Contributing Partner: UNT Libraries
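As a naive point of comparison for the packing problem above, here is a small greedy baseline that counts non-overlapping k x k squares placed row by row on an n x m grid with blocked cells. The blocked-cell variant is an assumption made only for illustration; this is neither the exact dynamic-programming algorithm nor the heuristic from the thesis.

    def greedy_pack(n, m, k, blocked=frozenset()):
        # Scan cells in row-major order; whenever a k x k block of free,
        # unused cells fits with its top-left corner here, place a square.
        used = set()
        count = 0
        for i in range(n - k + 1):
            for j in range(m - k + 1):
                cells = [(i + a, j + b) for a in range(k) for b in range(k)]
                if all(c not in blocked and c not in used for c in cells):
                    used.update(cells)
                    count += 1
        return count

    # 5 x 7 grid, 2 x 2 squares, one blocked cell in the middle.
    print(greedy_pack(5, 7, 2, blocked={(2, 3)}))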
An Integrated Architecture for Ad Hoc Grids

Date: May 2006
Creator: Amin, Kaizar Abdul Husain
Description: Extensive research has been conducted by the grid community to enable large-scale collaborations in pre-configured environments. Grid collaborations can vary in scale and motivation, resulting in a coarse classification of grids: national grid, project grid, enterprise grid, and volunteer grid. Despite the differences in scope and scale, all the traditional grids in practice share some common assumptions: they support mutually collaborative communities, adopt centralized control for membership, and assume a well-defined, non-changing collaboration. To support grid applications that do not conform to these assumptions, we propose the concept of ad hoc grids. In the context of this research, we propose a novel architecture for ad hoc grids that integrates a suite of component frameworks. Specifically, our architecture combines a community management framework, security framework, abstraction framework, quality of service framework, and reputation framework. The overarching objective of our integrated architecture is to support a variety of grid applications in a self-controlled fashion with the help of a self-organizing ad hoc community. We introduce mechanisms in our architecture that successfully isolate malicious elements from the community, inherently improving the quality of grid services and extracting deterministic quality assurances from the underlying infrastructure. We also emphasize the technology independence of our ...
Contributing Partner: UNT Libraries
Intelligent Memory Management Heuristics

Date: December 2003
Creator: Panthulu, Pradeep
Description: Automatic memory management is crucial in the implementation of runtime systems even though it induces a significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector to enhance its performance. The sampling method predicting the density and the distribution of useful data is implemented as a partial marking algorithm (a small sketch of this sampling step follows this record). The algorithm randomly marks the nodes of the directed graph representing the live data at different depths with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, in order to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system.
Contributing Partner: UNT Libraries
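To make the sampling step above concrete, here is a small sketch of partial marking over a directed object graph: starting from the roots, each visited node is marked with probability p up to a depth limit, and the fraction marked serves as a cheap live-data density estimate used to choose between collecting and growing the heap. The probability, depth cap, decision threshold, and graph representation are illustrative assumptions, not the parameters or empirical formula of the Jinni runtime.

    import random

    def partial_mark(heap, roots, p=0.3, max_depth=4):
        # heap: dict mapping node id -> list of child node ids (the object graph).
        marked = set()
        frontier = [(r, 0) for r in roots]
        while frontier:
            node, depth = frontier.pop()
            if node in marked or depth > max_depth:
                continue
            if random.random() < p:          # sample this node with probability p
                marked.add(node)
                frontier.extend((c, depth + 1) for c in heap.get(node, []))
        return len(marked) / max(len(heap), 1)   # rough live-node density estimate

    def choose_action(density_estimate, threshold=0.6):
        # Dense heap: collection would reclaim little, so expand instead;
        # sparse heap: collect garbage.
        return "expand_heap" if density_estimate > threshold else "garbage_collect"

    heap = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: [5], 5: []}
    estimate = partial_mark(heap, roots=[0])
    print(estimate, choose_action(estimate))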