A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression

Access: Use of this item is restricted to the UNT Community.
Date: May 2002
Creator: Wei, Ming
Description: In this dissertation, we first propose and implement a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system built on a human perceptual model. It draws on state-of-the-art research in embedded wavelet image compression and the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very promising compression performance and visual quality compared to the new wavelet-based international still image compression standard, JPEG 2000. It also shows significantly better speed performance and comparable visual quality relative to the best available rate-scalable color image codec, CSPIHT, which is based on Set Partitioning in Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT). Second, a novel wavelet-based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed; FBWT-based interframe compression is efficient in both compression and speed. The compression performance of our video codec is compared with H.263+. At the same bit rate, our encoder, ...
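Illustrative sketch (not the dissertation's codec): the CSF-driven bit allocation described above can be pictured as splitting a total bit budget among the Y, Cb, and Cr bands in proportion to hypothetical perceptual weights, with luminance weighted most heavily.

    # Minimal sketch of CSF-weighted bit allocation among color bands.
    # The weights are illustrative placeholders, not values from the dissertation.
    def allocate_bits(total_bits, csf_weights):
        """Split a bit budget among bands in proportion to perceptual weights."""
        total_weight = sum(csf_weights.values())
        return {band: int(total_bits * w / total_weight)
                for band, w in csf_weights.items()}

    # The HVS is more sensitive to luminance (Y) detail than to chrominance (Cb, Cr).
    weights = {"Y": 0.6, "Cb": 0.2, "Cr": 0.2}        # hypothetical weights
    print(allocate_bits(1_000_000, weights))          # {'Y': 600000, 'Cb': 200000, 'Cr': 200000}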
Contributing Partner: UNT Libraries
The Feasibility of Multicasting in RMI

Date: May 2003
Creator: Ujjinihavildar, Vinay
Description: With the growing reach of the Internet and networking technologies, simple, powerful, easily maintained distributed applications need to be developed, and such applications can benefit greatly from distributed computing concepts. Despite its powerful mechanisms, Jini has yet to be accepted in mainstream Java development; until that happens, better Remote Method Invocation (RMI) solutions are needed. This paper examines the feasibility of implementing multicasting in RMI and shows that multicast capability can be added to RMI using a Jini-like technique. Support for multicast over the unicast reference layer is also studied, and code illustrating how this can be done is included.
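As a rough, language-agnostic illustration of the multicast idea referred to above (not the Java RMI code developed in the paper), the following Python sketch joins a UDP multicast group and sends an announcement to it; the group address and port are arbitrary example values.

    import socket
    import struct

    GROUP, PORT = "224.0.0.251", 5007        # arbitrary multicast group/port for the example

    def send_announcement(message: bytes):
        # One datagram reaches every listener joined to the group.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(message, (GROUP, PORT))
        sock.close()

    def make_listener() -> socket.socket:
        # Join the multicast group so send_announcement() reaches this socket.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock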
Contributing Partner: UNT Libraries
User Modeling Tools for Virtual Architecture

Date: May 2003
Creator: Uppuluri, Raja
Description: As the use of virtual environments (VEs) becomes more widespread, user needs are becoming a more significant consideration in those environments. In order to adapt to the needs of the user, a system should be able to infer user interests and goals. I developed an architecture for user modeling that infers users' interests in a VE by monitoring their actions. In this paper, I discuss the architecture and the virtual environment created to test it. The architecture employs sensors that track all of the users' actions, data structures that store a record of significant events that have occurred in the environment, and a rule base. The rule base continually monitors the data collected from the sensors, the world state, and the event history in order to update the user goal inferences. These inferences can then be used to modify the flow of events within a VE.
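A toy version of the sense-and-infer loop described above might look like the sketch below. It is purely illustrative; the event names, rules, and thresholds are invented for the example and are not taken from the thesis.

    from collections import Counter

    # Hypothetical rules: if a user triggers the listed events often enough,
    # infer the corresponding interest or goal.
    RULES = [
        ({"looked_at_painting", "approached_painting"}, 2, "interested_in_art"),
        ({"opened_door", "entered_room"},               3, "exploring_building"),
    ]

    class UserModel:
        def __init__(self):
            self.event_history = Counter()   # record of significant events
            self.inferences = set()          # current goal/interest inferences

        def sense(self, event: str):
            """Called by a sensor whenever the user performs an action."""
            self.event_history[event] += 1
            self._apply_rules()

        def _apply_rules(self):
            for events, threshold, inference in RULES:
                if sum(self.event_history[e] for e in events) >= threshold:
                    self.inferences.add(inference)

    model = UserModel()
    for e in ["looked_at_painting", "approached_painting"]:
        model.sense(e)
    print(model.inferences)   # {'interested_in_art'}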
Contributing Partner: UNT Libraries
Developing a Test Bed for Interactive Narrative in Virtual Environments

Access: Use of this item is restricted to the UNT Community.
Date: August 2002
Creator: Mellacheruvu, Krishna
Description: As Virtual Environments (VE) become a more commonly used method of interaction and presentation, supporting users as they navigate and interact with scenarios presented in a VE will be a significant issue. A key step in understanding the needs of users in these situations will be observing them perform representative tasks in a fully developed environment. In this paper, we describe the development of a test bed for interactive narrative in a virtual environment. The test bed was specifically developed to present multiple, simultaneous sequences of events (scenarios or narratives) and to support user navigation through these scenarios. These capabilities will support the development of multiple user-testing scenarios, allowing us to study and better understand the needs of users of narrative VEs.
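One simple way to picture the "multiple, simultaneous sequences of events" capability is a scheduler that merges timed events from several scenarios into one timeline, as in the illustrative sketch below; the scenario names and events are invented, not taken from the test bed.

    import heapq

    # Each scenario is a list of (time_in_seconds, event_description) pairs.
    scenarios = {
        "fire_drill":   [(0, "alarm sounds"), (30, "crowd moves to exit")],
        "conversation": [(10, "NPC greets user"), (40, "NPC asks a question")],
    }

    def merged_timeline(scenarios):
        """Yield events from all scenarios in chronological order."""
        heap = [(t, name, event)
                for name, events in scenarios.items()
                for t, event in events]
        heapq.heapify(heap)
        while heap:
            yield heapq.heappop(heap)

    for t, name, event in merged_timeline(scenarios):
        print(f"t={t:3d}s  [{name}] {event}")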
Contributing Partner: UNT Libraries
Automatic Software Test Data Generation

Access: Use of this item is restricted to the UNT Community.
Date: December 2002
Creator: Munugala, Ajay Kumar
Description: In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time consuming, especially when the software being tested is complex. Therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often the desired inputs must satisfy complex constraints, which makes a random approach unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are designed to solve difficult problems involving the simultaneous satisfaction of many constraints.
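As a hedged illustration of the genetic-algorithm-style approach (selection plus mutation only, and not the system described in this problem in lieu of thesis), the sketch below evolves an integer input toward satisfying a hypothetical branch condition, using the distance to the condition as the fitness to minimize.

    import random

    def branch_distance(x):
        """Hypothetical coverage goal: we want an input x with x * x == 2500,
        so the distance |x*x - 2500| is 0 exactly when the branch is taken."""
        return abs(x * x - 2500)

    def evolve(pop_size=50, generations=200):
        population = [random.randint(-1000, 1000) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=branch_distance)
            if branch_distance(population[0]) == 0:
                return population[0]                      # test input found
            parents = population[: pop_size // 2]         # selection
            children = [random.choice(parents) + random.randint(-5, 5)  # mutation
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return min(population, key=branch_distance)       # best effort

    print(evolve())   # typically prints 50 or -50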
Contributing Partner: UNT Libraries
Visualization of Surfaces and 3D Vector Fields

Date: August 2002
Creator: Li, Wentong
Description: Visualization of trivariate functions and three-component vector fields in scientific computation remains a hard problem in computer graphics. Researchers often build their own visualization packages for special purposes, and while some general-purpose packages exist (MATLAB, Vis5D), they require extensive user experience in setting parameters in order to generate images. We present a simple package that produces simplified but informative images of 3-D vector fields. We used this method to render the magnetic field and current obtained as solutions of the Ginzburg-Landau equations on a 3-D domain.
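For orientation only, the sketch below renders a simple analytic 3-D vector field with matplotlib's 3-D quiver plot; it is not the package described in the paper, and the circulating field is a stand-in, not a Ginzburg-Landau solution.

    import numpy as np
    import matplotlib.pyplot as plt

    # Sample a simple analytic field on a coarse 3-D grid
    # (a field circulating around the z-axis, a stand-in for e.g. a current density).
    x, y, z = np.meshgrid(np.linspace(-1, 1, 6),
                          np.linspace(-1, 1, 6),
                          np.linspace(-1, 1, 4))
    u, v, w = -y, x, np.zeros_like(z)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.quiver(x, y, z, u, v, w, length=0.2, normalize=True)
    plt.show()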
Contributing Partner: UNT Libraries
Ensuring Authenticity and Integrity of Critical Information Using XML Digital Signatures

Access: Use of this item is restricted to the UNT Community.
Date: December 2002
Creator: Korivi, Arjun
Description: Over the past five years, use of the Internet has been hampered by the lack of sufficient security and of a legal framework to enable electronic commerce to flourish. Despite these shortcomings, governments, businesses, and individuals are using the Internet more and more often as an inexpensive and ubiquitous means to disseminate and obtain information, goods, and services. The Internet is insecure: potentially millions of people have access, and "hackers" can intercept anything traveling over the wire. There is no way to make it a secure environment; it is, after all, a public network, hence its availability and affordability. For it to serve as a vehicle for legally binding transactions, efforts must be directed at securing the message itself, as opposed to the transport mechanism. Digital signatures have evolved in recent years as the best tool for ensuring the authenticity and integrity of critical information in the so-called "paperless office." A model using XML digital signatures is developed, and the level of security this model provides in real-world scenarios is outlined.
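A minimal sketch of the sign-and-verify idea behind message-level security, using Python's cryptography package rather than an XML-DSig library: a real XML Signature additionally requires canonicalization (C14N) and embedding the result in a Signature element, neither of which is shown here.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # The message whose authenticity and integrity must be protected.
    xml_fragment = b"<order><item>widget</item><qty>3</qty></order>"

    # Sign: the sender hashes and signs the XML bytes with its private key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = private_key.sign(xml_fragment, padding.PKCS1v15(), hashes.SHA256())

    # Verify: the receiver checks the signature with the sender's public key;
    # verify() raises InvalidSignature if even one byte of the XML was altered.
    private_key.public_key().verify(signature, xml_fragment,
                                    padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")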
Contributing Partner: UNT Libraries
Control mechanisms and recovery techniques for real-time data transmission over the Internet.

Date: August 2002
Creator: Battula, Venkata Krishna Rao
Description: Streaming multimedia content over UDP has become popular on distributed systems such as the Internet. Such streams may suffer many losses due to dropped packets or late arrivals at the destination, since UDP provides only best-effort delivery. Unlike TCP, UDP has no built-in mechanism for recovering from congestion collapse or bursty loss, and no way to inform the sender to adjust its future transmission rate. There is therefore a need to incorporate control schemes such as forward error correction, interleaving, congestion control, and error concealment into real-time transmission to mitigate the effect of losses. Losses can be repaired by retransmission if the round-trip delay permits; otherwise, error concealment techniques are used depending on the type and amount of loss. This paper implements an interleaving technique with packet spacing and varying interleaver block sizes to protect real-time data from loss and its effects during transmission across the Internet. The packets are interleaved, and a time gap is maintained between consecutive packets before they are transmitted into the Internet. This reduces packet loss due to congestion and prevents the loss of consecutive packets of information when a burst of several packets is lost. Several experiments have been conducted with ...
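A minimal sketch of block interleaving (not the thesis code; the block size and packet numbering are invented for the example): packets are written into an n x n block row by row and sent column by column, so a burst of up to n consecutive losses on the wire becomes at most one lost packet per row after de-interleaving.

    def interleave(packets, n):
        """Send order: read the n x n block column by column."""
        assert len(packets) == n * n
        return [packets[row * n + col] for col in range(n) for row in range(n)]

    def deinterleave(packets, n):
        """Inverse permutation at the receiver: put columns back into rows."""
        out = [None] * (n * n)
        i = 0
        for col in range(n):
            for row in range(n):
                out[row * n + col] = packets[i]
                i += 1
        return out

    original = list(range(16))                 # 16 packets, 4 x 4 interleaver block
    on_wire = interleave(original, 4)          # [0, 4, 8, 12, 1, 5, 9, 13, ...]
    assert deinterleave(on_wire, 4) == original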
Contributing Partner: UNT Libraries
Evaluation of MPLS Enabled Networks

Access: Use of this item is restricted to the UNT Community.
Date: May 2003
Creator: Ratnakaram, Archith
Description: Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven the demand for increased and guaranteed bandwidth over the network. The diverse requirements of the wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance and reliable option to address these challenges, along with additional features that were not addressed before. This problem in lieu of thesis describes how the new paradigm of MPLS is advantageous over the conventional architecture. The motivation for this paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some of the traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology.
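To make the label-switching idea concrete, here is a toy forwarding step for one label-switching router; the label forwarding table, labels, and interface names are invented for the example and do not come from the simulation described above.

    # Toy label forwarding table: incoming label -> (outgoing interface, outgoing label).
    LFIB = {
        17: ("eth1", 42),
        18: ("eth2", 99),
    }

    def forward(packet):
        """Swap the top label and choose the outgoing interface by label lookup
        alone, without re-examining the IP header at each hop."""
        out_if, out_label = LFIB[packet["label"]]
        return out_if, dict(packet, label=out_label)

    print(forward({"label": 17, "payload": "ip packet bytes"}))
    # ('eth1', {'label': 42, 'payload': 'ip packet bytes'})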
Contributing Partner: UNT Libraries
Temporally Correct Algorithms for Transaction Concurrency Control in Distributed Databases

Access: Use of this item is restricted to the UNT Community.
Date: May 2001
Creator: Tuck, Terry W.
Description: Many activities are composed of temporally dependent events that must be executed in a specific chronological order. Supportive software applications must preserve these temporal dependencies. Whenever the processing of this type of application includes transactions submitted to a database that is shared with other such applications, the transaction concurrency control mechanisms within the database must also preserve the temporal dependencies. A basis for preserving temporal dependencies is established by using (within the applications and databases) real-time timestamps to identify and order events and transactions. The use of optimistic approaches to transaction concurrency control can be undesirable in such situations, as they allow incorrect results for database read operations. Although the incorrectness is detected prior to transaction commit and the corresponding transaction(s) restarted, the impact on the application or entity that submitted the transaction can be too costly. Three transaction concurrency control algorithms are proposed in this dissertation. These algorithms are based on timestamp ordering and are designed to preserve temporal dependencies existing among data-dependent transactions. The algorithms produce execution schedules that are equivalent to temporally ordered serial schedules, where the temporal order is established by the transactions' start times. The algorithms provide this equivalence while supporting currency to the ...
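For background, the sketch below shows classical basic timestamp ordering, which the dissertation's algorithms build on; it is a simplified illustration, not one of the three proposed algorithms, and the timestamps and data items are invented.

    class TimestampOrderingManager:
        """Basic timestamp-ordering checks: a transaction may not read a value
        written by a later transaction, nor overwrite a value already read or
        written by a later transaction; otherwise it must be restarted."""

        def __init__(self):
            self.read_ts = {}    # data item -> largest timestamp that read it
            self.write_ts = {}   # data item -> largest timestamp that wrote it

        def read(self, ts, item):
            if ts < self.write_ts.get(item, 0):
                raise RuntimeError(f"restart T{ts}: {item} written by a later transaction")
            self.read_ts[item] = max(self.read_ts.get(item, 0), ts)

        def write(self, ts, item):
            if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
                raise RuntimeError(f"restart T{ts}: {item} used by a later transaction")
            self.write_ts[item] = ts

    mgr = TimestampOrderingManager()
    mgr.read(ts=10, item="x")
    mgr.write(ts=12, item="x")       # allowed: 12 >= read/write timestamps of x
    try:
        mgr.write(ts=11, item="x")   # rejected: x already written at timestamp 12
    except RuntimeError as e:
        print(e)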
Contributing Partner: UNT Libraries