Search Results

Hopfield Networks as an Error Correcting Technique for Speech Recognition

Description: I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks store and retrieve patterns. I used this technique to store queries represented as natural language sentences, and I evaluated its accuracy for error correction in a spoken question-answering dialog between a computer and a user. I show that using an auto-associative Hopfield network makes the speech recognition system more fault tolerant. I also examined the available encoding schemes for converting a natural language sentence into a pattern of zeroes and ones that can be stored reliably in the Hopfield network, and I suggest scalable data representations that allow storing a large number of queries. (A minimal Hopfield sketch follows this record.)
Access: This item is restricted to the UNT Community Members at a UNT Libraries Location.
Date: May 2004
Creator: Bireddy, Chakradhar
Partner: UNT Libraries
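
To illustrate the pattern storage and retrieval described above, here is a minimal auto-associative Hopfield network in Python. It is a generic textbook sketch, not the thesis's code; the bipolar patterns, sizes, and update schedule are invented for the demo.

    # Minimal Hopfield sketch: patterns are stored by Hebbian learning and
    # recalled by iterating a sign-activation update until it stops changing.
    import numpy as np

    def train(patterns):
        # patterns: list of 1-D arrays with entries +1/-1
        n = patterns[0].size
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)           # Hebbian outer-product rule
        np.fill_diagonal(w, 0)            # no self-connections
        return w / len(patterns)

    def recall(w, state, max_iters=20):
        s = state.copy()
        for _ in range(max_iters):
            new = np.where(w @ s >= 0, 1, -1)   # synchronous sign update
            if np.array_equal(new, s):
                break
            s = new
        return s

    # Store two toy "query" patterns; recover one from a corrupted copy.
    p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
    p2 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
    w = train([p1, p2])
    noisy = p1.copy(); noisy[0] = -noisy[0]   # flip one bit (a recognition error)
    print(recall(w, noisy))                   # converges back to p1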

Ensuring Authenticity and Integrity of Critical Information Using XML Digital Signatures

Description: Over the past five years, Internet use has been hampered by the lack of sufficient security and of a legal framework that would enable electronic commerce to flourish. Despite these shortcomings, governments, businesses and individuals use the Internet ever more often as an inexpensive and ubiquitous means of disseminating and obtaining information, goods and services. The Internet is insecure: potentially millions of people have access, and "hackers" can intercept anything traveling over the wire. There is no way to make it a secure environment; it is, after all, a public network, which is precisely what makes it available and affordable. For it to serve as a vehicle for legally binding transactions, efforts must be directed at securing the message itself rather than the transport mechanism. Digital signatures have evolved in recent years into the best tool for ensuring the authenticity and integrity of critical information in the so-called "paperless office." A model using XML digital signatures is developed, and the level of security this model provides in a real-world scenario is outlined. (A minimal sign-and-verify sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Korivi, Arjun
Partner: UNT Libraries
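
As a minimal illustration of the sign-and-verify idea behind this abstract (not the thesis's XML-DSig model, which follows the W3C XML Signature spec with canonicalization and embedded signature elements), here is a detached RSA signature over an XML payload's raw bytes using the third-party cryptography package; the payload is invented.

    # Sign the bytes of an XML document, then verify authenticity/integrity.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    xml_payload = b"<order><item>widget</item><qty>3</qty></order>"

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = key.sign(xml_payload, padding.PKCS1v15(), hashes.SHA256())

    # Verification raises InvalidSignature if the payload was tampered with.
    key.public_key().verify(signature, xml_payload,
                            padding.PKCS1v15(), hashes.SHA256())
    print("signature verified: authenticity and integrity hold")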

Performance Evaluation of MPLS on Quality of Service in Voice Over IP (VoIP) Networks

Description: The transmission of voice data over Internet Protocol (IP) networks is rapidly gaining acceptance in the field of networking. Most voice transmission over IP networks takes place in Internet telephony, also known as IP telephony or Voice over IP (VoIP). VoIP is undergoing many enhancements to provide end users with the same quality as the public switched telephone network (PSTN). These enhancements are mostly required in quality of service (QoS) for the transmission of voice data over IP networks. With recent developments in the networking field, various protocols have come onto the market to provide QoS in IP networks; of these, Multiprotocol Label Switching (MPLS) is the most reliable and promising protocol for QoS. The goal of this thesis is to develop an IP-based virtual network with end hosts and routers, implement MPLS on the network, and analyze its QoS for voice data transmission. (A label-switching sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Chetty, Sharath
Partner: UNT Libraries
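
The core MPLS mechanism the thesis builds on can be sketched in a few lines: label-switching routers forward by exact-match label lookup and label swap rather than a longest-prefix IP lookup. The topology and label values below are invented for illustration.

    # Label-forwarding tables (LFIB): incoming label -> (outgoing label, next hop).
    lfib = {
        "LSR-A": {17: (22, "LSR-B")},
        "LSR-B": {22: (35, "LSR-C")},
        "LSR-C": {35: (None, "egress host")},   # None: pop the label at egress
    }

    label, router = 17, "LSR-A"
    while label is not None:
        out_label, next_hop = lfib[router][label]
        print(f"{router}: label {label} -> {out_label}, forward to {next_hop}")
        label, router = out_label, next_hop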

Benchmark-based Page Replacement (BBPR) Strategy: A New Web Cache Page Replacement Strategy

Description: World Wide Web caching is widely used throughout today's Internet. When correctly deployed, Web caching systems can lead to significant bandwidth savings, network load reduction, server load balancing, and higher content availability. A document replacement algorithm that can lower retrieval latency and yield a high hit ratio is the key to the effectiveness of proxy caches. More than twenty cache algorithms have been employed in academic studies as well as in corporate communities, but the existing replacement algorithms have drawbacks. To overcome these shortcomings, we developed a new page replacement strategy named the Benchmark-Based Page Replacement (BBPR) strategy, in which an HTTP benchmark is used as a tool to evaluate the current network load and server load. In our simulation model, the BBPR strategy shows better performance than the LRU (Least Recently Used) method, the most commonly used algorithm; the tradeoff is a reduced hit ratio. Slow pages benefit from BBPR. (A sketch of the LRU baseline follows this record.)
Date: May 2003
Creator: He, Wei
Partner: UNT Libraries
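
For context, here is the LRU baseline the abstract compares against, in Python. BBPR itself (weighting evictions by benchmark-measured network and server load) is the thesis's contribution and is not reproduced here.

    # Least Recently Used cache built on collections.OrderedDict.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def get(self, url):
            if url not in self.store:
                return None
            self.store.move_to_end(url)         # mark as most recently used
            return self.store[url]

        def put(self, url, page):
            if url in self.store:
                self.store.move_to_end(url)
            self.store[url] = page
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

    cache = LRUCache(2)
    cache.put("/a", "page a"); cache.put("/b", "page b")
    cache.get("/a")                             # /a becomes most recent
    cache.put("/c", "page c")                   # evicts /b
    print(cache.get("/b"))                      # -> None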

Implementation of Back Up Host in TCP/IP

Description: This problem in lieu of thesis considers a TCP client H1 that connects to a distant server S and downloads a file. If H1 crashes in the midst of the download, the TCP connection from H1 to S is lost; the connection cannot be reestablished and the file downloaded again until H1 restarts. Now consider a situation in which there is a standby host H2 for the host H1. H1 and H2 monitor each other's health through heartbeat messages (as in SCTP). If H2 detects the failure of H1, H2 takes over: all resources assigned to H1 are reassigned to H2. In particular, the IP addresses originally assigned to H1 are assigned to H2. In this scenario, the TCP connection between H1 and S is moved to a connection between H2 and S without disrupting the TCP connection. The distant server S is unaware of any changes taking place at the clients. (A heartbeat-monitoring sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Golla, Mohan
Partner: UNT Libraries
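
A minimal sketch of the heartbeat monitoring described above, using UDP datagrams in Python: the standby declares the primary failed after several consecutive missed heartbeats and would then begin takeover. Port, interval, and miss limit are illustrative assumptions; the actual IP-address and TCP-connection takeover is beyond a short sketch.

    # Run standby_monitor() on H2 and primary_heartbeat("h2-address") on H1.
    import socket, time

    HEARTBEAT_PORT, INTERVAL, MISS_LIMIT = 9999, 1.0, 3

    def standby_monitor():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", HEARTBEAT_PORT))
        sock.settimeout(INTERVAL)
        misses = 0
        while misses < MISS_LIMIT:
            try:
                sock.recvfrom(64)      # heartbeat chunk from primary H1
                misses = 0
            except socket.timeout:
                misses += 1
        print("H1 presumed dead: take over its IP addresses and connections")

    def primary_heartbeat(standby_addr):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"HB", (standby_addr, HEARTBEAT_PORT))
            time.sleep(INTERVAL)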

Study and Sample Implementation of the Secure Shell Protocol (SSH)

Description: Security is one of the main concerns of users who need to connect to a remote computer for purposes such as checking e-mail or viewing files. In today's computer networks, however, neither privacy nor delivery to the intended client is guaranteed. If data is transmitted over the Internet or a local network as plain text, it may be captured and viewed by anyone with little technical knowledge; this may include sensitive data such as passwords. Big businesses counter this with firewalls, virtual private networks, and encrypted transmissions, at high cost. The Secure Shell protocol (SSH) provides an answer. SSH is a software protocol for secure communication over an insecure network. SSH not only authenticates hosts but also encrypts the sessions between the client and the server, and it is transparent to the end user. This problem in lieu of thesis studies SSH, creates a sample secure client and server that follow the SSH protocol, and examines their performance. (A client-session sketch using an existing SSH library follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2003
Creator: Subramanyam, Udayakiran
Partner: UNT Libraries
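
For flavor, here is what an SSH session looks like from the client side using the third-party paramiko library (not the sample implementation the thesis builds): authentication and encryption happen under the hood, transparently to the caller. The host name and credentials are placeholders.

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
    client.connect("remote.example.org", username="alice", password="secret")

    # The command and its output travel over the encrypted channel.
    stdin, stdout, stderr = client.exec_command("ls -l")
    print(stdout.read().decode())
    client.close()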

Evaluation of MPLS Enabled Networks

Description: Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven the demand for increased and guaranteed bandwidth over the network. The diverse requirements of a wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance and reliable option for addressing these challenges, along with additional features not available before. This problem in lieu of thesis describes how the new MPLS paradigm improves on the conventional architecture. The motivation for the paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2003
Creator: Ratnakaram, Archith
Partner: UNT Libraries

Automatic Software Test Data Generation

Description: In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time consuming, especially when the software being tested is complex; therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often, however, the desired inputs must satisfy complex constraints, which makes a random approach unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are designed to solve difficult problems involving the simultaneous satisfaction of many constraints. (A small genetic-algorithm sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Munugala, Ajay Kumar
Partner: UNT Libraries
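
A toy version of the genetic-algorithm approach: evolve an integer input until it takes a target branch, using the branch predicate's "distance" as the fitness. The target predicate and GA parameters are invented for the demo.

    import random

    def branch_distance(x):
        # Target branch: "if x*x - 42*x == 0" is hard to hit at random;
        # distance 0 means the branch is taken (x == 0 or x == 42).
        return abs(x * x - 42 * x)

    def evolve(pop_size=50, generations=200):
        pop = [random.randint(-1000, 1000) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=branch_distance)        # fittest (closest) first
            if branch_distance(pop[0]) == 0:
                return pop[0]
            parents = pop[: pop_size // 2]       # selection
            children = [random.choice(parents) + random.randint(-5, 5)
                        for _ in range(pop_size - len(parents))]  # mutation
            pop = parents + children
        return pop[0]

    x = evolve()
    print(x, branch_distance(x))   # distance 0 means the branch is exercised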

A Netcentric Scientific Research Repository

Description: The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has recently been termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them with a Web service and registering the service with the repository. Metadata associated with Web services was enhanced to allow this feature to be included. In addition, an improved binary-to-text encoding scheme was developed to reduce the size overhead of sending large scientific data files in the XML messages used by Web services. (3) Integrated image analysis operations with SQL. This technique allows images to be stored and managed conveniently in a relational database. SQL supplemented with map algebra operations is used to select and perform operations on sets of images. (A note on Base64 encoding overhead follows this record.)
Access: This item is restricted to the UNT Community Members at a UNT Libraries Location.
Date: December 2006
Creator: Harrington, Brian
Partner: UNT Libraries
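
The size-overhead problem behind the improved binary-to-text encoding mentioned in contribution (2) is easy to demonstrate: standard Base64 emits four output bytes for every three input bytes, roughly 33% inflation, before a binary file can travel inside an XML message. The dissertation's improved scheme itself is not reproduced here.

    import base64, os

    image_bytes = os.urandom(300_000)        # stand-in for a microscope image
    encoded = base64.b64encode(image_bytes)
    print(len(image_bytes), len(encoded), len(encoded) / len(image_bytes))
    # -> 300000 400000 1.333...  (4 output bytes per 3 input bytes)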

A Comparison of Agent-Oriented Software Engineering Frameworks and Methodologies

Description: Agent-oriented software engineering (AOSE) covers issues in developing systems with software agents. Many techniques, mostly agent-oriented and object-oriented, are ready to be chosen as building blocks for creating agent-based systems. Several AOSE methodologies have been proposed to give engineers guidelines on how these elements are assembled so that agents achieve the overall system goals. Although these solutions are promising, most are designed in an ad hoc manner, do not fully follow the software development life cycle, and lack examination of agent-oriented features. To address these issues, we investigated state-of-the-art techniques and AOSE methodologies, and by examining them in different respects we comment on their strengths and weaknesses. Toward a formal study, a comparison framework was set up covering four aspects: concepts and properties, notations and modeling techniques, process, and pragmatics. Under these criteria, we conducted the comparison at both an overview and a detailed level. The comparison supported our empirical and analytical study of how an ideal agent-based system would be formed.
Date: December 2003
Creator: Lin, Chia-En
Partner: UNT Libraries

FP-tree Based Spatial Co-location Pattern Mining

Description: A co-location pattern is a set of spatial features frequently located together in space. A frequent pattern is a set of items that frequently appears in a transaction database. Since its introduction, the paradigm of frequent pattern mining has undergone a shift from candidate generation-and-test approaches to projection-based approaches. Co-location patterns resemble frequent patterns in many respects. However, the lack of a transaction concept, which is crucial in frequent pattern mining, makes a similar shift of paradigm in co-location pattern mining very difficult. This thesis investigates a projection-based co-location pattern mining paradigm. In particular, an FP-tree based co-location mining framework and an algorithm called FP-CM, for FP-tree based co-location miner, are proposed. It is proved that FP-CM is complete and correct and requires only a small constant number of database scans. The experimental results show that FP-CM outperforms the candidate generation-and-test based co-location miner by an order of magnitude. (An FP-tree construction sketch follows this record.)
Date: May 2005
Creator: Yu, Ping
Partner: UNT Libraries
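
For background, here is a compact FP-tree construction in Python, the frequent-pattern structure the thesis adapts to co-location data: transactions are frequency-ordered and merged along shared prefixes, compressing the database into a tree that can be mined by projection. The transactions and support threshold are toy values; FP-CM's spatial extensions are not shown.

    from collections import Counter

    class Node:
        def __init__(self, item):
            self.item, self.count, self.children = item, 0, {}

    def build_fp_tree(transactions, min_support):
        freq = Counter(i for t in transactions for i in t)
        keep = {i for i, c in freq.items() if c >= min_support}
        root = Node(None)
        for t in transactions:
            # order each transaction by descending global frequency
            items = sorted((i for i in t if i in keep),
                           key=lambda i: (-freq[i], i))
            node = root
            for item in items:
                node = node.children.setdefault(item, Node(item))
                node.count += 1     # shared prefixes share nodes
        return root

    tree = build_fp_tree([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}],
                         min_support=2)
    print({i: n.count for i, n in tree.children.items()})   # -> {'a': 3}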

Mediation on XQuery Views

Description: The major goal of information integration is to provide efficient and easy-to-use access to multiple heterogeneous data sources with a single query. At the same time, one of the current trends is to use standard technologies for implementing solutions to complex software problems. In this dissertation, I use XML and XQuery as the standard technologies and develop an extended projection algorithm to provide a solution to the information integration problem. In order to demonstrate my solution, I implemented a prototype mediation system called Omphalos based on XML-related technologies. The dissertation describes the architecture of the system, its metadata, and the process it uses to answer queries. The system uses XQuery expressions (termed metaqueries) to capture complex mappings between global schemas and data source schemas. The system then applies these metaqueries to rewrite a user query on a virtual global database (representing the integrated view of the heterogeneous data sources) into a query (termed an outsourced query) on the real data sources. An extended XML document projection algorithm was developed to increase the efficiency of selecting the relevant subset of data from an individual data source to answer the user query. The system applies the projection algorithm to decompose an outsourced query into atomic queries, each executed on a single data source. I also developed an algorithm to generate integrating queries, which the system uses to compose the answers from the atomic queries into a single answer to the original user query. I present proofs for both the extended XML document projection algorithm and the query integration algorithm. An analysis of the efficiency of the new extended algorithm is also presented. Finally, I describe a collaborative schema-matching tool that was implemented to facilitate maintaining metadata.
Date: December 2006
Creator: Peng, Xiaobo
Partner: UNT Libraries

A Language and Visual Interface to Specify Complex Spatial Pattern Mining

Description: The emerging interest in spatial pattern mining has led to demand for a flexible spatial pattern mining language on which an easy-to-use and easy-to-understand visual pattern language can be built. To this end, this thesis defines a pattern mining language called LCSPM that allows users to specify complex spatial patterns. A visual interface that lets users specify patterns visually was developed; visual pattern queries are translated into the LCSPM language by a parser, after which the data mining process can be triggered. The visual language is based on, and goes beyond, a visual language proposed in the literature. I implemented a prototype system based on the open-source JUMP framework.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2006
Creator: Li, Xiaohui
Partner: UNT Libraries

Natural Language Interfaces to Databases

Description: Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, automatically translating natural language sentences into database queries. This thesis proposes a novel approach to NLIDB using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system uses this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field. (A toy translation sketch follows this record.)
Date: December 2006
Creator: Chandra, Yohan
Partner: UNT Libraries
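
A deliberately toy sketch of the NLIDB idea (not the thesis's graph-based model): map question terms onto table and column names collected from the database schema, then emit a SQL statement. The schema and matching rules are invented, and the sketch assumes the question names a known table.

    schema = {"rivers": ["name", "length", "state"],
              "cities": ["name", "population", "state"]}

    def translate(question):
        q = question.lower()
        # pick the first table whose (singular or plural) name appears
        table = next(t for t in schema if t.rstrip("s") in q or t in q)
        column = next((c for c in schema[table] if c in q), "*")
        where = ""
        if " in " in q:   # naive location filter: "... in Texas?"
            place = q.split(" in ")[-1].strip(" ?").title()
            where = f" WHERE state = '{place}'"
        return f"SELECT {column} FROM {table}{where};"

    print(translate("What is the length of rivers in Texas?"))
    # -> SELECT length FROM rivers WHERE state = 'Texas';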

Analysis of Web Services on J2EE Application Servers

Description: The Internet has become a standard way of exchanging business data between B2B and B2C applications, and with this came the need to provide various services on the Web instead of just static text and images. Web services are a new type of service offered via the Web that aid in the creation of globally distributed applications. Web services are enhanced e-business applications that are easier to advertise and to discover on the Internet because of their flexibility and uniformity. In a real-life scenario, it is very difficult to decide which J2EE application server to choose when deploying an enterprise Web service. This thesis analyzes the various ways in which Web services can be developed and deployed. Underlying protocols and crucial issues such as enterprise application integration (EAI), asynchronous messaging, and the registry tModel architecture are considered in this research. The thesis reports on what various J2EE application servers provide, through a case study and by developing applications to test functionality.
Date: May 2004
Creator: Gosu, Adarsh Kumar
Partner: UNT Libraries

Agent Extensions for Peer-to-Peer Networks.

Description: Peer-to-peer (P2P) networks have seen tremendous growth in development and usage in recent times. This attention has brought many developments as well as new challenges to these networks. We show that agent extensions to P2P networks offer solutions to many problems faced by P2P networks. In this research, an attempt is made to bring together the JXTA P2P infrastructure and Jinni, a Prolog-based agent engine, to form an agent-based P2P network. On top of JXTA, we define a simple Java API providing P2P services for agent programming constructs. Jinni is deployed on this JXTA network using an automated code update mechanism. Experiments are conducted on this Jinni/JXTA platform to implement a simple agent communication and data exchange protocol.
Date: December 2003
Creator: Valiveti, Kalyan
Partner: UNT Libraries

Adaptive Planning and Prediction in Agent-Supported Distributed Collaboration.

Description: Agents that act as user assistants will become invaluable as the number of information sources continues to proliferate. Such agents can support the work of users by learning to automate time-consuming tasks and by filtering information to manageable levels. Although considerable advances have been made in this area, it remains a fertile area for further development. One application of agents under careful scrutiny is the automated negotiation of conflicts between different users' needs and desires. Many techniques require explicit user models in order to function. This dissertation explores a technique for dynamically constructing user models and the impact of using them to anticipate the need for negotiation. Negotiation is reduced by adding an advising aspect to the agent that uses this anticipation of conflict to adjust user behavior.
Date: December 2004
Creator: Hartness, Ken T. N.
Partner: UNT Libraries

Automatic Tagging of Communication Data

Description: Globally distributed software teams are widespread throughout industry, but finding reliable methods that can properly assess a team's activities is a real challenge. Methods such as surveys and manual coding of activities are too time consuming and are often unreliable. Recent advances in information retrieval and linguistics, however, suggest that automated and/or semi-automated text classification algorithms could be an effective way of finding differences in the communication patterns among individuals and groups. Communication among group members is frequent and generates a significant amount of data, so a web-based tool that can automatically analyze the communication patterns among global software teams could lead to a better understanding of group performance. The goal of this thesis, therefore, is to compare automatic and semi-automatic measures of communication and evaluate their effectiveness in classifying the different types of group activities that occur within a global software development project. To achieve this goal, we developed a web-based component that helps clean and classify communication activities; the component was then used to compare different automated text classification techniques on various group activities, to determine their effectiveness in correctly classifying data from a global software development team project. (A text-classification sketch follows this record.)
Date: August 2012
Creator: Hoyt, Matthew Ray
Partner: UNT Libraries
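
A minimal example of the kind of automated classifier the thesis evaluates, using scikit-learn: TF-IDF features feeding a linear model that tags messages with an activity label. The messages and labels are invented stand-ins for real project communication data.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "fixed the null pointer bug in the parser",
        "let's schedule the sprint planning meeting",
        "pushed the patch, please review the diff",
        "agenda for tomorrow's status call attached",
    ]
    labels = ["coding", "coordination", "coding", "coordination"]

    # TF-IDF turns each message into a weighted term vector; the linear
    # model learns which terms separate the activity categories.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(messages, labels)
    print(clf.predict(["can you review my bugfix commit?"]))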

Performance Evaluation of Data Integrity Mechanisms for Mobile Agents

Description: With the growing popularity of e-commerce applications that use software agents, the protection of mobile agent data has become imperative. To that end, the performance of four methods that protect the data integrity of mobile agents is evaluated. The methods investigated include existing approaches known as the Partial Result Authentication Codes, Hash Chaining, and Set Authentication Code methods, and a technique of our own design, called the Modified Set Authentication Code method, which addresses the limitations of the Set Authentication Code method. The experiments were run using the DADS agent system (developed at the Network Research Laboratory at UNT), for which a Data Integrity Module was designed. The experimental results show that our Modified Set Authentication Code technique performed comparably to the Set Authentication Code method. (A hash-chaining sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2003
Creator: Gunupudi, Vandana
Partner: UNT Libraries
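
Of the methods named above, Hash Chaining is the easiest to sketch: each host an agent visits folds its partial result into a running SHA-256 digest, so altering any earlier result changes every later link. This is a generic illustration, not the DADS Data Integrity Module.

    import hashlib

    def chain(results, seed=b"agent-nonce"):
        h = seed
        links = []
        for r in results:
            h = hashlib.sha256(h + r).digest()  # each link covers all history
            links.append(h)
        return links

    honest = chain([b"offer:100", b"offer:95", b"offer:90"])
    tampered = chain([b"offer:100", b"offer:50", b"offer:90"])
    print(honest[-1] == tampered[-1])           # False: tampering is detectable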

Optimal Access Point Selection and Channel Assignment in IEEE 802.11 Networks

Description: Designing 802.11 wireless networks involves two major components: selection of access points (APs) in the demand areas and assignment of radio frequencies to each AP. Coverage and capacity are key issues when placing APs in a demand area. APs need to cover all users; a user is considered covered if the power received from its corresponding AP is greater than a given threshold. Moreover, from a capacity standpoint, APs need to provide a certain minimum bandwidth to users located in the coverage area. A major challenge in designing wireless networks is the frequency assignment problem. 802.11 wireless LANs operate in the unlicensed ISM band, and all APs share the same spectrum; as a result, as 802.11 APs become widely deployed, they start to interfere with each other and degrade network throughput. Consequently, efficient assignment of channels becomes necessary to avoid and minimize interference. In this work, an optimal AP selection was developed by balancing traffic load: an optimization problem was formulated that minimizes heavy congestion, so that the APs in the wireless LAN carry well-distributed traffic loads, maximizing the throughput of the network. The channel assignment algorithm was designed to minimize channel interference between APs; channels are assigned so that co-channel and adjacent-channel interference are minimized, resulting in higher throughput. (A greedy channel-assignment sketch follows this record.)
Date: December 2004
Creator: Park, Sangtae
Partner: UNT Libraries
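
A greedy approximation of the channel-assignment idea: give each AP the non-overlapping 2.4 GHz channel (1, 6, or 11) least used among the APs that interfere with it. The thesis formulates this as an optimization problem; the interference graph below is a toy example.

    CHANNELS = [1, 6, 11]   # the non-overlapping 802.11b/g channels

    def assign_channels(interferes):        # interferes: AP -> set of APs
        assignment = {}
        # handle the most-constrained (highest-degree) APs first
        for ap in sorted(interferes, key=lambda a: -len(interferes[a])):
            used = [assignment.get(nb) for nb in interferes[ap]]
            # pick the channel with the fewest interfering neighbors on it
            assignment[ap] = min(CHANNELS, key=used.count)
        return assignment

    graph = {"AP1": {"AP2", "AP3"}, "AP2": {"AP1"}, "AP3": {"AP1"}}
    print(assign_channels(graph))   # -> {'AP1': 1, 'AP2': 6, 'AP3': 6}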

Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code

Description: Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of the next-generation CPU (central processing unit) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next-generation architecture should offer high throughput, scalability, modularity, and low energy consumption, instead of being suitable for only one class of applications or users, or only emphasizing a faster clock rate. A program exhibits different types of parallelism: instruction-level parallelism (ILP), thread-level parallelism (TLP), or data-level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that can take advantage of all three types without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled dataflow) architecture, which exploits as much TLP as the application supplies. We implement an SDF single-chip multiprocessor constructed from simpler processors and execute automatically parallelized applications on the single-chip multiprocessor. SDF has many desirable features, such as high throughput, scalability, and low power consumption, which meet the requirements of next-generation CPU design. Compared with superscalar, VLIW (very long instruction word), and SMT (simultaneous multithreading) architectures, the experimental results show that for applications with very little parallelism SDF is comparable to the other architectures, while for applications with large amounts of parallelism SDF outperforms them.
Date: December 2004
Creator: Zhang, Yuhua
Partner: UNT Libraries

Resource Management in Wireless Networks

Description: A local call admission control (CAC) algorithm for third-generation wireless networks was designed and implemented, which allows simulation of network throughput for different spreading factors and various mobility scenarios. A global CAC algorithm is also implemented and used as a benchmark, since it is inherently optimized: it yields the best possible performance but has intensive computational complexity. The optimized local CAC algorithm achieves performance similar to the global CAC algorithm at a fraction of the computational cost. The design of a dynamic channel assignment algorithm for IEEE 802.11 wireless systems is also presented: channels are assigned dynamically according to the minimal interference generated by the neighboring access points on a reference access point. Analysis of the dynamic channel assignment algorithm shows an improvement by a factor of 4 over the default setting of having all access points use the same channel, resulting in significantly higher network throughput.
Date: August 2006
Creator: Arepally, Anurag
Partner: UNT Libraries

Impact of actual interference on capacity and call admission control in a CDMA network.

Description: An overwhelming number of models in the literature use average inter-cell interference to calculate the capacity of a Code Division Multiple Access (CDMA) network. The simplicity gained by using such models comes at the cost of making the exact location of a user within a cell irrelevant. We calculate the actual per-user interference and analyze the effect of user distribution within a cell on the capacity of a CDMA network. We show that even though the capacity obtained using average interference is a good approximation to the capacity calculated using actual interference for a uniform user distribution, the deviation can be tremendously large for non-uniform user distributions. Call admission control (CAC) algorithms are responsible for efficient management of a network's resources while guaranteeing quality of service and grade of service, i.e., accepting the maximum number of calls without affecting the quality of service of calls already present in the network. We design and implement global and local CAC algorithms, and through simulations compare their network throughput and blocking probabilities for varying mobility scenarios. We show that even though our global CAC is better at resource management, the lack of substantial gain in network throughput and the exponential increase in complexity make our optimized local CAC algorithm a much better choice for a given traffic distribution profile. (A path-loss interference sketch follows this record.)
Date: May 2004
Creator: Parvez, Asad
Partner: UNT Libraries
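
The central observation above is easy to reproduce numerically: under a simple power-law path-loss model, total interference at a receiver depends strongly on where the interfering users actually sit, which an average-interference model hides. The distances and path-loss exponent below are illustrative assumptions, not the dissertation's parameters.

    def interference(user_distances, path_loss_exp=4):
        # received interference power from unit-power users at given distances
        return sum(d ** -path_loss_exp for d in user_distances)

    uniform = [0.5, 1.0, 1.5, 2.0]       # users spread across the cell
    clustered = [0.3, 0.35, 0.4, 0.45]   # users bunched near the receiver
    print(interference(uniform))         # ~17.3: modest total
    print(interference(clustered))       # ~253.6: far larger, same user count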

A Comparative Analysis of Style of User Interface Look and Feel in a Synchronous Computer Supported Cooperative Work Environment

Description: The purpose of this study is to determine whether the style of a user interface (i.e., its look and feel) has an effect on the usability of a synchronous computer supported cooperative work (CSCW) environment for delivering Internet-based collaborative content. The problem motivating this study is that people who are located in different places need to be able to communicate with one another. One way to do this is with complex computer tools that allow users to share information, documents, programs, and so on. As an increasing number of business organizations require workers to use these types of complex communication tools, it is important to determine how users regard them and whether they are perceived to be useful. If a tool, or interface, is not perceived to be useful, then it is often not used, or is used ineffectively. As organizations strive to improve communication with and among users by providing more Internet-based collaborative environments, the users' experience in this form of delivery may be tied to a style of user interface look and feel that could negatively affect their overall acceptance of, and satisfaction with, the collaborative environment. The significance of this study is that it applies the technology acceptance model (TAM) as a tool for evaluating the style of user interface look and feel in a collaborative environment, and attempts to predict which factors of that model, perceived ease of use and/or perceived usefulness, could lead to better acceptance of collaborative tools within an organization.
Date: May 2005
Creator: Livingston, Alan
Partner: UNT Libraries