Agent-based architecture for web deployment of multi-agents as conversational interfaces.
Agent-based architecture explains the rationale and basis for developing agents that can interact with users through natural language query/answer patterns developed systematically using AIML (artificial intelligence mark-up language) scripts. This thesis research document also explains the architecture for VISTA (virtual interactive story-telling agents), which is used for interactive querying for educational and recreational purposes. Agents are very effective as conversational interfaces when used alongside a graphical user interface (GUI) in applications and Web pages. The architecture platform can support multiple agents with or without a shared knowledge base. Such agents are useful as chat robots for recreational, customer-service, and educational purposes. The platform is powered by a Java servlet implementation of Program D hosted in an Apache Tomcat server. The AIML scripting language defined herein is a generic form of XML and forms the knowledge base of the bot. Animation is provided with Microsoft® Agent technology and a text-to-speech engine. digital.library.unt.edu/ark:/67531/metadc4187/
Agent extensions for peer-to-peer networks.
Peer-to-Peer (P2P) networks have seen tremendous growth in development and usage in recent times. This attention has brought many developments as well as new challenges to these networks. We will show that agent extensions to P2P networks offer solutions to many problems faced by P2P networks. In this research, an attempt is made to bring together the JXTA P2P infrastructure and Jinni, a Prolog-based agent engine, to form an agent-based P2P network. On top of JXTA, we define a simple Java API providing P2P services for agent programming constructs. Jinni is deployed on this JXTA network using an automated code update mechanism. Experiments are conducted on this Jinni/JXTA platform to implement a simple agent communication and data exchange protocol. digital.library.unt.edu/ark:/67531/metadc4382/
Algorithms for Efficient Utilization of Wireless Bandwidth and to Provide Quality-of-Service in Wireless Networks
This thesis presents algorithms to utilize the wireless bandwidth efficiently and at the same time meet the quality of service (QoS) requirements of the users. In the proposed algorithms we present an adaptive frame structure based upon the airlink frame loss probability, and we control the admission of call requests into the system based upon the load on the system and the QoS requirements of the incoming call requests. The performance of the proposed algorithms is studied by developing analytical formulations and simulation experiments. Finally, we present an admission control algorithm which uses an adaptive delay computation algorithm to compute the queuing delay for each class of traffic and adapts the service rate and the reliability in the estimates based upon the deviation between the expected and obtained performance. We study the performance of the call admission control algorithm by simulation experiments. Simulation results for the adaptive frame structure algorithm show an improvement in the number of users in the system, but there is a drop in the system throughput. In spite of the lower throughput, the adaptive frame structure algorithm has fewer QoS delay violations. The adaptive call admission control algorithm adapts the call dropping probability of different classes of traffic and optimizes the system performance with respect to the number of calls dropped and the reliability in meeting the QoS promised when the call is admitted into the system. digital.library.unt.edu/ark:/67531/metadc2635/
An analysis of motivational cues in virtual environments.
Access: Use of this item is restricted to the UNT Community.
Guiding navigation in virtual environments (VEs) is a challenging task. A key issue in the navigation of a virtual environment is to be able to strike a balance between the user's need to explore the environment freely and the designer's need to ensure that the user experiences all the important events in the VE. This thesis reports on a study aimed at comparing the effectiveness of various navigation cues that are used to motivate users towards a specific target location. The results of this study indicate some significant differences in how users responded to the various cues. digital.library.unt.edu/ark:/67531/metadc4352/
Analysis of Web Services on J2EE Application Servers
The Internet has become a standard way of exchanging business data between B2B and B2C applications, and with this came the need to provide various services on the web instead of just static text and images. Web services are a new type of service offered via the web that aid in the creation of globally distributed applications. Web services are enhanced e-business applications that are easier to advertise and easier to discover on the Internet because of their flexibility and uniformity. In a real-life scenario it is highly difficult to decide which J2EE application server to choose when deploying an enterprise web service. This thesis analyzes the various ways by which web services can be developed and deployed. Underlying protocols and crucial issues like EAI (enterprise application integration), asynchronous messaging, and the Registry tModel architecture have been considered in this research. This thesis analyzes what various J2EE application servers provide through a case study and by developing applications to test functionality. digital.library.unt.edu/ark:/67531/metadc5547/
Analyzing Microwave Spectra Collected by the Solar Radio Burst Locator
Modern communication systems rely heavily upon microwave, radio, and other electromagnetic frequency bands as a means of providing wireless communication links. Although convenient, wireless communication is susceptible to electromagnetic interference. Solar activity causes both direct interference through electromagnetic radiation as well as indirect interference caused by charged particles interacting with Earth's magnetic field. The Solar Radio Burst Locator (SRBL) is a United States Air Force radio telescope designed to detect and locate solar microwave bursts as they occur on the Sun. By analyzing these events, the Air Force hopes to gain a better understanding of the root causes of solar interference and improve interference forecasts. This thesis presents methods of searching and analyzing events found in the previously unstudied SRBL data archive. A new web-based application aids in the searching and visualization of the data. Comparative analysis is performed amongst data collected by SRBL and several other instruments. This thesis also analyzes events across the time, intensity, and frequency domains. These analysis methods can be used to aid in the detection and understanding of solar events so as to provide improved forecasts of solar-induced electromagnetic interference. digital.library.unt.edu/ark:/67531/metadc3655/
An Annotated Bibliography of Mobile Agents in Networks
The purpose of this thesis is to present a comprehensive colligation of applications of mobile agents in networks, and provide a baseline association of these systems. This work has been motivated by the fact that mobile agent systems have been deemed proficuous alternatives in system applications. Several mobile agent systems have been developed to provide scalable and cogent solutions in network-centric applications. This thesis examines some existing mobile agent systems in core networking areas, in particular, those of network and resource management, routing, and the provision of fault tolerance and security. The inherent features of these systems are discussed with respect to their specific functionalities. The applicability and efficacy of mobile agents are further considered in the specific areas mentioned above. Although an initial foray into a collation of this nature, the goal of this annotated bibliography is to provide a generic referential view of mobile agent systems in network applications. digital.library.unt.edu/ark:/67531/metadc3310/
Arithmetic Computations and Memory Management Using a Binary Tree Encoding of Natural Numbers
Two applications of a binary tree data type based on a simple pairing function (a bijection between natural numbers and pairs of natural numbers) are explored. First, the tree is used to encode natural numbers, and algorithms that perform basic arithmetic computations are presented along with formal proofs of their correctness. Second, using this "canonical" representation as a base type, algorithms for encoding and decoding additional isomorphic data types of other mathematical constructs (sets, sequences, etc.) are also developed. An experimental application to a memory management system is constructed and explored using these isomorphic types. A practical analysis of this system's runtime complexity and space savings is provided, along with a proof-of-concept framework for both applications of the binary tree type, in the Java programming language. digital.library.unt.edu/ark:/67531/metadc103323/
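To make the encoding concrete, the sketch below (hypothetical Java, not the thesis code) builds the binary tree for a natural number with one simple pairing bijection, pair(x, y) = 2^x * (2y + 1); the thesis may use a different pairing and tree convention.

```java
import java.math.BigInteger;

// Minimal sketch of a tree encoding of naturals via the pairing bijection
// pair(x, y) = 2^x * (2y + 1), which maps N x N onto {1, 2, 3, ...}.
// 0 is the leaf; n >= 1 becomes the pair of trees for unpair(n).
public final class TreeNat {

    static final class Node {                 // binary tree: null == leaf
        final Node left, right;
        Node(Node l, Node r) { left = l; right = r; }
    }

    // unpair(n) = (x, y) such that 2^x * (2y + 1) = n, for n >= 1
    static BigInteger[] unpair(BigInteger n) {
        int x = n.getLowestSetBit();          // exponent of 2 in n
        BigInteger y = n.shiftRight(x + 1);   // (n / 2^x - 1) / 2
        return new BigInteger[] { BigInteger.valueOf(x), y };
    }

    static BigInteger pair(BigInteger x, BigInteger y) {   // inverse of unpair
        return y.shiftLeft(1).setBit(0).shiftLeft(x.intValueExact());
    }

    static Node toTree(BigInteger n) {        // natural number -> tree
        if (n.signum() == 0) return null;     // 0 is the leaf
        BigInteger[] xy = unpair(n);
        return new Node(toTree(xy[0]), toTree(xy[1]));
    }

    static BigInteger fromTree(Node t) {      // tree -> natural number
        if (t == null) return BigInteger.ZERO;
        return pair(fromTree(t.left), fromTree(t.right));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 16; i++) {        // round-trip check on small values
            BigInteger n = BigInteger.valueOf(i);
            if (!fromTree(toTree(n)).equals(n)) throw new AssertionError("mismatch at " + i);
        }
        System.out.println("round-trip OK for 0..15");
    }
}
```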
Automatic Software Test Data Generation
Access: Use of this item is restricted to the UNT Community.
In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time-consuming, especially when the software being tested is complex. Therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often the desired inputs must satisfy complex constraints, and this makes a random approach seem unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are meant to solve difficult problems involving simultaneous satisfaction of many constraints. digital.library.unt.edu/ark:/67531/metadc3276/
Benchmark-based Page Replacement (BBPR) Strategy: A New Web Cache Page Replacement Strategy
World Wide Web caching is widely used throughout today's Internet. When correctly deployed, Web caching systems can lead to significant bandwidth savings, network load reduction, server load balancing, and higher content availability. A document replacement algorithm that can lower retrieval latency and yield a high hit ratio is the key to the effectiveness of proxy caches. More than twenty cache algorithms have been employed in academic studies and in corporate communities as well, but the existing replacement algorithms have some drawbacks. To overcome these shortcomings, we developed a new page replacement strategy, named the Benchmark-Based Page Replacement (BBPR) strategy, in which an HTTP benchmark is used as a tool to evaluate the current network load and the server load. In our simulation model, the BBPR strategy shows better performance than the LRU (Least Recently Used) method, which is the most commonly used algorithm. The tradeoff is a reduced hit ratio. Slow pages benefit from BBPR. digital.library.unt.edu/ark:/67531/metadc4215/
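For context, the sketch below shows only the LRU baseline that BBPR is compared against, as a plain Java LinkedHashMap in access order; the BBPR policy itself, which also weighs an HTTP benchmark's load estimate, is not reproduced here, and counting capacity in entries rather than bytes is a simplifying assumption.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: the eldest entry in access order is evicted when the
// cache exceeds its capacity. Capacity is counted in entries for brevity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);            // true => iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;          // evict the least-recently-used entry
    }
}

// Usage: a 3-entry cache of URL -> document body.
// LruCache<String, byte[]> cache = new LruCache<>(3);
```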
Bounded Dynamic Source Routing in Mobile Ad Hoc Networks
Access: Use of this item is restricted to the UNT Community.
A mobile ad hoc network (MANET) is a collection of mobile platforms or nodes that come together to form a network capable of communicating with each other, without the help of a central controller. To realize the maximum potential of a MANET, it is of great importance to devise a routing scheme that optimizes the performance of the MANET, given the high rate of random mobility of the nodes. In a MANET, individual nodes perform routing functions such as route discovery, route maintenance, and delivery of packets from one node to the other. Existing routing protocols flood the network with broadcasts of route discovery messages while attempting to establish a route. This characteristic deteriorates the performance of a MANET, as the resource overhead triggered by broadcasts is directly proportional to the size of the network. Bounded dynamic source routing (B-DSR) is proposed to curb this multitude of superfluous broadcasts, thus conserving valuable resources like bandwidth and battery power. B-DSR establishes a bounded region in the network, only within which transmissions of route discovery messages are processed and validated for establishing a route. All route discovery messages reaching outside of this bounded region are dropped, thus preventing the network from being flooded. In addition, B-DSR guarantees loop-free routing and is robust, allowing rapid recovery when routes in the network change. digital.library.unt.edu/ark:/67531/metadc4264/
Building an Intelligent Filtering System Using Idea Indexing
The widely used vector model maintains its popularity because of its simplicity, fast speed, and the appeal of using spatial proximity for semantic proximity. However, this model suffers from the vagueness that arises when keywords overlap. Efforts have been made to improve the vector model. The research on improving document representation has focused on four areas, namely, statistical co-occurrence of related items, forming term phrases, grouping of related words, and representing the content of documents. In this thesis, we propose the idea-indexing model to improve document representation for the filtering task in IR. The idea-indexing model matches document terms with the ideas they express and indexes the document with these ideas. This indexing scheme represents the document with its semantics instead of sets of independent terms. We show in this thesis that indexing with ideas leads to better performance. digital.library.unt.edu/ark:/67531/metadc4275/
Case-Based Reasoning (CBR) for Children Story Selection in ASP.NET and C#
This paper describes the general architecture and function of a Case-Based Reasoning (CBR) system implemented with ASP.NET and C#. Microsoft Visual Studio .NET and XML Web Services provide a flexible, standards-based model that allows clients to access data. Web Form Pages offer a powerful programming model for Web-enabled user interfaces. The system provides a variety of mechanisms and services related to story retrieval and adaptation. Users may browse and search a library of text stories. More advanced CBR capabilities were also implemented, including a multi-factor distance calculation for matching user interests with stories in the library, recommendations on optimizing search, and adaptation of stories to match user interests. digital.library.unt.edu/ark:/67531/metadc4417/
A Comparison of Agent-Oriented Software Engineering Frameworks and Methodologies
Agent-oriented software engineering (AOSE) covers issues in developing systems with software agents. There are many techniques, mostly agent-oriented and object-oriented, ready to be chosen as building blocks to create agent-based systems. Several AOSE methodologies have been proposed to give engineers guidelines on how these elements are combined so that agents achieve the overall system goals. Although these solutions are promising, most of them are designed in an ad hoc manner, do not fully follow the software development life cycle, and lack examination of agent-oriented features. To address these issues, we investigated state-of-the-art techniques and AOSE methodologies. By examining them in different respects, we comment on their strengths and weaknesses. Toward a formal study, a comparison framework has been set up covering four aspects: concepts and properties, notations and modeling techniques, process, and pragmatics. Under these criteria, we conducted the comparison at both an overview and a detailed level. The comparison supports an empirical and analytical study of how an ideal agent-based system should be formed. digital.library.unt.edu/ark:/67531/metadc4411/
Computational Complexity of Hopfield Networks
There are three main results in this dissertation: PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), exponential convergence behavior of the discrete Hopfield network, and simulation of Turing machines by discrete Hopfield networks. digital.library.unt.edu/ark:/67531/metadc278272/
Content-based image retrieval by integration of metadata encoded multimedia (image and text) features in constructing a video summarizer application.
Content-based image retrieval (CBIR) is the retrieval of images from a collection by means of internal feature measures of the information content of the images. In CBIR systems, text media is usually used only to retrieve exemplar images for further searching by image feature content. This research describes a new method for integrating multimedia text and image content features to increase the retrieval performance of the system. I explore the content-based features of images extracted from a video to build a storyboard for search and retrieval of images. Metadata-encoded multimedia features include primitive features such as color, shape, and text extracted from an image. Histograms are built for all the features extracted and stored in a database. Images are searched by comparing the histogram values of the extracted image with the stored values. These histogram values are also used for extraction of keyframes from a collection of images parsed from a video file. Individual shots are extracted from a video clip and run through processes that extract the features and build the histogram values. A keyframe extraction algorithm is then run to get the keyframes from the collection of images and build a storyboard. In video retrieval, speech recognition and other multimedia encoding could help improve the CBIR indexing technique and make keyframe extraction and searching effective. Research in the area of embedding sound and other multimedia could further enhance video retrieval. digital.library.unt.edu/ark:/67531/metadc4238/
Control mechanisms and recovery techniques for real-time data transmission over the Internet.
Streaming multimedia content with UDP has become popular over distributed systems such as the Internet. Such streams may suffer many losses due to dropped packets or late arrivals at the destination, since UDP can only provide best-effort delivery. Moreover, UDP has no self-recovery mechanism for congestion collapse or bursty loss, and unlike TCP it cannot inform the sender to adjust its future transmission rate. So there is a need to incorporate various control schemes, like forward error control, interleaving, congestion control, and error concealment, into real-time transmission to mitigate the effect of losses. Loss can be repaired by retransmission if the round-trip delay allows it; otherwise, error concealment techniques are used based on the type and amount of loss. This paper implements the interleaving technique, with packet spacing and varying interleaver block size, to protect real-time data from loss and its effects during transmission across the Internet. The packets are interleaved, and some time gap is maintained between two consecutive packets before they are transmitted into the Internet. Thus packet loss due to congestion can be reduced, and the loss of consecutive packets of information is prevented when a burst of several packets is lost. Several experiments have been conducted with video data to analyze the proposed model. digital.library.unt.edu/ark:/67531/metadc3268/
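The block-interleaving idea can be illustrated with a short, hypothetical Java sketch (names and block size are illustrative, not the thesis implementation): packets are written row-by-row into a b x b block and sent column-by-column, so a burst of consecutive losses on the wire becomes isolated losses in the original stream.

```java
// Block interleaver over packet indices; seq.length is assumed to be b * b.
final class BlockInterleaver {
    static int[] interleave(int[] seq, int b) {
        int[] out = new int[seq.length];
        int k = 0;
        for (int col = 0; col < b; col++)
            for (int row = 0; row < b; row++)
                out[k++] = seq[row * b + col];     // read columns of the row-filled block
        return out;
    }

    public static void main(String[] args) {
        int[] packets = new int[16];
        for (int i = 0; i < 16; i++) packets[i] = i;
        int[] sent = interleave(packets, 4);
        // sent order: 0 4 8 12 1 5 9 13 ...; losing sent[0..3] (a burst of four)
        // costs original packets 0, 4, 8, 12 -- no two consecutive frames are lost.
        System.out.println(java.util.Arrays.toString(sent));
    }
}
```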
DADS - A Distributed Agent Delivery System
Mobile agents require an appropriate platform that can facilitate their migration and execution. In particular, the design and implementation of such a system must balance several factors that will ensure that its constituent agents are executed without problems. Besides the basic requirements of migration and execution, an agent system must also provide mechanisms to ensure the security and survivability of an agent when it migrates between hosts. In addition, the system should be simple enough to facilitate its widespread use across large-scale networks (i.e., the Internet). To address these issues, this thesis discusses the design and implementation of the Distributed Agent Delivery System (DADS). The DADS provides a de-coupled design that separates agent acceptance from agent execution. Using functional modules, the DADS provides services ranging from language execution and security to fault tolerance and compression. Modules allow the administrator(s) of hosts to declare, at run-time, the services that they want to provide. Since each administrative domain is different, the DADS provides a platform that can be adapted to exchange heterogeneous blends of agents across large-scale networks. digital.library.unt.edu/ark:/67531/metadc3352/
The Design and Implementation of a Prolog Parser using javacc
Operatorless Prolog text is LL(1) in nature and any standard LL parser generator tool can be used to parse it. However, Prolog text that conforms to the ISO Prolog standard allows the definition of dynamic operators. Since Prolog operators can be defined at run-time, operator symbols are not present in the grammar rules of the language. Unless the parser generator allows for some flexibility in the specification of the grammar rules, it is very difficult to generate a parser for such text. In this thesis we discuss the existing parsing methods and their modified versions to parse languages with dynamic operator capabilities. Implementation details of a parser that uses Javacc as a parser generator tool to parse standard Prolog text are provided. The output of the parser is an “Abstract Syntax Tree” that reflects the correct precedence and associativity rules among the various operators (static and dynamic) of the language. Empirical results show that a Prolog parser generated by a parser generator like Javacc is comparable in efficiency to a hand-coded parser. digital.library.unt.edu/ark:/67531/metadc3251/
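A hypothetical Java sketch of the central idea follows: because ISO Prolog lets programs declare operators at run time (op/3), the operator table lives outside the grammar and is consulted while parsing, so a directive such as :- op(700, xfx, ===) can add operators mid-parse. The names and defaults below are illustrative, not the thesis API.

```java
import java.util.HashMap;
import java.util.Map;

// Run-time operator table consulted by the parser instead of grammar rules.
final class OperatorTable {
    record Op(int priority, String type) {}            // type: xfx, xfy, yfx, fx, fy

    private final Map<String, Op> infix = new HashMap<>();
    private final Map<String, Op> prefix = new HashMap<>();

    OperatorTable() {
        // a few ISO defaults; an op/3 directive adds more entries at run time
        define(1200, "xfx", ":-");
        define(700,  "xfx", "=");
        define(500,  "yfx", "+");
        define(400,  "yfx", "*");
        define(200,  "fy",  "-");
    }

    // called for the built-in table and whenever the parser sees an op/3 directive
    void define(int priority, String type, String name) {
        Op op = new Op(priority, type);
        if (type.length() == 3)         infix.put(name, op);    // xfx, xfy, yfx
        else if (type.charAt(0) == 'f') prefix.put(name, op);   // fx, fy
        // postfix operators (xf, yf) are omitted in this sketch
    }

    Op infix(String name)  { return infix.get(name); }          // null if not an infix op
    Op prefix(String name) { return prefix.get(name); }
}
```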
The Design and Implementation of an Intelligent Agent-Based File System
As bandwidth constraints on LAN/WAN environments decrease, the demand for distributed services will continue to increase. In particular, the proliferation of user-level applications requiring high-capacity distributed file storage systems will demand that such services be universally available. At the same time, the advent of high-speed networks has made the deployment of application and communication solutions based upon an Intelligent Mobile Agent (IMA) framework practical. Agents have proven to present an ideal development paradigm for the creation of autonomous large-scale distributed systems, and an agent-based communication scheme would facilitate the creation of independently administered distributed file services. This thesis thus outlines an architecture for such a distributed file system based upon an IMA communication framework. digital.library.unt.edu/ark:/67531/metadc2523/
Developing a Test Bed for Interactive Narrative in Virtual Environments
Access: Use of this item is restricted to the UNT Community.
As Virtual Environments (VE) become a more commonly used method of interaction and presentation, supporting users as they navigate and interact with scenarios presented in VE will be a significant issue. A key step in understanding the needs of users in these situations will be observing them perform representative tasks in a fully developed environment. In this paper, we describe the development of a test bed for interactive narrative in a virtual environment. The test bed was specifically developed to present multiple, simultaneous sequences of events (scenarios or narratives) and to support user navigation through these scenarios. These capabilities will support the development of multi-user testing scenarios, allowing us to study and better understand the needs of users of narrative VEs. digital.library.unt.edu/ark:/67531/metadc3176/
DICOM Image Scrubbing Software Library/Utility
This software is aimed at providing a user-friendly, easy-to-use environment for the user to scrub (de-identify/modify) DICOM header information. Some existing tools either anonymize or default the values without user interaction; the user does not have the flexibility to edit the header information, and a set of images cannot be scrubbed simultaneously (batch scrubbing). This motivated the development of a tool/utility that can scrub a set of images in a single step more efficiently. This document also addresses patient-confidentiality security issues, the protection of patient-identifying information, and related technical requirements. digital.library.unt.edu/ark:/67531/metadc4189/
DirectShow Approach to Low-Cost Multimedia Security Surveillance System
In response to the recent intensive need for civilian security surveillance, both full and compact versions of a Multimedia Security Surveillance (MSS) system have been built. The new Microsoft DirectShow technology was applied in implementing the multimedia stream-processing module. Through the Microsoft Windows Driver Model interface, IEEE 1394-enabled Fire-i cameras, chosen as external sensors, are integrated with a PC-based continuous storage unit. The MSS application also allows multimedia broadcasting and remote controls. Cost analysis is included. digital.library.unt.edu/ark:/67531/metadc3297/
Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations
Distributed simulation is an enabling concept to support the networked interaction of models and real world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of the data exchanged during a distributed simulation, and reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and it significantly reduces both the number of multicast groups used, and the message overhead, when compared to previous grid-based allocation schemes using large-scale and real-world scenarios. digital.library.unt.edu/ark:/67531/metadc2699/
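An illustrative Java sketch of the underlying grid mapping follows (not the UNT-RTI code; the class name and the fixed-grid assumption are mine): each grid cell owns one multicast group, and a publication or subscription region is mapped to every cell it overlaps. The dynamic scheme described above additionally adapts this assignment at run time.

```java
import java.util.ArrayList;
import java.util.List;

// Fixed grid over a 2-D routing space; cell ids double as multicast group ids.
final class DdmGrid {
    private final double cellW, cellH;
    private final int cols, rows;

    DdmGrid(double width, double height, int cols, int rows) {
        this.cellW = width / cols;
        this.cellH = height / rows;
        this.cols = cols;
        this.rows = rows;
    }

    // Ids of all grid cells (multicast groups) a rectangular region overlaps.
    List<Integer> cellsFor(double x0, double y0, double x1, double y1) {
        List<Integer> cells = new ArrayList<>();
        int c0 = clamp((int) (x0 / cellW), cols), c1 = clamp((int) (x1 / cellW), cols);
        int r0 = clamp((int) (y0 / cellH), rows), r1 = clamp((int) (y1 / cellH), rows);
        for (int r = r0; r <= r1; r++)
            for (int c = c0; c <= c1; c++)
                cells.add(r * cols + c);
        return cells;
    }

    private static int clamp(int v, int n) { return Math.max(0, Math.min(n - 1, v)); }
}
```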
Dynamic Resource Management in RSVP- Controlled Unicast Networks
Resources are said to be fragmented in the network when they are available in non-contiguous blocks, and calls are dropped as they may not find sufficient resources; hence, available resources may remain unutilized. In this thesis, the effect of resource fragmentation (RF) on RSVP-controlled networks was studied and new algorithms were proposed to reduce the effect of RF. In order to minimize the effect of RF, resources in the network are dynamically redistributed on different paths to make them available in contiguous blocks. Extra protocol messages are introduced to facilitate resource redistribution in the network. The Dynamic Resource Redistribution (DRR) algorithm, when used in conjunction with RSVP, not only increased the number of calls accommodated into the network but also increased the overall resource utilization of the network. Issues such as how many resources need to be redistributed, from which call(s), and how these choices affect the redistribution process were investigated. Further, various simulation experiments were conducted to study the performance of the DRR algorithm on different network topologies with varying traffic characteristics. digital.library.unt.edu/ark:/67531/metadc3048/
Embedded monitors for detecting and preventing intrusions in cryptographic and application protocols.
There are two main approaches to intrusion detection: signature-based and anomaly-based. Signature-based detection employs pattern matching to match attack signatures with observed data, making it ideal for detecting known attacks. However, it cannot detect unknown attacks for which no signature is available. Anomaly-based detection builds a profile of normal system behavior to detect known and unknown attacks as behavioral deviations. However, it has the drawback of a high false alarm rate. In this thesis, we describe our anomaly-based IDS designed for detecting intrusions in cryptographic and application-level protocols. Our system has several unique characteristics, such as the ability to monitor cryptographic protocols and application-level protocols embedded in encrypted sessions, a very lightweight monitoring process, and the ability to react to protocol misuse by modifying the protocol response directly. digital.library.unt.edu/ark:/67531/metadc4414/
Ensuring Authenticity and Integrity of Critical Information Using XML Digital Signatures
Access: Use of this item is restricted to the UNT Community.
It has been noted in the past five years that Internet use has been troubled by the lack of sufficient security and of a legal framework to enable electronic commerce to flourish. Despite these shortcomings, governments, businesses and individuals are using the Internet more often as an inexpensive and ubiquitous means to disseminate and obtain information, goods and services. The Internet is insecure -- potentially millions of people have access, and "hackers" can intercept anything traveling over the wire. There is no way to make it a secure environment; it is, after all, a public network, hence its availability and affordability. In order for it to serve our purposes as a vehicle for legally binding transactions, efforts must be directed at securing the message itself, as opposed to the transport mechanism. Digital signatures have evolved in recent years as the best tool for ensuring the authenticity and integrity of critical information in the so-called "paperless office". A model using XML digital signatures is developed, and the level of security provided by this model in a real-world scenario is outlined. digital.library.unt.edu/ark:/67531/metadc3282/
Evaluation of MPLS Enabled Networks
Access: Use of this item is restricted to the UNT Community.
Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven the demand for increased and guaranteed bandwidth over the network. The diverse requirements of the wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance and reliable option to address these challenges, along with additional features that were not addressed before. This problem in lieu of thesis describes how the new paradigm of MPLS is advantageous over the conventional architecture. The motivation for this paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some of the traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology. digital.library.unt.edu/ark:/67531/metadc5797/
Exon/Intron Discrimination Using the Finite Induction Pattern Matching Technique
DNA sequence analysis involves precise discrimination of two of the sequence's most important components: exons and introns. Exons encode the proteins that are responsible for almost all the functions in a living organism. Introns interrupt the sequence coding for a protein and must be removed from primary RNA transcripts before translation to protein can occur. A pattern recognition technique called Finite Induction (FI) is utilized to study the language of exons and introns. FI is especially suited for analyzing and classifying large amounts of data representing sequences of interest. It requires no biological information and employs no statistical functions. Finite Induction is applied to the exon and intron components of DNA by building a collection of rules based upon what it finds in the sequences it examines. It then attempts to match the known rule patterns with new rules formed as a result of analyzing a new sequence. A high number of matches predicts a probable close relationship between the two sequences; a low number of matches signifies a large amount of difference between the two. This research demonstrates FI to be a viable tool for measurement when known patterns are available for the formation of rule sets. digital.library.unt.edu/ark:/67531/metadc277629/
Extensions to Jinni Mobile Agent Architecture
We extend the Jinni mobile agent architecture with a multicast network transport layer, an agent-to-agent delegation mechanism and a reflection based Prolog-to-Java interface. To ensure that our agent infrastructure runs efficiently, independently of router-level multicast support, we describe a blackboard based algorithm for locating a randomly roaming agent. As part of the agent-to-agent delegation mechanism, we describe an alternative to the code-fetching mechanism for stronger mobility of mobile agents with less network overhead. In the context of direct and reflection based extension mechanisms for Jinni, we describe the design and the implementation of a reflection based Prolog-to-Java interface. The presence of subtyping and method overloading makes finding the most specific method corresponding to a Prolog call pattern fairly difficult. We describe a run-time algorithm which provides accurate handling of overloaded methods beyond the limitations of Java's reflection package. digital.library.unt.edu/ark:/67531/metadc2773/
The Feasibility of Multicasting in RMI
Due to the growing use of the Internet and networking technologies, simple, powerful, easily maintained distributed applications need to be developed. These kinds of applications can benefit greatly from distributed computing concepts. Despite its powerful mechanisms, Jini has yet to be accepted in mainstream Java development. Until that happens, we need to find better Remote Method Invocation (RMI) solutions. The feasibility of implementing multicasting in RMI is examined in this paper. Multicasting capability can be added to RMI using a Jini-like technique. Support for multicast over the unicast reference layer is also studied, and a piece of code explaining how this can be done is included. digital.library.unt.edu/ark:/67531/metadc4170/
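As a reminder of the primitive any such scheme ultimately maps onto, the hedged sketch below uses plain java.net.MulticastSocket to join a group, send, and receive; it is not the RMI extension itself, and the group address and port are illustrative.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Plain UDP multicast: join an IPv4 multicast group, send one datagram to it,
// and read one datagram back (our own packet, via multicast loopback).
public class MulticastDemo {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1");   // illustrative group
        int port = 4446;                                           // illustrative port

        try (MulticastSocket socket = new MulticastSocket(port)) {
            socket.joinGroup(group);                               // subscribe to the group

            byte[] msg = "remote-call-announcement".getBytes("UTF-8");
            socket.send(new DatagramPacket(msg, msg.length, group, port));

            byte[] buf = new byte[1024];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            socket.receive(in);                                    // blocks until a datagram arrives
            System.out.println(new String(in.getData(), 0, in.getLength(), "UTF-8"));

            socket.leaveGroup(group);
        }
    }
}
```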
A general purpose semantic parser using FrameNet and WordNet®.
Syntactic parsing is one of the best understood language processing applications. Since language and grammar have been formally defined, it is easy for computers to parse the syntactic structure of natural language text. Does meaning have structure as well? If it does, how can we analyze that structure? Previous systems rely on a one-to-one correspondence between syntactic rules and semantic rules. But such systems can only be applied to limited fragments of English. In this thesis, we propose a general-purpose shallow semantic parser which utilizes a semantic network (WordNet), and a frame dataset (FrameNet). Semantic relations recognized by the parser are based on how human beings represent knowledge of the world. Parsing semantic structure allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing, moving the automation of understanding natural language text to a higher level. digital.library.unt.edu/ark:/67531/metadc4483/
Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem
Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used on the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data, and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are less. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which is like piecewise independent identically distributed data. Other algorithms for both the middle stage of Burrows-Wheeler compression and the List Update problem, for which overwork, asymptotic cost, and competitive ratios are also analyzed, are several variations of Move One From Front and part of the randomized algorithm Timestamp. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth as the amount of compression that two List Update algorithms achieve fluctuates, to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of the standard alphabetical “abcdefghi...” on English text data (with an algorithm for computing such orders for any data), and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithms are used by reducing the difference between adjacent sorted contexts. digital.library.unt.edu/ark:/67531/metadc2909/
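For readers unfamiliar with the middle stage, the following minimal Java sketch shows plain Move-To-Front coding of a byte stream; the Best x of 2x-1 and Move One From Front variants discussed above replace the move-to-position-0 rule with more conservative promotion rules and are not shown here.

```java
import java.util.LinkedList;

// Move-To-Front coding: each input byte is replaced by its current rank in a
// recency list, then moved to the front. After a BWT, long runs of identical
// symbols become runs of zero ranks, which an entropy coder compresses well.
final class MoveToFront {
    static int[] encode(byte[] input) {
        LinkedList<Integer> list = new LinkedList<>();
        for (int i = 0; i < 256; i++) list.add(i);        // initial symbol order

        int[] out = new int[input.length];
        for (int i = 0; i < input.length; i++) {
            int symbol = input[i] & 0xFF;
            int rank = list.indexOf(symbol);              // access cost in list-update terms
            out[i] = rank;
            list.remove(rank);
            list.addFirst(symbol);                        // the MTF update rule
        }
        return out;
    }
}
```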
Hopfield Networks as an Error Correcting Technique for Speech Recognition
Access: Use of this item is restricted to the UNT Community.
I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks are used to store and retrieve patterns. I used this technique to store queries represented as natural language sentences and I evaluated the accuracy of the technique for error correction in a spoken question-answering dialog between a computer and a user. I show that the use of an auto-associative Hopfield network helps make the speech recognition system more fault tolerant. I also looked at the available encoding schemes to convert a natural language sentence into a pattern of zeroes and ones that can be stored in the Hopfield network reliably, and I suggest scalable data representations which allow storing a large number of queries. digital.library.unt.edu/ark:/67531/metadc5551/
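A minimal, hypothetical Java sketch of the auto-associative recall idea follows (the sizes, the Hebbian storage rule, and the fixed sequential update schedule are illustrative assumptions, not the thesis design): patterns of +1/-1 are stored, and a corrupted probe is driven back toward the nearest stored pattern.

```java
// Discrete Hopfield network: Hebbian storage and thresholded recall sweeps.
final class Hopfield {
    private final double[][] w;          // symmetric weights, zero diagonal
    private final int n;

    Hopfield(int n) { this.n = n; this.w = new double[n][n]; }

    void store(int[] pattern) {          // Hebbian learning; entries are +1 or -1
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j) w[i][j] += pattern[i] * pattern[j] / (double) n;
    }

    int[] recall(int[] probe, int sweeps) {
        int[] s = probe.clone();
        for (int t = 0; t < sweeps; t++)             // in-place (asynchronous) updates
            for (int i = 0; i < n; i++) {
                double field = 0;
                for (int j = 0; j < n; j++) field += w[i][j] * s[j];
                s[i] = field >= 0 ? 1 : -1;
            }
        return s;                                    // ideally the nearest stored pattern
    }
}
```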
Impact of actual interference on capacity and call admission control in a CDMA network.
An overwhelming number of models in the literature use average inter-cell interference for the calculation of capacity of a Code Division Multiple Access (CDMA) network. The advantage gained in terms of simplicity by using such models comes at the cost of rendering the exact location of a user within a cell irrelevant. We calculate the actual per-user interference and analyze the effect of user-distribution within a cell on the capacity of a CDMA network. We show that even though the capacity obtained using average interference is a good approximation to the capacity calculated using actual interference for a uniform user distribution, the deviation can be tremendously large for non-uniform user distributions. Call admission control (CAC) algorithms are responsible for efficient management of a network's resources while guaranteeing the quality of service and grade of service, i.e., accepting the maximum number of calls without affecting the quality of service of calls already present in the network. We design and implement global and local CAC algorithms, and through simulations compare their network throughput and blocking probabilities for varying mobility scenarios. We show that even though our global CAC is better at resource management, the lack of substantial gain in network throughput and exponential increase in complexity makes our optimized local CAC algorithm a much better choice for a given traffic distribution profile. digital.library.unt.edu/ark:/67531/metadc4496/
Implementation of Back Up Host in TCP/IP
Access: Use of this item is restricted to the UNT Community.
This problem in lieu of thesis considers a TCP client H1 making a connection to a distant server S and downloading a file. In the midst of the download, if H1 crashes, the TCP connection from H1 to S is lost. When H1 later restarts, the TCP connection from H1 to S must be reestablished and the file downloaded again; this cannot happen until host H1 restarts. Now consider a situation where there is a standby host H2 for the host H1. H1 and H2 monitor the health of each other by heartbeat messages (as in SCTP). If H2 detects the failure of H1, then H2 takes over. This implies that all resources assigned to H1 are now reassigned to, or taken over by, H2. Hosts H1 and H2 exchange heartbeat chunks throughout the data transmission process and transfer data to each other when one of them crashes. In particular, the IP addresses that were originally assigned to H1 are assigned to H2. In this scenario, the TCP connection between H1 and S is moved to a connection between H2 and S without disrupting the TCP connection. The distant server S is not aware of any changes going on at the clients. digital.library.unt.edu/ark:/67531/metadc3288/
Implementation of Scalable Secure Multicasting
Access: Use of this item is restricted to the UNT Community.
A large number of applications like multi-player games, video conferencing, chat groups and network management are presently based on multicast communication. As the group communication model is being deployed for mainstream use, it is critical to provide security mechanisms that facilitate confidentiality, authenticity and integrity in group communications. Providing security in multicast communication requires addressing the problem of scalability in group key distribution. Scalability is a concern in group communication due to group membership dynamics. Joining and leaving of members requires the distribution of a new session key to all the existing members of the group. The two approaches to key management, namely centralized and distributed approaches, are reviewed. A hybrid solution is then provided, which represents an improved, scalable and robust approach for a secure multicast framework. This framework is then implemented in an example application of a multicast news service. digital.library.unt.edu/ark:/67531/metadc3169/
Improved Approximation Algorithms for Geometric Packing Problems With Experimental Evaluation
Access: Use of this item is restricted to the UNT Community.
Geometric packing problems are NP-complete problems that arise in VLSI design. In this thesis, we present two novel algorithms using dynamic programming to compute exactly the maximum number of k x k squares of unit size that can be packed without overlap into a given n x m grid. The first algorithm was implemented and ran successfully on problems with large inputs of up to 1,000,000 nodes for different parameter values. A heuristic based on the second algorithm is also implemented. This heuristic is fast in practice, but may not always be optimal in theory. However, over a wide range of random data this version of the algorithm gives very good solutions very quickly and runs on problems of up to 100,000,000 nodes in a grid and different ranges for the variables. This version of the algorithm is clearly superior to the first algorithm and has proven to be very efficient in practice. digital.library.unt.edu/ark:/67531/metadc4355/
Intelligent Memory Management Heuristics
Automatic memory management is crucial in the implementation of runtime systems even though it induces a significant computational overhead. In this thesis I explore the use of statistical properties of the directed graph describing the set of live data to decide between garbage collection and heap expansion in a memory management algorithm that combines dynamic-array-represented heaps with a mark-and-sweep garbage collector to enhance its performance. The sampling method predicting the density and the distribution of useful data is implemented as a partial marking algorithm. The algorithm randomly marks the nodes of the directed graph representing the live data at different depths with a variable probability factor p. Using the information gathered by the partial marking algorithm in the current step and the knowledge gathered in the previous iterations, the proposed empirical formula predicts with reasonable accuracy the density of live nodes on the heap, to decide between garbage collection and heap expansion. The resulting heuristics are tested empirically and shown to improve overall execution performance significantly in the context of the Jinni Prolog compiler's runtime system. digital.library.unt.edu/ark:/67531/metadc4399/
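The partial-marking estimate can be sketched as follows (hypothetical Java; the Cell type, the descend-with-probability-p rule, and the scaling formula are illustrative assumptions, not the thesis heuristics): reachable nodes are followed only with probability p, and the count of touched nodes, scaled by p, gives a cheap estimate of live-heap density that a collector could use to choose between collecting and growing the heap.

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Randomized partial marking: a graph walk from the roots that descends into
// each child only with probability p, yielding a sampled view of the live graph.
final class PartialMark {
    static final class Cell { List<Cell> refs; Cell(List<Cell> refs) { this.refs = refs; } }

    static double estimateLiveFraction(List<Cell> roots, int heapSize, double p, long seed) {
        Random rng = new Random(seed);
        Set<Cell> marked = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Cell> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Cell c = stack.pop();
            if (!marked.add(c)) continue;             // already sampled
            if (c.refs == null) continue;
            for (Cell child : c.refs)
                if (rng.nextDouble() < p)             // descend with probability p
                    stack.push(child);
        }
        // Crude scaled estimate of the fraction of heap cells that are live.
        return Math.min(1.0, marked.size() / (p * heapSize));
    }
}
```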
Intrinsic and Extrinsic Adaptation in a Simulated Combat Environment
Genetic algorithm and artificial life techniques are applied to the development of challenging and interesting opponents in a combat-based computer game. Computer simulations are carried out against an idealized human player to gather data on the effectiveness of the computer generated opponents. digital.library.unt.edu/ark:/67531/metadc278231/
Machine Learning Techniques for Conversational Agents
Machine learning is the ability of a machine to perform better at a given task using its previous experience. Various algorithms like decision trees, Bayesian learning, artificial neural networks, and instance-based learning algorithms are widely used in machine learning systems. Current applications of machine learning include credit card fraud detection, customer service based on the history of purchased products, games, and many more. The application of machine learning techniques to natural language processing (NLP) has increased tremendously in recent years; examples are handwriting recognition and speech recognition. The problem we tackle in this Problem in Lieu of Thesis is applying machine-learning techniques to improve the performance of a conversational agent. The OpenMind repository of common sense, in the form of question-answer pairs, is treated as the training data for the machine learning system. The system interfaces with WordNet to capture important semantic and syntactic information about the words in the sentences. Further, the k-closest neighbors algorithm, an instance-based learning algorithm, is used to simulate a case-based learning system. The resulting system is expected to be able to answer new queries with knowledge gained from the training data it was fed. digital.library.unt.edu/ark:/67531/metadc4385/
Modeling Complex Forest Ecology in a Parallel Computing Infrastructure
Effective stewardship of forest ecosystems makes it imperative to measure, monitor, and predict the dynamic changes of forest ecology. Measuring and monitoring provide us a picture of a forest's current state and the necessary data to formulate models for prediction. However, societal and natural events alter the course of a forest's development. A simulation environment that takes these events into account will facilitate forest management. In this thesis, we describe an efficient parallel implementation of a land cover use model, Mosaic, and discuss the development efforts to incorporate spatial interaction and succession dynamics into the model. To evaluate the performance of our implementation, an extensive set of simulation experiments was carried out using a dataset representing the H.J. Andrews Forest in the Oregon Cascades. Results indicate that a significant reduction in the simulation execution time of our parallel model can be achieved as compared to uni-processor simulations. digital.library.unt.edu/ark:/67531/metadc4305/
Modeling the Impact and Intervention of a Sexually Transmitted Disease: Human Papilloma Virus
Many human papilloma virus (HPV) types are sexually transmitted, and HPV DNA types 16, 18, 31, and 45 account for more than 75% of all cervical dysplasia. Candidate vaccines are successfully completing U.S. Food and Drug Administration (FDA) phase III testing, and several drug companies are in licensing arbitration. Once this vaccine becomes available, it is unlikely that 100% vaccination coverage will be achievable; hence the need for vaccination strategies that will yield the greatest reduction in the endemic prevalence of HPV. This thesis introduces two discrete-time models for evaluating the effect of demographic-biased vaccination strategies: one model incorporates temporal demographics (i.e., age) in population compartments; the other, non-temporal demographics (i.e., race, ethnicity). Also presented is an intuitive Web-based interface that was developed to allow the user to evaluate the effects on prevalence of a demographic-biased intervention by tailoring the model parameters to specific demographics and geographical regions. digital.library.unt.edu/ark:/67531/metadc5289/
Multi-Agent Architecture for Internet Information Extraction and Visualization
Access: Use of this item is restricted to the UNT Community.
The World Wide Web is one of the largest sources of information; more and more applications are being developed daily to make use of this information. This thesis presents a multi-agent architecture that deals with some of the issues related to Internet data extraction. The primary issue addresses the reliable, efficient and quick extraction of data through the use of HTTP performance monitoring agents. A second issue focuses on how to make use of available data to make decisions and alert the user when there is a change in the data; this is done with the help of user agents that are equipped with a defeasible reasoning interpreter. An additional issue is the visualization of extracted data; this is done with the aid of VRML visualization agents. The cited issues are discussed using stock portfolio management as an example application. digital.library.unt.edu/ark:/67531/metadc2575/
The Multipath Fault-Tolerant Protocol for Routing in Packet-Switched Communication Network
In order to provide improved service quality to applications, networks need to address the need for reliability of data delivery. Reliability can be improved by incorporating fault tolerance into network routing, wherein a set of multiple routes are used for routing between a given source and destination. This thesis proposes a new fault-tolerant protocol, called the Multipath Fault Tolerant Protocol for Routing (MFTPR), to improve the reliability of network routing services. The protocol is based on a multipath discovery algorithm, the Quasi-Shortest Multipath (QSMP), and is designed to work in conjunction with the routing protocol employed by the network. MFTPR improves upon the QSMP algorithm by finding more routes than QSMP, and also provides for maintenance of these routes in the event of failure of network components. In order to evaluate the resilience of a pair of paths to failure, this thesis proposes metrics that evaluate the non-disjointness of a pair of paths and measure the probability of simultaneous failure of these paths. The performance of MFTPR to find alternate routes based on these metrics is analyzed through simulation. digital.library.unt.edu/ark:/67531/metadc4199/
OLAP Services
Access: Use of this item is restricted to the UNT Community.
On-line Analytical Processing (OLAP) is a platform for providing analytical power over the data present in a database. This paper discusses the design of a system that handles the integration of data from two remote legacy reservation systems into one integrated database server, along with the design of an OLAP database and the building of an OLAP cube for data warehousing. The OLAP cube is useful for analyzing data and for making various business decisions. The Data Transformation Services (DTS) in Microsoft® SQL Server 2000 are used to package the collection of data and to refresh data in the databases. The OLAP cube is designed using Microsoft® Analysis Server. digital.library.unt.edu/ark:/67531/metadc4356/
Performance comparison of data distribution management strategies in large-scale distributed simulation.
Data distribution management (DDM) is a High Level Architecture/Run-time Infrastructure (HLA/RTI) service that manages the distribution of state updates and interaction information in large-scale distributed simulations. The key to efficient DDM is to limit and control the volume of data exchanged during the simulation, to relay data to only those hosts requiring the data. This thesis focuses upon different DDM implementations and strategies. This thesis includes analysis of three DDM methods including the fixed grid-based, dynamic grid-based, and region-based methods. Also included is the use of multi-resolution modeling with various DDM strategies and analysis of the performance effects of aggregation/disaggregation with these strategies. Running numerous federation executions, I simulate four different scenarios on a cluster of workstations with a mini-RTI Kit framework and propose a set of benchmarks for a comparison of the DDM schemes. The goals of this work are to determine the most efficient model for applying each DDM scheme, discover the limitations of the scalability of the various DDM methods, evaluate the effects of aggregation/disaggregation on performance and resource usage, and present accepted benchmarks for use in future research. digital.library.unt.edu/ark:/67531/metadc4524/
Performance Evaluation of Data Integrity Mechanisms for Mobile Agents
Access: Use of this item is restricted to the UNT Community.
With the growing popularity of e-commerce applications that use software agents, the protection of mobile agent data has become imperative. To that end, the performance of four methods that protect the data integrity of mobile agents is evaluated. The methods investigated include existing approaches known as the Partial Result Authentication Codes, Hash Chaining, and Set Authentication Code methods, and a technique of our own design, called the Modified Set Authentication Code method, which addresses the limitations of the Set Authentication Code method. The experiments were run using the DADS agent system (developed at the Network Research Laboratory at UNT), for which a Data Integrity Module was designed. The experimental results show that our Modified Set Authentication Code technique performed comparably to the Set Authentication Code method. digital.library.unt.edu/ark:/67531/metadc4366/
Performance Evaluation of MPLS on Quality of Service in Voice Over IP (VoIP) Networks
Access: Use of this item is restricted to the UNT Community.
The transmission of voice data over Internet Protocol (IP) networks is rapidly gaining acceptance in the field of networking. Most voice transmission over IP networks takes place in Internet telephony, which is also known as IP telephony or Voice over IP (VoIP). VoIP is undergoing many enhancements to provide end users with the same quality as the public switched telephone network (PSTN). These enhancements are mostly required in the quality of service (QoS) for the transmission of voice data over IP networks. With recent developments in the networking field, various protocols have come to market to provide QoS in IP networks; of them, Multiprotocol Label Switching (MPLS) is the most reliable and promising protocol for QoS. The problem of this thesis is to develop an IP-based virtual network, with end hosts and routers, implement MPLS on the network, and analyze its QoS for voice data transmission. digital.library.unt.edu/ark:/67531/metadc3293/
A Programming Language For Concurrent Processing
This thesis is a proposed solution to the problem of including an effective interrupt mechanism in the set of concurrent-processing primitives of a block-structured programming language or system. The proposed solution is presented in the form of a programming language definition and model. The language is called TRIPLE. digital.library.unt.edu/ark:/67531/metadc164005/