Collection: UNT Theses and Dissertations (UNT Libraries)
Degree Discipline: Computer Science
3D Reconstruction Using Lidar and Visual Images
In this research, multi-perspective image registration using LiDAR and visual images was considered. 2D-3D image registration is a difficult task because it requires the extraction of different semantic features from each modality. This problem is solved in three parts. The first step involves detection and extraction of common features from each of the data sets. The second step consists of associating the common features between two different modalities. Traditional methods use lines or orthogonal corners as common features. The third step consists of building the projection matrix. Many existing methods use the global positioning system (GPS) or an inertial navigation system (INS) for an initial estimate of the camera pose. However, the approach discussed herein does not use GPS, INS, or any such devices for the initial estimate; hence the model can be used in places like the lunar surface or Mars, where GPS or INS are not available. A variation of the method is also described, which does not require strong features from both images but rather uses intensity gradients in the image. This can be useful when one image does not have strong features (such as lines) or there are too many extraneous features. digital.library.unt.edu/ark:/67531/metadc177193/
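To make the projection-matrix step above concrete, the sketch below shows the standard pinhole-camera projection of 3D points into an image; it is a generic illustration in Python/NumPy with made-up intrinsics and pose, not the registration pipeline from the thesis.

    import numpy as np

    def project_points(points_3d, K, R, t):
        """Project Nx3 LiDAR points into the image plane of a pinhole camera.

        P = K [R | t] is the 3x4 projection matrix referred to above.
        points_3d, K, R, t are illustrative inputs, not values from the thesis.
        """
        P = K @ np.hstack([R, t.reshape(3, 1)])                        # 3x4 projection matrix
        homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])   # Nx4 homogeneous points
        proj = (P @ homog.T).T                                         # Nx3 homogeneous image points
        return proj[:, :2] / proj[:, 2:3]                              # divide by depth -> pixel coords

    # Example with a hypothetical camera: focal length 1000 px, principal point (640, 360)
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
    R, t = np.eye(3), np.zeros(3)
    print(project_points(np.array([[1.0, 2.0, 10.0]]), K, R, t))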
Adaptive planning and prediction in agent-supported distributed collaboration.
Agents that act as user assistants will become invaluable as the number of information sources continues to proliferate. Such agents can support the work of users by learning to automate time-consuming tasks and filter information to manageable levels. Although considerable advances have been made in this area, it remains a fertile area for further development. One application of agents under careful scrutiny is the automated negotiation of conflicts between different users' needs and desires. Many techniques require explicit user models in order to function. This dissertation explores a technique for dynamically constructing user models and the impact of using them to anticipate the need for negotiation. Negotiation is reduced by including an advising aspect to the agent that can use this anticipation of conflict to adjust user behavior. digital.library.unt.edu/ark:/67531/metadc4702/
Agent-based architecture for web deployment of multi-agents as conversational interfaces.
Agent-based architecture explains the rationale and basis for developing agents that can interact with users through natural language query/answer patterns developed systematically using AIML (artificial intelligence mark-up language) scripts. This thesis also explains the architecture for VISTA (virtual interactive story-telling agents), which is used for interactive querying for educational and recreational purposes. Agents are very effective as conversational interfaces when used alongside a graphical user interface (GUI) in applications and Web pages. This architecture can support multiple agents with or without a shared knowledge base. They are very useful as chat robots for recreation, customer service, and education. The platform is powered by the Java servlet implementation of Program D and hosted in the Apache Tomcat server. The AIML scripting language described herein is an XML-based language and forms the knowledge base of the bot. Animation is provided with Microsoft® Agent technology and a text-to-speech engine. digital.library.unt.edu/ark:/67531/metadc4187/
Agent extensions for peer-to-peer networks.
Peer-to-Peer (P2P) networks have seen tremendous growth in development and usage in recent times. This attention has brought many developments as well as new challenges to these networks. We will show that agent extensions to P2P networks offer solutions to many problems faced by P2P networks. In this research, an attempt is made to bring together the JXTA P2P infrastructure and Jinni, a Prolog-based agent engine, to form an agent-based P2P network. On top of JXTA, we define a simple Java API providing P2P services for agent programming constructs. Jinni is deployed on this JXTA network using an automated code update mechanism. Experiments are conducted on this Jinni/JXTA platform to implement a simple agent communication and data exchange protocol. digital.library.unt.edu/ark:/67531/metadc4382/
Algorithms for Efficient Utilization of Wireless Bandwidth and to Provide Quality-of-Service in Wireless Networks
This thesis presents algorithms to utilize the wireless bandwidth efficiently and at the same time meet the quality of service (QoS) requirements of the users. In the proposed algorithms we present an adaptive frame structure based upon the airlink frame loss probability and control the admission of call requests into the system based upon the load on the system and the QoS requirements of the incoming call requests. The performance of the proposed algorithms is studied by developing analytical formulations and simulation experiments. Finally, we present an admission control algorithm which uses an adaptive delay computation algorithm to compute the queuing delay for each class of traffic and adapts the service rate and the reliability in the estimates based upon the deviation between the expected and obtained performance. We study the performance of the call admission control algorithm by simulation experiments. Simulation results for the adaptive frame structure algorithm show an improvement in the number of users in the system, but there is a drop in the system throughput. In spite of the lower throughput, the adaptive frame structure algorithm has fewer QoS delay violations. The adaptive call admission control algorithm adapts the call dropping probability of different classes of traffic and optimizes the system performance with respect to the number of calls dropped and the reliability in meeting the QoS promised when the call is admitted into the system. digital.library.unt.edu/ark:/67531/metadc2635/
An analysis of motivational cues in virtual environments.
Guiding navigation in virtual environments (VEs) is a challenging task. A key issue in the navigation of a virtual environment is to be able to strike a balance between the user's need to explore the environment freely and the designer's need to ensure that the user experiences all the important events in the VE. This thesis reports on a study aimed at comparing the effectiveness of various navigation cues that are used to motivate users towards a specific target location. The results of this study indicate some significant differences in how users responded to the various cues. digital.library.unt.edu/ark:/67531/metadc4352/
Analysis of Web Services on J2EE Application Servers
The Internet has become a standard way of exchanging business data in B2B and B2C applications, and with this came the need for providing various services on the web instead of just static text and images. Web services are a new type of service offered via the web that aid in the creation of globally distributed applications. Web services are enhanced e-business applications that are easier to advertise and easier to discover on the Internet because of their flexibility and uniformity. In a real-life scenario it is difficult to decide which J2EE application server to choose when deploying an enterprise web service. This thesis analyzes the various ways by which web services can be developed and deployed. Underlying protocols and crucial issues such as EAI (enterprise application integration), asynchronous messaging, and the registry tModel architecture have been considered in this research. This thesis presents a report analyzing what various J2EE application servers provide, based on a case study and on applications developed to test functionality. digital.library.unt.edu/ark:/67531/metadc5547/
Anchor Nodes Placement for Effective Passive Localization
Wireless sensor networks are composed of sensor nodes, which can monitor an environment and observe events of interest. These networks are applied in various fields including but not limited to environmental, industrial and habitat monitoring. In many applications, the exact location of the sensor nodes is unknown after deployment. Localization is a process used to find the sensor nodes' positional coordinates, which is vital information. The localization is generally assisted by anchor nodes that are also sensor nodes but with known locations. Anchor nodes generally are expensive and need to be optimally placed for effective localization. Passive localization is one of the localization techniques where the sensor nodes silently listen to global events like thunder sounds, seismic waves, lightning, etc. According to previous studies, the ideal location to place anchor nodes was on the perimeter of the sensor network. This may not be the case in passive localization, since the function of anchor nodes here is different than the anchor nodes used in other localization systems. I conduct extensive studies on positioning anchor nodes for effective localization. Several simulations are run in dense and sparse networks for proper positioning of anchor nodes. I show that, for effective passive localization, the optimal placement of the anchor nodes is at the center of the network in such a way that no three anchor nodes share linearity. The more the non-linearity, the better the localization. The localization for our network design proves better when I place anchor nodes at right angles. digital.library.unt.edu/ark:/67531/metadc33132/
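The "no three anchor nodes share linearity" placement rule above can be checked with a simple collinearity test; the following Python sketch uses hypothetical anchor coordinates and is only a generic illustration, not the placement algorithm studied in the thesis.

    import itertools
    import numpy as np

    def no_three_collinear(anchors, tol=1e-9):
        """Return True if no three anchor positions (x, y) lie on a common line."""
        for a, b, c in itertools.combinations(anchors, 3):
            # Twice the signed triangle area; ~0 means the three points are collinear.
            area2 = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
            if abs(area2) < tol:
                return False
        return True

    # Hypothetical central placement of four anchors in a 100 x 100 field
    anchors = np.array([[45.0, 50.0], [55.0, 50.0], [50.0, 45.0], [50.0, 57.0]])
    print(no_three_collinear(anchors))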
An Annotated Bibliography of Mobile Agents in Networks
The purpose of this thesis is to present a comprehensive colligation of applications of mobile agents in networks, and provide a baseline association of these systems. This work has been motivated by the fact that mobile agent systems have been deemed proficuous alternatives in system applications. Several mobile agent systems have been developed to provide scalable and cogent solutions in network-centric applications. This thesis examines some existing mobile agent systems in core networking areas, in particular, those of network and resource management, routing, and the provision of fault tolerance and security. The inherent features of these systems are discussed with respect to their specific functionalities. The applicability and efficacy of mobile agents are further considered in the specific areas mentioned above. Although an initial foray into a collation of this nature, the goal of this annotated bibliography is to provide a generic referential view of mobile agent systems in network applications. digital.library.unt.edu/ark:/67531/metadc3310/
An Approach Towards Self-Supervised Classification Using Cyc
Due to the long duration required to perform manual knowledge entry by human knowledge engineers, it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles, the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work. digital.library.unt.edu/ark:/67531/metadc5470/
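As a generic illustration of the Naïve Bayes text classification described above (using scikit-learn, with made-up articles and labels; nothing here is tied to the Cyc ontology or to the thesis's actual training-set construction):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical positive/negative training articles for one ontology category
    train_texts = ["article about a mammal ...", "another mammal article ...",
                   "article about a programming language ...", "article about a treaty ..."]
    train_labels = [1, 1, 0, 0]   # 1 = belongs to the category, 0 = does not

    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit(train_texts, train_labels)
    print(clf.predict(["an unseen article about a mammal ..."]))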
Arithmetic Computations and Memory Management Using a Binary Tree Encoding of Natural Numbers
Two applications of a binary tree data type based on a simple pairing function (a bijection between natural numbers and pairs of natural numbers) are explored. First, the tree is used to encode natural numbers, and algorithms that perform basic arithmetic computations are presented along with formal proofs of their correctness. Second, using this "canonical" representation as a base type, algorithms for encoding and decoding additional isomorphic data types of other mathematical constructs (sets, sequences, etc.) are also developed. An experimental application to a memory management system is constructed and explored using these isomorphic types. A practical analysis of this system's runtime complexity and space savings is provided, along with a proof-of-concept framework for both applications of the binary tree type, in the Java programming language. digital.library.unt.edu/ark:/67531/metadc103323/
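The abstract does not say which pairing function the thesis uses; the classic Cantor pairing function below (Python) illustrates the kind of bijection between pairs of naturals and naturals that such a tree encoding can be built on.

    from math import isqrt

    def pair(x, y):
        """Cantor pairing: a bijection from (N, N) to N."""
        return (x + y) * (x + y + 1) // 2 + y

    def unpair(z):
        """Inverse of the Cantor pairing function."""
        w = (isqrt(8 * z + 1) - 1) // 2        # largest w with w*(w+1)/2 <= z
        t = w * (w + 1) // 2
        y = z - t
        x = w - y
        return x, y

    assert all(unpair(pair(x, y)) == (x, y) for x in range(50) for y in range(50))
    print(pair(3, 4), unpair(pair(3, 4)))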
Automated Classification of Emotions Using Song Lyrics
This thesis explores the classification of emotions in song lyrics, using automatic approaches applied to a novel corpus of 100 popular songs. I use crowdsourcing via Amazon Mechanical Turk to collect line-level emotion annotations for this collection of song lyrics. I then build classifiers that rely on textual features to automatically identify the presence of one or more of the following six Ekman emotions: anger, disgust, fear, joy, sadness and surprise. I compare different classification systems and evaluate the performance of the automatic systems against the manual annotations. I also introduce a system that uses data collected from the social network Twitter. I use the Twitter API to collect a large corpus of tweets manually labeled by their authors for one of the six emotions of interest. I then compare the classification of emotions obtained when training on data automatically collected from Twitter versus data obtained through crowdsourced annotations. digital.library.unt.edu/ark:/67531/metadc177253/
Automated defense against worm propagation.
Worms have caused significant destruction over the last few years. Network security elements such as firewalls, IDSs, etc., have been ineffective against worms. Some worms are so fast that manual intervention is not possible. This brings in the need for a stronger security architecture which can automatically react to stop worm propagation. The method has to be signature independent so that it can stop new worms. In this thesis, an automated defense system (ADS) is developed to automate defense against worms and contain the worm to a level where manual intervention is possible. This is accomplished with a two-level architecture with feedback at each level. The inner loop is based on control system theory and uses the properties of a PID (proportional-integral-derivative) controller. The outer loop works at the network level and stops the worm from reaching its spread saturation point. In our lab setup, we verified that with only the inner loop active the worm was delayed, and with both loops active we were able to restrict the propagation to 10% of the targeted hosts. One concern for deployment of a worm containment mechanism was degradation of throughput for legitimate traffic. We found that with a proper intelligent algorithm we can minimize the degradation to an acceptable level. digital.library.unt.edu/ark:/67531/metadc4909/
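A minimal discrete PID controller sketch (Python) of the kind suggested by the control-theoretic inner loop above; the gains, setpoint, and toy "plant" response are hypothetical and not taken from the thesis.

    class PID:
        """Textbook discrete PID controller: output = Kp*e + Ki*sum(e) + Kd*d(e)/dt."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured, dt=1.0):
            error = setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Hypothetical use: drive a host's outbound connection rate toward an allowed setpoint.
    controller = PID(kp=0.6, ki=0.1, kd=0.05)
    allowed_rate, observed_rate = 10.0, 80.0        # connections per second (made up)
    for _ in range(5):
        adjustment = controller.update(allowed_rate, observed_rate)
        observed_rate = max(0.0, observed_rate + adjustment)   # toy "plant" response
        print(round(observed_rate, 1))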
Automated Syndromic Surveillance using Intelligent Mobile Agents
Current syndromic surveillance systems utilize centralized databases that are neither scalable in storage space nor in computing power. Such systems are limited in the amount of syndromic data that may be collected and analyzed for the early detection of infectious disease outbreaks. However, with the increased prevalence of international travel, public health monitoring must extend beyond the borders of municipalities or states, which will require the ability to store vast amounts of data and significant computing power for analyzing the data. Intelligent mobile agents may be used to create a distributed surveillance system that will utilize the hard drives and central processing unit (CPU) power of the hosts on the agent network where the syndromic information is located. This thesis proposes the design of a mobile agent-based syndromic surveillance system and an agent decision model for outbreak detection. Simulation results indicate that mobile agents are capable of detecting an outbreak that occurs at all hosts the agent is monitoring. Further study of agent decision models is required to account for localized epidemics and variable agent movement rates. digital.library.unt.edu/ark:/67531/metadc5141/
Automatic Software Test Data Generation
In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time consuming, especially when the software being tested is complex. Therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often the desired inputs must satisfy complex constraints, and this makes a random approach seem unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are meant to solve difficult problems involving simultaneous satisfaction of many constraints. digital.library.unt.edu/ark:/67531/metadc3276/
Automatic Tagging of Communication Data
Globally distributed software teams are widespread throughout industry. But finding reliable methods that can properly assess a team's activities is a real challenge. Methods such as surveys and manual coding of activities are too time consuming and are often unreliable. Recent advances in information retrieval and linguistics, however, suggest that automated and/or semi-automated text classification algorithms could be an effective way of finding differences in the communication patterns among individuals and groups. Communication among group members is frequent and generates a significant amount of data. Thus having a web-based tool that can automatically analyze the communication patterns among global software teams could lead to a better understanding of group performance. The goal of this thesis, therefore, is to compare automatic and semi-automatic measures of communication and evaluate their effectiveness in classifying different types of group activities that occur within a global software development project. In order to achieve this goal, we developed a web-based component that can be used to help clean and classify communication activities. The component was then used to compare different automated text classification techniques on various group activities to determine their effectiveness in correctly classifying data from a global software development team project. digital.library.unt.edu/ark:/67531/metadc149611/
Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic at the earliest. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis for the newer region, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred for the newer region. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989 to 2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with the census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis leads to the observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention. digital.library.unt.edu/ark:/67531/metadc5302/
Benchmark-based Page Replacement (BBPR) Strategy: A New Web Cache Page Replacement Strategy
World Wide Web caching is widely used throughout today's Internet. When correctly deployed, Web caching systems can lead to significant bandwidth savings, network load reduction, server load balancing, and higher content availability. A document replacement algorithm that can lower retrieval latency and yield a high hit ratio is the key to the effectiveness of proxy caches. More than twenty cache algorithms have been employed in academic studies and in corporate communities as well. But there are some drawbacks in the existing replacement algorithms. To overcome these shortcomings, we developed a new page replacement strategy named the Benchmark-Based Page Replacement (BBPR) strategy, in which an HTTP benchmark is used as a tool to evaluate the current network load and the server load. In our simulation model, the BBPR strategy shows better performance than the LRU (Least Recently Used) method, which is the most commonly used algorithm. The tradeoff is a reduced hit ratio. Slow pages benefit from BBPR. digital.library.unt.edu/ark:/67531/metadc4215/
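For reference, the LRU baseline that BBPR is compared against can be written in a few lines; the Python sketch below is the standard textbook LRU, not the BBPR strategy itself, which additionally weighs a benchmark-derived measure of network and server load in its eviction decision.

    from collections import OrderedDict

    class LRUCache:
        """Least-Recently-Used cache: evict the entry that was accessed longest ago."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()

        def get(self, url):
            if url not in self.store:
                return None                      # cache miss
            self.store.move_to_end(url)          # mark as most recently used
            return self.store[url]

        def put(self, url, page):
            if url in self.store:
                self.store.move_to_end(url)
            self.store[url] = page
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

    cache = LRUCache(capacity=2)
    cache.put("/a", "page a"); cache.put("/b", "page b")
    cache.get("/a"); cache.put("/c", "page c")   # "/b" is evicted
    print(cache.get("/b"), cache.get("/a"))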
Bounded Dynamic Source Routing in Mobile Ad Hoc Networks
A mobile ad hoc network (MANET) is a collection of mobile platforms or nodes that come together to form a network capable of communicating with each other, without the help of a central controller. To realize the maximum potential of a MANET, it is of great importance to devise a routing scheme which optimizes the performance of the MANET, given the high rate of random mobility of the nodes. In a MANET, individual nodes perform routing functions such as route discovery, route maintenance and delivery of packets from one node to the other. Existing routing protocols flood the network with broadcasts of route discovery messages while attempting to establish a route. This characteristic is instrumental in deteriorating the performance of a MANET, as the resource overhead triggered by broadcasts is directly proportional to the size of the network. Bounded dynamic source routing (B-DSR) is proposed to curb this multitude of superfluous broadcasts, thus conserving valuable resources like bandwidth and battery power. B-DSR establishes a bounded region in the network, only within which transmissions of route discovery messages are processed and validated for establishing a route. All route discovery messages reaching outside of this bounded region are dropped, thus preventing the network from being flooded. In addition, B-DSR guarantees loop-free routing and recovers rapidly when routes in the network change. digital.library.unt.edu/ark:/67531/metadc4264/
Building an Intelligent Filtering System Using Idea Indexing
The widely used vector model maintains its popularity because of its simplicity, fast speed, and the appeal of using spatial proximity for semantic proximity. However, this model suffers from the vagueness that arises when keywords overlap. Efforts have been made to improve the vector model. The research on improving document representation has been focused on four areas, namely, statistical co-occurrence of related items, forming term phrases, grouping of related words, and representing the content of documents. In this thesis, we propose the idea-indexing model to improve document representation for the filtering task in IR. The idea-indexing model matches document terms with the ideas they express and indexes the document with these ideas. This indexing scheme represents the document with its semantics instead of sets of independent terms. We show in this thesis that indexing with ideas leads to better performance. digital.library.unt.edu/ark:/67531/metadc4275/
Case-Based Reasoning (CBR) for Children Story Selection in ASP.NET and C#
This paper describes the general architecture and function of a Case-Based Reasoning (CBR) system implemented with ASP.NET and C#. Microsoft Visual Studio .NET and XML Web Services provide a flexible, standards-based model that allows clients to access data. Web Form Pages offer a powerful programming model for Web-enabled user interfaces. The system provides a variety of mechanisms and services related to story retrieval and adaptation. Users may browse and search a library of text stories. More advanced CBR capabilities were also implemented, including a multi-factor distance calculation for matching user interests with stories in the library, recommendations on optimizing search, and adaptation of stories to match user interests. digital.library.unt.edu/ark:/67531/metadc4417/
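A minimal sketch of the kind of multi-factor distance calculation mentioned above (Python; the feature names, weights, and stories are invented for illustration and are not the thesis's actual factors):

    # Hypothetical feature weights for matching user interests to stories
    WEIGHTS = {"age_level": 0.5, "humor": 0.3, "adventure": 0.2}

    def distance(user_profile, story):
        """Weighted sum of absolute feature differences; smaller means a closer match."""
        return sum(w * abs(user_profile[f] - story[f]) for f, w in WEIGHTS.items())

    user = {"age_level": 7, "humor": 0.8, "adventure": 0.4}
    library = {
        "The Lost Kite":  {"age_level": 6, "humor": 0.9, "adventure": 0.2},
        "Cave of Echoes": {"age_level": 10, "humor": 0.2, "adventure": 0.9},
    }
    best = min(library, key=lambda title: distance(user, library[title]))
    print(best)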
CLUE: A Cluster Evaluation Tool
Modern high performance computing is dependent on parallel processing systems. Most current benchmarks reveal only high-level computational throughput metrics, which may be sufficient for single processor systems, but can lead to a misrepresentation of true system capability for parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate individual system components' impact on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks, but rather shows the scalability of a system and provides metrics to reveal where one can improve overall performance. CLUE is a new benchmark which demonstrates a better comparison among different parallel systems than existing benchmarks and can diagnose where a particular parallel system can be optimized. digital.library.unt.edu/ark:/67531/metadc5444/
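The serial fraction CLUE reports can be illustrated with the standard Karp-Flatt calculation; the sketch below (Python) uses made-up timing numbers, not CLUE's own measurements.

    def serial_fraction(t1, tp, p):
        """Karp-Flatt experimentally determined serial fraction.

        t1: runtime on one processor, tp: runtime on p processors.
        """
        speedup = t1 / tp
        return (1 / speedup - 1 / p) / (1 - 1 / p)

    # Hypothetical measurements: 100 s serial run, 16 s on 8 processors
    e = serial_fraction(t1=100.0, tp=16.0, p=8)
    print(f"speedup = {100.0/16.0:.2f}, serial fraction = {e:.3f}")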
A Comparison of Agent-Oriented Software Engineering Frameworks and Methodologies
Agent-oriented software engineering (AOSE) covers issues in developing systems with software agents. There are many techniques, mostly agent-oriented and object-oriented, ready to be chosen as building blocks to create agent-based systems. Several AOSE methodologies have been proposed to give engineers guidelines on how these elements are put together so that agents achieve the overall system goals. Although these solutions are promising, most of them are designed in an ad hoc manner, do not fully follow a software development life cycle, and lack examination of agent-oriented features. To address these issues, we investigated state-of-the-art techniques and AOSE methodologies. By examining them in different respects, we comment on their strengths and weaknesses. Toward a formal study, a comparison framework has been set up regarding four aspects: concepts and properties, notations and modeling techniques, process, and pragmatics. Under these criteria, we conducted the comparison at both an overview and a detailed level. The comparison supports an empirical and analytical study of how an ideal agent-based system should be formed. digital.library.unt.edu/ark:/67531/metadc4411/
Computational Epidemiology - Analyzing Exposure Risk: A Deterministic, Agent-Based Approach
Many infectious diseases are spread through interactions between susceptible and infectious individuals. Keeping track of where each exposure to the disease took place, when it took place, and which individuals were involved in the exposure can give public health officials important information that they may use to formulate their interventions. Further, knowing which individuals in the population are at the highest risk of becoming infected with the disease may prove to be a useful tool for public health officials trying to curtail the spread of the disease. Epidemiological models are needed to allow epidemiologists to study the population dynamics of transmission of infectious agents and the potential impact of infectious disease control programs. While many agent-based computational epidemiological models exist in the literature, they focus on the spread of disease rather than exposure risk. These models are designed to simulate very large populations, representing individuals as agents, and using random experiments and probabilities in an attempt to more realistically guide the course of the modeled disease outbreak. The work presented in this thesis focuses on tracking exposure risk to chickenpox in an elementary school setting. This setting is chosen due to the high level of detailed information realistically available to school administrators regarding individuals' schedules and movements. Using an agent-based approach, contacts between individuals are tracked and analyzed with respect to both individuals and locations. The results are then analyzed using a combination of tools from computer science and geographic information science. digital.library.unt.edu/ark:/67531/metadc11017/
Content-based image retrieval by integration of metadata encoded multimedia (image and text) features in constructing a video summarizer application.
Content-based image retrieval (CBIR) is the retrieval of images from a collection by means of internal feature measures of the information content of the images. In CBIR systems, text media is usually used only to retrieve exemplar images for further searching by image feature content. This research work describes a new method for integrating multimedia text and image content features to increase the retrieval performance of the system. I am exploring the content-based features of an image extracted from a video to build a storyboard for search retrieval of images. Metadata-encoded multimedia features include extracting primitive features like color, shape and text from an image. Histograms are built for all the features extracted and stored in a database. Images are searched by comparing the histogram values of the extracted image with the stored values. These histogram values are used for extraction of keyframes from a collection of images parsed from a video file. Individual shots of images are extracted from a video clip and run through processes that extract the features and build the histogram values. A keyframe extraction algorithm is run to get the keyframes from the collection of images to build a storyboard of images. In video retrieval, speech recognition and other multimedia encoding could help improve the CBIR indexing technique and make keyframe extraction and searching more effective. Research in the area of embedding sound and other multimedia could further enhance video retrieval. digital.library.unt.edu/ark:/67531/metadc4238/
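A minimal sketch of histogram-based image matching of the kind described above (Python/NumPy; the random stand-in images and bin count are placeholders, not the thesis's feature set):

    import numpy as np

    def color_histogram(image, bins=8):
        """Per-channel color histogram, normalized so the bins sum to 1."""
        hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        hist = np.concatenate(hist).astype(float)
        return hist / hist.sum()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1 means identical histograms."""
        return np.minimum(h1, h2).sum()

    rng = np.random.default_rng(0)
    query = rng.integers(0, 256, size=(120, 160, 3))       # stand-in for a video frame
    stored = rng.integers(0, 256, size=(120, 160, 3))
    print(histogram_intersection(color_histogram(query), color_histogram(stored)))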
Control mechanisms and recovery techniques for real-time data transmission over the Internet.
Streaming multimedia content with UDP has become popular over distributed systems such as the Internet. This may encounter many losses due to dropped packets or late arrivals at the destination, since UDP provides only best-effort delivery. Moreover, UDP has no self-recovery mechanism for congestion collapse or bursty loss, and it does not inform the sender to adjust its future transmission rate as TCP does. So there is a need to incorporate control schemes such as forward error control, interleaving, congestion control and error concealment into real-time transmission to mitigate the effect of losses. Loss can be repaired by retransmission if the round-trip delay allows; otherwise, error concealment techniques are used based on the type and amount of loss. This thesis implements the interleaving technique with packet spacing and varying interleaver block sizes to protect real-time data from loss and studies its effect during transmission across the Internet. The packets are interleaved and maintain some time gap between two consecutive packets before being transmitted into the Internet. Thus packet loss from congestion can be reduced, and the loss of consecutive packets of information can be prevented when a burst of several packets is lost. Several experiments have been conducted with video data for analysis of the proposed model. digital.library.unt.edu/ark:/67531/metadc3268/
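A minimal block-interleaver sketch (Python) illustrating the general technique; the block size and packet stream are illustrative, not the thesis's parameters.

    def interleave(packets, block_size):
        """Write packets into a block row-by-row and read them out column-by-column,
        so a burst of consecutive losses on the wire maps to scattered single losses."""
        rows = [packets[i:i + block_size] for i in range(0, len(packets), block_size)]
        return [row[col] for col in range(block_size) for row in rows if col < len(row)]

    def deinterleave(packets, block_size):
        """Invert interleave() for a stream whose length is a multiple of block_size."""
        rows = len(packets) // block_size
        return [packets[r + col * rows] for r in range(rows) for col in range(block_size)]

    stream = list(range(12))                      # 12 packets, interleaver block size 4
    sent = interleave(stream, 4)
    print(sent)                                   # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
    print(deinterleave(sent, 4) == stream)        # True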
Cross Language Information Retrieval for Languages with Scarce Resources
Our generation has experienced one of the most dramatic changes in how society communicates. Today, we have online information on almost any imaginable topic. However, most of this information is available in only a few dozen languages. In this thesis, I explore the use of parallel texts to enable cross-language information retrieval (CLIR) for languages with scarce resources. To build the parallel text I use the Bible. I evaluate different variables and their impact on the resulting CLIR system, specifically: (1) the CLIR results when using different amounts of parallel text; (2) the role of paraphrasing on the quality of the CLIR output; (3) the impact on accuracy when translating the query versus translating the collection of documents; and finally (4) how the results are affected by the use of different dialects. The results show that all these variables have a direct impact on the quality of the CLIR system. digital.library.unt.edu/ark:/67531/metadc12157/
Cuff-less Blood Pressure Measurement Using a Smart Phone
Blood pressure is vital sign information that physicians often need as preliminary data for immediate intervention during emergency situations or for regular monitoring of people with cardiovascular diseases. Despite the availability of portable blood pressure meters in the market, they are not regularly carried by people, creating a need for an ultra-portable measurement platform or device that can be easily carried and used at all times. One such device is the smartphone which, according to a comScore survey, is used by 26.2% of the US adult population. The mass production of these phones with built-in sensors and high computation power has created numerous possibilities for application development in different domains, including biomedical. Motivated by this capability and their extensive usage, this thesis focuses on developing a blood pressure measurement platform on smartphones. Specifically, I developed a blood pressure measurement system on a smart phone using the built-in camera and a customized external microphone. The system consists of first obtaining heart beats using the microphone and finger pulse with the camera, and finally calculating the blood pressure using the recorded data. I developed techniques for finding the best location for obtaining the data, making the system usable by all categories of people. The proposed system resulted in accuracies between 90-100% when compared to traditional blood pressure meters. The second part of this thesis presents a new system for remote heart beat monitoring using the smart phone. With the proposed system, heart beats can be transferred live by patients and monitored by physicians remotely for diagnosis. The proposed blood pressure measurement and remote monitoring systems will be able to facilitate information acquisition and decision making by the 9-1-1 operators. digital.library.unt.edu/ark:/67531/metadc115102/
DADS - A Distributed Agent Delivery System
Mobile agents require an appropriate platform that can facilitate their migration and execution. In particular, the design and implementation of such a system must balance several factors that will ensure that its constituent agents are executed without problems. Besides the basic requirements of migration and execution, an agent system must also provide mechanisms to ensure the security and survivability of an agent when it migrates between hosts. In addition, the system should be simple enough to facilitate its widespread use across large scale networks (i.e., the Internet). To address these issues, this thesis discusses the design and implementation of the Distributed Agent Delivery System (DADS). The DADS provides a de-coupled design that separates agent acceptance from agent execution. Using functional modules, the DADS provides services ranging from language execution and security to fault-tolerance and compression. Modules allow the administrator(s) of hosts to declare, at run-time, the services that they want to provide. Since each administrative domain is different, the DADS provides a platform that can be adapted to exchange heterogeneous blends of agents across large scale networks. digital.library.unt.edu/ark:/67531/metadc3352/
The Design and Implementation of a Prolog Parser using javacc
Operatorless Prolog text is LL(1) in nature and any standard LL parser generator tool can be used to parse it. However, Prolog text that conforms to the ISO Prolog standard allows the definition of dynamic operators. Since Prolog operators can be defined at run-time, operator symbols are not present in the grammar rules of the language. Unless the parser generator allows for some flexibility in the specification of the grammar rules, it is very difficult to generate a parser for such text. In this thesis, we discuss the existing parsing methods and their modified versions to parse languages with dynamic operator capabilities. Implementation details of a parser using Javacc as a parser generator tool to parse standard Prolog text are provided. The output of the parser is an “Abstract Syntax Tree” that reflects the correct precedence and associativity rules among the various operators (static and dynamic) of the language. Empirical results are provided that show that a Prolog parser generated by a parser generator like Javacc is comparable in efficiency to a hand-coded parser. digital.library.unt.edu/ark:/67531/metadc3251/
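For illustration, a simplified precedence-climbing sketch in Python (not Javacc, and using conventional "higher binding power binds tighter" numbers rather than ISO Prolog priorities, where smaller numbers bind tighter); the point is that the operator table can be extended at run time, which is the flexibility dynamic operators require.

    def parse_expr(tokens, ops, min_bp=0):
        """Precedence climbing over a flat token list.
        ops maps operator -> (binding power, right_associative); the table can grow at run time."""
        lhs = tokens.pop(0)                              # an operand (atom or number)
        while tokens and tokens[0] in ops:
            op = tokens[0]
            bp, right_assoc = ops[op]
            if bp < min_bp:
                break
            tokens.pop(0)
            rhs = parse_expr(tokens, ops, bp if right_assoc else bp + 1)
            lhs = (op, lhs, rhs)                         # build the abstract syntax tree
        return lhs

    ops = {"+": (1, False), "*": (2, False)}
    ops["^"] = (3, True)                                 # operator added "dynamically"
    print(parse_expr("a + b * c ^ d ^ e".split(), ops))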
The Design and Implementation of an Intelligent Agent-Based File System
As bandwidth constraints on LAN/WAN environments decrease, the demand for distributed services will continue to increase. In particular, the proliferation of user-level applications requiring high-capacity distributed file storage systems will demand that such services be universally available. At the same time, the advent of high-speed networks has made the deployment of application and communication solutions based upon an Intelligent Mobile Agent (IMA) framework practical. Agents have proven to present an ideal development paradigm for the creation of autonomous large-scale distributed systems, and an agent-based communication scheme would facilitate the creation of independently administered distributed file services. This thesis thus outlines an architecture for such a distributed file system based upon an IMA communication framework. digital.library.unt.edu/ark:/67531/metadc2523/
Design and Implementation of Large-Scale Wireless Sensor Networks for Environmental Monitoring Applications
Environmental monitoring represents a major application domain for wireless sensor networks (WSN). However, despite significant advances in recent years, there are still many challenging issues to be addressed to exploit the full potential of the emerging WSN technology. In this dissertation, we introduce the design and implementation of low-power wireless sensor networks for long-term, autonomous, and near-real-time environmental monitoring applications. We have developed an out-of-box solution consisting of a suite of software, protocols and algorithms to provide reliable data collection with extremely low power consumption. Two wireless sensor networks based on the proposed solution have been deployed in remote field stations to monitor soil moisture along with other environmental parameters. As part of the ever-growing environmental monitoring cyberinfrastructure, these networks have been integrated into the Texas Environmental Observatory system for long-term operation. Environmental measurement and network performance results are presented to demonstrate the capability, reliability and energy-efficiency of the network. digital.library.unt.edu/ark:/67531/metadc28493/
The Design Of A Benchmark For Geo-stream Management Systems
The recent growth in sensor technology allows easier information gathering in real-time as sensors have grown smaller, more accurate, and less expensive. The resulting data is often in a geo-stream format: continuously changing input with a spatial extent. Researchers developing geo-stream management systems (GSMS) require a benchmark system for evaluation, which is currently lacking. This thesis presents GSMark, a benchmark for evaluating GSMSs. GSMark provides a data generator that creates a combination of synthetic and real geo-streaming data, a workload simulator to present the data to the GSMS as a data stream, and a set of benchmark queries that evaluate typical GSMS functionality and query performance. In particular, GSMark generates both moving points and evolving spatial regions, two fundamental data types for a broad range of geo-stream applications, and the geo-streaming queries on this data. digital.library.unt.edu/ark:/67531/metadc103392/
Developing a Test Bed for Interactive Narrative in Virtual Environments
As Virtual Environments (VE) become a more commonly used method of interaction and presentation, supporting users as they navigate and interact with scenarios presented in VE will be a significant issue. A key step in understanding the needs of users in these situations will be observing them perform representative tasks in a fully developed environment. In this paper, we describe the development of a test bed for interactive narrative in a virtual environment. The test bed was specifically developed to present multiple, simultaneous sequences of events (scenarios or narratives) and to support user navigation through these scenarios. These capabilities will support the development of multiple user-testing scenarios, allowing us to study and better understand the needs of users of narrative VEs. digital.library.unt.edu/ark:/67531/metadc3176/
Development, Implementation, and Analysis of a Contact Model for an Infectious Disease
With growing concern about infectious diseases spreading in a population, epidemiology is becoming more important for the future of public health. In the past, epidemiologists used existing data of an outbreak to help them determine how an infectious disease might spread in the future. Now, with computational models, they are able to analyze data produced by these models to help with prevention and intervention plans. This paper looks at the design, implementation, and analysis of a computational model based on the interactions of the population between individuals. The design of the working contact model looks closely at the SEIR model used as the foundation and the two timelines of a disease. The implementation of the contact model is reviewed while looking closely at data structures. The analysis of the experiments provides evidence that this contact model can be used to help epidemiologists study the spread of an infectious disease based on the contact rate of individuals. digital.library.unt.edu/ark:/67531/metadc9824/
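For reference, a minimal discrete-time SEIR update (Python) of the compartmental model the contact model builds on; the rate parameters and population are illustrative, not the thesis's calibrated values.

    def seir_step(s, e, i, r, beta, sigma, gamma, n):
        """One time step of the SEIR model:
        S->E at rate beta*S*I/N, E->I at rate sigma*E, I->R at rate gamma*I."""
        new_exposed = beta * s * i / n
        new_infectious = sigma * e
        new_recovered = gamma * i
        return (s - new_exposed,
                e + new_exposed - new_infectious,
                i + new_infectious - new_recovered,
                r + new_recovered)

    # Hypothetical school of 500 with one initial infectious individual
    s, e, i, r = 499.0, 0.0, 1.0, 0.0
    for day in range(60):
        s, e, i, r = seir_step(s, e, i, r, beta=0.9, sigma=1/10, gamma=1/7, n=500)
    print(round(s), round(e), round(i), round(r))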
DICOM Image Scrubbing Software Library/Utility
This software is aimed at providing a user-friendly, easy-to-use environment for the user to scrub (de-identify/modify) the DICOM header information. Some tools either anonymize or default the values without user interaction. The user doesn't have the flexibility to edit the header information, and one cannot scrub a set of images simultaneously (batch scrubbing). This motivated the development of a tool/utility that can scrub a set of images in a single step more efficiently. This document also addresses patient confidentiality and security issues, the protection of patient-identifying information, and some technical requirements. digital.library.unt.edu/ark:/67531/metadc4189/
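A minimal sketch of header scrubbing using the pydicom library (an assumption for illustration; the thesis's utility is its own implementation, and the list of tags blanked here is far from complete):

    import pydicom

    # Hypothetical, incomplete list of patient-identifying attributes to scrub
    FIELDS_TO_BLANK = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress"]

    def scrub(in_path, out_path, replacement=""):
        """Read a DICOM file, blank out identifying header fields, and save a copy."""
        ds = pydicom.dcmread(in_path)
        for field in FIELDS_TO_BLANK:
            if hasattr(ds, field):
                setattr(ds, field, replacement)
        ds.save_as(out_path)

    # Batch scrubbing: apply the same edit to every image in a list
    for name in ["img001.dcm", "img002.dcm"]:     # hypothetical file names
        scrub(name, "scrubbed_" + name)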
Direct Online/Offline Digital Signature Schemes.
Online/offline signature schemes are useful in many situations, and two such scenarios are considered in this dissertation: bursty server authentication and embedded device authentication. In this dissertation, new techniques for online/offline signing are introduced; these are applied in a variety of ways to create online/offline signature schemes, and five different online/offline signature schemes are proposed that are proved secure under a variety of models and assumptions. Two of the proposed five schemes have the best offline or best online performance of any currently known technique, and are particularly well-suited for the scenarios that are considered in this dissertation. To determine if the proposed schemes provide the expected practical improvements, a series of experiments were conducted comparing the proposed schemes with each other and with other state-of-the-art schemes in this area, both on a desktop class computer, and under AVR Studio, a simulation platform for an 8-bit processor that is popular for embedded systems. Under AVR Studio, the proposed SGE scheme, using a typical key size for the embedded device authentication scenario, can complete the offline phase in about 24 seconds and then produce a signature (the online phase) in 15 milliseconds, which is the best offline performance of any known signature scheme that has been proven secure in the standard model. In the tests on a desktop class computer, the proposed SGS scheme, which has the best online performance and is designed for the bursty server authentication scenario, generated 469,109 signatures per second, while the Schnorr scheme (the next best scheme in terms of online performance) generated only 223,548 signatures per second. The experimental results demonstrate that the SGE and SGS schemes are the most efficient techniques for embedded device authentication and bursty server authentication, respectively. digital.library.unt.edu/ark:/67531/metadc9717/
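To illustrate the online/offline split in general terms, the toy Schnorr-style example below (Python, with insecure toy-sized parameters) does the expensive exponentiation offline and leaves only a hash and one modular multiply-add for the online phase; this is the generic textbook comparison point the abstract mentions, not the proposed SGE/SGS constructions.

    import hashlib, secrets

    # Toy group parameters (far too small for real use; illustration only)
    p, q = 2267, 103             # q divides p - 1 (2267 = 22*103 + 1); both prime
    g = pow(2, (p - 1) // q, p)  # generator of the order-q subgroup

    x = secrets.randbelow(q - 1) + 1       # private key
    y = pow(g, x, p)                       # public key

    def H(data):
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Offline phase: expensive exponentiation done before the message is known
    k = secrets.randbelow(q - 1) + 1
    R = pow(g, k, p)

    # Online phase: one hash and one modular multiply-add once the message arrives
    msg = b"hello"
    e = H(R.to_bytes(2, "big") + msg)
    s = (k + x * e) % q

    # Verification: g^s == R * y^e (mod p)
    print(pow(g, s, p) == (R * pow(y, e, p)) % p)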
DirectShow Approach to Low-Cost Multimedia Security Surveillance System
In response to the recent intensive needs for civilian security surveillance, both full and compact versions of a Multimedia Security Surveillance (MSS) system have been built. The new Microsoft DirectShow technology was applied in implementing the multimedia stream-processing module. Through the Microsoft Windows Driver Model interface, the chosen IEEE 1394-enabled Fire-i cameras serving as external sensors are integrated with a PC-based continuous storage unit. The MSS application also allows multimedia broadcasting and remote control. Cost analysis is included. digital.library.unt.edu/ark:/67531/metadc3297/
Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations
Distributed simulation is an enabling concept to support the networked interaction of models and real world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of the data exchanged during a distributed simulation, and reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and it significantly reduces both the number of multicast groups used, and the message overhead, when compared to previous grid-based allocation schemes using large-scale and real-world scenarios. digital.library.unt.edu/ark:/67531/metadc2699/
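A minimal sketch of the grid idea behind DDM (Python): publication and subscription regions are mapped to fixed grid cells, each cell corresponding to a multicast group, so updates are relayed only where regions overlap. The cell size and regions below are invented for illustration, and the thesis's dynamic scheme additionally adapts group allocation at run time.

    def cells_for_region(xmin, ymin, xmax, ymax, cell_size):
        """Return the set of grid cells (and hence multicast groups) a region overlaps."""
        return {(cx, cy)
                for cx in range(int(xmin // cell_size), int(xmax // cell_size) + 1)
                for cy in range(int(ymin // cell_size), int(ymax // cell_size) + 1)}

    CELL = 10.0
    publisher_region  = (12.0, 12.0, 27.0, 18.0)    # e.g., a simulated entity's update region
    subscriber_region = (25.0, 15.0, 40.0, 30.0)    # e.g., another federate's interest region

    pub_cells = cells_for_region(*publisher_region, CELL)
    sub_cells = cells_for_region(*subscriber_region, CELL)
    print(sorted(pub_cells & sub_cells))   # groups the publisher must send to for this subscriber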
Dynamic Resource Management in RSVP-Controlled Unicast Networks
Resources are said to be fragmented in the network when they are available in non-contiguous blocks, and calls are dropped as they may not find sufficient resources. Hence, available resources may remain unutilized. In this thesis, the effect of resource fragmentation (RF) on RSVP-controlled networks was studied and new algorithms were proposed to reduce the effect of RF. In order to minimize the effect of RF, resources in the network are dynamically redistributed on different paths to make them available in contiguous blocks. Extra protocol messages are introduced to facilitate resource redistribution in the network. The Dynamic Resource Redistribution (DRR) algorithm, when used in conjunction with RSVP, not only increased the number of calls accommodated into the network but also increased the overall resource utilization of the network. Issues such as how many resources need to be redistributed and of which call(s), and how these choices affect the redistribution process, were investigated. Further, various simulation experiments were conducted to study the performance of the DRR algorithm on different network topologies with varying traffic characteristics. digital.library.unt.edu/ark:/67531/metadc3048/
Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery
Extracting information from a stack of data is a tedious task and the scenario is no different in proteomics. Volumes of research papers are published about the study of various proteins in several species, their interactions with other proteins, and the identification of proteins as possible biomarkers of disease. It is a challenging task for biologists to keep track of these developments manually by reading through the literature. Several tools have been developed by computational linguists to assist identification, extraction and hypothesis generation of proteins and protein-protein interactions from biomedical publications and protein databases. However, they are confronted with the challenges of term variation, term ambiguity, access only to abstracts, and inconsistencies in time-consuming manual curation of protein and protein-protein interaction repositories. This work attempts to attenuate these challenges by extracting protein-protein interactions in humans and eliciting possible interactions using association rule mining on full text, abstracts, and captions from figures available from publicly available biomedical literature databases. Two such databases are used in our study: Directory of Open Access Journals (DOAJ) and PubMed Central (PMC). A corpus is built using articles based on search terms. A dataset of more than 38,000 protein-protein interactions from the Human Protein Reference Database (HPRD) is cross-referenced to validate discovered interactive pairs. An optimally sized set of possible binary protein-protein interactions is generated and made available for validation by clinicians or biologists. A significant change in the number of new associations was found by altering the thresholds for the support and confidence metrics. This study reduces the burden on biologists of keeping pace with the discovery of protein-protein interactions by manually reading the literature and of validating each and every possible interaction. digital.library.unt.edu/ark:/67531/metadc30508/
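The support and confidence thresholds mentioned above are the standard association-rule metrics; the Python sketch below, with made-up protein co-occurrence data, shows how candidate pairs would be scored, not the thesis's actual mining pipeline.

    from itertools import combinations
    from collections import Counter

    # Hypothetical "documents": sets of protein names mentioned together
    docs = [{"TP53", "MDM2"}, {"TP53", "MDM2", "EGFR"}, {"EGFR", "GRB2"},
            {"TP53", "MDM2"}, {"EGFR", "GRB2", "TP53"}]

    item_counts = Counter(p for d in docs for p in d)
    pair_counts = Counter(frozenset(pair) for d in docs for pair in combinations(sorted(d), 2))

    n = len(docs)
    for pair, c in pair_counts.items():
        a, b = sorted(pair)
        support = c / n                           # fraction of documents containing both
        confidence = c / item_counts[a]           # confidence of the rule a -> b
        if support >= 0.4 and confidence >= 0.7:
            print(f"{a} -> {b}  support={support:.2f}  confidence={confidence:.2f}")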
Embedded monitors for detecting and preventing intrusions in cryptographic and application protocols.
There are two main approaches for intrusion detection: signature-based and anomaly-based. Signature-based detection employs pattern matching to match attack signatures with observed data making it ideal for detecting known attacks. However, it cannot detect unknown attacks for which there is no signature available. Anomaly-based detection builds a profile of normal system behavior to detect known and unknown attacks as behavioral deviations. However, it has a drawback of a high false alarm rate. In this thesis, we describe our anomaly-based IDS designed for detecting intrusions in cryptographic and application-level protocols. Our system has several unique characteristics, such as the ability to monitor cryptographic protocols and application-level protocols embedded in encrypted sessions, a very lightweight monitoring process, and the ability to react to protocol misuse by modifying protocol response directly. digital.library.unt.edu/ark:/67531/metadc4414/
An Empirical Evaluation of Communication and Coordination Effectiveness in Autonomous Reactive Multiagent Systems
This thesis describes experiments designed to measure the effect of collaborative communication on task performance of a multiagent system. A discrete event simulation was developed to model a multi-agent system completing a task to find and collect food resources, with the ability to substitute various communication and coordination methods. Experiments were conducted to find the effects of the various communication methods on completion of the task to find and harvest the food resources. Results show that communication decreases the time required to complete the task. However, not all communication methods fare equally well. In particular, results indicate that the communication model of the bee is a particularly effective method of agent communication and collaboration. Furthermore, results indicate that direct communication with additional information content provides better completion results. Cost-benefit models show some conflicting information, indicating that the increased performance may not offset the additional cost of achieving that performance. digital.library.unt.edu/ark:/67531/metadc4770/
End of Insertion Detection in Colonoscopy Videos
Colorectal cancer is the second leading cause of cancer-related deaths behind lung cancer in the United States. Colonoscopy is the preferred screening method for detection of diseases like colorectal cancer. In 2006, the American Society for Gastrointestinal Endoscopy (ASGE) and the American College of Gastroenterology (ACG) issued guidelines for quality colonoscopy. The guidelines suggest that on average the withdrawal phase during a screening colonoscopy should last a minimum of 6 minutes. My aim is to classify the colonoscopy video into insertion and withdrawal phases. The problem is that currently existing shot detection techniques cannot be applied because a colonoscopy is a single camera shot from start to end. An algorithm to detect the phase boundary has already been developed by the MIGLAB team. The existing method has acceptable levels of accuracy, but the main issue is its dependency on MPEG (Moving Picture Experts Group) 1/2. I implemented exhaustive search for motion estimation to reduce the execution time and improve the accuracy. I took advantage of the C/C++ programming languages with multithreading, which helped us get even better performance in terms of execution time. I propose a method for improving the current method of colonoscopy video analysis and also an extension to make it usable for real-time videos. The real-time version we implemented is capable of handling streams coming directly from the camera in the form of uncompressed bitmap frames. The existing implementation could not be applied to the real-time scenario because of its dependency on MPEG 1/2. Future directions of this research include improved motion search and GPU parallel computing techniques. digital.library.unt.edu/ark:/67531/metadc12159/
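A sketch of exhaustive-search block matching for motion estimation (Python/NumPy rather than the thesis's C/C++ implementation; the frame data, block size, and search radius are made up):

    import numpy as np

    def best_motion_vector(prev, curr, top, left, block=8, radius=4):
        """Exhaustive-search block matching: try every displacement within +/-radius
        and return the one minimizing the sum of absolute differences (SAD)."""
        target = curr[top:top + block, left:left + block].astype(int)
        best, best_sad = (0, 0), float("inf")
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                    continue
                sad = np.abs(prev[y:y + block, x:x + block].astype(int) - target).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
        return best, best_sad

    rng = np.random.default_rng(1)
    prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    curr = np.roll(prev, shift=(2, 3), axis=(0, 1))     # simulate a (2, 3) pixel shift
    print(best_motion_vector(prev, curr, top=16, left=16))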
The enhancement of machine translation for low-density languages using Web-gathered parallel texts.
The majority of the world's languages are poorly represented in informational media like radio, television, newspapers, and the Internet. Translation into and out of these languages may offer a way for their speakers to interact with the wider world, but current statistical machine translation models are only effective with a large corpus of parallel texts - texts in two languages that are translations of one another - which most languages lack. This thesis describes the Babylon project, which attempts to alleviate this shortage by supplementing existing parallel texts with texts gathered automatically from the Web, specifically targeting pages that contain text in a pair of languages. Results indicate that parallel texts gathered from the Web can be used effectively as a source of training data for machine translation and can significantly improve translation quality for text in a similar domain. However, the small quantity of high-quality low-density language parallel texts on the Web remains a significant obstacle. digital.library.unt.edu/ark:/67531/metadc5140/
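A minimal sketch of one common heuristic for finding candidate parallel pages on the Web: pairing URLs that differ only by a language marker (for example, "/en/" versus "/fr/"). This only illustrates the general idea of targeting pages with text in a pair of languages; it is not necessarily the Babylon project's actual crawling strategy.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: pair up URLs that differ only by a language code segment.
public class ParallelPageCandidates {
    public static List<String[]> pair(List<String> urls, String lang1, String lang2) {
        List<String[]> pairs = new ArrayList<>();
        for (String u : urls) {
            if (!u.contains("/" + lang1 + "/")) continue;
            // Hypothetical counterpart URL in the second language.
            String counterpart = u.replace("/" + lang1 + "/", "/" + lang2 + "/");
            if (urls.contains(counterpart)) pairs.add(new String[]{u, counterpart});
        }
        return pairs;
    }
}
```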
Ensuring Authenticity and Integrity of Critical Information Using XML Digital Signatures
Over the past five years, Internet use has been hampered by the lack of sufficient security and of a legal framework to enable electronic commerce to flourish. Despite these shortcomings, governments, businesses, and individuals are using the Internet more often as an inexpensive and ubiquitous means to disseminate and obtain information, goods, and services. The Internet is insecure: potentially millions of people have access, and "hackers" can intercept anything traveling over the wire. There is no way to make it a secure environment; it is, after all, a public network, hence its availability and affordability. In order for it to serve our purposes as a vehicle for legally binding transactions, efforts must be directed at securing the message itself, as opposed to the transport mechanism. Digital signatures have evolved in recent years as the best tool for ensuring the authenticity and integrity of critical information in the so-called "paperless office." A model using XML digital signatures is developed, and the level of security this model provides in real-world scenarios is outlined. digital.library.unt.edu/ark:/67531/metadc3282/
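A minimal sketch of the sign/verify primitive that underlies XML digital signatures: the sender signs the message bytes with a private key, and the receiver verifies authenticity and integrity with the matching public key. This illustrates the concept only; the model developed in the thesis uses the full XML digital signature (XML-DSig) standard rather than raw byte signing, and the sample message here is made up.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch: signing and verifying a message for authenticity and integrity.
public class SignVerifyDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keys = gen.generateKeyPair();

        byte[] message = "<order><item>widget</item></order>"
                .getBytes(StandardCharsets.UTF_8);

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // Verify with the public key; any tampering with the message
        // invalidates the signature.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(message);
        System.out.println("valid: " + verifier.verify(sig));
    }
}
```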
Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code
Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of the next generation CPU (central processing unit) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next generation architecture should offer high throughput, scalability, modularity, and low energy consumption, rather than being suitable for only one class of applications or users, or merely emphasizing a faster clock rate. A program exhibits different types of parallelism: instruction level parallelism (ILP), thread level parallelism (TLP), or data level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that can take advantage of all three types of parallelism without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled dataflow) architecture, which exploits as much TLP as the application supplies. We implement an SDF single-chip multiprocessor constructed from simpler processors and execute automatically parallelized applications on it. SDF has many desirable features, such as high throughput, scalability, and low power consumption, which meet the requirements of the next generation of CPU design. Compared with superscalar, VLIW (very long instruction word), and SMT (simultaneous multithreading) architectures, the experimental results show that for applications with very little parallelism SDF is comparable to the other architectures, while for applications with large amounts of parallelism SDF outperforms them. digital.library.unt.edu/ark:/67531/metadc4673/
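A minimal sketch of thread-level parallelism (TLP) expressed in software: a loop, of the kind an automatically parallelizing compiler might split, distributed across worker threads. This only illustrates the form of parallelism the SDF architecture exploits in hardware; it does not model SDF itself, and the reduction being parallelized is a made-up example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: a parallelized summation loop as an example of TLP.
public class TlpExample {
    public static long parallelSum(long[] data, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (data.length + threads - 1) / threads;
        Future<Long>[] parts = new Future[threads];
        for (int t = 0; t < threads; t++) {
            final int start = t * chunk;
            final int end = Math.min(data.length, start + chunk);
            // Each thread sums its own contiguous chunk of the array.
            parts[t] = pool.submit(() -> {
                long s = 0;
                for (int i = start; i < end; i++) s += data[i];
                return s;
            });
        }
        long total = 0;
        for (Future<Long> p : parts) total += p.get();
        pool.shutdown();
        return total;
    }
}
```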
Evaluation of MPLS Enabled Networks
Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven demand for increased and guaranteed bandwidth over the network. The diverse requirements of a wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance and reliable option for addressing these challenges, while also providing features that were not addressed before. This problem in lieu of thesis describes how the new MPLS paradigm is advantageous over the conventional architecture. The motivation for this paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some of the traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology. digital.library.unt.edu/ark:/67531/metadc5797/
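A minimal sketch of the core MPLS forwarding operation: a label-switching router looks up the incoming label, swaps it for the outgoing label, and forwards the packet on the corresponding interface without examining the IP header. The table structure and contents here are illustrative, not taken from the simulation described above.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: label lookup and swap in a label-switching router.
public class LabelSwitchingRouter {
    record Entry(int outLabel, String outInterface) {}

    // Label forwarding table: incoming label -> (outgoing label, interface).
    private final Map<Integer, Entry> lfib = new HashMap<>();

    public void addRoute(int inLabel, int outLabel, String outInterface) {
        lfib.put(inLabel, new Entry(outLabel, outInterface));
    }

    // Returns the rewritten label and egress interface, or null if the
    // incoming label has no entry.
    public Entry forward(int inLabel) {
        return lfib.get(inLabel);
    }
}
```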
Exploring Trusted Platform Module Capabilities: A Theoretical and Experimental Study
Trusted platform modules (TPMs) are hardware modules, bound to a computer's motherboard, that are now included in many desktops and laptops. Augmenting computers with these hardware modules adds powerful functionality in distributed settings, allowing us to reason about the security of these systems in new ways. In this dissertation, I study the functionality of TPMs from a theoretical as well as an experimental perspective. On the theoretical front, I leverage various features of TPMs to construct applications like random oracles that are impossible to implement in a standard model of computation. Apart from random oracles, I construct a new cryptographic primitive which is essentially a non-interactive form of the standard cryptographic primitive of oblivious transfer. I apply this new primitive to secure mobile agent computations, where interaction between various entities is typically required to ensure security. I prove these constructions secure using standard cryptographic techniques and assumptions. To test the practicality of these constructions and their applications, I performed an experimental study, both on an actual TPM and on a software TPM simulator that has been enhanced to reflect the timings of a real TPM. This allowed me to benchmark the performance of the applications and test the feasibility of the proposed extensions to standard TPMs. My tests also show that these constructions are practical. digital.library.unt.edu/ark:/67531/metadc6101/
Extensions to Jinni Mobile Agent Architecture
We extend the Jinni mobile agent architecture with a multicast network transport layer, an agent-to-agent delegation mechanism, and a reflection-based Prolog-to-Java interface. To ensure that our agent infrastructure runs efficiently, independently of router-level multicast support, we describe a blackboard-based algorithm for locating a randomly roaming agent. As part of the agent-to-agent delegation mechanism, we describe an alternative to the code-fetching mechanism that provides stronger mobility for mobile agents with less network overhead. In the context of direct and reflection-based extension mechanisms for Jinni, we describe the design and implementation of a reflection-based Prolog-to-Java interface. The presence of subtyping and method overloading makes finding the most specific method corresponding to a Prolog call pattern fairly difficult. We describe a run-time algorithm that provides accurate handling of overloaded methods beyond the limitations of Java's reflection package. digital.library.unt.edu/ark:/67531/metadc2773/
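A minimal sketch of the reflection problem described above: given a method name and actual argument objects (for example, derived from a Prolog call pattern), find a Java method whose parameter types can accept those arguments. Real overload resolution must also rank candidates and pick the most specific one; this sketch, which is not the thesis's algorithm, only returns the first compatible match.

```java
import java.lang.reflect.Method;

// Sketch: locate an overloaded method compatible with runtime argument types.
public class MethodFinder {
    public static Method findCompatible(Class<?> target, String name, Object[] args) {
        for (Method m : target.getMethods()) {
            if (!m.getName().equals(name)) continue;
            Class<?>[] params = m.getParameterTypes();
            if (params.length != args.length) continue;
            boolean ok = true;
            for (int i = 0; i < params.length; i++) {
                if (args[i] == null) {
                    if (params[i].isPrimitive()) { ok = false; break; }
                } else if (!wrap(params[i]).isInstance(args[i])) {
                    ok = false; break;
                }
            }
            if (ok) return m;
        }
        return null;
    }

    // Map primitive parameter types to their wrapper classes so boxed
    // arguments (e.g. Integer for int) are accepted.
    private static Class<?> wrap(Class<?> c) {
        if (c == int.class) return Integer.class;
        if (c == long.class) return Long.class;
        if (c == double.class) return Double.class;
        if (c == boolean.class) return Boolean.class;
        if (c == float.class) return Float.class;
        if (c == char.class) return Character.class;
        if (c == byte.class) return Byte.class;
        if (c == short.class) return Short.class;
        return c;
    }
}
```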