DICOM Image Scrubbing Software Library/Utility

Date: May 2003
Creator: Ponnam, Bhavani Srikanth
Description: This software aims to provide a user-friendly, easy-to-use environment for scrubbing (de-identifying/modifying) DICOM header information. Some existing tools either anonymize the values or reset them to defaults without user interaction, so the user has no flexibility to edit the header information, and a set of images cannot be scrubbed simultaneously (batch scrubbing). This motivated the development of a tool/utility that can scrub a set of images efficiently in a single step. This document also addresses security issues of patient confidentiality, aimed at protecting patient-identifying information, and some technical requirements.
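
A minimal sketch of the batch-scrubbing idea described above, written in Python with the pydicom library (which is not the utility from this thesis); the list of tags to overwrite is an invented example:

    import pydicom                      # assumed third-party DICOM library, not the thesis utility
    from pathlib import Path

    # Hypothetical patient-identifying attributes to overwrite during scrubbing.
    TAGS_TO_SCRUB = ["PatientName", "PatientID", "PatientBirthDate", "InstitutionName"]

    def scrub_batch(in_dir, out_dir, replacement="ANONYMIZED"):
        """Scrub every .dcm file in in_dir in one pass (batch scrubbing)."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for path in Path(in_dir).glob("*.dcm"):
            ds = pydicom.dcmread(path)
            for tag in TAGS_TO_SCRUB:
                if hasattr(ds, tag):    # only touch headers that are present
                    setattr(ds, tag, replacement)
            ds.save_as(out / path.name)

    # scrub_batch("studies/raw", "studies/scrubbed")

An interactive tool in the spirit of the thesis would let the user choose per-tag replacement values instead of fixing them in code.
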
Contributing Partner: UNT Libraries
Agent-based architecture for web deployment of multi-agents as conversational interfaces.

Date: May 2003
Creator: Pothuru, Ranjit Kumar
Description: This agent-based architecture explains the rationale and basis for developing agents that can interact with users through natural-language query/answer patterns developed systematically using AIML (Artificial Intelligence Markup Language) scripts. This thesis research document also explains the architecture of VISTA (virtual interactive story-telling agents), which is used for interactive querying for educational and recreational purposes. Agents are very effective as conversational interfaces when used alongside a graphical user interface (GUI) in applications and Web pages. The architecture can support multiple agents with or without a shared knowledge base, and such agents are useful as chat robots for recreation, customer service and education. The platform is powered by the Java servlet implementation of Program D and hosted in an Apache Tomcat server. The AIML scripting language defined herein is an XML-based language and forms the knowledge base of the bot. Animation is provided with Microsoft® Agent technology and a text-to-speech engine.
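
A toy sketch of how an AIML-style category maps a user pattern to a response template, written in Python rather than Program D's Java; the patterns and templates here are invented for illustration:

    # Minimal AIML-like knowledge base: pattern -> template.
    # "*" is a wildcard that captures the rest of the input, as in AIML.
    CATEGORIES = {
        "HELLO":            "Hi there! How can I help you?",
        "WHAT IS *":        "I think {star} is best explained with an example.",
        "TELL ME ABOUT *":  "Here is what I know about {star}...",
    }

    def respond(user_input):
        words = user_input.upper().split()
        for pattern, template in CATEGORIES.items():
            p = pattern.split()
            if "*" in p:
                head = p[:p.index("*")]
                if words[:len(head)] == head and len(words) > len(head):
                    star = " ".join(words[len(head):]).lower()
                    return template.format(star=star)
            elif words == p:
                return template
        return "I do not have an answer for that yet."

    print(respond("Hello"))
    print(respond("What is AIML"))

Real AIML categories are XML pattern/template pairs, and the Program D matcher supports far richer wildcarding and recursion than this sketch.
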
Contributing Partner: UNT Libraries
Self-Optimizing Dynamic Finite Functions

Access: Use of this item is restricted to the UNT Community.
Date: December 2003
Creator: Jeripothula, Ramesh
Description: Finite functions (also called maps) are used to describe a number of key computations and storage mechanisms in software and hardware interpreters. Their presence across various memory and speed hierarchies in hardware, and throughout various optimization processes (algorithmic and compilation-based) in software, suggests encapsulating dynamic size changes and representation optimizations in a single abstraction to be used across traditional computation mechanisms. We developed a memory allocator for testing finite functions, implemented several dynamic finite functions, and performed experiments to measure their performance. We also developed simple but powerful application programming interfaces (APIs) for these finite functions.
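
A small sketch of the idea of a finite function that optimizes its own representation as it grows; the dense/sparse switch heuristic and the API names are invented and are not taken from the thesis:

    class FiniteFunction:
        """Map from small non-negative ints to values.

        Stores entries in a dense list while the key range is well filled,
        and switches to a hash table (dict) when the list would be too sparse.
        """
        SPARSITY_LIMIT = 4   # assumed heuristic: list may be at most 4x larger than its entry count

        def __init__(self):
            self._dense = []        # dense representation: index -> value
            self._sparse = None     # sparse representation: dict, or None while dense

        def put(self, key, value):
            if self._sparse is not None:
                self._sparse[key] = value
                return
            needed = key + 1
            entries = sum(v is not None for v in self._dense) + 1
            if needed > entries * self.SPARSITY_LIMIT:
                # Representation change: a dense list would waste space, fall back to a dict.
                self._sparse = {i: v for i, v in enumerate(self._dense) if v is not None}
                self._sparse[key] = value
            else:
                if needed > len(self._dense):
                    self._dense.extend([None] * (needed - len(self._dense)))
                self._dense[key] = value

        def get(self, key, default=None):
            if self._sparse is not None:
                return self._sparse.get(key, default)
            return self._dense[key] if key < len(self._dense) else default

    f = FiniteFunction()
    f.put(0, "a"); f.put(1, "b")      # stays in the dense (array) representation
    f.put(50_000, "c")                # triggers the switch to the sparse (hash) representation
    print(f.get(50_000))
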
Contributing Partner: UNT Libraries
Machine Language Techniques for Conversational Agents

Date: December 2003
Creator: Sule, Manisha D.
Description: Machine learning is the ability of a machine to perform better at a given task using its previous experience. Various algorithms such as decision trees, Bayesian learning, artificial neural networks and instance-based learning algorithms are widely used in machine learning systems. Current applications of machine learning include credit card fraud detection, customer service based on purchase history, games and many more. The application of machine learning techniques to natural language processing (NLP) has increased tremendously in recent years; examples are handwriting recognition and speech recognition. The problem we tackle in this Problem in Lieu of Thesis is applying machine learning techniques to improve the performance of a conversational agent. The OpenMind repository of common sense, in the form of question-answer pairs, is treated as the training data for the machine learning system. WordNet is consulted to capture important semantic and syntactic information about the words in the sentences. Further, the k-nearest neighbors algorithm, an instance-based learning algorithm, is used to simulate a case-based learning system. The resulting system is expected to be able to answer new queries with knowledge gained from the training data it was given.
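
A rough sketch of the instance-based approach, assuming a tiny in-memory list of question-answer pairs in place of the OpenMind data and plain word overlap in place of WordNet-informed similarity:

    # Toy training data standing in for OpenMind question-answer pairs.
    QA_PAIRS = [
        ("what do you use an umbrella for", "An umbrella keeps you dry in the rain."),
        ("what is the sun",                 "The sun is the star at the center of our solar system."),
        ("why do people sleep",             "People sleep to rest and restore their bodies."),
    ]

    def similarity(a, b):
        """Crude stand-in for WordNet-informed similarity: shared-word ratio."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    def answer(query, k=1):
        """k-nearest neighbours over stored questions; return the best answer(s)."""
        ranked = sorted(QA_PAIRS, key=lambda qa: similarity(query, qa[0]), reverse=True)
        return [ans for _, ans in ranked[:k]]

    print(answer("what is an umbrella used for"))
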
Contributing Partner: UNT Libraries
Refactoring FrameNet for Efficient Relational Queries

Date: December 2003
Creator: Ahmad, Zeeshan Asim
Description: The FrameNet database is used in a variety of NLP research projects and applications such as word sense disambiguation, machine translation, information extraction and question answering. The database is currently distributed in XML format. While XML is a sound way to distribute the data in its entirety, it is not practical to use unless converted into a more application-friendly database. In light of this, we have successfully converted the XML database to a relational MySQL™ database. The conversion reduced the required storage to less than half. Most importantly, the new database enables fast, complex querying and facilitates use by applications and research. We show the steps taken to ensure relational integrity of the data during the refactoring process and present a simple demo application demonstrating ease of use.
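
A condensed sketch of the XML-to-relational refactoring idea, using Python's standard xml and sqlite3 modules (the thesis targets MySQL) and an invented, drastically simplified schema:

    import sqlite3
    import xml.etree.ElementTree as ET

    # Invented, much-simplified stand-in for a FrameNet XML fragment.
    SAMPLE_XML = """
    <frames>
      <frame id="1" name="Motion">
        <lexunit id="10" name="move.v"/>
        <lexunit id="11" name="go.v"/>
      </frame>
    </frames>
    """

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE frame (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("""CREATE TABLE lexunit (id INTEGER PRIMARY KEY, frame_id INTEGER,
                    name TEXT, FOREIGN KEY (frame_id) REFERENCES frame(id))""")

    for frame in ET.fromstring(SAMPLE_XML).findall("frame"):
        conn.execute("INSERT INTO frame VALUES (?, ?)", (frame.get("id"), frame.get("name")))
        for lu in frame.findall("lexunit"):
            conn.execute("INSERT INTO lexunit VALUES (?, ?, ?)",
                         (lu.get("id"), frame.get("id"), lu.get("name")))

    # Relational queries are now straightforward, e.g. all lexical units of a frame:
    print(conn.execute("""SELECT lexunit.name FROM lexunit
                          JOIN frame ON frame.id = lexunit.frame_id
                          WHERE frame.name = 'Motion'""").fetchall())
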
Contributing Partner: UNT Libraries
Hopfield Networks as an Error Correcting Technique for Speech Recognition

Access: Use of this item is restricted to the UNT Community.
Date: May 2004
Creator: Bireddy, Chakradhar
Description: I experimented with Hopfield networks in the context of a voice-based, query-answering system. Hopfield networks are used to store and retrieve patterns. I used this technique to store queries represented as natural language sentences, and I evaluated the accuracy of the technique for error correction in a spoken question-answering dialog between a computer and a user. I show that the use of an auto-associative Hopfield network helps make the speech recognition system more fault tolerant. I also examined the available encoding schemes for converting a natural language sentence into a pattern of zeroes and ones that can be stored reliably in the Hopfield network, and I suggest scalable data representations that allow a large number of queries to be stored.
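
A bare-bones sketch of auto-associative storage and recall with a Hopfield network, assuming NumPy and short ±1 patterns standing in for encoded queries:

    import numpy as np

    def train(patterns):
        """Hebbian learning: sum of outer products of the stored +/-1 patterns."""
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)
        np.fill_diagonal(w, 0)
        return w

    def recall(w, probe, steps=10):
        """Synchronous updates: repeatedly threshold until the state settles."""
        state = probe.copy()
        for _ in range(steps):
            nxt = np.where(w @ state >= 0, 1, -1)
            if np.array_equal(nxt, state):
                break
            state = nxt
        return state

    # Two stored "queries" as +/-1 patterns; the probe is a corrupted copy of the first.
    stored = np.array([[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]])
    w = train(stored)
    noisy = np.array([1, -1, 1, -1, -1, -1])   # one flipped bit
    print(recall(w, noisy))
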
Contributing Partner: UNT Libraries
Memory Management and Garbage Collection Algorithms for Java-Based Prolog

Access: Use of this item is restricted to the UNT Community.
Date: August 2001
Creator: Zhou, Qinan
Description: Implementing a Prolog runtime system in a language like Java, which provides its own automatic memory management and safety features such as built-in index checking and array initialization, requires a consistent approach to memory management based on a simple ultimate goal: minimizing total memory management time and the extra space involved. The total memory management time for Jinni is made up of garbage collection time both for Java and for Jinni itself, and extra space is usually requested at Jinni's garbage collection. This goal motivates us to find a simple and practical garbage collection algorithm and implementation for our Prolog engine. In this thesis we survey various algorithms already proposed and offer our own contribution to the study of garbage collection through improvements and optimizations of some classic algorithms. We implemented these algorithms on top of the dynamic array algorithm for an all-dynamic Prolog engine (JINNI 2000). Comparisons of our implementations with the originally proposed algorithms allow us to draw informative conclusions about their theoretical complexity and their empirical effectiveness.
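
A toy sketch of one classic collector in this family, mark-and-sweep over a heap held in a dynamic array; the cell layout is invented for illustration and is not Jinni's:

    class Cell:
        def __init__(self, value, refs=()):
            self.value = value
            self.refs = list(refs)   # indices of other heap cells this cell points to
            self.marked = False

    def mark(heap, roots):
        """Mark phase: depth-first walk from the root set."""
        stack = list(roots)
        while stack:
            i = stack.pop()
            cell = heap[i]
            if cell is not None and not cell.marked:
                cell.marked = True
                stack.extend(cell.refs)

    def sweep(heap):
        """Sweep phase: free unmarked cells, clear marks on survivors."""
        freed = 0
        for i, cell in enumerate(heap):
            if cell is None:
                continue
            if cell.marked:
                cell.marked = False
            else:
                heap[i] = None
                freed += 1
        return freed

    heap = [Cell("a", refs=[1]), Cell("b"), Cell("garbage")]
    mark(heap, roots=[0])
    print(sweep(heap), "cell(s) reclaimed")   # cell 2 is unreachable
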
Contributing Partner: UNT Libraries
Logic Programming Tools for Dynamic Content Generation and Internet Data Mining

Access: Use of this item is restricted to the UNT Community.
Date: December 2000
Creator: Gupta, Anima
Description: The phenomenal growth of information technology requires us to elicit, store and maintain huge volumes of data, and analyzing this data for various purposes is becoming increasingly important. Data mining consists of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns over the data. We present two techniques for data mining based on logic programming tools. Data mining analyzes data by extracting patterns that describe its structure and discovers correlations in the form of rules. We distinguish analysis methods as visual and non-visual and present one application of each. We show that our focus on logic programming makes some of the very complex tasks related to Web-based data mining and dynamic content generation simple and easy to implement in a uniform framework.
Contributing Partner: UNT Libraries
Multi-Agent Architecture for Internet Information Extraction and Visualization

Access: Use of this item is restricted to the UNT Community.
Date: August 2000
Creator: Gollapally, Devender R.
Description: The World Wide Web is one of the largest sources of information, and more and more applications are being developed daily to make use of it. This thesis presents a multi-agent architecture that deals with some of the issues related to Internet data extraction. The primary issue is the reliable, efficient and quick extraction of data, addressed through the use of HTTP performance-monitoring agents. A second issue is how to use the available data to make decisions and alert the user when the data changes; this is done with the help of user agents equipped with a defeasible-reasoning interpreter. A further issue is the visualization of extracted data, handled by VRML visualization agents. These issues are discussed using stock portfolio management as an example application.
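
An illustrative sketch of the HTTP performance-monitoring and change-alert idea using only Python's standard library; the URL, thresholds and change test are invented for the example:

    import time
    import urllib.request

    def fetch(url, timeout=10):
        """Return (elapsed_seconds, body) for one HTTP request."""
        start = time.time()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
        return time.time() - start, body

    def monitor(url, polls=3, delay=5, slow_threshold=2.0):
        """Poll a source, report slow responses, and alert when the content changes."""
        last_body = None
        for _ in range(polls):
            elapsed, body = fetch(url)
            if elapsed > slow_threshold:
                print(f"warning: {url} responded slowly ({elapsed:.2f}s)")
            if last_body is not None and body != last_body:
                print(f"alert: data at {url} has changed")   # a user agent would reason over this
            last_body = body
            time.sleep(delay)

    # monitor("https://example.com/quotes")   # hypothetical data source
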
Contributing Partner: UNT Libraries
Dynamic Resource Management in RSVP-Controlled Unicast Networks

Date: December 2001
Creator: Iyengar Prasanna, Venkatesan
Description: Resources are said to be fragmented in the network when they are available only in non-contiguous blocks; calls are then dropped because they cannot find sufficient resources, and the available resources may remain unutilized. This thesis studies the effect of resource fragmentation (RF) on RSVP-controlled networks and proposes new algorithms to reduce it. To minimize the effect of RF, resources in the network are dynamically redistributed across different paths to make them available in contiguous blocks, and extra protocol messages are introduced to facilitate this redistribution. The Dynamic Resource Redistribution (DRR) algorithm, when used in conjunction with RSVP, not only increased the number of calls accommodated by the network but also increased its overall resource utilization. Issues such as how many resources need to be redistributed, from which call(s), and how these choices affect the redistribution process were investigated. Further, various simulation experiments were conducted to study the performance of the DRR algorithm on different network topologies with varying traffic characteristics.
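
A toy sketch of resource fragmentation and a single redistribution move; the two-path topology, capacities and greedy move rule are invented for illustration and are not the DRR protocol itself:

    # Two alternative paths between the same endpoints, each with 10 units of capacity.
    paths = {"path_A": {"capacity": 10, "calls": {"c1": 5}},
             "path_B": {"capacity": 10, "calls": {"c2": 4}}}

    def free(p):
        return p["capacity"] - sum(p["calls"].values())

    def admit(call_id, demand):
        """Try to place a call; if no single path has room, try one redistribution move."""
        for name, p in paths.items():
            if free(p) >= demand:
                p["calls"][call_id] = demand
                return f"{call_id} admitted on {name}"
        # Resources are fragmented: 11 units are free in total, but split 5 + 6.
        for src, p in paths.items():
            for cid, d in list(p["calls"].items()):
                for dst, q in paths.items():
                    if dst != src and free(q) >= d and free(p) + d >= demand:
                        q["calls"][cid] = p["calls"].pop(cid)   # move an existing reservation
                        p["calls"][call_id] = demand
                        return f"{call_id} admitted on {src} after moving {cid} to {dst}"
        return f"{call_id} rejected"

    print(admit("c3", 9))   # fails on both paths as-is, succeeds after moving c1 to path_B
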
Contributing Partner: UNT Libraries