UNT Libraries - Browse


Freeform Cursive Handwriting Recognition Using a Clustered Neural Network

Description: Optical character recognition (OCR) software has advanced greatly in recent years. Machine-printed text can be scanned and converted to searchable text with word accuracy rates around 98%. Reasonably neat hand-printed text can be recognized with about 85% word accuracy. However, cursive handwriting remains a challenge, with state-of-the-art word accuracy still around 75%. Algorithms based on hidden Markov models have been only moderately successful, while recurrent neural networks have delivered the best results to date. This thesis explored the feasibility of using a special type of feedforward neural network to convert freeform cursive handwriting to searchable text. The hidden nodes in this network were grouped into clusters, with each cluster trained to recognize a unique character bigram. The network was trained on writing samples that were pre-segmented and annotated. Post-processing was facilitated in part by using the network to identify overlapping bigrams, which were then linked together to form words and sentences. With dictionary-assisted post-processing, the network achieved word accuracy of 66.5% on a small, proprietary corpus. The contributions of this thesis are threefold: 1) the novel clustered architecture of the feedforward neural network, 2) the development of an expanded set of observers combining image masks, modifiers, and feature characterizations, and 3) the use of overlapping bigrams as the textual working unit to assist in context analysis and reconstruction.
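A minimal sketch of the clustered idea, in Python: a single hidden layer is partitioned into per-bigram clusters, and each cluster's mean activation scores one bigram. All names and sizes here are illustrative assumptions, not the thesis's actual architecture.

    import numpy as np

    # Illustrative setup (assumed, not from the thesis): each hidden cluster
    # is dedicated to one character bigram.
    BIGRAMS = ["th", "he", "in", "er"]   # toy bigram inventory
    CLUSTER_SIZE = 8                     # hidden nodes per cluster (assumed)
    INPUT_DIM = 64                       # flattened glyph-window features (assumed)

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, (len(BIGRAMS), CLUSTER_SIZE, INPUT_DIM))
    b = np.zeros((len(BIGRAMS), CLUSTER_SIZE))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def bigram_scores(x):
        """Score each bigram as the mean activation of its hidden cluster."""
        h = sigmoid(np.einsum("cki,i->ck", W, x) + b)  # (cluster, node)
        return dict(zip(BIGRAMS, h.mean(axis=1)))

    x = rng.normal(size=INPUT_DIM)       # stand-in for one segmented window
    print(bigram_scores(x))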
Date: August 2015
Creator: Bristow, Kelly H.

SEM Predicting Success of Student Global Software Development Teams

Description: The extensive use of global teams to develop software has prompted researchers to investigate various factors that can enhance a team’s performance. While a significant body of research exists on global software teams, previous research has not fully explored the interrelationships and collective impact of various factors on team performance. This study explored a model that added the characteristics of a team’s culture, ability, communication frequencies, response rates, and linguistic categories to a central framework of team performance. Data were collected from two student software development projects conducted between teams located in the United States, Panama, and Turkey. The data were obtained through online surveys and recorded postings of team activities that occurred throughout the global software development projects. Partial least squares path modeling (PLS-PM) was chosen as the analytic technique to test the model and identify the most influential factors. Individual factors associated with response rates and linguistic characteristics proved to significantly affect a team’s project grade, group cohesion, and the number of messages received and sent. Moreover, an examination of possible latent homogeneous segments in the model supported the existence of differences among groups based on leadership style. Teams with assigned leaders tended to have stronger relationships between linguistic characteristics and team performance factors, while teams with emergent leaders had stronger relationships between response rates and team performance factors. The contributions of this dissertation are threefold: 1) novel analysis techniques using PLS-PM and clustering, 2) the use of new, quantifiable variables in analyzing team activity, and 3) the identification of plausible causal indicators for team performance and analysis of the same.
Date: May 2015
Creator: Brooks, Ian Robert

Computerized Analysis of Radiograph Images of Embedded Objects as Applied to Bone Location and Mineral Content Measurement

Description: This investigation dealt with locating and measuring the x-ray absorption of radiographic images. The methods developed provide fast, accurate, minicomputer-controlled analysis of embedded objects. A PDP/8 computer system was interfaced with a Joyce Loebl 3CS Microdensitometer and a Leeds & Northrup Recorder. The proposed algorithms for bone location and data smoothing run on a twelve-bit minicomputer. Designs of a software control program and an operational procedure are presented. The smoothing filter made wedge and limb scans monotonic from minima to maxima. It was tested for various convolution intervals, as was its ability to resmooth the same data in multiple passes. An interval size of fifteen works well in one pass.
Date: August 1976
Creator: Buckner, Richard L.

A Design Approach for Digital Computer Peripheral Controllers, Case Study Design and Construction

Description: The purpose of this project was to describe a novel design approach for a digital computer peripheral controller, then design and construct a case study controller. This document consists of three chapters and an appendix. Chapter II presents the design approach chosen: a variation on a design presented by Charles R. Richards in an article published in Electronics magazine. Richards' approach consists of finite state machine circuitry controlling all the functions of a controller. The variation on Richards' approach consists of considering the various logically independent processes which a controller carries out and assigning control of each process to a separate finite state machine. The appendix contains the documentation of the design and construction of the controller.
Date: May 1976
Creator: Cabrera, A. L.

Practical Cursive Script Recognition

Description: This research focused on off-line cursive script recognition. The problem is large and difficult, and there is much room for improvement in every aspect of it. Many different aspects of the problem were explored in pursuit of solutions that would make an off-line cursive script recognizer more practical and usable than those currently available.
Date: August 1995
Creator: Carroll, Johnny Glen, 1953-

Automated Testing of Interactive Systems

Description: Computer systems which interact with human users to collect, update or provide information are growing more complex. Additionally, users are demanding more thorough testing of all computer systems. Because of the complexity and thoroughness required, automation of interactive systems testing is desirable, especially for functional testing. Many currently available testing techniques, such as program proving, are impractical for testing large systems. The solution presented here is the development of an automated test system which simulates human users. This system incorporates a high-level programming language, ATLIS. ATLIS programs are compiled and interpretively executed. Programs are selected for execution by operator command, and failures are reported to the operator's console. An audit trail of all activity is provided. This solution provides improved efficiency and effectiveness over conventional testing methods.
Date: May 1977
Creator: Cartwright, Stephen C.

Investigating the Extractive Summarization of Literary Novels

Description: Due to the vast amount of information we are faced with, summarization has become a critical necessity of everyday human life. Given that a large fraction of the electronic documents available online and elsewhere consist of short texts such as Web pages, news articles, and scientific reports, natural language processing techniques to date have focused on the automation of methods targeting short documents. However, we are witnessing a change: an increasing number of books are becoming available in electronic format, so the need for language processing techniques able to handle very large documents such as books is growing. This thesis addresses the problem of summarization of novels, which are long and complex literary narratives. While there is a significant body of research on automatic text summarization, most of this work has been concerned with the summarization of short documents, with a particular focus on news stories. Novels, however, differ in both length and genre, and consequently require different summarization techniques. This thesis attempts to close this gap by analyzing a new domain for summarization, and by building unsupervised and supervised systems that effectively take into account the properties of long documents and outperform the traditional extractive summarization systems typically targeting the news genre.
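For reference, a minimal frequency-based extractive baseline of the traditional kind the thesis compares against; this is a generic sketch, not the author's system.

    import re
    from collections import Counter

    def extractive_summary(text, k=3):
        """Score each sentence by the average document frequency of its words
        and return the top-k sentences in their original order."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))

        def score(sentence):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[t] for t in tokens) / max(len(tokens), 1)

        top = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                     reverse=True)[:k]
        return " ".join(sentences[i] for i in sorted(top))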
Date: December 2011
Creator: Ceylan, Hakan

Using Reinforcement Learning in Partial Order Plan Space

Description: Partial order planning is an important approach that solves planning problems without completely specifying the orderings between the actions in the plan. This property provides greater flexibility in executing plans, making partial order planners a preferred choice over other planning methodologies. However, in order to find partially ordered plans, partial order planners search in plan space rather than in the space of world states, and an uninformed search in plan space leads to poor efficiency. In this thesis, I discuss applying a reinforcement learning method, called the first-visit Monte Carlo method, to partial order planning in order to design agents which do not need any training data or heuristics but are still able to make informed decisions in plan space based on experience. Communicating effectively with the agent is crucial in reinforcement learning. I address how this task was accomplished in plan space and present the results from an evaluation on a blocks world test bed.
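A compact sketch of first-visit Monte Carlo value estimation, the reinforcement learning method named above, in its generic tabular form. The plan-space state encoding is the thesis's contribution and is abstracted here behind run_episode, a hypothetical callback.

    from collections import defaultdict

    def first_visit_mc(run_episode, num_episodes=1000, gamma=1.0):
        """First-visit Monte Carlo prediction: estimate state values by
        averaging the return observed after the first visit to each state.
        run_episode() must return a list of (state, reward) pairs."""
        values = defaultdict(float)
        visits = defaultdict(int)
        for _ in range(num_episodes):
            episode = run_episode()
            returns, g = [], 0.0
            for state, reward in reversed(episode):  # accumulate returns backward
                g = gamma * g + reward
                returns.append((state, g))
            returns.reverse()                        # restore forward order
            seen = set()
            for state, g in returns:                 # update first visits only
                if state not in seen:
                    seen.add(state)
                    visits[state] += 1
                    values[state] += (g - values[state]) / visits[state]
        return values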
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2006
Creator: Ceylan, Hakan

Natural Language Interfaces to Databases

Description: Natural language interfaces to databases (NLIDB) are systems that aim to bridge the gap between the languages used by humans and computers, and automatically translate natural language sentences to database queries. This thesis proposes a novel approach to NLIDB using graph-based models. The system starts by collecting as much information as possible from existing databases and sentences, and transforms this information into a knowledge base for the system. Given a new question, the system uses this knowledge to analyze and translate the sentence into its corresponding database query statement. The graph-based NLIDB system uses English as the natural language, a relational database model, and SQL as the formal query language. In experiments performed with natural language questions run against a large database containing information about U.S. geography, the system showed good performance compared to the state of the art in the field.
Date: December 2006
Creator: Chandra, Yohan

Measuring Vital Signs Using Smart Phones

Description: Smart phones have become increasingly popular with the general public for their diverse capabilities, such as navigation, social networking, and multimedia, to name a few. These phones are equipped with high-end processors, high-resolution cameras, and built-in sensors such as an accelerometer, orientation sensor, and light sensor. According to a comScore survey, 25.3% of US adults use smart phones in their daily lives. Motivated by the capability of smart phones and their extensive usage, I focused on utilizing them for biomedical applications. In this thesis, I present a new application for a smart phone to quantify vital signs such as heart rate, respiratory rate, and blood pressure with the help of its built-in sensors. Using the camera and a microphone, I have shown how the blood pressure and heart rate of a subject can be determined. People sometimes encounter minor incidents like fainting, or fatal accidents like a car crash, at unexpected times and places. It would be useful to have a device which can measure all vital signs in such an event. The second part of this thesis demonstrates a new mode of communication for next-generation 9-1-1 calls. In this new architecture, the call-taker is able to control the multimedia elements in the phone from a remote location. This helps the call-taker or first responder gain better control over the situation. Transmission of the vital signs measured using the smart phone can be a life saver in critical situations. In today's voice-oriented 9-1-1 calls, the dispatcher first collects critical information (e.g., location, call-back number) from the caller, and assesses the situation. Meanwhile, dispatchers constantly face a "60-second dilemma": within 60 seconds, they need to make a complicated but important decision about whether to dispatch and, if so, what to dispatch. The dispatchers often feel that ...
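A plausible sketch of the camera-based heart-rate step, assuming the fingertip covers the lens and each frame is reduced to its mean red-channel intensity. The sampling rate fps and the 0.4 s minimum beat spacing are illustrative assumptions, not the thesis's parameters.

    import numpy as np
    from scipy.signal import find_peaks

    def heart_rate_bpm(red_means, fps):
        """Estimate heart rate from the mean red-channel intensity of
        successive camera frames (fingertip over the lens acts as a crude
        photoplethysmogram)."""
        x = np.asarray(red_means, dtype=float)
        x -= x.mean()                               # remove the DC offset
        # Require peaks at least 0.4 s apart (caps the estimate at 150 bpm).
        peaks, _ = find_peaks(x, distance=max(int(0.4 * fps), 1))
        if len(peaks) < 2:
            return None                             # not enough beats detected
        seconds_per_beat = np.diff(peaks).mean() / fps
        return 60.0 / seconds_per_beat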
Date: December 2010
Creator: Chandrasekaran, Vikram

An Adaptive Linearization Method for a Constraint Satisfaction Problem in Semiconductor Device Design Optimization

Description: Device optimization is a very important element in semiconductor technology advancement. Its objective is to find a design point for a semiconductor device such that the optimized design goal meets all specified constraints. As in other engineering fields, a nonlinear optimizer is often used for design optimization. One major drawback of using a nonlinear optimizer is that it can only partially explore the design space and return a local optimal solution. This dissertation provides an adaptive optimization design methodology that allows the designer to explore the design space and obtain a globally optimal solution. One key element of our method is to quickly compute the set of all feasible solutions, also called the acceptability region. We describe a polytope-based representation for the acceptability region and an adaptive linearization technique for device performance model approximation. These efficiency enhancements have enabled significant speed-up in estimating acceptability regions and allow acceptability regions to be estimated for a larger class of device design tasks. Our linearization technique also provides an efficient mechanism to guarantee the global accuracy of the computed acceptability region. To visualize the acceptability region, we study the orthogonal projection of high-dimensional convex polytopes and propose an output-sensitive algorithm for projecting polytopes into two dimensions.
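A brute-force sketch of the 2-D projection step: project a vertex-represented polytope and take the convex hull of the projected vertices, which equals the projection of the polytope. The dissertation's output-sensitive algorithm is more refined than this.

    import itertools
    import numpy as np
    from scipy.spatial import ConvexHull

    def project_polytope_2d(vertices, dims=(0, 1)):
        """Orthogonal projection of a vertex-represented convex polytope onto
        two coordinates: project every vertex, then take the convex hull of
        the projected points."""
        pts = np.asarray(vertices, dtype=float)[:, list(dims)]
        hull = ConvexHull(pts)
        return pts[hull.vertices]   # outline vertices in counterclockwise order

    # Example: project the 4-D hypercube onto its first two coordinates.
    cube4 = np.array(list(itertools.product([0, 1], repeat=4)))
    print(project_polytope_2d(cube4))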
Date: May 1999
Creator: Chang, Chih-Hui, 1967-

An Interpreter for the Basic Programming Language

Description: In this thesis, the first chapter provides the general description of this interpreter. The second chapter contains a formal definition of the syntax of BASIC along with an introduction to the semantics. The third chapter contains the design of data structure. The fourth chapter contains the description of algorithms along with stages for testing the interpreter and the design of debug output. The stages and actions-are represented internally to the computer in tabular forms. For statement parsing working syntax equations are established. They serve as standards for the conversion of source statements into object pseudocodes. As the statement is parsed for legal form, pseudocodes for this statement are created. For pseudocode execution, pseudocodes are represented internally to the computer in tabular forms.
Date: May 1975
Creator: Chang, Min-Jye S.

Generating Machine Code for High-Level Programming Languages

Description: The purpose of this research was to investigate the generation of machine code from high-level programming language. The following steps were undertaken: 1) Choose a high-level programming language as the source language and a computer as the target computer. 2) Examine all stages during the compiling of a high-level programming language and all data sets involved in the compilation. 3) Discover the mechanism for generating machine code and the mechanism to generate more efficient machine code from the language. 3) Construct an algorithm for generating machine code for the target computer. The results suggest that compiler is best implemented in a high-level programming language, and that SCANNER and PARSER should be independent of target representations, if possible.
Date: December 1976
Creator: Chao, Chia-Huei

Higher Compression from the Burrows-Wheeler Transform with New Algorithms for the List Update Problem

Description: Burrows-Wheeler compression is a three-stage process in which the data is transformed with the Burrows-Wheeler Transform, then transformed with Move-To-Front, and finally encoded with an entropy coder. Move-To-Front, Transpose, and Frequency Count are some of the many algorithms used on the List Update problem. In 1985, competitive analysis first showed the superiority of Move-To-Front over Transpose and Frequency Count for the List Update problem with arbitrary data. Earlier studies due to Bitner assumed independent identically distributed data, and showed that while Move-To-Front adapts to a distribution faster, incurring less overwork, the asymptotic costs of Frequency Count and Transpose are less. The improvements to Burrows-Wheeler compression this work covers are increases in the amount, not the speed, of compression. Best x of 2x-1 is a new family of algorithms created to improve on Move-To-Front's processing of the output of the Burrows-Wheeler Transform, which resembles piecewise independent identically distributed data. Other algorithms analyzed for overwork, asymptotic cost, and competitive ratio, for both the middle stage of Burrows-Wheeler compression and the List Update problem, are several variations of Move One From Front and part of the randomized algorithm Timestamp. The Best x of 2x-1 family includes Move-To-Front, the part of Timestamp of interest, and Frequency Count. Lastly, a greedy choosing scheme, Snake, switches back and forth as the amount of compression that two List Update algorithms achieve fluctuates, to increase overall compression. The Burrows-Wheeler Transform is based on sorting of contexts. The other improvements are better sorting orders, such as “aeioubcdf...” instead of standard alphabetical “abcdefghi...” on English text data, an algorithm for computing orders for any data, and Gray code sorting instead of standard sorting. Both techniques lessen the overwork incurred by whatever List Update algorithms are used by reducing the difference between adjacent sorted ...
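For reference, the standard Move-To-Front stage that Best x of 2x-1 generalizes; this shows plain MTF only, not the thesis's variants.

    def mtf_encode(data, alphabet):
        """Move-To-Front: emit each symbol's current position in the list,
        then move that symbol to the front.  On BWT output this yields many
        small numbers, which entropy coders compress well."""
        symbols = list(alphabet)
        codes = []
        for ch in data:
            i = symbols.index(ch)
            codes.append(i)
            symbols.insert(0, symbols.pop(i))
        return codes

    def mtf_decode(codes, alphabet):
        symbols = list(alphabet)
        out = []
        for i in codes:
            out.append(symbols[i])
            symbols.insert(0, symbols.pop(i))
        return "".join(out)

    codes = mtf_encode("bananaaa", "abn")
    assert mtf_decode(codes, "abn") == "bananaaa"
    print(codes)   # [1, 1, 2, 1, 1, 1, 0, 0]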
Date: August 2001
Creator: Chapin, Brenton

Video Analytics with Spatio-Temporal Characteristics of Activities

Description: As video capturing devices become more ubiquitous, from surveillance cameras to smart phones, the demand for automated video analysis is increasing as never before. One obstacle in this process is to efficiently locate where a human operator’s attention should be, and another is to determine the specific types of activities or actions without ambiguity. It is the special interest of this dissertation to locate spatial and temporal regions of interest in videos and to develop a better action representation for video-based activity analysis. This dissertation follows the scheme of “locating then recognizing” activities of interest in videos, i.e., locations of potentially interesting activities are estimated before performing in-depth analysis. Theoretical properties of regions of interest in videos are first exploited, based on which a unifying framework is proposed to locate both spatial and temporal regions of interest with the same settings of parameters. The approach estimates the distribution of motion based on 3D structure tensors, and locates regions of interest according to persistent occurrences of low probability. Two contributions are further made to better represent the actions. The first is to construct a unifying model of spatio-temporal relationships between reusable mid-level actions which bridge low-level pixels and high-level activities. Dense trajectories are clustered to construct mid-level actionlets, and the temporal relationships between actionlets are modeled as Action Graphs based on Allen interval predicates. The second is an effort for a novel and efficient representation of action graphs based on a sparse coding framework. Action graphs are first represented using Laplacian matrices and then decomposed as a linear combination of primitive dictionary items following a sparse coding scheme. The optimization is eventually formulated and solved as a determinant maximization problem, and 1-nearest neighbor is used for action classification. The experiments have shown better results than existing approaches for regions-of-interest detection and action ...
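A minimal sketch of the 3D structure tensor computation underlying the region-of-interest step, in a generic formulation with an assumed Gaussian smoothing scale; the probability modeling built on top of these tensors is not shown.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_tensor_3d(video, sigma=2.0):
        """Per-voxel 3D structure tensor of a video volume with axes (t, y, x):
        smoothed outer products of the spatio-temporal gradient.  The motion
        distribution estimated from these tensors drives the detection of
        low-probability (i.e., interesting) regions."""
        grads = np.gradient(video.astype(float))       # (gt, gy, gx)
        tensor = np.empty(video.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                tensor[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
        return tensor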
Date: May 2015
Creator: Cheng, Guangchun

Automatic Speech Recognition Using Finite Inductive Sequences

Description: This dissertation addresses the general problem of recognition of acoustic signals which may be derived from speech, sonar, or other acoustic phenomena. The specific problem of recognizing speech is the main focus of this research. The intention is to design a recognition system for a definite number of discrete words. For this purpose, eight isolated words from the TIMIT database are selected: four medium-length words, "greasy," "dark," "wash," and "water," and four short words, "she," "had," "in," and "all." The recognition system addresses the following issues: filtering or preprocessing, training, and decision-making. The preprocessing phase uses linear predictive coding of order 12. Following the filtering process, a vector quantization method is used to further reduce the input data and generate a finite inductive sequence of symbols representative of each input signal. The sequences generated by the vector quantization process for the same word are factored, and a single ruling or reference template is generated and stored in a codebook. This system introduces a new modeling technique which relies heavily on the basic concept that all finite sequences are finitely inductive. This technique is used in the training stage. In order to accommodate the variabilities in speech, the training is performed casually, and a large number of training speakers from eight different dialect regions is used. Hence, a speaker-independent recognition system is realized. The matching process compares the incoming speech with each of the templates stored, and a closeness ratio is computed. A ratio table is generated, and the matching word that corresponds to the smallest ratio (i.e., indicating that the ruling has removed most of the symbols) is selected. Promising results were obtained for isolated words, with recognition rates ranging between 50% and 100%.
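A rough sketch of a generic LPC-plus-vector-quantization front end of the kind described: order-12 LPC per frame, then a k-means codebook. The frame sizes, codebook size, and the librosa/scipy tooling are assumptions, and the finite-inductive factoring stage is not reproduced.

    import numpy as np
    import librosa
    from scipy.cluster.vq import kmeans2

    def lpc_symbol_sequence(y, n_symbols=32, frame_len=400, hop=160, order=12):
        """Reduce a speech signal to a finite sequence of codebook symbols:
        order-12 LPC coefficients per frame, vector-quantized with k-means."""
        y = np.asarray(y, dtype=float)
        frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
        coeffs = np.stack([librosa.lpc(np.ascontiguousarray(f), order=order)[1:]
                           for f in frames.T])
        codebook, labels = kmeans2(coeffs, n_symbols, minit="++", seed=0)
        return labels, codebook    # labels is the symbol sequence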
Date: August 1996
Creator: Cherri, Mona Youssef, 1956-

Performance Evaluation of MPLS on Quality of Service in Voice Over IP (VoIP) Networks

Description: The transmission of voice data over Internet Protocol (IP) networks is rapidly gaining acceptance in the field of networking. Most voice transmission in IP networks takes place in Internet telephony, also known as IP telephony or Voice over IP (VoIP). VoIP is undergoing many enhancements aimed at providing end users with the same quality as the public switched telephone network (PSTN). These enhancements mostly concern quality of service (QoS) for the transmission of voice data over IP networks. With recent developments in the networking field, various protocols have come to market to provide QoS in IP networks; of these, multiprotocol label switching (MPLS) is the most reliable and promising protocol for QoS. The problem addressed in this thesis is to develop an IP-based virtual network, with end hosts and routers, implement MPLS on the network, and analyze its QoS for voice data transmission.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Chetty, Sharath

Temporal Connectionist Expert Systems Using a Temporal Backpropagation Algorithm

Description: Representing time has been considered a general problem for artificial intelligence research for many years. More recently, the question of representing time has become increasingly important in representing the human decision-making process through connectionist expert systems. Because most human behaviors unfold over time, any attempt to represent expert performance without considering its temporal nature can often lead to incorrect results. A temporal feedforward neural network model that can be applied to a number of neural network application areas, including connectionist expert systems, has been introduced. The neural network model has a multi-layer structure, i.e., the number of layers is not limited. Also, the model has the flexibility of defining output nodes in any layer. This is especially important for connectionist expert system applications. A temporal backpropagation algorithm which supports the model has been developed. The model, along with the temporal backpropagation algorithm, makes it practical to define a wide range of artificial neural network applications. Also, an approach for decreasing the memory space used by the weight matrix has been introduced. The algorithm was tested using a medical connectionist expert system to show how best to describe not only a disease but also its entire course. The system was first trained using a pattern encoded from the expert system knowledge base rules. A series of experiments was then carried out using the temporal model and the temporal backpropagation algorithm. The first series of experiments was done to determine whether the training process worked as predicted. In the second series of experiments, the weight matrix in the trained system was defined as a function of time intervals before presenting the system with the learned patterns. The results of the two experiments indicate that both approaches produce correct results. The only difference between the two results ...
Date: December 1993
Creator: Civelek, Ferda N. (Ferda Nur)

Modeling the Impact and Intervention of a Sexually Transmitted Disease: Human Papilloma Virus

Description: Many human papilloma virus (HPV) types are sexually transmitted, and HPV DNA types 16, 18, 31, and 45 account for more than 75% of all cervical dysplasia. Candidate vaccines are successfully completing US Food and Drug Administration (FDA) phase III testing, and several drug companies are in licensing arbitration. Once a vaccine becomes available, it is unlikely that 100% vaccination coverage will be achievable; hence the need for vaccination strategies that will have the greatest reduction in the endemic prevalence of HPV. This thesis introduces two discrete-time models for evaluating the effect of demographic-biased vaccination strategies: one model incorporates temporal demographics (i.e., age) in population compartments; the other incorporates non-temporal demographics (e.g., race, ethnicity). Also presented is an intuitive Web-based interface that was developed to allow the user to evaluate the effects of a demographic-biased intervention on prevalence by tailoring the model parameters to specific demographics and geographical regions.
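An illustrative discrete-time sketch of how vaccination coverage lowers endemic prevalence: a bare SIS-style model with made-up parameters, far simpler than the demographic-structured models in the thesis.

    def endemic_prevalence(steps=2000, beta=0.6, gamma=0.1,
                           coverage=0.3, efficacy=0.9):
        """Discrete-time SIS-style sketch: vaccination removes a fraction of
        susceptibles up front; iterate infection/recovery to near-equilibrium.
        All parameter values here are illustrative."""
        i = 0.01                                   # initial infected fraction
        s = (1.0 - i) * (1.0 - coverage * efficacy)
        for _ in range(steps):
            new_infections = beta * s * i          # mass-action transmission
            recoveries = gamma * i
            s += recoveries - new_infections
            i += new_infections - recoveries
        return i

    for c in (0.0, 0.3, 0.6):
        print(f"coverage {c:.0%}: prevalence {endemic_prevalence(coverage=c):.3f}")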
Date: May 2006
Creator: Corley, Courtney D.

An Approach Towards Self-Supervised Classification Using Cyc

Description: Due to the long duration required to perform manual knowledge entry by human knowledge engineers, it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles, the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed, along with future work.
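A minimal sketch of the per-category classifier training described above, using standard scikit-learn Naive Bayes. Here the positive and negative article sets are passed in directly, whereas in the system they are derived from the Cyc ontology.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def train_category_classifier(positive_texts, negative_texts):
        """Train one Naive Bayes classifier for a single ontology category
        from positive and negative example articles."""
        model = make_pipeline(CountVectorizer(stop_words="english"),
                              MultinomialNB())
        texts = list(positive_texts) + list(negative_texts)
        labels = [1] * len(positive_texts) + [0] * len(negative_texts)
        model.fit(texts, labels)
        return model

    clf = train_category_classifier(
        ["the striker scored a late goal", "the match ended in a draw"],
        ["the senate passed the bill", "quarterly profits rose sharply"])
    print(clf.predict(["a goal decided the match"]))   # -> [1]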
Date: December 2006
Creator: Coursey, Kino High

A Parallel Programming Language

Description: The problem of programming a parallel processor is discussed. Previous methods of programming a parallel processor, analyzing a program for parallel paths, and special language features are discussed. Graph theory is used to define the three basic programming constructs: choice, sequence, repetition. The concept of mechanized programming is expanded to allow for total separation of control and computational sections of a program. A definition of a language is presented which provides for this separation. A method for developing the program graph is discussed. The control graph and data graph are developed separately. The two graphs illustrate control and data predecessor relationships used in determining parallel elements of a program.
Date: May 1979
Creator: Cox, Richard D.

DADS - A Distributed Agent Delivery System

Description: Mobile agents require an appropriate platform that can facilitate their migration and execution. In particular, the design and implementation of such a system must balance several factors to ensure that its constituent agents are executed without problems. Besides the basic requirements of migration and execution, an agent system must also provide mechanisms to ensure the security and survivability of an agent when it migrates between hosts. In addition, the system should be simple enough to facilitate its widespread use across large-scale networks (i.e., the Internet). To address these issues, this thesis discusses the design and implementation of the Distributed Agent Delivery System (DADS). The DADS provides a de-coupled design that separates agent acceptance from agent execution. Using functional modules, the DADS provides services ranging from language execution and security to fault tolerance and compression. Modules allow the administrators of hosts to declare, at run time, the services that they want to provide. Since each administrative domain is different, the DADS provides a platform that can be adapted to exchange heterogeneous blends of agents across large-scale networks.
Date: December 2002
Creator: Cozzolino, Clifford Joseph

Information Storage and Retrieval Systems

Description: This thesis describes the implementation of a general purpose personal information storage and retrieval system. Chapter one contains an introduction to information storage and retrieval. Chapter two contains a description of the features a useful personal information retrieval system should contain. This description forms the basis for the implementation of the personal information storage and retrieval system described in chapter three. The system is implemented in UCSD Pascal on an Apple II microcomputer.
Date: May 1983
Creator: Creech, Teresa Adams

Keywords in the mist: Automated keyword extraction for very large documents and back of the book indexing.

Description: This research addresses the problem of automatic keyphrase extraction from large documents and back-of-the-book indexing. The potential benefits of automating this process are far-reaching, from improving information retrieval in digital libraries to saving countless man-hours by helping professional indexers create back-of-the-book indexes. The dissertation introduces a new methodology to evaluate automated systems, which allows for a detailed, comparative analysis of several techniques for keyphrase extraction. We introduce and evaluate both supervised and unsupervised techniques, designed to balance the resource requirements of an automated system against the best achievable performance. Additionally, a number of novel features are proposed, including a statistical informativeness measure based on chi statistics; an encyclopedic feature that taps into the vast knowledge base of Wikipedia to establish the likelihood of a phrase referring to an informative concept; and a linguistic feature based on sophisticated semantic analysis of the text using current theories of discourse comprehension. The resulting keyphrase extraction system is shown to outperform the current state of the art in supervised keyphrase extraction by a large margin. Moreover, a fully automated back-of-the-book indexing system based on the keyphrase extraction system was shown to produce back-of-the-book indexes closely resembling those created by human experts.
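A small sketch of one plausible reading of the chi-statistics informativeness feature: a 2x2 Pearson chi-square comparing a phrase's document frequency against a background corpus. The actual feature definition is in the dissertation.

    def chi_square_informativeness(phrase_doc, other_doc, phrase_bg, other_bg):
        """Pearson chi-square over a 2x2 contingency table: occurrences of a
        phrase vs. all other tokens, in the document vs. a background corpus.
        High scores flag phrases unusually frequent in the document."""
        table = [[phrase_doc, other_doc], [phrase_bg, other_bg]]
        total = sum(sum(row) for row in table)
        chi2 = 0.0
        for i in range(2):
            for j in range(2):
                expected = sum(table[i]) * sum(row[j] for row in table) / total
                chi2 += (table[i][j] - expected) ** 2 / expected
        return chi2

    # A phrase seen 12 times in a 2,000-token chapter but only 30 times in a
    # 1,000,000-token background corpus scores far above chance.
    print(chi_square_informativeness(12, 1988, 30, 999970))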
Date: May 2008
Creator: Csomai, Andras