UNT Libraries - 234 Matching Results

Search Results

Study and Sample Implementation of the Secure Shell Protocol (SSH)

Description: Security is one of the main concerns of users who need to connect to a remote computer for various purposes, such as checking e-mail or viewing files. However, in today's computer networks, privacy and delivery only to the intended client are not guaranteed. If data is transmitted over the Internet or a local network as plain text, it may be captured and viewed by anyone with little technical knowledge. This may include sensitive data such as passwords. Large businesses counter this with firewalls, virtual private networks, and encrypted transmissions, all at high cost. The Secure Shell protocol (SSH) provides an answer to this. SSH is a software protocol for secure communication over an insecure network. SSH not only offers authentication of hosts but also encrypts the sessions between the client and the server, and it is transparent to the end user. This Problem in Lieu of Thesis studies SSH, creates a sample secure client and server that follow the protocol, and examines their performance.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2003
Creator: Subramanyam, Udayakiran
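A minimal sketch of the kind of secure, transparent remote session the abstract above describes, using the third-party Python library paramiko rather than the thesis's own client and server implementation; the host name and credentials are placeholders.

```python
# Illustrative only: paramiko stands in for the thesis's sample SSH client.
# Host, user name, and password are hypothetical placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in practice
client.connect("remote.example.edu", username="user", password="secret")

# The session is authenticated and encrypted transparently to the end user.
stdin, stdout, stderr = client.exec_command("ls -l")
print(stdout.read().decode())
client.close()
```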

User Modeling Tools for Virtual Architecture

Description: As the use of virtual environments (VEs) becomes more widespread, user needs are becoming a more significant part of those environments. In order to adapt to the needs of the user, a system should be able to infer user interests and goals. I developed an architecture for user modeling that understands users' interests in a VE by monitoring their actions. In this paper, I discuss the architecture and the virtual environment that was created to test it. The architecture employs sensors that keep track of all the users' actions, data structures that store a record of significant events that have occurred in the environment, and a rule base. The rule base continually monitors the data collected from the sensors, the world state, and the event history in order to update the user goal inferences. These inferences can then be used to modify the flow of events within a VE.
Date: May 2003
Creator: Uppuluri, Raja

Voting Operating System (VOS)

Description: The electronic voting machine (EVM) plays a very important role in any country where government officials are elected into office. Throughout the world, there is no operating system that tends to the specific requirements of the EVM. Existing EVM technology depends upon the various operating systems currently available, thus ignoring the basic needs of the system. The basic requirements are compromised in order to develop the systems on the basis of an already available operating system, leaving considerable scope for error. It is necessary to know the specific details of the particular device for which the operating system is being developed. In this document, I evaluate existing EVMs and identify flaws and shortcomings. I propose a solution for a new operating system that meets the specific requirements of the EVM, calling it Voting Operating System (VOS, pronounced 'voice'). The identification technique can be simplified by using fingerprint technology, which determines the identity of a person based on two fingerprints. I also discuss the various parts of the operating system that have to be implemented to meet all the basic requirements of an EVM, including the implementation of the memory manager, process manager, and file system of the proposed operating system.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2004
Creator: Venkatadusumelli, Kiran

Refactoring FrameNet for Efficient Relational Queries

Description: The FrameNet database is being used in a variety of NLP research and applications such as word sense disambiguation, machine translation, information extraction, and question answering. The database is currently available in XML format. The XML database, though a complete way of distributing the data in its entirety, is not practical to use unless it is converted to a more application-friendly database. In light of this, we have successfully converted the XML database to a relational MySQL™ database. This conversion reduced the required data storage to less than half. Most importantly, the new database enables fast, complex querying and facilitates use by applications and research. We show the steps taken to ensure relational integrity of the data during the refactoring process and present a simple demo application demonstrating ease of use.
Date: December 2003
Creator: Ahmad, Zeeshan Asim
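As a sketch of the refactoring idea described in the abstract above, the snippet below loads a tiny, made-up FrameNet-style XML fragment into relational tables and runs a join against them; sqlite3 stands in for MySQL so the example is self-contained, and the two-table schema is an illustrative assumption rather than the thesis's actual schema.

```python
# Toy XML-to-relational conversion: parse frame elements from a simplified,
# hypothetical FrameNet-style snippet and load them into relational tables.
import sqlite3
import xml.etree.ElementTree as ET

XML = """<frames>
  <frame id="1" name="Motion"><fe name="Theme"/><fe name="Path"/></frame>
  <frame id="2" name="Commerce_buy"><fe name="Buyer"/><fe name="Goods"/></frame>
</frames>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE frame (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE frame_element (frame_id INTEGER, name TEXT)")

for frame in ET.fromstring(XML):
    db.execute("INSERT INTO frame VALUES (?, ?)", (frame.get("id"), frame.get("name")))
    for fe in frame:
        db.execute("INSERT INTO frame_element VALUES (?, ?)", (frame.get("id"), fe.get("name")))

# A relational join of the kind that is awkward to express against raw XML.
print(db.execute("""SELECT f.name, fe.name FROM frame f
                    JOIN frame_element fe ON fe.frame_id = f.id""").fetchall())
```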

Web-based airline ticket booking system.

Description: An online airline ticket booking system is one of the essential applications of e-commerce. With the development of Internet and security technology, more and more people are buying online, which is more convenient and personal than the traditional way. The goal of this system is to let people purchase airline tickets easily. The system is written in Java™. Chapter 1 introduces some basic concepts of the technologies used in this system. Chapter 2 shows how the database and the system are designed. Chapter 3 describes the logic of the Web site. Chapter 4 presents the interface of the system. Chapter 5 describes the platform on which the system runs.
Date: August 2004
Creator: Yu, Jianming

Web Services for Libraries

Description: Library information systems use different software applications and automated systems to gain access to distributed information. Rapid application development, changes made to existing software applications, and development of new software on different platforms can make it difficult for library information systems to interoperate. Web services are used to offer better information access and retrieval solutions, making them more cost effective for libraries. This research focuses on how web services are implemented with standard protocols such as SOAP, WSDL, and UDDI using different programming languages and platforms to achieve interoperability for libraries. It also shows how libraries can make use of this new technology. Since web services built on different platforms can interact with each other, libraries can access information with more efficiency and flexibility.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2003
Creator: Manikonda, Sunil Prasad
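For readers unfamiliar with the protocols named in the abstract above, the snippet below builds a SOAP 1.1 request envelope for a hypothetical library lookup operation; the service namespace, operation, and element names are illustrative assumptions, and in practice the envelope would be POSTed to the endpoint advertised in the service's WSDL.

```python
# Hypothetical SOAP 1.1 request for a made-up library lookup service.
# Only the envelope structure and the soap namespace are standard; the
# operation, element names, and ISBN value are illustrative assumptions.
SOAP_REQUEST = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <LookupItem xmlns="http://example.org/library">
      <ISBN>0-00-000000-0</ISBN>
    </LookupItem>
  </soap:Body>
</soap:Envelope>"""

print(SOAP_REQUEST)  # would be sent as the body of an HTTP POST (Content-Type: text/xml)
```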

Evaluation of MPLS Enabled Networks

Description: Recent developments in the Internet have inspired a wide range of business and consumer applications. The deployment of multimedia-based services has driven the demand for increased and guaranteed bandwidth over the network. The diverse requirements of the wide range of users demand differentiated classes of service and quality assurance. The new technology of Multiprotocol Label Switching (MPLS) has emerged as a high-performance, reliable option for addressing these challenges, along with additional features that were not addressed before. This problem in lieu of thesis describes how the new paradigm of MPLS is advantageous over the conventional architecture. The motivation for this paradigm is discussed in the first part, followed by a detailed description of the new architecture. The information flow, the underlying protocols, and the MPLS extensions to some of the traditional protocols are then discussed, followed by a description of the simulation. The simulation results are used to show the advantages of the proposed technology.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2003
Creator: Ratnakaram, Archith

Server load balancing.

Description: Server load balancing technology has attracted much attention as business has moved toward e-commerce. The idea is to have a set of clustered servers share the load, as opposed to a single server, to achieve better performance and throughput. In this problem in lieu of thesis, I propose and evaluate an implementation of a prototype scalable server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether the connection should be redirected to a different host, namely the host with the lowest number of established connections. This problem in lieu of thesis outlines the history of load balancing, the various options available today, and finally the approach taken to implement the prototype and the corresponding findings.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Kanuri, Jaichandra
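A minimal sketch of the redirection decision described in the abstract above: an incoming connection may be handed to the cluster host with the fewest established connections. The host names and counts are assumed example values, and real connection tracking is omitted.

```python
# Least-connections redirection decision, as sketched in the abstract.
def pick_least_loaded(connection_counts):
    """Return the host with the lowest number of established connections."""
    return min(connection_counts, key=connection_counts.get)

def should_redirect(receiving_host, connection_counts):
    """Redirect only when some other host is strictly less loaded."""
    target = pick_least_loaded(connection_counts)
    if target != receiving_host and connection_counts[target] < connection_counts[receiving_host]:
        return target
    return None

if __name__ == "__main__":
    counts = {"hostA": 12, "hostB": 7, "hostC": 9}   # assumed example values
    print(should_redirect("hostA", counts))          # -> hostB
```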

Machine Language Techniques for Conversational Agents

Description: Machine learning is the ability of a machine to perform better at a given task using its previous experience. Various algorithms such as decision trees, Bayesian learning, artificial neural networks, and instance-based learning algorithms are used widely in machine learning systems. Current applications of machine learning include credit card fraud detection, customer service based on the history of purchased products, games, and many more. The application of machine learning techniques to natural language processing (NLP) has increased tremendously in recent years; examples are handwriting recognition and speech recognition. The problem we tackle in this Problem in Lieu of Thesis is applying machine-learning techniques to improve the performance of a conversational agent. The OpenMind repository of common sense, in the form of question-answer pairs, is treated as the training data for the machine learning system. The system interfaces with WordNet to capture important semantic and syntactic information about the words in the sentences. Further, the k-closest neighbors algorithm, an instance-based learning algorithm, is used to simulate a case-based learning system. The resulting system is expected to be able to answer new queries with knowledge gained from the training data it was fed.
Date: December 2003
Creator: Sule, Manisha D.
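The sketch below illustrates the k-closest-neighbors idea from the abstract above on three made-up question-answer pairs, using a crude bag-of-words overlap in place of the WordNet-informed features the thesis describes.

```python
# Toy k-nearest-neighbour QA: answer a new query by retrieving the answers
# of the most similar stored questions. The QA pairs and the similarity
# measure are illustrative assumptions.
from collections import Counter

QA_PAIRS = [  # hypothetical training pairs
    ("what color is the sky", "The sky is blue."),
    ("what do cows drink", "Cows drink water."),
    ("where does the sun rise", "The sun rises in the east."),
]

def similarity(a, b):
    """Crude bag-of-words overlap between two sentences."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values())

def answer(query, k=1):
    ranked = sorted(QA_PAIRS, key=lambda qa: similarity(query, qa[0]), reverse=True)
    return [ans for _, ans in ranked[:k]]

print(answer("what colour is the sky"))  # -> ['The sky is blue.']
```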

Automatic Software Test Data Generation

Description: In software testing, it is often desirable to find test inputs that exercise specific program features. Finding these inputs manually is extremely time consuming, especially when the software being tested is complex. Therefore, there have been numerous attempts to automate this process. Random test data generation consists of generating test inputs at random, in the hope that they will exercise the desired software features. Often the desired inputs must satisfy complex constraints, and this makes a random approach seem unlikely to succeed. In contrast, combinatorial optimization techniques, such as those using genetic algorithms, are meant to solve difficult problems involving the simultaneous satisfaction of many constraints.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2002
Creator: Munugala, Ajay Kumar
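A minimal sketch of the combinatorial-optimization alternative mentioned in the abstract above: a toy genetic-style search evolves an integer input until it reaches a hard-to-hit branch. The target function, fitness measure, and search parameters are all illustrative assumptions.

```python
# Toy search-based test data generation: evolve an input that reaches the
# "hard branch" of target(). Everything here is an illustrative assumption.
import random

def target(x):
    return "hard branch" if 4_000 <= x <= 4_005 else "other branch"

def fitness(x):
    """Distance from the desired branch; 0 means the branch is reached."""
    if x < 4_000:
        return 4_000 - x
    if x > 4_005:
        return x - 4_005
    return 0

def evolve(pop_size=50, generations=300):
    population = [random.randint(0, 10_000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            return population[0]                      # found a satisfying input
        survivors = population[: pop_size // 2]
        children = [random.choice(survivors) + random.randint(-100, 100)
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population[0]                              # best effort

x = evolve()
print(x, target(x))
```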

An Algorithm for the PLA Equivalence Problem

Description: The Programmable Logic Array (PLA) has been widely used in the design of VLSI circuits and systems because of its regularity, flexibility, and simplicity. The equivalence problem is typically to verify that the final description of a circuit is functionally equivalent to its initial description. Verifying the functional equivalence of two descriptions is equivalent to proving their logical equivalence. This problem of pure logic is essential to circuit design. The most widely used technique to solve the problem is based on the Binary Decision Diagram, or BDD, proposed by Bryant in 1986. Unfortunately, BDDs require too much time and space to represent moderately large circuits for equivalence testing. We design and implement a new algorithm, called the Cover-Merge Algorithm, for the equivalence problem based on a divide-and-conquer strategy using the concept of cover and a derivational method. We prove that the algorithm is sound and complete. Because of the NP-completeness of the problem, we emphasize simplifications to reduce the search space or to avoid redundant computations. Simplification techniques are incorporated into the algorithm as an essential part of speeding up the derivation process. Two different sets of heuristics are developed for two opposite goals: one for the proof of equivalence and the other for its disproof. Experiments on large data sets have shown that significant speed-ups can be achieved by prioritizing the heuristics and by choosing the most favorable one at each iteration of the algorithm. Results are compared with those for BDD on standard benchmark problems as well as on random PLAs to provide an unbiased test of the algorithms. It has been shown that the Cover-Merge Algorithm outperforms BDD in nearly all problem instances in terms of time and space. The algorithm has demonstrated fairly stable and practical performance, especially for big PLAs under a wide ...
Date: December 1995
Creator: Moon, Gyo Sik
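As a reference point for the equivalence problem discussed in the abstract above, the sketch below brute-forces the truth tables of two small sum-of-products covers; it is neither BDD-based nor the Cover-Merge Algorithm, and it only scales to a handful of inputs.

```python
# Reference definition of PLA (sum-of-products) equivalence by exhaustive
# truth-table comparison. Cubes are dicts mapping a variable to the literal
# value (1 or 0) it requires; don't-care variables are simply omitted.
from itertools import product

def covers(term, assignment):
    """A cube such as {'a': 1, 'b': 0} covers an assignment if it agrees on every literal."""
    return all(assignment[var] == val for var, val in term.items())

def evaluates(cover, assignment):
    return any(covers(term, assignment) for term in cover)

def equivalent(cover1, cover2, variables):
    return all(evaluates(cover1, a) == evaluates(cover2, a)
               for a in (dict(zip(variables, bits))
                         for bits in product((0, 1), repeat=len(variables))))

# a.b + a.b'  versus  a : equivalent
pla1 = [{"a": 1, "b": 1}, {"a": 1, "b": 0}]
pla2 = [{"a": 1}]
print(equivalent(pla1, pla2, ["a", "b"]))  # -> True
```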

Convexity-Preserving Scattered Data Interpolation

Description: Surface fitting methods play an important role in many scientific fields as well as in computer aided geometric design. The problem treated here is that of constructing a smooth surface that interpolates data values associated with scattered nodes in the plane. The data is said to be convex if there exists a convex interpolant. The problem of convexity-preserving interpolation is to determine if the data is convex, and construct a convex interpolant if it exists.
Date: December 1995
Creator: Leung, Nim Keung
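To make the notion of "convex data" in the abstract above concrete in the simplest setting, the sketch below checks the one-dimensional analogue: data points with distinct abscissae admit a convex interpolant exactly when their chord slopes are non-decreasing. The bivariate scattered-data case treated in the dissertation is substantially harder.

```python
# One-dimensional convexity test: data (x_i, z_i) with distinct x_i admit a
# convex interpolant iff the chord slopes are non-decreasing in x.
def admits_convex_interpolant(points):
    points = sorted(points)                       # sort by x
    slopes = [(z2 - z1) / (x2 - x1)
              for (x1, z1), (x2, z2) in zip(points, points[1:])]
    return all(s1 <= s2 for s1, s2 in zip(slopes, slopes[1:]))

print(admits_convex_interpolant([(0, 0), (1, 1), (2, 4), (3, 9)]))   # True
print(admits_convex_interpolant([(0, 0), (1, 2), (2, 1), (3, 9)]))   # False
```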

Study of Parallel Algorithms Related to Subsequence Problems on the Sequent Multiprocessor System

Description: The primary purpose of this work is to study, implement, and analyze the performance of parallel algorithms related to subsequence problems. The problems include the string-to-string correction problem, the longest common subsequence problem, and the sum-range-product, 1-D pattern matching, longest non-decreasing (non-increasing) subsequence (LNS), and maximum positive subsequence (MPS) problems. The work also includes studying the techniques and issues involved in developing parallel applications. These algorithms are implemented on the Sequent Multiprocessor System. The subsequence problems have been defined, along with the performance metrics that are utilized. The sequential and parallel algorithms have been summarized, and the implementation issues that arise in the process of developing parallel applications have been identified and studied.
Date: August 1994
Creator: Pothuru, Surendra
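For readers unfamiliar with the problems listed in the abstract above, the sketch below shows the sequential dynamic-programming core of one of them, the longest common subsequence; the thesis's contribution is parallel versions of such algorithms on the Sequent, which this single-threaded sketch does not reproduce.

```python
# Standard O(m*n) dynamic program for the length of the longest common
# subsequence of two strings.
def lcs_length(a, b):
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n]

print(lcs_length("parallel", "sequential"))  # -> 2 (e.g., the subsequence "al")
```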

A Machine Learning Method Suitable for Dynamic Domains

Description: The efficacy of a machine learning technique is domain dependent. Some machine learning techniques work very well for certain domains but are ill-suited for other domains. One area of real-world concern is the flexibility with which machine learning techniques can adapt to dynamic domains. Currently, there are no known reports of any system that can learn dynamic domains, short of starting over (i.e., re-running the program). Starting over is neither time nor cost efficient for real-world production environments. This dissertation studied a method, referred to as Experience Based Learning (EBL), that attempts to deal with the conditions involved in learning dynamic domains. EBL is an extension of instance-based learning methods. The hypothesis of this research was that the EBL method would automatically adjust to domain changes and still provide classification accuracy similar to that of methods that require starting over. To test this hypothesis, twelve widely studied machine learning datasets were used. A dynamic domain was simulated by presenting these datasets in an uninterrupted cycle of train, test, and retrain. The order of the twelve datasets and the order of records within each dataset were randomized to control for order biases in each of ten runs. As a result, these methods provided datasets that represent extreme levels of domain change. Using the above datasets, EBL's mean classification accuracies for each dataset were compared to the published static-domain results of other machine learning systems. The results indicated that the EBL system's performance was not statistically different (p>0.30) from that of the other machine learning methods. These results indicate that the EBL system is able to adjust to an extreme level of domain change and still produce satisfactory results. This finding supports the use of the EBL method in real-world environments that incur rapid changes to both variables and ...
Date: July 1996
Creator: Rowe, Michael C. (Michael Charles)
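A minimal sketch of the "no starting over" property discussed in the abstract above, using a plain one-nearest-neighbour instance store as a greatly simplified stand-in for the EBL method; the data points and labels are made up.

```python
# An instance-based learner whose model is just its stored experience, so
# adapting to a changed domain only means adding the new instances.
import math

class InstanceLearner:
    def __init__(self):
        self.memory = []                        # (features, label) pairs

    def learn(self, features, label):           # incremental update, no retraining
        self.memory.append((features, label))

    def classify(self, features):
        _, label = min(self.memory, key=lambda m: math.dist(m[0], features))
        return label

learner = InstanceLearner()
learner.learn((0.0, 0.0), "old-domain")
learner.learn((1.0, 1.0), "old-domain")
learner.learn((5.0, 5.0), "new-domain")          # domain change: just add experience
print(learner.classify((4.5, 5.2)))              # -> new-domain
```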

Practical Cursive Script Recognition

Description: This research focused on the off-line cursive script recognition application. The problem is very large and difficult and there is much room for improvement in every aspect of the problem. Many different aspects of this problem were explored in pursuit of solutions to create a more practical and usable off-line cursive script recognizer than is currently available.
Date: August 1995
Creator: Carroll, Johnny Glen, 1953-

Quantifying Design Principles in Reusable Software Components

Description: Software reuse can occur at various places in the software development cycle. Reuse of existing source code is the most commonly practiced form of software reuse. One of the key requirements for software reuse is readability, hence the interest in the use of data abstraction, inheritance, modularity, and aspects of the visible portion of module specifications. This research analyzed the contents of software reuse libraries to answer the basic question of what makes a good reusable software component. The approach taken was to measure and analyze various software metrics as mapped to design characteristics. A related research question investigated the change in design principles over time, measured by comparing sets of Ada reuse libraries categorized into two time periods. It was discovered that recently developed Ada reuse components scored better on readability than earlier developed components. A benefit of this research has been the development of a set of "design for reuse" guidelines, which address coding practices as well as design principles for an Ada implementation. C++ software reuse libraries were also analyzed to determine whether design principles can be applied in a language-independent fashion. This research used cyclomatic complexity metrics, software science metrics, and traditional static code metrics to measure design features. This research provides at least three original contributions. First, it collects empirical data about existing reuse libraries. Second, it develops a readability measure for software libraries which can aid in comparing libraries. Third, it develops a set of coding and design guidelines for developers of reusable software. Future research can investigate how design principles for C++ change over time. Another topic for research is the investigation of systems employing reused components to determine which libraries are more successfully used than others.
Date: December 1995
Creator: Moore, Freeman Leroy
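The snippet below illustrates one of the metric families named in the abstract above, cyclomatic complexity, computed crudely for a Python function as one plus the number of decision points; the thesis measured Ada and C++ components, so this is only a rough illustration of the kind of static measurement involved.

```python
# Crude cyclomatic-complexity estimate: 1 + number of decision nodes in the
# abstract syntax tree (each boolean operator is counted once here, which is
# a simplification of McCabe's original definition).
import ast

def cyclomatic_complexity(source):
    decisions = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(ast.parse(source)))

SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10 and x % 2 == 0:
            return "big even"
    return "other"
"""
print(cyclomatic_complexity(SAMPLE))  # -> 5 with this crude count
```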

The Design and Implementation of a Prolog Parser Using Javacc

Description: Operatorless Prolog text is LL(1) in nature, and any standard LL parser generator tool can be used to parse it. However, Prolog text that conforms to the ISO Prolog standard allows the definition of dynamic operators. Since Prolog operators can be defined at run time, operator symbols are not present in the grammar rules of the language. Unless the parser generator allows for some flexibility in the specification of the grammar rules, it is very difficult to generate a parser for such text. In this thesis we discuss the existing parsing methods and their modified versions for parsing languages with dynamic operator capabilities. Implementation details of a parser that uses Javacc as the parser generator tool to parse standard Prolog text are provided. The output of the parser is an “Abstract Syntax Tree” that reflects the correct precedence and associativity rules among the various operators (static and dynamic) of the language. Empirical results show that a Prolog parser generated by a parser generator like Javacc is comparable in efficiency to a hand-coded parser.
Date: August 2002
Creator: Gupta, Pankaj
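A minimal sketch of the core difficulty described in the abstract above: when operators live in a run-time table (as with Prolog's op/3) rather than in the grammar, the parser must consult that table while building the tree. The tiny precedence-climbing parser below does so for a few assumed operator definitions; it handles precedence and left associativity only and omits the full ISO operator-type checks.

```python
# Run-time operator table, loosely mirroring Prolog op/3 declarations.
# Priorities and types are assumed for illustration; as in Prolog, a lower
# priority binds tighter.
OPS = {
    "=": (700, "xfx"),
    "+": (500, "yfx"),
    "*": (400, "yfx"),
}

def parse(tokens, max_prec=1200):
    """Precedence-climbing parse of a flat token list into a nested tuple."""
    left, rest = tokens[0], tokens[1:]
    while rest and rest[0] in OPS:
        prec, _type = OPS[rest[0]]
        if prec > max_prec:
            break
        op, rest = rest[0], rest[1:]
        right, rest = parse(rest, prec - 1)   # right argument must bind tighter
        left = (op, left, right)
    return left, rest

tree, _ = parse("a = b + c * d".split())
print(tree)   # ('=', 'a', ('+', 'b', ('*', 'c', 'd')))
```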

Generating Machine Code for High-Level Programming Languages

Description: The purpose of this research was to investigate the generation of machine code from a high-level programming language. The following steps were undertaken: 1) Choose a high-level programming language as the source language and a computer as the target computer. 2) Examine all stages during the compiling of a high-level programming language and all data sets involved in the compilation. 3) Discover the mechanism for generating machine code and the mechanism to generate more efficient machine code from the language. 4) Construct an algorithm for generating machine code for the target computer. The results suggest that a compiler is best implemented in a high-level programming language, and that the SCANNER and PARSER should be independent of target representations, if possible.
Date: December 1976
Creator: Chao, Chia-Huei
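A minimal sketch of the code-generation step the study above concerns: walking a tiny expression AST (the kind of structure a PARSER produces) and emitting instructions for a hypothetical stack machine. The instruction names and AST shape are illustrative assumptions, not the study's actual target computer.

```python
# Generate hypothetical stack-machine instructions from a small expression
# AST given as nested tuples: (operator, left, right) or an integer literal.
def gen(node, out):
    if isinstance(node, int):
        out.append(f"PUSH {node}")
    else:
        op, left, right = node
        gen(left, out)
        gen(right, out)
        out.append({"+": "ADD", "*": "MUL"}[op])
    return out

# Code for (2 + 3) * 4
for instr in gen(("*", ("+", 2, 3), 4), []):
    print(instr)   # PUSH 2 / PUSH 3 / ADD / PUSH 4 / MUL
```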

An Empirical Study of How Novice Programmers Use the Web

Description: Students often use the web as a source of help for problems that they encounter on programming assignments. In this work, we seek to understand how students use the web to search for help on their assignments. We used a mixed methods approach with 344 students who completed a survey and 41 students who participated in focus group meetings and helped record data about their search habits. The survey reveals data about students' reported search habits, while the focus groups used a web browser plug-in to record actual search patterns. We examine the results collectively and as broken down by class year. Survey results show that at least 2/3 of the students from each class year rely on search engines to locate resources for help with their programming bugs in at least half of their assignments; search habits vary by class year; and the value of different types of resources, such as tutorials and forums, varies by class year. Focus group results expose the high-frequency web sites used by the students in solving their programming assignments.
Date: May 2016
Creator: Tula, Naveen

Learning from small data set for object recognition in mobile platforms.

Description: Have you ever stood at a door with a bunch of keys and tried to find the right one to unlock it? Have you ever held a flower and wondered its name? A need for object recognition can arise anytime and anywhere in our daily lives. With the development of mobile devices, object recognition applications can provide immediate assistance. However, performing complex tasks on even the most advanced mobile platforms still faces great challenges due to limited computing resources and computing power. In this thesis, we present an object recognition system that resides and executes within a mobile device and can efficiently extract image features and perform learning and classification. To account for the computing constraints, a novel feature extraction method that minimizes the data size and maintains data consistency is proposed. The system leverages the principal component analysis method and is able to update the trained classifier when new examples become available. Our system relieves users from having to create a large number of examples and is user friendly. The experimental results demonstrate that a learning method trained with a very small number of examples can achieve recognition accuracy above 90% in various acquisition conditions. In addition, the system is able to perform learning efficiently.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Liu, Siyuan
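The sketch below mirrors the pipeline described in the abstract above on synthetic data: project feature vectors onto a few principal components and classify with a nearest-neighbour rule trained on only ten examples. The random "images", the component count, and the classifier are illustrative assumptions; the thesis's actual feature extraction and incremental update scheme are not reproduced.

```python
# PCA-reduced nearest-neighbour classification from a handful of examples.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10, 256))        # 10 synthetic feature vectors
y_train = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
X_train[y_train == 1] += 2.0                # make the two classes separable

# PCA via SVD on the mean-centred training data.
mean = X_train.mean(axis=0)
_, _, vt = np.linalg.svd(X_train - mean, full_matrices=False)
components = vt[:5]                         # keep 5 principal components

def project(x):
    return (x - mean) @ components.T

def classify(x):
    distances = np.linalg.norm(project(X_train) - project(x), axis=1)
    return y_train[np.argmin(distances)]

test = rng.normal(size=256) + 2.0           # looks like class 1
print(classify(test))                        # expected: 1
```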

A Parallel Programming Language

Description: The problem of programming a parallel processor is discussed. Previous methods of programming a parallel processor, analyzing a program for parallel paths, and special language features are discussed. Graph theory is used to define the three basic programming constructs: choice, sequence, repetition. The concept of mechanized programming is expanded to allow for total separation of control and computational sections of a program. A definition of a language is presented which provides for this separation. A method for developing the program graph is discussed. The control graph and data graph are developed separately. The two graphs illustrate control and data predecessor relationships used in determining parallel elements of a program.
Date: May 1979
Creator: Cox, Richard D.

An Interpreter for the Basic Programming Language

Description: In this thesis, the first chapter provides a general description of the interpreter. The second chapter contains a formal definition of the syntax of BASIC along with an introduction to its semantics. The third chapter contains the design of the data structures. The fourth chapter contains the description of the algorithms, along with the stages for testing the interpreter and the design of the debug output. The stages and actions are represented internally to the computer in tabular form. For statement parsing, working syntax equations are established; they serve as standards for the conversion of source statements into object pseudocodes. As a statement is parsed for legal form, pseudocodes for that statement are created. For pseudocode execution, the pseudocodes are represented internally to the computer in tabular form.
Date: May 1975
Creator: Chang, Min-Jye S.
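A minimal sketch of the two stages described in the abstract above, converting a source statement into an internal pseudocode form and then executing it, for a toy LET/PRINT subset of BASIC; the real interpreter's tables and working syntax equations are far richer than this.

```python
# Toy two-stage BASIC interpreter: parse a statement into a pseudocode tuple,
# then execute the pseudocodes against a variable environment.
import re

def parse(line):
    if m := re.fullmatch(r"LET (\w+) = (\d+)", line):
        return ("STORE", m.group(1), int(m.group(2)))
    if m := re.fullmatch(r"PRINT (\w+)", line):
        return ("PRINT", m.group(1))
    raise SyntaxError(line)

def execute(pseudocodes):
    env = {}
    for code in pseudocodes:
        if code[0] == "STORE":
            env[code[1]] = code[2]
        else:
            print(env[code[1]])

execute([parse("LET X = 42"), parse("PRINT X")])   # prints 42
```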

A Design Approach for Digital Computer Peripheral Controllers, Case Study Design and Construction

Description: The purpose of this project was to describe a novel design approach for a digital computer peripheral controller, then design and construct a case study controller. This document consists of three chapters and an appendix. Chapter II presents the design approach chosen: a variation on a design presented by Charles R. Richards in an article published in Electronics magazine. Richards' approach consists of finite state machine circuitry controlling all the functions of a controller. The variation on Richards' approach considers the various logically independent processes that a controller carries out and assigns control of each process to a separate finite state machine. The appendix contains the documentation of the design and construction of the controller.
Date: May 1976
Creator: Cabrera, A. L.

Data-Driven Decision-Making Framework for Large-Scale Dynamical Systems under Uncertainty

Description: Managing large-scale dynamical systems (e.g., transportation systems, complex information systems, and power networks) in real time is very challenging, considering their complicated system dynamics, intricate network interactions, large scale, and especially the existence of various uncertainties. To address this issue, intelligent techniques that can quickly design decision-making strategies robust to uncertainties are needed. This dissertation aims to conquer these challenges by exploring a data-driven decision-making framework, which leverages big-data techniques and scalable uncertainty evaluation approaches to quickly solve optimal control problems. In particular, the following techniques have been developed in this direction: 1) system modeling approaches that simplify the system analysis and design procedures for multiple applications; 2) effective simulation-based and analytical approaches that efficiently evaluate system performance and design control strategies under uncertainty; and 3) big-data techniques that allow some computations of control strategies to be completed offline. These techniques and tools for analysis, design, and control contribute to a wide range of applications, including air traffic flow management, complex information systems, and airborne networks.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2016
Creator: Xie, Junfei