Search Results

A Real-Time Merging-Buffering Technique for MIDI Messages

Description: A powerful and efficient algorithm has been designed to deal with the critical timing problem of MIDI messages. This algorithm can dynamically convert note events stored in a natural way into MIDI messages. Only a limited memory space (the buffer) is required to complete the conversion, and the size of the buffer is independent of the size of the original sequence of notes. The algorithm's real-time, variable properties suggest not only flexible real-time control of musical aspects, but also expandability to interactive multimedia applications. A compositional environment called MusicSculptor has been implemented in terms of this algorithm.
Date: December 1991
Creator: Chang, Kuo-Lung
Partner: UNT Libraries
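
A minimal sketch of the buffering idea described in the entry above, under assumptions of my own: note events are (start, duration, pitch) tuples, and only a small heap of pending note-off events is kept, so memory depends on how many notes sound at once rather than on the length of the sequence. This illustrates the general merging-buffering idea, not the thesis's actual algorithm.

    import heapq

    def merge_notes_to_midi(notes):
        """Convert note events (start, duration, pitch) into a time-ordered
        stream of MIDI-like (time, message, pitch) tuples.

        Only a small heap of pending note-offs is held in memory, so the
        buffer size depends on how many notes sound simultaneously, not on
        the length of the whole sequence."""
        pending_offs = []                                  # min-heap of (off_time, pitch)
        for start, duration, pitch in sorted(notes):
            # Emit any note-offs that fall due before this note starts.
            while pending_offs and pending_offs[0][0] <= start:
                off_time, off_pitch = heapq.heappop(pending_offs)
                yield (off_time, "note_off", off_pitch)
            yield (start, "note_on", pitch)
            heapq.heappush(pending_offs, (start + duration, pitch))
        while pending_offs:                                # flush the remaining note-offs
            off_time, off_pitch = heapq.heappop(pending_offs)
            yield (off_time, "note_off", off_pitch)

    # Example: two overlapping notes (middle C and E).
    for msg in merge_notes_to_midi([(0, 4, 60), (2, 2, 64)]):
        print(msg)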

Improving Digital Circuit Simulation: A Knowledge-Based Approach

Description: This project focuses on a prototype system architecture which integrates features of an event-driven gate-level simulator and features of the multiple expert system architecture, HEARSAY-II. Combining artificial intelligence and simulation techniques, a knowledge-based simulator was designed and constructed to model non-standard circuit behavior. This non-standard circuit behavior is amplified by advances in integrated circuit technology. Currently available digital circuit simulators cannot simulate this behavior. Circuit designer expertise on behavioral phenomena is used in the expert system to guide the base simulator by manipulating its events to achieve the desired behavior.
Date: August 1989
Creator: Benavides, John A. (John Anthony)
Partner: UNT Libraries
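
For context on the base simulator mentioned above, here is a minimal event-driven gate-level simulation loop. The circuit representation, delays, and input format are illustrative assumptions; the knowledge-based layer described in the entry would sit on top of such a loop and steer it by manipulating the event queue.

    import heapq
    import itertools

    def simulate(gates, inputs, end_time):
        """Minimal event-driven gate-level simulation loop (illustrative only).
        `gates` maps an output net to (function, input nets, propagation delay);
        `inputs` maps an input net to a list of (time, value) changes."""
        values = {net: 0 for net in set(inputs) | set(gates)}
        order = itertools.count()                      # FIFO tie-break for equal times
        events = [(t, next(order), net, v)
                  for net, changes in inputs.items() for t, v in changes]
        heapq.heapify(events)
        while events and events[0][0] <= end_time:
            t, _, net, v = heapq.heappop(events)
            if values[net] == v:
                continue                               # no change, nothing to propagate
            values[net] = v
            for out, (fn, ins, delay) in gates.items():
                if net in ins:                         # schedule the gate's new output value
                    new = fn(*(values[i] for i in ins))
                    heapq.heappush(events, (t + delay, next(order), out, new))
        return values

    # Two-input NAND driving an inverter, with unit and two-unit delays.
    gates = {
        "n1": (lambda a, b: 1 - (a & b), ("a", "b"), 1),
        "y":  (lambda x: 1 - x,          ("n1",),    2),
    }
    print(simulate(gates, {"a": [(0, 1)], "b": [(0, 1)]}, end_time=10))
    # -> {'a': 1, 'b': 1, 'n1': 0, 'y': 1} (key order may vary)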

An Efficient Hybrid Heuristic and Probabilistic Model for the Gate Matrix Layout Problem in VLSI Design

Description: In this thesis, the gate matrix layout problem in VLSI design is considered, where the goal is to minimize the number of tracks required to lay out a given circuit, and a taxonomy of approaches to its solution is presented. An efficient hybrid heuristic is also proposed for this combinatorial optimization problem, based on the combination of a probabilistic hill-climbing technique and a greedy method. This heuristic is tested experimentally against four existing algorithms. As test cases, five benchmark problems from the literature as well as randomly generated problem instances are considered. The experimental results show that the proposed hybrid algorithm, on average, performs better than the other heuristics in terms of the required computation time and/or the quality of solution. Due to the computation-intensive nature of the problem, an exact solution within reasonable time limits is impossible, so it is difficult to judge the effectiveness of any heuristic in terms of the quality of solution (number of tracks required). A probabilistic model of the gate matrix layout problem that computes the expected number of tracks from the given input parameters is useful in this respect. Such a probabilistic model is proposed in this thesis, and its performance is experimentally evaluated.
Date: August 1993
Creator: Bagchi, Tanuj
Partner: UNT Libraries
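
The hybrid heuristic above combines probabilistic hill-climbing with a greedy method. Below is a generic probabilistic hill-climbing skeleton over a gate ordering with the track-count cost left abstract; the move (swapping two gates), the acceptance rule, and all parameters are assumptions for illustration, not the thesis's heuristic.

    import math
    import random

    def probabilistic_hill_climb(order, cost, steps=10_000, temp=1.0, cooling=0.999):
        """Probabilistic hill-climbing over a gate ordering.

        `cost(order)` would return the number of tracks needed for that
        ordering; it is left abstract here.  Worse moves are accepted with
        a probability that shrinks as the temperature decays."""
        best = current = list(order)
        best_cost = current_cost = cost(current)
        for _ in range(steps):
            i, j = random.sample(range(len(current)), 2)
            candidate = current[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two gates
            c = cost(candidate)
            if c <= current_cost or random.random() < math.exp((current_cost - c) / temp):
                current, current_cost = candidate, c
                if c < best_cost:
                    best, best_cost = candidate, c
            temp *= cooling
        return best, best_cost

A greedy initial ordering could be used to seed `order`, mirroring the hybrid combination described in the entry.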

Linearly Ordered Concurrent Data Structures on Hypercubes

Description: This thesis presents a simple method for the concurrent manipulation of linearly ordered data structures on hypercubes. The method is based on the existence of a pruned binomial search tree rooted at any arbitrary node of the binary hypercube. The tree spans any arbitrary sequence of n consecutive nodes containing the root, using a fan-out of at most ⌈log₂ n⌉ and a depth of ⌈log₂ n⌉ + 1. Search trees spanning non-overlapping processor lists are formed using only local information, and can be used concurrently without contention problems. Thus, they can be used for performing broadcast and merge operations simultaneously on sets with non-uniform sizes. Extensions to generalized and faulty hypercubes and applications to image processing algorithms and m-way search are discussed.
Date: August 1992
Creator: John, Ajita
Partner: UNT Libraries
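
A rough illustration of the idea of a binomial spanning tree rooted at an arbitrary hypercube node, pruned to a window of node labels. For simplicity the window starts at the root and pruning is done by filtering labels; the thesis's construction, which guarantees the stated fan-out and depth bounds for any window of consecutive nodes containing the root, is not reproduced here.

    def binomial_children(node, root, dim):
        """Children of `node` in a binomial spanning tree of a `dim`-cube
        rooted at `root`: flip each bit position below the lowest bit in
        which `node` already differs from the root."""
        rel = node ^ root
        lowest = dim if rel == 0 else (rel & -rel).bit_length() - 1
        return [node ^ (1 << i) for i in range(lowest)]

    def pruned_tree(root, n, dim):
        """Breadth-first parent map of the binomial tree rooted at `root`,
        pruned to a window of n consecutive node labels starting at the
        root (labels taken modulo 2**dim)."""
        window = {(root + k) % (1 << dim) for k in range(n)}
        tree, frontier = {root: None}, [root]
        while frontier:
            nxt = []
            for u in frontier:
                for v in binomial_children(u, root, dim):
                    if v in window and v not in tree:
                        tree[v] = u                       # record v's parent
                        nxt.append(v)
            frontier = nxt
        return tree

    # 3-cube rooted at node 5, window of 5 consecutive labels: 5, 6, 7, 0, 1.
    print(pruned_tree(root=5, n=5, dim=3))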

Software and Hardware Interface of a VOTRAX Terminal for the Fairchild F24 Computer

Description: VOTRAX is a commercially available voice synthesizer for use with a digital computer. This thesis describes the design and implementation of a VOTRAX terminal for use with the Fairchild F24 computer. Chapters of the thesis consider audio response technology, some characteristics of phonetic English speech, and the hardware configuration, and describe the PHONO computer program which was developed. The last chapter discusses the advantages of the VOTRAX voice synthesizer and proposes a future version of the system with a time-sharing host computer.
Date: May 1979
Creator: Wu, Chun Hsiang
Partner: UNT Libraries

Computer Analysis of Amino Acid Chromatography

Description: The problem addressed by this research was that of applying the IBM 360 computer to the analysis of waveforms from a Beckman model 120C liquid chromatograph. Software to interpret these waveforms was written in the PL/I language. For a control run, input to the computer consisted of a digital tape containing the raw results of the chromatograph run. Output consisted of several graphs and charts giving the results of the analysis. In addition, punched output was provided which gave the name of each amino acid, its elution time, and its color constant. These punched cards were then fed back to the computer, along with the raw data on the digital tape, as input for the experimental run. From the known amounts of amino acids in the control run and the ratio of control to experimental peak area, the amino acids of the unknown were quantified. The resulting programs provided a complete and easy-to-use solution to the problem of chromatographic data analysis.
Date: May 1978
Creator: Hayes, Michael D.
Partner: UNT Libraries
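
The quantification step described above reduces to simple proportionality: the known amount of each amino acid in the control run scaled by the ratio of experimental to control peak area. A small sketch, with names and example values that are purely illustrative:

    def quantify(control_amounts, control_areas, sample_areas):
        """Estimate amino acid amounts in an unknown sample from peak areas:
        the known control amount scaled by the sample/control area ratio."""
        return {
            acid: control_amounts[acid] * sample_areas[acid] / control_areas[acid]
            for acid in control_amounts
        }

    # Example: a control containing 50 nmol of each acid (values made up).
    control_amounts = {"glycine": 50.0, "alanine": 50.0}
    control_areas   = {"glycine": 1200.0, "alanine": 980.0}
    sample_areas    = {"glycine": 600.0,  "alanine": 1470.0}
    print(quantify(control_amounts, control_areas, sample_areas))
    # {'glycine': 25.0, 'alanine': 75.0}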

Simulation of the IBM System/7

Description: This thesis describes the simulation of the IBM SYSTEM/7. The research leading to this thesis involved the development of a PL/I computer program that runs on an IBM 360/50 computer and simulates the IBM SYSTEM/7. Various methods of simulation are examined and guidelines for computer simulation of another computer are established. The SYSTEM/7 simulator (SIM/7) is the heart of this thesis. SIM/7 simulates the IBM SYSTEM/7 entirely with software, as opposed to an emulator, which involves the combined use of hardware and software to perform the simulation process. This thesis contains a general introduction to computer simulation, reasons for simulation, a user's guide for SIM/7, and a definition of the SYSTEM/7 processor using the Vienna Definition Language.
Date: May 1977
Creator: Lewis, Ted C.
Partner: UNT Libraries

The Design of Microcomputer-Based Sound Synthesis Hardware

Description: Microcomputer-based music synthesis hardware is being developed at North Texas State University (NTSU). The work described in this paper continues this effort to develop hardware designs for inexpensive, but good quality, sound synthesizers. In order to pursue their activities, researchers in computer-assisted instruction in music theory, psychoacoustics, and music composition need quality sound sources. The ultimate goal of my research is to develop good quality sound synthesis hardware which can fill these needs economically. This paper explores three topics: 1) how a computer makes music--a short nontechnical description; 2) what has been done previously--a review of the literature; and 3) what factors bear on the quality of microcomputer-based systems, including encoding of musical passages, software development, and hardware design. These topics lead to the discussion of a particular sound synthesizer which the author has designed.
Date: May 1980
Creator: Hamilton, Richard L.
Partner: UNT Libraries

An Implementation of the IEEE Standard for Binary Floating-Point Arithmetic for the Motorola 6809 Microprocessor

Description: This thesis describes a software implementation of the IEEE Floating-Point Standard (IEEE Task P754), which is believed to be an effective system for reliable, accurate computer arithmetic. The standard is implemented as a set of procedures written in Motorola 6809 assembly language. Source listings of the procedures are contained in appendices.
Date: August 1983
Creator: Rosenblum, David Samuel
Partner: UNT Libraries
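
As a quick reminder of the representation the thesis implements, the sketch below unpacks an IEEE 754 single-precision value into its sign, biased exponent, and fraction fields and reassembles the value. It uses Python's struct module purely for illustration and says nothing about the 6809 assembly procedures themselves.

    import struct

    def decompose_ieee754_single(x):
        """Split an IEEE 754 single-precision value into its sign bit,
        biased 8-bit exponent, and 23-bit fraction field."""
        (bits,) = struct.unpack(">I", struct.pack(">f", x))
        sign = bits >> 31
        exponent = (bits >> 23) & 0xFF
        fraction = bits & 0x7FFFFF
        return sign, exponent, fraction

    sign, exp, frac = decompose_ieee754_single(-6.25)
    print(sign, exp, frac)                                          # 1 129 4718592
    # Reassemble: value = (-1)**sign * (1 + fraction/2**23) * 2**(exponent - 127)
    print((-1) ** sign * (1 + frac / 2 ** 23) * 2 ** (exp - 127))   # -6.25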

The Applications of Regression Analysis in Auditing and Computer Systems

Description: This thesis describes regression analysis and shows how it can be used in account auditing and in computer system performance analysis. The study first introduces regression analysis techniques and statistics. Then, the use of regression analysis in auditing to detect "out of line" accounts and to determine audit sample size is discussed. These applications led to the concept of using regression analysis to predict job completion times in a computer system. The feasibility of this application of regression analysis was tested by constructing a predictive model to estimate job completion times using a computer system simulator. The predictive model's performance for the various job streams simulated shows that job completion time prediction is a feasible application for regression analysis.
Date: May 1977
Creator: Hubbard, Larry D.
Partner: UNT Libraries
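
A minimal sketch of the kind of predictive model described above: ordinary least squares fitted to hypothetical job-stream observations, then used to predict a new job's completion time. The predictor variables (CPU seconds and I/O requests) and all values are assumptions made up for illustration.

    import numpy as np

    # Hypothetical job-stream observations: CPU seconds requested, I/O requests,
    # and the measured completion time in seconds (all values illustrative).
    cpu  = np.array([ 5.0, 12.0,  3.0, 20.0,  8.0])
    io   = np.array([100., 400.,  50., 700., 250.])
    time = np.array([ 22.,  61.,  12., 108.,  41.])

    # Fit time ≈ b0 + b1*cpu + b2*io by ordinary least squares.
    X = np.column_stack([np.ones_like(cpu), cpu, io])
    coef, *_ = np.linalg.lstsq(X, time, rcond=None)
    b0, b1, b2 = coef

    # Predict the completion time of a new job.
    new_cpu, new_io = 10.0, 300.0
    print(b0 + b1 * new_cpu + b2 * new_io)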

Computerized Analysis of Radiograph Images of Embedded Objects as Applied to Bone Location and Mineral Content Measurement

Description: This investigation dealt with locating and measuring the x-ray absorption of radiographic images. The methods developed provide fast, accurate, minicomputer-controlled analysis of embedded objects. A PDP/8 computer system was interfaced with a Joyce Loebl 3CS Microdensitometer and a Leeds & Northrup Recorder. Proposed algorithms for bone location and data smoothing work on a twelve-bit minicomputer. Designs of a software control program and an operational procedure are presented. The filter made wedge and limb scans monotonic from minima to maxima. It was tested for various convolution intervals. The ability to resmooth the same data in multiple passes was tested. An interval size of fifteen works well in one pass.
Date: August 1976
Creator: Buckner, Richard L.
Partner: UNT Libraries
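
A hedged sketch of the smoothing step mentioned above: a moving-average convolution with a configurable interval, applied in one or more passes. The entry reports that an interval of fifteen in a single pass worked well; the monotonicity enforcement of the actual filter is not reproduced here.

    import numpy as np

    def smooth(scan, interval=15, passes=1):
        """Moving-average smoothing of a densitometer scan: convolve a
        window of `interval` samples over the data, optionally repeating
        the pass on the already-smoothed result."""
        kernel = np.ones(interval) / interval
        out = np.asarray(scan, dtype=float)
        for _ in range(passes):
            out = np.convolve(out, kernel, mode="same")
        return out

    # A noisy synthetic scan, smoothed with the interval reported to work well.
    noisy = np.sin(np.linspace(0, np.pi, 200)) + 0.1 * np.random.randn(200)
    print(smooth(noisy, interval=15, passes=1)[:5])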

A Tool for Measuring the Size, Structure and Complexity of Software

Description: The problem addressed by this thesis is the need for a software measurement tool that enforces a uniform measurement algorithm on several programming languages. The introductory chapter discusses the concern for software measurement and provides background for the specific models and metrics that are studied. A multilingual software measurement tool is then introduced, that analyzes programs written in Ada, C, Pascal, or PL/I, and quantifies over thirty different program attributes. Metrics computed by the program include McCabe's measure of cyclomatic complexity and Halstead's software science metrics. Some results and conclusions of preliminary data analysis, using the tool, are also given. The appendices contain exhaustive counting algorithms for obtaining the metrics in each language.
Date: May 1984
Creator: Versaw, Larry
Partner: UNT Libraries
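
Two of the metrics named above have standard closed forms, sketched below: McCabe's cyclomatic complexity V(G) = E - N + 2P and the basic Halstead software-science measures computed from operator and operand counts. The example counts are arbitrary.

    import math

    def mccabe(edges, nodes, components=1):
        """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
        return edges - nodes + 2 * components

    def halstead(n1, n2, N1, N2):
        """Basic Halstead measures from distinct operators (n1), distinct
        operands (n2), and their total occurrences (N1, N2)."""
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2) * (N2 / n2)
        effort = difficulty * volume
        return {"volume": volume, "difficulty": difficulty, "effort": effort}

    print(mccabe(edges=9, nodes=8))            # 3
    print(halstead(n1=10, n2=7, N1=28, N2=19))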

An On-Line Macro Processor for the Motorola 6800 Microprocessor

Description: The first chapter discusses the concept of macros: their definition, structure, usage, design goals, and related prior work. This thesis principally concerns my work on OLMP (an On-Line Macro Processor for the Motorola 6800 Microprocessor), a macro processor which interacts with the user. It takes Motorola assembler source code and macro definitions as its input; after the appropriate editing and expansions, it outputs the expanded assembler source statements. The functional objectives, the implementation design of OLMP, the basic macro format, and the macro definition construction are specified in Chapter Two. The software and hardware environment of OLMP is discussed in the third chapter. The six modules of OLMP form the main spine of the fourth chapter. Comments on future improvements and on how to link OLMP with the Motorola 6800 assembler are the major concerns of the final chapter.
Date: May 1980
Creator: Hsieh, Chang-Boe
Partner: UNT Libraries
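
A toy illustration of what a macro processor of this kind does: substitute positional parameters into stored macro bodies and splice the expansion into the source. The parameter syntax (&1, &2, ...), the macro name, and the labels are assumptions made up for illustration; OLMP's actual definition format and interactive behavior are not reproduced.

    import re

    def expand(source_lines, macros):
        """One-pass macro expansion: `macros` maps a macro name to a list
        of body lines containing &1, &2, ... for positional parameters."""
        out = []
        for line in source_lines:
            fields = line.split()
            if fields and fields[0] in macros:
                args = fields[1].split(",") if len(fields) > 1 else []
                for body in macros[fields[0]]:
                    out.append(re.sub(r"&(\d+)",
                                      lambda m: args[int(m.group(1)) - 1], body))
            else:
                out.append(line)
        return out

    # Hypothetical macro and source; labels TEMP, VAL1, VAL2 are made up.
    macros = {"PUSHX": ["        STX   TEMP",
                        "        LDA A &1",
                        "        STA A &2"]}
    print("\n".join(expand(["        CLR A", "        PUSHX VAL1,VAL2"], macros)))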

Macro Control Structures for Structured Programming in ALC

Description: This thesis describes a set of computer program control structures which permits the application of certain structured programming techniques to the IBM/360 assembly language (ALC). The control structures are implemented by programmer-defined instructions known as macros. A history of computer software is presented, providing a basis for the emergence of structured programming. A survey of the major concepts of structured programming, with special attention to control structures and their significance to structured programming, follows. The macros developed in this study include DO, ENDDO, LEAVE, CASE, and ENDCASE. They provide a looping control structure, a loop-escape construct, and a selective control structure. Examples of usage are given.
Date: December 1975
Creator: Connally, Kim G.
Partner: UNT Libraries

Design and Implementation of a TRAC Processor for Fairchild F24 Computer

Description: TRAC is a text-processing language for use with a reactive typewriter. The thesis describes the design and implementation of a TRAC processor for the Fairchild F24 computer. Chapter I introduces some text processing concepts, the TRAC operations, and the implementation procedures. Chapter II examines the history and characteristics of the TRAC language. The next chapter specifies the TRAC syntax and primitive functions. Chapter IV covers the algorithms used by the processor. The last chapter discusses the design experience gained from programming the processor, examines the reactive action caused by the processor, and suggests adding external storage primitive functions for a future version of the processor.
Date: August 1974
Creator: Chi, Ping Ray
Partner: UNT Libraries

Multiresolution Signal Cross-correlation

Description: Signal correlation is a digital signal processing technique with a wide variety of applications, ranging from geophysical exploration to acoustic signal enhancement and beamforming. This dissertation considers the technique from an underwater acoustics perspective, but the algorithms illustrated here can be readily applied to other areas. Although beamforming techniques have been studied for the past fifty years, modern beamforming systems still have difficulty operating in noisy environments, especially in shallow water.
Date: December 1994
Creator: Novaes, Marcos (Marcos Nogueira)
Partner: UNT Libraries
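
A small sketch of the core operation named in the entry above: cross-correlating two signals and reading the delay off the peak lag, as a beamformer would for a pair of hydrophones. The signals are synthetic and single-resolution; the multiresolution aspect of the dissertation is not shown.

    import numpy as np

    # Two hypothetical hydrophone recordings of the same source, the second
    # delayed by 25 samples and noisier (all values illustrative).
    rng = np.random.default_rng(0)
    source = rng.standard_normal(1024)
    delay = 25
    sig_a = source + 0.1 * rng.standard_normal(1024)
    sig_b = np.roll(source, delay) + 0.3 * rng.standard_normal(1024)

    # Cross-correlate and take the lag of the peak as the delay estimate.
    corr = np.correlate(sig_b, sig_a, mode="full")
    lags = np.arange(-len(sig_a) + 1, len(sig_a))
    print(lags[np.argmax(corr)])    # ≈ 25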

Recognition of Face Images

Description: The focus of this dissertation is a methodology that enables computer systems to classify different up-front images of human faces as belonging to one of the individuals to which the system has been exposed previously. The images can present variance in size, location of the face, orientation, facial expressions, and overall illumination. The approach to the problem taken in this dissertation can be classified as analytic as the shapes of individual features of human faces are examined separately, as opposed to holistic approaches to face recognition. The outline of the features is used to construct signature functions. These functions are then magnitude-, period-, and phase-normalized to form a translation-, size-, and rotation-invariant representation of the features. Vectors of a limited number of the Fourier decomposition coefficients of these functions are taken to form the feature vectors representing the features in the corresponding vector space. With this approach no computation is necessary to enforce the translational, size, and rotational invariance at the stage of recognition thus reducing the problem of recognition to the k-dimensional clustering problem. A recognizer is specified that can reliably classify the vectors of the feature space into object classes. The recognizer made use of the following principle: a trial vector is classified into a class with the greatest number of closest vectors (in the sense of the Euclidean distance) among all vectors representing the same feature in the database of known individuals. A system based on this methodology is implemented and tried on a set of 50 pictures of 10 individuals (5 pictures per individual). The recognition rate is comparable to that of most recent results in the area of face recognition. The methodology presented in this dissertation is also applicable to any problem of pattern recognition where patterns can be represented as a collection of black ...
Date: December 1994
Creator: Pershits, Edward
Partner: UNT Libraries
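
A simplified sketch of the pipeline described above, under assumptions of my own: a feature outline is taken as complex samples, made translation-, scale-, and rotation-invariant by keeping magnitudes of a few Fourier coefficients, and a trial vector is classified by majority vote among its nearest neighbors. The dissertation's signature functions and normalization details differ; this only illustrates the general approach.

    import numpy as np

    def feature_vector(outline, k=10):
        """Turn a closed feature outline (complex samples x + iy) into a
        translation-, scale-, and rotation-invariant descriptor: subtract
        the centroid, take the FFT, keep the magnitudes of the first k
        non-DC coefficients, and normalize by the first magnitude."""
        z = np.asarray(outline, dtype=complex)
        coeffs = np.fft.fft(z - z.mean())
        mags = np.abs(coeffs[1:k + 1])
        return mags / mags[0]

    def classify(trial, database, m=5):
        """Assign `trial` to the class with the most vectors among its m
        nearest neighbors in Euclidean distance."""
        dists = sorted((np.linalg.norm(trial - vec), label) for label, vec in database)
        votes = {}
        for _, label in dists[:m]:
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)

    # Two made-up "outlines" built from low harmonics, and a trial that is a
    # scaled, rotated, shifted copy of the first one.
    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    alice = np.exp(1j * t) + 0.3 * np.exp(3j * t)
    bob = np.exp(1j * t) + 0.3 * np.exp(5j * t)
    database = [("alice", feature_vector(alice)), ("bob", feature_vector(bob))]
    trial = 2.5 * np.exp(1j * 0.7) * alice + (4 + 3j)
    print(classify(feature_vector(trial), database, m=1))    # alice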

A New Framework for Classification and Comparative Study of Congestion Control Schemes of ATM Networks

Description: In our work, we have proposed a new framework for the classification and comparative study of ATM congestion control schemes. The aspects on which we classify the algorithms are the control-theoretic approach, the action taken, and the congestion notification. These three aspects of the classification present a coherent framework within which congestion control algorithms can be classified. Such a classification will also help in developing new algorithms.
Date: May 1996
Creator: Chandra, Umesh, 1971-
Partner: UNT Libraries

Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation

Description: Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require an enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (lp) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence. In order for the simulation to be correct, this ordering has to be preserved. There are three approaches to maintaining this ordering. In a conservative system, no lp executes an event unless it is certain that all events with earlier time-stamps have been executed. Such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses disregarding this ordering and the system states are saved regularly. Whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. There is another approach in which all the lps participate in the computation of a safe time-window and all events with time-stamps within this window are processed concurrently. In optimistic simulation systems, there is a global virtual time (GVT), which is the minimum of the time-stamps of all the events existing in the system. The system cannot roll back to a state prior to GVT, and hence all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and committing of events. However, GVT computation introduces additional overhead. In optimistic systems, a large number of rollbacks can degrade system performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of ...
Date: May 1996
Creator: Sarkar, Falguni
Partner: UNT Libraries
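
A minimal illustration of the GVT idea described above: the global virtual time is the minimum over every logical process's unprocessed event times and every in-transit message timestamp, and saved states older than GVT can be discarded. The data layout and field names are assumptions for illustration.

    def compute_gvt(lps, in_transit):
        """Global virtual time: the minimum over each lp's unprocessed event
        times and the timestamps of all messages still in transit."""
        local_minima = [lp["unprocessed"][0] for lp in lps if lp["unprocessed"]]
        transit_minima = [msg["timestamp"] for msg in in_transit]
        return min(local_minima + transit_minima, default=float("inf"))

    def fossil_collect(lp, gvt):
        """Discard saved states older than GVT; no rollback can reach them."""
        lp["saved_states"] = [s for s in lp["saved_states"] if s["time"] >= gvt]

    lps = [
        {"unprocessed": [42.0, 57.0], "saved_states": [{"time": 10.0}, {"time": 40.0}]},
        {"unprocessed": [38.5],       "saved_states": [{"time": 35.0}]},
    ]
    gvt = compute_gvt(lps, in_transit=[{"timestamp": 36.0}])
    print(gvt)                        # 36.0
    for lp in lps:
        fossil_collect(lp, gvt)
    print(lps[0]["saved_states"])     # [{'time': 40.0}]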

A Mechanism for Facilitating Temporal Reasoning in Discrete Event Simulation

Description: This research establishes the feasibility and potential utility of a software mechanism which employs artificial intelligence techniques to enhance the capabilities of standard discrete event simulators. As background, current methods of integrating artificial intelligence with simulation and relevant research are briefly reviewed.
Date: May 1992
Creator: Legge, Gaynor W.
Partner: UNT Libraries

A Highly Fault-Tolerant Distributed Database System with Replicated Data

Description: Because of the high cost and impracticality of a high-connectivity network, most recent research in transaction processing has focused on the distributed replicated database system. In such a system, multiple copies of a data item are created and stored at several sites in the network, so that the system is able to tolerate more crash and communication failures and attain higher data availability. However, the multiple copies also introduce a global inconsistency problem, especially in a partitioned network. In this dissertation a tree quorum algorithm is proposed to solve this problem, imposing a logical tree structure along with dynamic system reconfiguration on all the copies of each data item. The proposed algorithm can be viewed as a dynamic voting technique which, with the help of an appropriate concurrency control algorithm, exhibits the major advantages of quorum-based replica control algorithms and of the available copies algorithm, so that a single copy is read for a read operation and a quorum of copies is written for a write operation. In addition, read and write quorums are computed dynamically and independently. As a result, expensive read operations, like those that require several copies of a data item to be read in most quorum schemes, are eliminated. Furthermore, the message costs of read and write operations are reduced by the use of smaller quorum sizes. Quorum sizes can be reduced to a constant in a lightly loaded system, to log n in a failure-free network, and to ⌈(n + 1)/2⌉ in a partitioned network in a heavily loaded system. On average, our algorithm requires fewer messages than the best known tree quorum algorithm, while still maintaining the same upper bound on quorum size. One-copy serializability is guaranteed with higher data availability and the highest degree of fault tolerance (up to n - 1 site ...
Date: December 1994
Creator: Lin, Tsai S. (Tsai Shooumeei)
Partner: UNT Libraries
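
For orientation, below is the classic tree-quorum construction on a binary tree of sites (not the dissertation's algorithm, which computes read and write quorums dynamically and independently): walk from the root toward a leaf, and wherever a node is unreachable, take quorums from both of its subtrees instead. With all sites up the quorum size is the tree height; failures grow it gracefully.

    def tree_quorum(tree, up):
        """Classic tree-quorum construction.  `tree` is a nested dict
        {"site": id, "children": [subtrees]}; `up` is the set of reachable
        site ids.  Returns a quorum set, or None if none can be formed."""
        site, children = tree["site"], tree["children"]
        if site in up:
            if not children:
                return {site}
            for child in children:                 # one path through any child suffices
                q = tree_quorum(child, up)
                if q is not None:
                    return {site} | q
            return None
        if not children:
            return None
        quorums = [tree_quorum(child, up) for child in children]
        if any(q is None for q in quorums):        # node down: need all its subtrees
            return None
        return set().union(*quorums)

    # A seven-site binary tree.
    t = {"site": 1, "children": [
            {"site": 2, "children": [{"site": 4, "children": []}, {"site": 5, "children": []}]},
            {"site": 3, "children": [{"site": 6, "children": []}, {"site": 7, "children": []}]}]}
    print(tree_quorum(t, up={1, 2, 3, 4, 5, 6, 7}))   # {1, 2, 4}: a root-to-leaf path
    print(tree_quorum(t, up={2, 3, 4, 5, 6, 7}))      # root down: {2, 3, 4, 6} in some order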

A New Scheduling Algorithm for Multimedia Communication

Description: The primary purpose of this work is to propose a new scheduling approach for multimedia data streams in real-time communication, and also to study and analyze the various existing scheduling approaches.
Date: May 1995
Creator: Alapati, Venkata Somi Reddy
Partner: UNT Libraries

Using Extended Logic Programs to Formalize Commonsense Reasoning

Description: In this dissertation, we investigate how commonsense reasoning can be formalized by using extended logic programs. In this investigation, we first use extended logic programs to formalize inheritance hierarchies with exceptions, adopting McCarthy's simple abnormality formalism to express uncertain knowledge. In our representation, not only can credulous reasoning be performed, but the ambiguity-blocking and ambiguity-propagating inheritance of skeptical reasoning are also simulated. In response to the anomalous extension problem, we explore and discover that the intuition underlying commonsense reasoning is a kind of forward reasoning. The unidirectional nature of this reasoning is applied in many reformulations of the Yale shooting problem to exclude the undesired conclusion. We then identify defeasible conclusions in our representation based on the syntax of extended logic programs. A similar idea is also applied to other formalizations of commonsense reasoning to achieve the same purpose.
Date: May 1992
Creator: Horng, Wen-Bing
Partner: UNT Libraries