Search Results

Diagnosing Learner Deficiencies in Algorithmic Reasoning

Description: It is hypothesized that useful diagnostic information can reside in the wrong answers of multiple-choice tests, and that properly designed distractors can yield indications of misinformation and missing information in algorithmic reasoning on the part of the test taker. In addition to summarizing the literature regarding diagnostic research as opposed to scoring research, this study proposes a methodology for analyzing test results and compares the findings with those from the research of Birenbaum and Tatsuoka and others. The proposed method identifies the conditions of misinformation and missing information, and it contains a statistical compensation for careless errors. Strengths and weaknesses of the method are explored, and suggestions for further research are offered.
Date: May 1995
Creator: Hubbard, George U.
Partner: UNT Libraries

Memory Management and Garbage Collection Algorithms for Java-Based Prolog

Description: Implementing a Prolog runtime system in a language like Java, which provides its own automatic memory management and safety features such as built-in index checking and array initialization, requires a consistent approach to memory management guided by a simple overall goal: minimizing total memory management time and the extra space involved. The total memory management time for Jinni comprises garbage collection time for both Java and Jinni itself. Extra space is usually requested at Jinni's garbage collection. This goal motivates us to find a simple and practical garbage collection algorithm and implementation for our Prolog engine. In this thesis we survey various algorithms already proposed and offer our own contribution to the study of garbage collection through improvements and optimizations of some classic algorithms. We implemented these algorithms based on the dynamic array algorithm for an all-dynamic Prolog engine (JINNI 2000). Comparisons of our implementations against the originally proposed algorithm allow us to draw informative conclusions about their theoretical complexity model and their empirical effectiveness.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2001
Creator: Zhou, Qinan
Partner: UNT Libraries
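The kind of collection the abstract above describes — reclaiming unreachable cells from a heap stored as a dynamic array — can be illustrated with a toy stop-and-copy collector. The cell layout and names below are assumptions for illustration; this is not the thesis's actual algorithm, and a Python list stands in for Jinni's dynamic-array heap.

```python
# Illustrative stop-and-copy collection over a heap stored in a list.
# A cell is ('ref', index) pointing at another heap cell, or ('atom', value).
def collect(heap, roots):
    """Copy every cell reachable from `roots` into a fresh heap.

    Returns (new_heap, new_roots); unreachable cells are dropped, which is
    the space reclaimed at collection time.
    """
    new_heap = []
    forward = {}                      # old index -> new index

    def copy(i):
        if i in forward:              # already copied: reuse forwarding entry
            return forward[i]
        new_i = len(new_heap)
        forward[i] = new_i
        tag, val = heap[i]
        new_heap.append((tag, val))   # reserve the slot, patch refs below
        if tag == 'ref':
            new_heap[new_i] = ('ref', copy(val))
        return new_i

    new_roots = [copy(r) for r in roots]
    return new_heap, new_roots
```

Recording forwarding entries before recursing keeps shared and cyclic structures from being copied twice.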

Clustering Algorithms for Time Series Gene Expression in Microarray Data

Description: Clustering techniques are important for gene expression data analysis. However, efficient computational algorithms for clustering time-series data are still lacking. This work documents two improvements to an existing profile-based greedy algorithm for short time-series data: the first is a scaling method applied during pre-processing of the raw data to handle some extreme cases; the second is a modified strategy that generates better clusters. Simulation data and real microarray data were used to evaluate these improvements; the improved approach efficiently generates more accurate clusters. A new feature-based algorithm was also developed, in which steady-state value, overshoot, rise time, settling time, and peak time are derived from a second-order control system model for clustering purposes. This feature-based approach is much faster and more accurate than the existing profile-based algorithm for long time-series data.
Date: August 2012
Creator: Zhang, Guilin
Partner: UNT Libraries
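The feature-based approach described above can be sketched as follows. The thesis derives steady-state value, overshoot, rise time, settling time, and peak time from a fitted second-order control-system model; as a simplification, this sketch reads three of those features directly off each raw series and then compares series in feature space. All function names and the 90% rise-time convention are illustrative assumptions.

```python
# Illustrative feature extraction for time-series clustering: summarize each
# series by step-response-style features, then cluster by feature distance.
def features(series):
    steady = series[-1]                          # last sample as steady state
    peak = max(series)
    peak_time = series.index(peak)
    overshoot = (peak - steady) / abs(steady) if steady else 0.0
    target = 0.9 * steady                        # common 90% rise-time rule
    rise = next((t for t, v in enumerate(series) if v >= target), len(series))
    return (steady, overshoot, peak_time, rise)

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

Any standard clustering method (e.g. k-means on the feature vectors) can then operate on four numbers per gene instead of the full series, which is why a feature-based approach scales better to long time series.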

Algorithmic Music Analysis: A Case Study of a Prelude from David Cope’s “From Darkness, Light”

Description: Algorithms have been used in compositional practice for centuries. With the advent of computers, formalized procedures have become an important part of computer music. David Cope is an American composer who has pioneered systems that make use of artificial intelligence programming techniques. This dissertation examines in detail one of David Cope’s compositions that was generated with one of his processes. A general timeline of algorithmic compositional practice is outlined from a historical perspective and realized in the Common Lisp programming language as a musicological tool. David Cope’s compositional output is summarized, with an explanation of the types of systems he has used both in the analysis of other composers’ music and in the composition of his own. Twentieth-century analysis techniques are formalized within Common Lisp as algorithmic analysis tools. These tools are then combined with techniques developed within other computational music analysis tools and applied to the analysis of Cope’s prelude. A traditional music theory analysis of the composition is provided, and the outcomes of the computational analyses augment the traditional analysis. The outcome of the computational, or algorithmic, analyses is represented as statistical data and corresponding probabilities. From the resulting data sets, a machine-learning algorithm derives semantic networks. The semantic networks represent the chord-succession and voice-leading rules that underlie the framework of Cope’s prelude.
Date: May 2015
Creator: Krämer, Reiner
Partner: UNT Libraries
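The statistical step described above — turning an analyzed chord sequence into succession probabilities that a network can encode — might be sketched like this. The chord labels and the transition-table representation are illustrative; the dissertation's actual tools are written in Common Lisp.

```python
# Illustrative sketch: derive chord-succession probabilities from one
# analyzed chord sequence. The resulting table is the kind of data a
# semantic network of succession rules could be built from.
from collections import defaultdict

def succession_network(chords):
    """Return {chord: {next_chord: probability}} from a chord sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(chords, chords[1:]):     # each adjacent pair a -> b
        counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}
```

Voice-leading rules could be tabulated the same way, with pairs of simultaneous voice motions in place of chord labels.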