UNT Theses and Dissertations - 217 Matching Results


A Multi-Time Scale Learning Mechanism for Neuromimic Processing

Description: Learning, representing, and reasoning about temporal relations, particularly causal relations, is a deep problem in artificial intelligence (AI). Learning such representations in the real world is complicated by the fact that phenomena are subject to influences on multiple time scales and may operate with a strange-attractor dynamic. This dissertation proposes a new computational learning mechanism, the adaptrode, which, used in a neuromimic processing architecture, may help to solve some of these problems. The adaptrode is shown to emulate the dynamics of real biological synapses and represents a significant departure from the classical weighted-input scheme of conventional artificial neural networks. Indeed, analysis of the deep structure of real synapses shows that the adaptrode has a strong structural correspondence with them in terms of multi-time-scale biophysical processes. Simulations of an adaptrode-based neuron and a small network of neurons are shown to have the same learning capabilities as invertebrate animals in classical conditioning. Classical conditioning is considered a fundamental learning task in animals. Furthermore, it is subject to temporal-ordering constraints that fulfill the criteria of causal relations in natural systems, and it may offer clues to the learning of causal relations and mechanisms for causal reasoning. The adaptrode is shown to solve an advanced problem in classical conditioning that addresses the problem of real-world dynamics. A network is able to learn multiple, contrary associations that are separated in time domains; that is, a long-term memory can coexist with a contrary short-term memory without destroying the former. This solves the problem of how to deal with meaningful transients while maintaining long-term memories. Possible applications of adaptrode-based neural networks are explored and suggestions for future research are made. (A brief multi-time-scale weight sketch follows this entry.)
Date: August 1994
Creator: Mobus, George E. (George Edward)
Partner: UNT Libraries
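
The adaptrode's actual equations are not given in this abstract; the following is a minimal, hypothetical sketch of a weight with two time scales in the spirit described above, where a fast short-term trace gates a much slower long-term trace so that transient contrary activity does not erase long-term memory. All names, constants, and update rules here are illustrative assumptions, not the dissertation's mechanism.

```python
# Hypothetical multi-time-scale weight: a fast trace follows recent input,
# while a slow trace integrates the fast trace much more gradually, so
# short-term contrary activity does not overwrite the long-term memory.

class MultiTimeScaleWeight:
    def __init__(self, alpha_fast=0.5, alpha_slow=0.01, decay_fast=0.2, decay_slow=0.001):
        self.fast = 0.0   # short-term trace (rises and decays quickly)
        self.slow = 0.0   # long-term trace (changes only through the fast trace)
        self.alpha_fast = alpha_fast
        self.alpha_slow = alpha_slow
        self.decay_fast = decay_fast
        self.decay_slow = decay_slow

    def update(self, x):
        """Advance both traces one time step for an input signal x in [0, 1]."""
        # Fast trace: driven directly by the input, decays toward the slow trace.
        self.fast += self.alpha_fast * x * (1.0 - self.fast)
        self.fast -= self.decay_fast * (self.fast - self.slow)
        # Slow trace: driven only by the fast trace, decays very slowly toward 0.
        self.slow += self.alpha_slow * (self.fast - self.slow)
        self.slow -= self.decay_slow * self.slow
        return self.fast  # effective efficacy at this step

if __name__ == "__main__":
    w = MultiTimeScaleWeight()
    for _ in range(50):   # sustained stimulation builds both traces
        w.update(1.0)
    for _ in range(50):   # a quiet period erodes mainly the fast trace
        w.update(0.0)
    print(f"fast={w.fast:.3f} slow={w.slow:.3f}")  # the slow trace retains memory
```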

Study of Parallel Algorithms Related to Subsequence Problems on the Sequent Multiprocessor System

Description: The primary purpose of this work is to study, implement, and analyze the performance of parallel algorithms related to subsequence problems. The problems include the string-to-string correction problem, the longest common subsequence problem, and the sum-range-product, 1-D pattern matching, longest non-decreasing (non-increasing) (LNS), and maximum positive subsequence (MPS) problems. The work also includes studying the techniques and issues involved in developing parallel applications. These algorithms are implemented on the Sequent Multiprocessor System. The subsequence problems are defined, along with the performance metrics that are utilized; the sequential and parallel algorithms are summarized; and the implementation issues that arise in the process of developing parallel applications are identified and studied. (A brief longest-common-subsequence sketch follows this entry.)
Date: August 1994
Creator: Pothuru, Surendra
Partner: UNT Libraries
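
As one concrete instance of the subsequence problems named above, the sketch below computes the length of a longest common subsequence with the standard sequential dynamic program; it illustrates the problem itself, not the parallel Sequent implementation studied in the thesis.

```python
def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program for the length of the
    longest common subsequence of sequences a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

if __name__ == "__main__":
    print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4 (e.g. "BCBA")
```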

A Highly Fault-Tolerant Distributed Database System with Replicated Data

Description: Because of the high cost and impracticality of a high-connectivity network, most recent research in transaction processing has focused on distributed replicated database systems. In such a system, multiple copies of a data item are created and stored at several sites in the network, so that the system is able to tolerate more crash and communication failures and attain higher data availability. However, the multiple copies also introduce a global inconsistency problem, especially in a partitioned network. In this dissertation a tree quorum algorithm is proposed to solve this problem, imposing a logical tree structure, along with dynamic system reconfiguration, on all the copies of each data item. The proposed algorithm can be viewed as a dynamic voting technique which, with the help of an appropriate concurrency control algorithm, exhibits the major advantages of quorum-based replica control algorithms and of the available-copies algorithm, so that a single copy is read for a read operation and a quorum of copies is written for a write operation. In addition, read and write quorums are computed dynamically and independently. As a result, expensive read operations, like those that require several copies of a data item to be read in most quorum schemes, are eliminated. Furthermore, the message costs of read and write operations are reduced by the use of smaller quorum sizes: quorum sizes can be reduced to a constant in a lightly loaded system, to log n in a failure-free network, and to ⌈(n + 1)/2⌉ in a partitioned network under heavy load. On average, the algorithm requires fewer messages than the best known tree quorum algorithm, while still maintaining the same upper bound on quorum size. One-copy serializability is guaranteed with higher data availability and the highest degree of fault tolerance (up to n - 1 site ... (A brief tree-structured quorum sketch follows this entry.)
Date: December 1994
Creator: Lin, Tsai S. (Tsai Shooumeei)
Partner: UNT Libraries
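
The dissertation's own tree quorum algorithm is not reproduced in the abstract; the sketch below gives the general flavor of a tree-structured quorum with a simplified rule: a reachable node plus a quorum from one of its subtrees (a root-to-leaf path of size about log n when nothing has failed), or quorums from a majority of its subtrees when the node is unreachable. This is an illustrative simplification, not the algorithm proposed above, and ignores dynamic reconfiguration.

```python
import math

class Replica:
    """A copy of the data item placed at one node of the logical tree."""
    def __init__(self, name, children=(), up=True):
        self.name = name
        self.children = list(children)
        self.up = up  # True if this site is currently reachable

def tree_quorum(node):
    """Return a set of site names forming a quorum for the subtree rooted at
    `node`, or None if no quorum can be formed under the simplified rule."""
    if not node.children:
        return {node.name} if node.up else None
    child_quorums = [tree_quorum(c) for c in node.children]
    if node.up:
        for q in child_quorums:
            if q is not None:
                return {node.name} | q        # failure-free case: a root-to-leaf path
        return None
    # Node unreachable: substitute quorums from a majority of its subtrees.
    live = [q for q in child_quorums if q is not None]
    majority = math.floor(len(node.children) / 2) + 1
    if len(live) >= majority:
        return set().union(*live[:majority])
    return None

if __name__ == "__main__":
    leaves = {n: Replica(n) for n in "defg"}
    tree = Replica("a", [Replica("b", [leaves["d"], leaves["e"]]),
                         Replica("c", [leaves["f"], leaves["g"]])])
    print(tree_quorum(tree))   # e.g. {'a', 'b', 'd'} -- a small path-shaped quorum
    tree.up = False            # the root site fails
    print(tree_quorum(tree))   # quorums from a majority of subtrees are needed
```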

Recognition of Face Images

Description: The focus of this dissertation is a methodology that enables computer systems to classify different up-front images of human faces as belonging to one of the individuals to which the system has previously been exposed. The images may vary in size, location of the face, orientation, facial expression, and overall illumination. The approach taken in this dissertation can be classified as analytic, since the shapes of individual features of human faces are examined separately, as opposed to holistic approaches to face recognition. The outline of each feature is used to construct a signature function. These functions are then magnitude-, period-, and phase-normalized to form a translation-, size-, and rotation-invariant representation of the features. Vectors of a limited number of Fourier decomposition coefficients of these functions form the feature vectors representing the features in the corresponding vector space. With this approach no computation is necessary to enforce translational, size, or rotational invariance at the recognition stage, thus reducing recognition to a k-dimensional clustering problem. A recognizer is specified that can reliably classify the vectors of the feature space into object classes. The recognizer uses the following principle: a trial vector is classified into the class with the greatest number of closest vectors (in the sense of Euclidean distance) among all vectors representing the same feature in the database of known individuals. A system based on this methodology is implemented and tested on a set of 50 pictures of 10 individuals (5 pictures per individual). The recognition rate is comparable to the most recent results in the area of face recognition. The methodology presented in this dissertation is also applicable to any problem of pattern recognition where patterns can be represented as a collection of black ... (A brief Fourier-descriptor sketch follows this entry.)
Date: December 1994
Creator: Pershits, Edward
Partner: UNT Libraries
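
The dissertation's signature functions are not given in the abstract; the sketch below illustrates the general idea of describing a closed feature outline with a few Fourier coefficients normalized for translation, scale, and rotation invariance. It uses the common complex-contour Fourier descriptor, which is an assumption for illustration, not necessarily the representation used in the work above.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=8):
    """Translation-, scale-, and rotation-invariant Fourier descriptors of a
    closed outline given as an (N, 2) array of boundary points."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop DC term -> translation invariance
    mags = np.abs(coeffs)                          # drop phase -> rotation invariance
    mags = mags / mags[1]                          # divide by first harmonic -> size invariance
    return mags[1:1 + n_coeffs]                    # compact feature vector

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ellipse = np.column_stack([3 * np.cos(t), np.sin(t)])
    # The same outline shifted, doubled in size, and rotated should give
    # (nearly) the same descriptor vector.
    rot = np.array([[np.cos(0.5), -np.sin(0.5)], [np.sin(0.5), np.cos(0.5)]])
    moved = 2 * ellipse @ rot.T + np.array([10.0, -5.0])
    print(np.allclose(fourier_descriptors(ellipse), fourier_descriptors(moved), atol=1e-6))
```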

Intrinsic and Extrinsic Adaptation in a Simulated Combat Environment

Description: Genetic algorithm and artificial life techniques are applied to the development of challenging and interesting opponents in a combat-based computer game. Computer simulations are carried out against an idealized human player to gather data on the effectiveness of the computer generated opponents.
Date: May 1995
Creator: Dombrowsky, Steven P. (Steven Paul)
Partner: UNT Libraries

Practical Cursive Script Recognition

Description: This research focused on off-line cursive script recognition. The problem is large and difficult, and there is much room for improvement in every aspect of it. Many different aspects of the problem were explored in pursuit of solutions that would make off-line cursive script recognition more practical and usable than what is currently available.
Date: August 1995
Creator: Carroll, Johnny Glen, 1953-
Partner: UNT Libraries

An Algorithm for the PLA Equivalence Problem

Description: The Programmable Logic Array (PLA) has been widely used in the design of VLSI circuits and systems because of its regularity, flexibility, and simplicity. The equivalence problem is typically to verify that the final description of a circuit is functionally equivalent to its initial description. Verifying the functional equivalence of two descriptions is equivalent to proving their logical equivalence. This problem of pure logic is essential to circuit design. The most widely used technique for solving the problem is based on the Binary Decision Diagram (BDD), proposed by Bryant in 1986. Unfortunately, BDDs require too much time and space to represent moderately large circuits for equivalence testing. We design and implement a new algorithm, called the Cover-Merge Algorithm, for the equivalence problem based on a divide-and-conquer strategy using the concept of a cover and a derivational method. We prove that the algorithm is sound and complete. Because of the NP-completeness of the problem, we emphasize simplifications to reduce the search space and to avoid redundant computations. Simplification techniques are incorporated into the algorithm as an essential part to speed up the derivation process. Two different sets of heuristics are developed for two opposite goals: one for the proof of equivalence and the other for its disproof. Experiments on a large amount of data have shown that big speed-ups can be achieved by prioritizing the heuristics and by choosing the most favorable one at each iteration of the algorithm. Results are compared with those for BDDs on standard benchmark problems as well as on random PLAs to provide an unbiased test of the algorithms. It is shown that the Cover-Merge Algorithm outperforms BDDs in nearly all problem instances in terms of time and space. The algorithm has demonstrated fairly stable and practical performance, especially for big PLAs, under a wide ... (A brief divide-and-conquer equivalence-check sketch follows this entry.)
Date: December 1995
Creator: Moon, Gyo Sik
Partner: UNT Libraries
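
The Cover-Merge Algorithm itself is not described in enough detail here to reproduce. As a point of reference, the sketch below checks the equivalence of two single-output functions given as sum-of-products covers by divide-and-conquer Shannon expansion (cofactoring on one variable at a time); it conveys the general cover-based, divide-and-conquer flavor but is not the dissertation's algorithm, and the cube representation is an assumption.

```python
def cofactor(cover, var, value):
    """Cofactor of a sum-of-products cover with respect to var = value.
    A cover is a list of cubes; a cube is a dict {var: 0 or 1}
    (missing variables are don't-cares)."""
    result = []
    for cube in cover:
        if var in cube and cube[var] != value:
            continue                          # cube vanishes under this assignment
        result.append({v: b for v, b in cube.items() if v != var})
    return result

def equivalent(cover_f, cover_g, variables):
    """Divide-and-conquer check: f == g iff their cofactors with respect to
    each value of the splitting variable are equivalent."""
    if not variables:
        # No variables left: a cover is the constant 1 iff any cube survived.
        return bool(cover_f) == bool(cover_g)
    var, rest = variables[0], variables[1:]
    return (equivalent(cofactor(cover_f, var, 0), cofactor(cover_g, var, 0), rest) and
            equivalent(cofactor(cover_f, var, 1), cofactor(cover_g, var, 1), rest))

if __name__ == "__main__":
    # f = a'b + ab' + ab   vs   g = a + b   (equivalent)
    f = [{'a': 0, 'b': 1}, {'a': 1, 'b': 0}, {'a': 1, 'b': 1}]
    g = [{'a': 1}, {'b': 1}]
    print(equivalent(f, g, ['a', 'b']))       # True
    h = [{'a': 1, 'b': 1}]                    # h = ab
    print(equivalent(f, h, ['a', 'b']))       # False
```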

Convexity-Preserving Scattered Data Interpolation

Description: Surface fitting methods play an important role in many scientific fields as well as in computer aided geometric design. The problem treated here is that of constructing a smooth surface that interpolates data values associated with scattered nodes in the plane. The data is said to be convex if there exists a convex interpolant. The problem of convexity-preserving interpolation is to determine if the data is convex, and construct a convex interpolant if it exists.
Date: December 1995
Creator: Leung, Nim Keung
Partner: UNT Libraries

Quantifying Design Principles in Reusable Software Components

Description: Software reuse can occur at various points during the software development cycle. Reuse of existing source code is the most commonly practiced form of software reuse. One of the key requirements for software reuse is readability; hence the interest in the use of data abstraction, inheritance, modularity, and aspects of the visible portion of module specifications. This research analyzed the contents of software reuse libraries to answer the basic question of what makes a good reusable software component. The approach taken was to measure and analyze various software metrics as mapped to design characteristics. A related research question investigated the change in design principles over time, measured by comparing sets of Ada reuse libraries categorized into two time periods. It was discovered that recently developed Ada reuse components scored better on readability than earlier developed components. A benefit of this research has been the development of a set of "design for reuse" guidelines, which address coding practices as well as design principles for an Ada implementation. C++ software reuse libraries were also analyzed to determine whether design principles can be applied in a language-independent fashion. This research used cyclomatic complexity metrics, software science metrics, and traditional static code metrics to measure design features. This research provides at least three original contributions. First, it collects empirical data about existing reuse libraries. Second, it develops a readability measure for software libraries which can aid in comparing libraries. Third, it provides a set of coding and design guidelines for developers of reusable software. Future research can investigate how design principles for C++ change over time. Another topic for research is the investigation of systems employing reused components to determine which libraries are more successfully used than others. (A brief cyclomatic-complexity sketch follows this entry.)
Date: December 1995
Creator: Moore, Freeman Leroy
Partner: UNT Libraries
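
Cyclomatic complexity, one of the metrics named above, is computed directly from a control-flow graph by McCabe's formula M = E - N + 2P; a minimal sketch follows. The formula is standard; the tiny example graph is illustrative and is not the tooling used in the research.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity M = E - N + 2P for a control-flow graph
    with E edges, N nodes and P connected components (procedures)."""
    return len(edges) - len(nodes) + 2 * components

if __name__ == "__main__":
    # Control-flow graph of a routine with a single if/else:
    #   entry -> test -> then -> exit
    #            test -> else -> exit
    nodes = ["entry", "test", "then", "else", "exit"]
    edges = [("entry", "test"), ("test", "then"), ("test", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(edges, nodes))   # 5 - 5 + 2 = 2
```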

Rollback Reduction Techniques Through Load Balancing in Optimistic Parallel Discrete Event Simulation

Description: Discrete event simulation is an important tool for modeling and analysis. Some simulation applications, such as telecommunication network performance, VLSI logic circuit design, and battlefield simulation, require an enormous amount of computing resources. One way to satisfy this demand for computing power is to decompose the simulation system into several logical processes (LPs) and run them concurrently. In any parallel discrete event simulation (PDES) system, the events are ordered according to their time of occurrence, and for the simulation to be correct, this ordering must be preserved. There are three approaches to maintaining this ordering. In a conservative system, no LP executes an event unless it is certain that all events with earlier time-stamps have been executed; such systems are prone to deadlock. In an optimistic system, on the other hand, simulation progresses disregarding this ordering and the system states are saved regularly; whenever a causality violation is detected, the system rolls back to a state saved earlier and restarts processing after correcting the error. In a third approach, all the LPs participate in the computation of a safe time-window, and all events with time-stamps within this window are processed concurrently. In optimistic simulation systems there is a global virtual time (GVT), which is the minimum of the time-stamps of all the events existing in the system. The system cannot roll back to a state prior to GVT, and hence all such states can be discarded. GVT is used for memory management, load balancing, termination detection, and committing of events; however, GVT computation introduces additional overhead. In optimistic systems, a large number of rollbacks can degrade system performance considerably. We have studied the effect of load balancing in reducing the number of rollbacks in such systems. We have designed three load balancing algorithms and implemented two of ... (A brief GVT-computation sketch follows this entry.)
Date: May 1996
Creator: Sarkar, Falguni
Partner: UNT Libraries
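
GVT, as described above, is the minimum time-stamp over all unprocessed events and messages in the system. The sketch below is a minimal, centralized approximation of that computation; it ignores the hard distributed problem of accounting for messages in flight without stopping the simulation, and the class and attribute names are illustrative assumptions.

```python
def compute_gvt(logical_processes, in_transit_messages):
    """Approximate global virtual time: the minimum time-stamp among all
    unprocessed events at every logical process and all messages still in
    transit.  States saved before GVT can be reclaimed (fossil collection)."""
    candidates = [msg.timestamp for msg in in_transit_messages]
    for lp in logical_processes:
        candidates.extend(ev.timestamp for ev in lp.pending_events)
    return min(candidates, default=float("inf"))

class Event:
    def __init__(self, timestamp):
        self.timestamp = timestamp

class LP:
    def __init__(self, pending):
        self.pending_events = [Event(t) for t in pending]

if __name__ == "__main__":
    lps = [LP([12.0, 19.5]), LP([8.2]), LP([])]
    in_transit = [Event(10.1)]
    print(compute_gvt(lps, in_transit))  # 8.2 -> states before 8.2 can be discarded
```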

Symplectic Integration of Nonseparable Hamiltonian Systems

Description: Numerical methods are usually necessary for solving Hamiltonian systems since there is often no closed-form solution. By exploiting a general property of Hamiltonians, namely the symplectic property, the qualitative features of the system may be preserved for indefinitely long integration times because all of the integral (Poincaré) invariants are conserved. This allows for more reliable results and frequently leads to significantly shorter execution times compared to conventional methods. The resonant-triad Hamiltonian with one degree of freedom is the focus of most of the numerical tests because of its difficult nature and because analytical results exist against which useful comparisons can be made. (A brief symplectic-integrator sketch follows this entry.)
Date: May 1996
Creator: Curry, David M. (David Mason)
Partner: UNT Libraries
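
To illustrate the idea of a symplectic integrator, the sketch below applies first-order symplectic Euler to the simple pendulum, a separable Hamiltonian chosen only for brevity; the nonseparable systems treated in the dissertation require implicit or otherwise specialized schemes, and the resonant-triad Hamiltonian is not reproduced here.

```python
import math

def symplectic_euler(q, p, dt, steps, dH_dq, dH_dp):
    """First-order symplectic Euler for a separable Hamiltonian H(q, p):
    update p using the old q, then q using the new p.  (Nonseparable
    Hamiltonians, as studied above, need implicit variants.)"""
    for _ in range(steps):
        p = p - dt * dH_dq(q)
        q = q + dt * dH_dp(p)
    return q, p

if __name__ == "__main__":
    # Simple pendulum: H = p^2/2 - cos(q)
    dH_dq = lambda q: math.sin(q)
    dH_dp = lambda p: p
    q, p = 1.0, 0.0
    H0 = p * p / 2 - math.cos(q)
    q, p = symplectic_euler(q, p, dt=0.01, steps=100_000, dH_dq=dH_dq, dH_dp=dH_dp)
    H1 = p * p / 2 - math.cos(q)
    # The energy error stays bounded over long times instead of drifting,
    # which is the practical payoff of preserving the symplectic structure.
    print(f"H0={H0:.6f}  H1={H1:.6f}  drift={abs(H1 - H0):.2e}")
```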

A Machine Learning Method Suitable for Dynamic Domains

Description: The efficacy of a machine learning technique is domain dependent. Some machine learning techniques work very well for certain domains but are ill-suited for others. One area of real-world concern is the flexibility with which machine learning techniques can adapt to dynamic domains. Currently, there are no known reports of any system that can learn dynamic domains, short of starting over (i.e., re-running the program). Starting over is neither time nor cost efficient for real-world production environments. This dissertation studied a method, referred to as Experience Based Learning (EBL), that attempts to deal with conditions related to learning dynamic domains. EBL is an extension of instance-based learning methods. The hypothesis of this research was that the EBL method would automatically adjust to domain changes and still provide classification accuracy similar to methods that require starting over. To test this hypothesis, twelve widely studied machine learning datasets were used. A dynamic domain was simulated by presenting these datasets in an uninterrupted cycle of train, test, and retrain. The order of the twelve datasets and the order of records within each dataset were randomized to control for order biases in each of ten runs. As a result, these methods provided datasets that represent extreme levels of domain change. Using the above datasets, EBL's mean classification accuracy for each dataset was compared to the published static-domain results of other machine learning systems. The results indicated that the EBL system's performance was not statistically different (p > 0.30) from that of the other machine learning methods. These results indicate that the EBL system is able to adjust to an extreme level of domain change and still produce satisfactory results. This finding supports the use of the EBL method in real-world environments that incur rapid changes to both variables and ... (A brief instance-based classifier sketch follows this entry.)
Date: July 1996
Creator: Rowe, Michael C. (Michael Charles)
Partner: UNT Libraries
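
EBL is described above only as an extension of instance-based learning. The sketch below shows a plain 1-nearest-neighbor classifier that keeps absorbing new labeled records as they arrive, which conveys why instance-based methods lend themselves to dynamic domains; it is an illustration of the family of methods, not the EBL algorithm itself.

```python
import math

class IncrementalNearestNeighbor:
    """1-nearest-neighbor classifier that can absorb new labeled examples at
    any time, adapting to a changing domain without retraining from scratch."""
    def __init__(self):
        self.examples = []          # list of (feature_vector, label)

    def learn(self, x, label):
        self.examples.append((list(x), label))

    def classify(self, x):
        if not self.examples:
            return None
        return min(self.examples, key=lambda e: math.dist(e[0], x))[1]

if __name__ == "__main__":
    clf = IncrementalNearestNeighbor()
    clf.learn([0.0, 0.0], "A")
    clf.learn([1.0, 1.0], "B")
    print(clf.classify([0.2, 0.1]))   # A
    # The domain shifts: points near the origin now belong to class C.
    clf.learn([0.1, 0.0], "C")
    print(clf.classify([0.2, 0.1]))   # C, without re-running training
```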

Automatic Speech Recognition Using Finite Inductive Sequences

Description: This dissertation addresses the general problem of recognition of acoustic signals, which may be derived from speech, sonar, or other acoustic phenomena. The specific problem of recognizing speech is the main focus of this research. The intention is to design a recognition system for a definite number of discrete words. For this purpose, eight isolated words from the TIMIT database are selected: four medium-length words, "greasy," "dark," "wash," and "water," and four short words, "she," "had," "in," and "all." The recognition system addresses the following issues: filtering or preprocessing, training, and decision-making. The preprocessing phase uses linear predictive coding of order 12. Following the filtering process, a vector quantization method is used to further reduce the input data and generate a finite inductive sequence of symbols representative of each input signal. The sequences generated by the vector quantization process for the same word are factored, and a single ruling or reference template is generated and stored in a codebook. This system introduces a new modeling technique which relies heavily on the basic concept that all finite sequences are finitely inductive; this technique is used in the training stage. To accommodate the variability in speech, the training is performed casually, and a large number of training speakers from eight different dialect regions is used; hence, a speaker-independent recognition system is realized. The matching process compares the incoming speech with each of the stored templates, and a closeness ratio is computed. A ratio table is generated, and the matching word that corresponds to the smallest ratio (i.e., indicating that the ruling has removed most of the symbols) is selected. Promising results were obtained for isolated words, and the recognition rates ranged between 50% and 100%. (A brief vector-quantization sketch follows this entry.)
Date: August 1996
Creator: Cherri, Mona Youssef, 1956-
Partner: UNT Libraries
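
The vector quantization step mentioned above maps each frame of LPC coefficients to the index of its nearest codebook entry, producing the finite symbol sequence that is then factored. A minimal nearest-codeword sketch follows; the codebook training itself (e.g. by an LBG-style procedure) is omitted, and the tiny 2-D codebook is invented purely for illustration.

```python
import math

def quantize(frames, codebook):
    """Map each feature frame to the index of its nearest codebook vector,
    turning a sequence of real-valued frames into a finite symbol sequence."""
    symbols = []
    for frame in frames:
        distances = [math.dist(frame, code) for code in codebook]
        symbols.append(distances.index(min(distances)))
    return symbols

if __name__ == "__main__":
    # Illustrative 2-D "LPC" frames and a 3-entry codebook.
    codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
    frames = [[0.1, 0.1], [0.9, 0.2], [0.2, 0.8], [0.05, -0.1]]
    print(quantize(frames, codebook))   # -> [0, 1, 2, 0]
```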

Practical Parallel Processing

Description: The physical limitations of uniprocessors and the real-time requirements of numerous practical applications have made parallel processing an essential technology in military, industrial, and scientific research. In this dissertation, we investigate parallelizations of three practical applications using three parallel machine models. The first, finitely inductive (FI) sequence processing, is a pattern recognition technique used in many fields. We propose four parallel FI algorithms on the EREW PRAM. The time complexity of parallel factoring and following by bucket packing is O(sk^2 n/p), and they are optimal under some conditions. Parallel factoring and following by hashing requires O(sk^2 n/p) time when uniform hash functions are used and log(p) ≤ kn/p and pm ≈ n. Their speedup is proportional to the number of processors used. In these results, s is the number of levels, k is the size of the antecedents, n is the length of the input sequence, and p is the number of processors. We also describe algorithms for raster/vector conversion based on the scan model to handle block-like connected components of arbitrary geometrical shape with multi-level nested doughnuts for the IES (image exploitation system). Both the parallel raster-to-vector algorithm and the parallel vector-to-raster algorithm require O(log(n^2)) or O(log^2(n^2)) time (depending on the sorting algorithm used) for images of size n^2 using p = n^2 processors. Finally, not only is the DWT (discrete wavelet transform) useful in data compression, but it also has potential in signal processing, image processing, and graphics, so it is of great importance to investigate efficient parallelizations of the wavelet transforms. The time complexity of the parallel forward DWT on the parallel virtual machine with a linear processor organization is O(((s0 + s1)mn)/p), where s0 and s1 are the lengths of the filters and p is the number of processors used. The time complexity of the ... (A brief wavelet-transform sketch follows this entry.)
Date: August 1996
Creator: Zhang, Hua, 1954-
Partner: UNT Libraries
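
As a small concrete instance of the DWT mentioned above, the sketch below performs one level of the sequential Haar wavelet transform, whose two filters each have length 2 (playing the role of s0 and s1 in the complexity expression). The parallel decomposition across p processors studied in the dissertation is not shown.

```python
import math

def haar_dwt_step(signal):
    """One level of the Haar discrete wavelet transform: split an even-length
    signal into approximation (low-pass) and detail (high-pass) coefficients."""
    assert len(signal) % 2 == 0, "signal length must be even"
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse_step(approx, detail):
    """Perfect reconstruction of the original signal from one Haar level."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out

if __name__ == "__main__":
    x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
    a, d = haar_dwt_step(x)
    print(a, d)
    print(haar_inverse_step(a, d))   # recovers x
```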

A Unifying Version Model for Objects and Schema in Object-Oriented Database System

Description: A number of different versioning models have been proposed. Research in this area can be divided into two categories: object versioning and schema versioning. In this dissertation, both problem domains are considered as a single unit. This dissertation describes a unifying version model (UVM) for maintaining changes to both objects and schema. UVM handles schema versioning operations by using object versioning techniques. The result is that the UVM allows the OODBMS to be much smaller than previous systems. Also, programmers need to know only one set of versioning operations, thus reducing the learning time by half. This dissertation shows that UVM is a simple but semantically sound and powerful version model for both objects and schema.
Date: August 1997
Creator: Shin, Dongil
Partner: UNT Libraries

Efficient Algorithms and Framework for Bandwidth Allocation, Quality-of-Service Provisioning and Location Management in Mobile Wireless Computing

Description: The fusion of computers and communications has promised to herald the age of the information superhighway over high-speed communication networks, where the ultimate goal is to enable a multitude of users, at any place, to access information from anywhere and at any time. This, in a nutshell, is the goal envisioned by Personal Communication Services (PCS) and Xerox's ubiquitous computing. In view of the remarkable growth in mobile communication users in the last few years, the radio frequency spectrum allocated by the FCC (Federal Communications Commission) to this service is still very limited, and the usable bandwidth is far less than the expected demand, particularly in view of the emergence of next-generation wireless multimedia applications such as video-on-demand, WWW browsing, and traveler information systems. Proper management of the available spectrum is necessary not only to accommodate these high-bandwidth applications, but also to alleviate problems due to sudden explosions of traffic in so-called hot cells. In this dissertation, we have developed simple load balancing techniques to cope with the problem of tele-traffic overloads in one or more hot cells in the system. The objective is to ease the high channel demand in hot cells by borrowing channels from suitable cold cells and by proper assignment (or re-assignment) of the channels among the users. We also investigate possible ways of improving system capacity by rescheduling bandwidth in the case of wireless multimedia traffic. In our proposed scheme, traffic using multiple channels releases one or more channels to increase the carried traffic or throughput in the system. Two orthogonal QoS parameters, called carried traffic and bandwidth degradation, are identified, and a cost function describing the total revenue earned by the system from a bandwidth degradation and call admission policy is formulated. A channel sharing scheme is proposed for ... (A brief channel-borrowing sketch follows this entry.)
Date: December 1997
Creator: Sen, Sanjoy Kumar
Partner: UNT Libraries
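
The borrowing idea described above, relieving hot cells with free channels from cold cells, can be illustrated with a very simplified sketch that ignores the co-channel interference constraints which are central in a real cellular system. The thresholds, data layout, and cell names are illustrative assumptions, not the dissertation's algorithms.

```python
def borrow_channels(cells, hot_threshold=0.9, cold_threshold=0.5):
    """Very simplified channel borrowing: cells whose demand/capacity ratio
    exceeds hot_threshold borrow spare channels from cells below cold_threshold.
    Each cell is a dict with 'name', 'capacity', and 'demand' (channels needed).
    Interference constraints between neighboring cells are ignored here."""
    loans = []
    hot = [c for c in cells if c["demand"] / c["capacity"] > hot_threshold]
    cold = [c for c in cells if c["demand"] / c["capacity"] < cold_threshold]
    for h in hot:
        needed = h["demand"] - h["capacity"]
        for c in cold:
            if needed <= 0:
                break
            give = min(c["capacity"] - c["demand"], needed)
            if give > 0:
                c["capacity"] -= give        # cold cell lends channels
                h["capacity"] += give        # hot cell gains capacity
                needed -= give
                loans.append((c["name"], h["name"], give))
    return loans

if __name__ == "__main__":
    cells = [{"name": "hot-1", "capacity": 20, "demand": 26},
             {"name": "cold-1", "capacity": 20, "demand": 5},
             {"name": "cold-2", "capacity": 20, "demand": 8}]
    print(borrow_channels(cells))   # -> [('cold-1', 'hot-1', 6)]
```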

Exon/Intron Discrimination Using the Finite Induction Pattern Matching Technique

Description: DNA sequence analysis involves precise discrimination of two of the sequence's most important components: exons and introns. Exons encode the proteins that are responsible for almost all the functions in a living organism. Introns interrupt the sequence coding for a protein and must be removed from primary RNA transcripts before translation to protein can occur. A pattern recognition technique called Finite Induction (FI) is utilized to study the language of exons and introns. FI is especially suited for analyzing and classifying large amounts of data representing sequences of interest. It requires no biological information and employs no statistical functions. Finite Induction is applied to the exon and intron components of DNA by building a collection of rules based upon what it finds in the sequences it examines. It then attempts to match the known rule patterns with new rules formed as a result of analyzing a new sequence. A high number of matches predicts a probable close relationship between the two sequences; a low number of matches signifies a large amount of difference between the two. This research demonstrates FI to be a viable tool for measurement when known patterns are available for the formation of rule sets.
Date: December 1997
Creator: Taylor, Pamela A., 1941-
Partner: UNT Libraries

Computational Complexity of Hopfield Networks

Description: There are three main results in this dissertation: the PLS-completeness of discrete Hopfield network convergence under eight different restrictions (degree 3; bipartite and degree 3; 8-neighbor mesh; dual of the knight's graph; hypercube; butterfly; cube-connected cycles; and shuffle-exchange), the exponential convergence behavior of discrete Hopfield networks, and the simulation of Turing machines by discrete Hopfield networks. (A brief Hopfield update sketch follows this entry.)
Date: August 1998
Creator: Tseng, Hung-Li
Partner: UNT Libraries
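
For readers unfamiliar with the model named in the title, the sketch below shows the standard discrete Hopfield network: asynchronous threshold updates over a symmetric weight matrix, which can only decrease the network energy, so the network settles into a stable state; how long that convergence takes is exactly the complexity question studied above. This is the textbook model, not code from the dissertation.

```python
import random

def hopfield_converge(weights, state, max_sweeps=100):
    """Asynchronous discrete Hopfield network: repeatedly update units with a
    threshold rule until no unit changes (a local minimum of the energy).
    `weights` is a symmetric matrix with zero diagonal; `state` holds +/-1."""
    n = len(state)
    for _ in range(max_sweeps):
        changed = False
        for i in random.sample(range(n), n):          # asynchronous update order
            field = sum(weights[i][j] * state[j] for j in range(n))
            new = 1 if field >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            return state                              # converged to a stable state
    return state

if __name__ == "__main__":
    # Hebbian-style weights storing the pattern [1, -1, 1] as a stable state.
    pattern = [1, -1, 1]
    w = [[0 if i == j else pattern[i] * pattern[j] for j in range(3)] for i in range(3)]
    print(hopfield_converge(w, [1, 1, 1]))            # converges to [1, -1, 1]
```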

An Adaptive Linearization Method for a Constraint Satisfaction Problem in Semiconductor Device Design Optimization

Description: Device optimization is a very important element in semiconductor technology advancement. Its objective is to find a design point for a semiconductor device such that the optimized design goal meets all specified constraints. As in other engineering fields, a nonlinear optimizer is often used for design optimization. One major drawback of using a nonlinear optimizer is that it can only partially explore the design space and return a locally optimal solution. This dissertation provides an adaptive optimization design methodology that allows the designer to explore the design space and obtain a globally optimal solution. One key element of our method is to quickly compute the set of all feasible solutions, also called the acceptability region. We describe a polytope-based representation for the acceptability region and an adaptive linearization technique for device performance model approximation. These efficiency enhancements have enabled significant speed-up in estimating acceptability regions and allow acceptability regions to be estimated for a larger class of device design tasks. Our linearization technique also provides an efficient mechanism to guarantee the global accuracy of the computed acceptability region. To visualize the acceptability region, we study the orthogonal projection of high-dimensional convex polytopes and propose an output-sensitive algorithm for projecting polytopes into two dimensions.
Date: May 1999
Creator: Chang, Chih-Hui, 1967-
Partner: UNT Libraries

The Design and Implementation of an Intelligent Agent-Based File System

Description: As bandwidth constraints on LAN/WAN environments decrease, the demand for distributed services will continue to increase. In particular, the proliferation of user-level applications requiring high-capacity distributed file storage systems will demand that such services be universally available. At the same time, the advent of high-speed networks has made the deployment of application and communication solutions based upon an Intelligent Mobile Agent (IMA) framework practical. Agents have proven to be an ideal development paradigm for the creation of autonomous large-scale distributed systems, and an agent-based communication scheme would facilitate the creation of independently administered distributed file services. This thesis thus outlines an architecture for such a distributed file system based upon an IMA communication framework.
Date: May 2000
Creator: Hopper, S. Andrew
Partner: UNT Libraries

Algorithms for Efficient Utilization of Wireless Bandwidth and to Provide Quality-of-Service in Wireless Networks

Description: This thesis presents algorithms that utilize the wireless bandwidth efficiently while meeting the quality of service (QoS) requirements of the users. In the proposed algorithms we present an adaptive frame structure based upon the airlink frame loss probability and control the admission of call requests into the system based upon the load on the system and the QoS requirements of the incoming call requests. The performance of the proposed algorithms is studied by developing analytical formulations and simulation experiments. Finally, we present an admission control algorithm which uses an adaptive delay computation algorithm to compute the queuing delay for each class of traffic and adapts the service rate and the reliability of the estimates based upon the deviation between the expected and obtained performance. We study the performance of the call admission control algorithm through simulation experiments. Simulation results for the adaptive frame structure algorithm show an improvement in the number of users in the system but a drop in system throughput; in spite of the lower throughput, the adaptive frame structure algorithm has fewer QoS delay violations. The adaptive call admission control algorithm adapts the call dropping probability of different classes of traffic and optimizes system performance with respect to the number of calls dropped and the reliability of meeting the QoS promised when a call is admitted into the system. (A brief admission-control sketch follows this entry.)
Date: August 2000
Creator: Kakani, Naveen Kumar
Partner: UNT Libraries
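
The admission control scheme above is not specified in enough detail here to reproduce; the sketch below shows a generic threshold-style call admission check that admits a call only when there is spare capacity and the estimated queuing delay for its class stays within the promised bound, to illustrate the kind of decision being adapted. All quantities and names are illustrative assumptions.

```python
def admit_call(current_load, call_bandwidth, capacity, est_delay, delay_bound):
    """Admit a call only if the cell has spare capacity for it and the
    estimated queuing delay for its traffic class stays within the QoS bound."""
    if current_load + call_bandwidth > capacity:
        return False                     # not enough bandwidth left
    if est_delay > delay_bound:
        return False                     # class already violating its delay target
    return True

if __name__ == "__main__":
    # A 2 Mb/s cell carrying 1.6 Mb/s; a 256 kb/s video call with a 40 ms bound.
    print(admit_call(current_load=1.6e6, call_bandwidth=2.56e5, capacity=2e6,
                     est_delay=0.025, delay_bound=0.040))   # True
    print(admit_call(current_load=1.9e6, call_bandwidth=2.56e5, capacity=2e6,
                     est_delay=0.025, delay_bound=0.040))   # False (no bandwidth)
```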

Multi-Agent Architecture for Internet Information Extraction and Visualization

Description: The World Wide Web is one of the largest sources of information, and more and more applications are being developed daily to make use of this information. This thesis presents a multi-agent architecture that deals with some of the issues related to Internet data extraction. The primary issue addresses the reliable, efficient and quick extraction of data through the use of HTTP performance monitoring agents. A second issue focuses on how to make use of available data to take decisions and alert the user when there is a change in the data; this is done with the help of user agents that are equipped with a defeasible reasoning interpreter. An additional issue is the visualization of extracted data; this is done with the aid of VRML visualization agents. The cited issues are discussed using stock portfolio management as an example application.
Access: This item is restricted to UNT Community Members.
Date: August 2000
Creator: Gollapally, Devender R.
Partner: UNT Libraries

Dynamic Grid-Based Data Distribution Management in Large Scale Distributed Simulations

Description: Distributed simulation is an enabling concept to support the networked interaction of models and real-world elements that are geographically distributed. This technology has brought a new set of challenging problems to solve, such as Data Distribution Management (DDM). The aim of DDM is to limit and control the volume of data exchanged during a distributed simulation and to reduce the processing requirements of the simulation hosts by relaying events and state information only to those applications that require them. In this thesis, we propose a new DDM scheme, which we refer to as dynamic grid-based DDM. A lightweight UNT-RTI has been developed and implemented to investigate the performance of our DDM scheme. Our results clearly indicate that our scheme is scalable and that it significantly reduces both the number of multicast groups used and the message overhead, when compared to previous grid-based allocation schemes, using large-scale and real-world scenarios. (A brief grid-cell mapping sketch follows this entry.)
Date: December 2000
Creator: Roy, Amber Joyce
Partner: UNT Libraries
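
In grid-based DDM, the routing space is partitioned into cells, each associated with a multicast group, and an update is forwarded only to the groups whose cells its region overlaps. The sketch below shows that cell-mapping step for a static grid; the dynamic grid adjustment proposed in the thesis is not shown, and the region sizes and grid dimensions are illustrative assumptions.

```python
def overlapped_cells(region, cell_size, grid_width, grid_height):
    """Return the grid-cell (multicast-group) indices overlapped by a
    rectangular region (xmin, ymin, xmax, ymax) in a routing space divided
    into cell_size x cell_size cells."""
    xmin, ymin, xmax, ymax = region
    cells = set()
    for gx in range(int(xmin // cell_size), int(xmax // cell_size) + 1):
        for gy in range(int(ymin // cell_size), int(ymax // cell_size) + 1):
            if 0 <= gx < grid_width and 0 <= gy < grid_height:
                cells.add(gy * grid_width + gx)      # one multicast group per cell
    return cells

if __name__ == "__main__":
    # 10x10 grid of 100-unit cells; a publisher's update region and a
    # subscriber's interest region need to communicate iff their cell sets intersect.
    update_region = (120, 40, 260, 90)
    interest_region = (200, 0, 310, 150)
    groups_u = overlapped_cells(update_region, 100, 10, 10)
    groups_s = overlapped_cells(interest_region, 100, 10, 10)
    print(sorted(groups_u), sorted(groups_s), bool(groups_u & groups_s))
```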