Search Results

Cross-validation, learning set transformations, and generalization

Description: This paper discusses using cross-validation as an aid to generalization. It starts by showing how to use cross-validation to decide amongst a set of generalizers when the learning set consists of examples of the text-to-phoneme problem. In addition to reproducing the learning set perfectly, the generalizer so chosen by cross-validation has an error rate on the testing set (7%) close to that which NETtalk has on the learning set (5%). This paper then presents an example of using cross-validation as part of a front-end to generalizers, i.e., as part of an algorithm for pre-processing a learning set of input/output examples before trying to generalize from it. The results of thirty-six comparisons between the performance of a generalizer and the performance of the generalizer with this front-end are presented. These comparisons involve numerical, Boolean, and visual tasks. In all but one of the comparisons the front-end improves the generalization performance, often inducing perfect generalization. (The average ratio of the generalization error rate with the front-end to the generalization error rate without the front-end is 0.23 ± 0.05.) Finally, the paper discusses some of the subtler mathematical issues involved in using cross-validation to help generalization. 7 refs., 3 figs.
Date: January 1, 1990
Creator: Wolpert, D.H.
Partner: UNT Libraries Government Documents Department
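
Illustration: a minimal sketch of using k-fold cross-validation to choose among candidate generalizers, in the spirit of the abstract above. The synthetic data, the polynomial-degree candidates, and the fold count are assumptions for illustration, not Wolpert's experimental setup.

```python
import numpy as np

def kfold_cv_error(X, y, fit, predict, k=5, seed=0):
    """Mean squared cross-validation error of one generalizer."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
    return np.mean(errs)

# Candidate "generalizers": least-squares polynomial fits of different degrees.
fit_poly = lambda d: (lambda X, y: np.polyfit(X, y, d))
predict_poly = lambda coef, X: np.polyval(coef, X)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 60)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(60)

# Cross-validation picks the degree with the smallest held-out error.
scores = {d: kfold_cv_error(X, y, fit_poly(d), predict_poly) for d in (1, 3, 5, 9)}
print(scores, "-> chosen degree:", min(scores, key=scores.get))
```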

Improved selection in totally monotone arrays

Description: This paper's main result is an O((√m lg m)(n lg n) + m lg n)-time algorithm for computing the kth smallest entry in each row of an m × n totally monotone array. (A two-dimensional array A = {a(i,j)} is totally monotone if for all i₁ < i₂ and j₁ < j₂, a(i₁,j₁) < a(i₁,j₂) implies a(i₂,j₁) < a(i₂,j₂).) For large values of k (in particular, for k = ⌈n/2⌉), this algorithm is significantly faster than the O(k(m+n))-time algorithm for the same problem due to Kravets and Park. An immediate consequence of this result is an O(n^(3/2) lg² n)-time algorithm for computing the kth nearest neighbor of each vertex of a convex n-gon. In addition to the main result, we also give an O(n lg m)-time algorithm for computing an approximate median in each row of an m × n totally monotone array; this approximate median is an entry whose rank in its row lies between ⌈n/4⌉ and ⌈3n/4⌉ − 1. 20 refs., 3 figs.
Date: January 1, 1991
Creator: Mansour, Y. (Harvard Univ., Cambridge, MA (United States). Aiken Computation Lab.); Park, J.K. (Sandia National Labs., Albuquerque, NM (United States)); Schieber, B. (International Business Machines Corp., Yorktown Heights, NY (United States). Thomas J. Watson Research Center) & Sen, S. (AT&T Bell Labs., Murray Hill, NJ (United States))
Partner: UNT Libraries Government Documents Department
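
Illustration: a brute-force check of the total-monotonicity condition as reconstructed in the abstract; it is O(m²n²) and only for small test arrays (the paper's algorithms assume the property rather than test it).

```python
import numpy as np

def is_totally_monotone(A):
    """For all i1 < i2 and j1 < j2:
    A[i1, j1] < A[i1, j2] must imply A[i2, j1] < A[i2, j2]."""
    m, n = A.shape
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    if A[i1, j1] < A[i1, j2] and not A[i2, j1] < A[i2, j2]:
                        return False
    return True

# A[i, j] = (i + 1)(j + 1): every row is increasing, so the
# implication holds trivially and the array is totally monotone.
A = np.outer(np.arange(1, 5), np.arange(1, 5))
print(is_totally_monotone(A))                     # True
print(is_totally_monotone(np.array([[1, 2],
                                    [2, 1]])))   # False
```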

An introduction to chordal graphs and clique trees

Description: Clique trees and chordal graphs have carved out a niche for themselves in recent work on sparse matrix algorithms, due primarily to research questions associated with advanced computer architectures. This paper is a unified and elementary introduction to the standard characterizations of chordal graphs and clique trees. The pace is leisurely, as detailed proofs of all results are included. We also briefly discuss applications of chordal graphs and clique trees in sparse matrix computations.
Date: November 1, 1992
Creator: Blair, J.R.S. (Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science) & Peyton, B.W. (Oak Ridge National Lab., TN (United States))
Partner: UNT Libraries Government Documents Department
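
Illustration: a minimal sketch of one standard characterization covered by introductions like this one: maximum cardinality search (MCS) visits vertices so that the reverse of its visit order is a perfect elimination ordering exactly when the graph is chordal. The dict-of-sets graph representation is an assumption for illustration.

```python
def mcs_order(adj):
    """Maximum cardinality search: repeatedly visit the unvisited vertex
    with the most visited neighbors; returns vertices in visit order."""
    order, visited = [], set()
    weight = {v: 0 for v in adj}
    while len(order) < len(adj):
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        visited.add(v)
        order.append(v)
        for u in adj[v]:
            if u not in visited:
                weight[u] += 1
    return order

def is_chordal(adj):
    """Chordal iff reversed MCS visit order is a perfect elimination ordering."""
    order = mcs_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            # Standard test: earlier neighbors, minus the latest-visited
            # one p, must all be adjacent to p.
            p = max(earlier, key=lambda u: pos[u])
            if not all(u == p or u in adj[p] for u in earlier):
                return False
    return True

# A 4-cycle is the smallest non-chordal graph; adding one chord fixes it.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_chordal(c4))          # False
c4[1].add(3); c4[3].add(1)
print(is_chordal(c4))          # True
```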

Graph-theoretic algorithm for hierarchical decomposition of dynamic systems

Description: A graph-theoretic scheme is proposed for partitioning dynamic systems into hierarchically ordered subsystems having independent inputs and outputs. The resulting subsystems are input-output reachable as well as structurally controllable and observable, so that a piece-by-piece design of estimators and controllers can be accomplished for systems of large dimension without excessive computing requirements.
Date: March 24, 1982
Creator: Pichai, V.; Sezer, M.E. & Siljak, D.D.
Partner: UNT Libraries Government Documents Department
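
Illustration: the hierarchically ordered subsystems in the abstract rest on reachability structure in the system digraph; a generic ingredient of such partitions is condensing the digraph into strongly connected components in topological order, sketched below with Kosaraju's algorithm. This shows only that generic step, not the authors' input-output reachability construction.

```python
def strongly_connected_components(adj):
    """Kosaraju: two DFS passes yield the strongly connected components
    in topological order of the condensation."""
    order, seen = [], set()

    def dfs(v, graph, out):
        # Iterative DFS recording vertices in post-order.
        stack = [(v, iter(graph[v]))]
        seen.add(v)
        while stack:
            u, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph[w])))
                    break
            else:
                stack.pop()
                out.append(u)

    for v in adj:
        if v not in seen:
            dfs(v, adj, order)

    rev = {v: [] for v in adj}
    for v in adj:
        for w in adj[v]:
            rev[w].append(v)

    seen.clear()
    comps = []
    for v in reversed(order):
        if v not in seen:
            comp = []
            dfs(v, rev, comp)
            comps.append(comp)
    return comps

# Toy system digraph: states 1 and 2 form a loop feeding state 3.
g = {1: [2], 2: [1, 3], 3: []}
print(strongly_connected_components(g))   # the {1, 2} block first, then [3]
```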

Compiling fast partial derivatives of functions given by algorithms

Description: If the gradient of the function y = f(x₁, …, xₙ) is desired, where f is given by an algorithm Af(x, n, y), most numerical analysts will use numerical differencing. This is a sampling scheme that approximates derivatives by the slopes of secants through closely spaced points. Symbolic methods that make full use of the program text of Af should be able to come up with a better way to evaluate the gradient of f. The system Jake described here produces gradients significantly faster than numerical differencing. Jake can handle algorithms Af with arbitrary flow of control. Measurements performed on one particular machine suggest that Jake is faster than numerical differencing for n > 8. Somewhat weaker results were obtained for the problem of computing Jacobians of arbitrary shape.
Date: January 1, 1980
Creator: Speelpenning, B.
Partner: UNT Libraries Government Documents Department
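
Illustration: a minimal sketch of the numerical-differencing baseline that the abstract contrasts Jake against: a forward-difference gradient re-evaluates f once per coordinate, so its cost grows with n. The test function and step size are assumptions for illustration.

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Forward-difference gradient: n extra evaluations of f,
    the per-coordinate cost that compiled derivatives avoid."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

# Example algorithm Af: f(x) = sum_i x_i^2 * x_{i+1}, whose gradient
# is known in closed form for comparison.
def f(x):
    return np.sum(x[:-1] ** 2 * x[1:])

def grad_exact(x):
    g = np.zeros_like(x)
    g[:-1] += 2 * x[:-1] * x[1:]
    g[1:] += x[:-1] ** 2
    return g

x = np.linspace(0.5, 1.5, 10)
print(np.max(np.abs(grad_fd(f, x) - grad_exact(x))))  # O(h) truncation error
```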

Some numerical results on best uniform polynomial approximation of x^α on [0,1]

Description: Let α be a positive number, and let E_n(x^α; [0,1]) denote the error of best uniform approximation to x^α, by polynomials of degree at most n, on the interval [0,1]. The Russian mathematician S.N. Bernstein established the existence of a nonnegative constant β(α) such that β(α) := lim_{n→∞} (2n)^(2α) E_n(x^α; [0,1]). In addition, Bernstein showed that β(α) < Γ(2α)|sin(πα)|/π (α > 0) and that Γ(2α)|sin(πα)|/π · (1 − 1/(2α − 1)) < β(α) (α > 1/2), so that the asymptotic behavior of β(α) is known as α → ∞. Still, the problem of trying to determine β(α) more precisely, for all α > 0, is intriguing. To this end, we have rigorously determined the numbers E_n(x^α; [0,1]) for thirteen values of α, calculated with a precision of at least 200 significant digits. For each of these thirteen values of α, Richardson's extrapolation was applied to the products (2n)^(2α) E_n(x^α; [0,1]) to obtain estimates of β(α) to approximately 40 decimal places. Included are graphs of the points (α, β(α)) for the thirteen values of α that we considered.
Date: January 1, 1992
Creator: Carpenter, A.J. & Varga, R.S.
Partner: UNT Libraries Government Documents Department
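
Illustration: a minimal sketch of the Richardson extrapolation step mentioned in the abstract, applied to a model sequence with a known limit and an assumed error expansion in powers of 1/n (with n doubling between entries); this stands in for, and is far cruder than, the authors' 200-digit computation.

```python
import numpy as np

def richardson(seq):
    """One extrapolation pass per column, assuming errors expand in
    powers of 1/n and n doubles between successive entries."""
    T = [np.array(seq, dtype=float)]
    for k in range(1, len(seq)):
        prev = T[-1]
        # Eliminate the leading 1/n^k error term.
        T.append((2**k * prev[1:] - prev[:-1]) / (2**k - 1))
    return T[-1][0]

L = np.pi                                   # model limit
ns = [2**j for j in range(1, 8)]
seq = [L + 1.0 / n + 0.3 / n**2 for n in ns]
print(abs(seq[-1] - L), abs(richardson(seq) - L))  # extrapolation wins
```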

Implementation of a hidden line and surface algorithm

Description: The implementation of a 3-dimensional hidden line and surface algorithm discussed by Hamlin and Gear (NASA Langley Research Report 77-1) is described, and its advantages and disadvantages are examined. The algorithm is organized so that data can be preprocessed to speed display computations and so that the display is generated in a form suitable for a raster-scan refresh display. The version of the program described here is a hidden-line program, but the underlying algorithm is the same for both hidden-line and hidden-surface removal. 11 figures, 3 tables.
Date: August 1, 1980
Creator: Schweitzer, M.L.
Partner: UNT Libraries Government Documents Department

A discounted-cost continuous-time flexible manufacturing and operator scheduling model solved by deconvexification over time

Description: A discounted-cost, continuous-time, infinite-horizon version of a flexible manufacturing and operator scheduling model is solved. The solution procedure is to convexify the discrete operator-assignment constraints to obtain a linear program, and then to regain the discreteness and obtain an approximate manufacturing schedule by deconvexification of the solution of the linear program over time. The strong features of the model are the accommodation of linear inequality relations among the manufacturing activities and the discrete manufacturing scheduling, whereas the weak features are intra-period relaxation of inventory availability constraints, and the absence of inventory costs, setup times, and setup charges.
Date: August 1, 1990
Creator: Eaves, B.C. & Rothblum, U.G.
Partner: UNT Libraries Government Documents Department
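
Illustration: a minimal sketch of the relax-then-deconvexify procedure on a toy schedule: binary operator-assignment variables are convexified to [0, 1], the linear program is solved, and discreteness is regained period by period by taking the largest fractional assignment. The toy rewards, horizon, and the use of scipy.optimize.linprog are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

T, J = 4, 3                      # periods, manufacturing activities
rng = np.random.default_rng(0)
reward = rng.uniform(1.0, 2.0, (T, J))
beta = 0.9                       # per-period discount factor
disc = beta ** np.arange(T)

# Relaxed LP: maximize discounted reward with one operator per period,
# variables x[t, j] in [0, 1] (the convexified assignment constraints).
c = -(disc[:, None] * reward).ravel()        # linprog minimizes
A_eq = np.zeros((T, T * J))
for t in range(T):
    A_eq[t, t * J:(t + 1) * J] = 1.0         # sum_j x[t, j] = 1
res = linprog(c, A_eq=A_eq, b_eq=np.ones(T), bounds=[(0, 1)] * (T * J))

# Deconvexification over time: round each period to its largest weight.
x = res.x.reshape(T, J)
schedule = x.argmax(axis=1)
print(schedule, "discounted reward:",
      np.sum(disc * reward[np.arange(T), schedule]))
```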

A one-class classifier for identifying urban areas in remotely-sensed data

Description: For many remote sensing applications, land cover can be determined by using spectral information alone. Identifying urban areas, however, requires the use of texture information, since these areas are not generally characterized by a unique spectral signature. We have designed a one-class classifier to discriminate between urban and non-urban data. The advantage of our classification technique is that principles of both statistical and adaptive pattern recognition are used simultaneously: new data that is completely dissimilar from the training data is prevented from being incorrectly classified, while decision-boundary adaptation still reduces classification error in overlap regions of the feature space. Results are illustrated using a LANDSAT scene of the city of Albuquerque.
Date: January 1, 1992
Creator: Kelly, P.M.; Hush, D.R. (New Mexico Univ., Albuquerque, NM (United States)) & White, J.M. (Los Alamos National Lab., NM (United States))
Partner: UNT Libraries Government Documents Department
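
Illustration: a minimal one-class sketch in the spirit of the abstract: fit a Gaussian to urban training features and reject anything beyond a Mahalanobis-distance threshold, so data completely dissimilar from the training set is never forced into the urban class. The synthetic features and the threshold are assumptions, not the authors' classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
urban = rng.normal([5.0, 2.0], [1.0, 0.5], (200, 2))   # training features

mu = urban.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(urban, rowvar=False))

def is_urban(x, thresh=3.0):
    """Accept only points within `thresh` Mahalanobis units of the class."""
    d = x - mu
    return np.sqrt(d @ cov_inv @ d) < thresh

print(is_urban(np.array([5.2, 1.9])))    # near the training data -> True
print(is_urban(np.array([20.0, -4.0])))  # completely dissimilar  -> False
```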

Block sparse Cholesky algorithms on advanced uniprocessor computers

Description: As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
Date: December 1, 1991
Creator: Ng, E.G. & Peyton, B.W.
Partner: UNT Libraries Government Documents Department
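
Illustration: a minimal dense sketch of the blocking idea the report studies for sparse factorization: a right-looking block Cholesky that factors a diagonal block, solves a triangular panel, and applies a matrix-matrix update to the trailing submatrix. Dense storage and the block size are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_triangular

def block_cholesky(A, nb=32):
    """Right-looking blocked Cholesky: returns L with A = L @ L.T (A SPD)."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Factor the diagonal block.
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            # Panel solve: L21 = A21 * L11^{-T}.
            A[e:, k:e] = solve_triangular(A[k:e, k:e], A[e:, k:e].T,
                                          lower=True).T
            # Trailing update: the matrix-matrix step that makes blocking pay.
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T
    return np.tril(A)

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)          # symmetric positive definite
L = block_cholesky(A)
print(np.max(np.abs(L @ L.T - A)))       # ~1e-12: a valid factorization
```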

A direct method for string to deterministic finite automaton conversion for fast text searching

Description: This paper describes a simple technique for generating a minimum-state deterministic finite automaton (DFA) directly from a restricted set of regular expressions. The resulting DFA is used for string searches that do not alter the target text and require only a single pass through the input. The technique is used for very fast, mixed or same case, single or multiple string searches. The technique is also capable of directly converting multiple strings with wild card character specifiers by constructing parallel DFAs. Construction of the automaton is performed in a time proportional to the length of the regular expression. Algorithms are given for construction of the automata and recognizers. Although the regular expression to DFA parser does not support all classes of regular expressions, it supports a sufficient subset to make it useful for the most commonly encountered text searching functions.
Date: January 1, 1991
Creator: Berlin, G.J.
Partner: UNT Libraries Government Documents Department
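
Illustration: a minimal sketch of direct pattern-to-DFA construction: the classic KMP-style automaton for a single literal string, built in time proportional to pattern length times alphabet size and then run in a single pass over the (unmodified) text. Restricting to one string, with case-insensitivity by lowercasing, is a simplification of the paper's multi-string, wild-card construction.

```python
def build_dfa(pattern, alphabet):
    """dfa[state][c] = next state after reading c; state == len(pattern)
    means a match just ended."""
    m = len(pattern)
    dfa = [dict.fromkeys(alphabet, 0) for _ in range(m)]
    dfa[0][pattern[0]] = 1
    x = 0                               # restart (mismatch fallback) state
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x][c]       # copy fallback transitions
        dfa[j][pattern[j]] = j + 1      # match transition
        x = dfa[x][pattern[j]]          # advance fallback state
    dfa.append(dict(dfa[x]))            # after a match, continue from fallback
    return dfa

def search(text, pattern):
    """Single pass over text; returns all case-insensitive match positions."""
    pattern, text = pattern.lower(), text.lower()
    alphabet = sorted(set(pattern) | set(text))
    dfa, m, state, hits = build_dfa(pattern, alphabet), len(pattern), 0, []
    for i, ch in enumerate(text):
        state = dfa[state][ch]
        if state == m:
            hits.append(i - m + 1)
    return hits

print(search("Abracadabra", "abra"))   # [0, 7]
```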

Uniformly redundant arrays

Description: A recent development in coded aperture imaging removed the basic limitation on the quality of the reconstructed images when a correlation analysis is used. A pattern of holes for the aperture was designed that has an autocorrelation that is perfectly flat. The pattern is referred to as a uniformly redundant array (URA) and combines the high-transmission characteristics of the random arrays and Fresnel zone plate with the flat sidelobe advantage of the nonredundant pinhole array. An analysis procedure was developed (balanced correlation) that, when combined with the URA, gives a system point-spread function which is exactly a delta function. Computer simulations were used to demonstrate that, with random arrays, the balanced correlation is able to improve the reconstructed images by orders of magnitude. Additional orders of magnitude improvement are possible if the URA pattern is used. 6 figures.
Date: January 1, 1977
Creator: Fenimore, E.E. & Cannon, T.M.
Partner: UNT Libraries Government Documents Department
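
Illustration: a minimal simulation of the balanced-correlation idea: an aperture pattern A encodes a point source by shift-and-add, and decoding correlates the record with G = 2A − 1, so the system point-spread function is the correlation of A with G. A random array is used here, so the sidelobes are only small on average; the URA construction (not reproduced here) would make the point-spread function exactly a delta function, as the abstract states.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 101
aperture = (rng.random(n) < 0.5).astype(float)   # 1 = open hole, 0 = opaque
G = 2.0 * aperture - 1.0                         # balanced decoding array

def cyc_corr(a, b):
    """Cyclic cross-correlation via FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# For a point source the recorded picture is the aperture pattern itself,
# so the reconstruction of a point source is the system PSF: A corr G.
psf = cyc_corr(aperture, G)
peak = psf.max()
sidelobe = np.abs(np.delete(psf, psf.argmax())).max()
print(peak, sidelobe)    # random array: sidelobes small but nonzero
```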

Sectioning in image processing

Description: For reasons of mathematical tractability, simplifying assumptions are usually made about the statistical nature of images that are to be processed. Among these assumptions are stationarity of the signal and noise processes and the space invariance of the blur. These assumptions rarely hold over a large image, but can be considered accurate over small subsections of an image. With this idea in mind, it is natural to divide an image into small, possibly overlapping, sections and process each section with consideration to its peculiar characteristics. This technique was used with success on problems involving iterative numerical image restorations, signal-dependent noise, and space-variant point spread functions.
Date: January 1, 1977
Creator: Trussell, H.J.
Partner: UNT Libraries Government Documents Department
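
Illustration: a minimal sketch of sectioning: a signal is split into overlapping sections, each section is processed with its own locally estimated statistics, and only each section's interior is kept, so the stationarity assumption need hold only locally. The section size, overlap, and toy normalization operation are assumptions for illustration.

```python
import numpy as np

def process_by_sections(x, section=64, overlap=16, op=None):
    """Apply `op` to overlapping sections; keep each section's interior."""
    step = section - 2 * overlap
    out = np.empty_like(x)
    for start in range(0, len(x), step):
        lo = max(start - overlap, 0)
        hi = min(start + step + overlap, len(x))
        piece = op(x[lo:hi])
        stop = min(start + step, len(x))
        out[start:stop] = piece[start - lo:start - lo + (stop - start)]
    return out

# Toy space-variant processing: each section is scaled by its own local
# mean and standard deviation rather than global statistics.
normalize = lambda s: (s - s.mean()) / (s.std() + 1e-12)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 10, 300)])
y = process_by_sections(x, op=normalize)
print(y[:300].std().round(2), y[300:].std().round(2))  # both near 1
```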

Predicting structure in nonsymmetric sparse matrix factorizations

Description: Many computations on sparse matrices have a phase that predicts the nonzero structure of the output, followed by a phase that actually performs the numerical computation. We study structure prediction for computations that involve nonsymmetric row and column permutations and nonsymmetric or non-square matrices. Our tools are bipartite graphs, matchings, and alternating paths. Our main new result concerns LU factorization with partial pivoting. We show that if a square matrix A has the strong Hall property (i.e., is fully indecomposable) then an upper bound due to George and Ng on the nonzero structure of L + U is as tight as possible. To show this, we prove a crucial result about alternating paths in strong Hall graphs. The alternating-paths theorem seems to be of independent interest: it can also be used to prove related results about structure prediction for QR factorization that are due to Coleman, Edenbrandt, Gilbert, Hare, Johnson, Olesky, Pothen, and van den Driessche.
Date: October 1, 1992
Creator: Gilbert, J.R. (Xerox Palo Alto Research Center, CA (United States)) & Ng, E.G. (Oak Ridge National Lab., TN (United States))
Partner: UNT Libraries Government Documents Department
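
Illustration: a minimal sketch of the bipartite matching machinery the abstract builds on: an augmenting-path maximum matching on the rows-versus-columns graph of a sparse matrix. A perfect matching (a nonzero diagonal after permutation) is the baseline Hall-type condition; the strong Hall property further requires every proper column subset to have strictly more neighbors, which this sketch does not test.

```python
def max_bipartite_matching(cols_of_row, n_cols):
    """Augmenting-path matching; returns size and the column-to-row map."""
    match_col = [-1] * n_cols

    def augment(r, seen):
        for c in cols_of_row[r]:
            if c not in seen:
                seen.add(c)
                if match_col[c] == -1 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False

    size = sum(augment(r, set()) for r in range(len(cols_of_row)))
    return size, match_col

# Nonzero structure of a small square matrix, listed row by row.
structure = [[0, 1], [1, 2], [0, 2]]
size, match_col = max_bipartite_matching(structure, 3)
print(size == 3)   # perfect matching: some permutation gives a nonzero diagonal
```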

A search for good lattice rules based on the reciprocal lattice generator matrix

Description: The search for cost-effective lattice rules is a time-consuming and difficult process. After a brief overview of some of the lattice theory relevant to these rules, a new approach to this search is suggested. This approach is based on a classification of lattice rules using the "upper triangular lattice form" of the reciprocal lattice generator matrix. 18 refs., 1 tab.
Date: January 1, 1989
Creator: Lyness, J.N. & Newman, W.
Partner: UNT Libraries Government Documents Department
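
Illustration: a minimal sketch of what a rank-1 lattice rule is, as background to the search the abstract describes: an n-point rule with generator vector z averages the integrand over the fractional parts of jz/n. The Fibonacci generators and the test integrand are textbook illustrations, not rules found by the paper's search.

```python
import numpy as np

def rank1_lattice_rule(f, n, z):
    """Approximate the integral of f over [0,1)^d with an n-point
    rank-1 lattice rule generated by the integer vector z."""
    j = np.arange(n)[:, None]
    points = (j * np.asarray(z)[None, :] / n) % 1.0
    return np.mean(f(points))

# Fibonacci lattice rules in 2-D: n = F_k, z = (1, F_{k-1}).
f = lambda x: np.prod(1.0 + 0.5 * np.cos(2 * np.pi * x), axis=1)  # integral = 1
for n, z in [(89, (1, 55)), (233, (1, 144))]:
    print(n, abs(rank1_lattice_rule(f, n, z) - 1.0))   # error shrinks fast
```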

Bilingual parallel programming

Description: Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
Date: January 1, 1990
Creator: Foster, I. & Overbeek, R.
Partner: UNT Libraries Government Documents Department

Numerical software: science or alchemy?

Description: This is a summary of the Forsythe lecture presented at the Computer Science Conference, Dayton, Ohio, in February 1979. It examines the activity called Numerical Software, first to see what distinguishes numerical software from any other form of software and why numerical software is so much more difficult. Then it examines the scientific basis of such software and discusses what is lacking in that basis.
Date: June 1, 1979
Creator: Gear, C.W.
Partner: UNT Libraries Government Documents Department

Comparison of density estimators. [Estimation of probability density functions]

Description: Recent work in the field of probability density estimation has included the introduction of some new methods, such as the polynomial and spline methods and the nearest neighbor method, as well as in-depth study of asymptotic properties. This earlier work is summarized here. In addition, the computational complexity of the various algorithms is analyzed and some simulation results are reported. The object is to compare the performance of the various methods on small samples and their sensitivity to changes in their parameters, and to attempt to discover at what point a sample is so small that density estimation is no longer worthwhile.
Date: September 1, 1977
Creator: Kao, S. & Monahan, J.F.
Partner: UNT Libraries Government Documents Department
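
Illustration: a minimal sketch comparing two of the estimator families the abstract surveys on a small sample: a fixed-bandwidth Gaussian kernel estimate and a k-th nearest neighbor estimate. The sample size, bandwidth, and k are assumptions for illustration.

```python
import numpy as np

def kde(sample, x, h):
    """Gaussian kernel density estimate with bandwidth h."""
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def knn_density(sample, x, k):
    """k-th nearest neighbor estimate: f(x) ~ k / (2 n R_k(x))."""
    d = np.sort(np.abs(x[:, None] - sample[None, :]), axis=1)
    return k / (2.0 * len(sample) * d[:, k - 1])

rng = np.random.default_rng(0)
sample = rng.standard_normal(50)          # small sample from a true N(0, 1)
x = np.linspace(-3, 3, 7)
true = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(np.round(true, 3))
print(np.round(kde(sample, x, h=0.5), 3))
print(np.round(knn_density(sample, x, k=7), 3))
```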

A structured representation for parallel algorithm design on multicomputers

Description: Traditionally, parallel algorithms have been designed by brute force and fine-tuned on each architecture to achieve high performance. Rather than treating each design case by case, this paper proposes a systematic approach. A notation is first developed; using this notation, most frequently used scientific and engineering applications can be represented by simple formulas. The formulas constitute the structured representation of the corresponding applications. The structured representation is simple, adequate, and easy to understand, and it contains sufficient information about uneven allocation and communication latency degradations. With the structured representation, applications can be compared, classified, and partitioned. Some of the basic building blocks of frequently used applications, called computation models, are identified and studied; most applications are combinations of computation models. The structured representation thus relates general applications to computation models, and studying the computation models leads to guidelines for efficient parallel algorithm design for general applications. 6 refs., 7 figs.
Date: January 1, 1991
Creator: Sun, Xian-He (Ames Lab., IA (USA)) & Ni, L.M. (Michigan State Univ., East Lansing, MI (USA). Dept. of Computer Science)
Partner: UNT Libraries Government Documents Department

Use of a genetic algorithm to analyze robust stability problems

Description: This note presents a genetic algorithm technique for testing the stability of a characteristic polynomial whose coefficients are functions of unknown but bounded parameters. This technique is fast and can handle a large number of parametric uncertainties. We also use this method to determine robust stability margins for uncertain polynomials. Several benchmark examples are included to illustrate the two uses of the algorithm. 27 refs., 4 figs.
Date: January 1, 1990
Creator: Murdock, T.M.; Schmitendorf, W.E. & Forrest, S.
Partner: UNT Libraries Government Documents Department
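
Illustration: a minimal sketch of the technique the note describes: a genetic algorithm searches the box of bounded parameters for a worst-case coefficient vector by maximizing the largest real part of the characteristic polynomial's roots; if the best individual found still has all roots in the open left half-plane, the family passes this (heuristic) stability test. The example polynomial family and the GA operators are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Characteristic polynomial s^3 + a s^2 + b s + c with bounded parameters.
LO = np.array([2.0, 3.0, 1.0])
HI = np.array([4.0, 6.0, 2.0])

def worst_root_real(q):
    a, b, c = q
    return np.max(np.roots([1.0, a, b, c]).real)   # fitness to maximize

def ga(pop_size=40, gens=60, mut=0.1):
    pop = rng.uniform(LO, HI, (pop_size, 3))
    for _ in range(gens):
        fit = np.array([worst_root_real(q) for q in pop])
        # Tournament selection of parents.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] > fit[j], i, j)]
        # Uniform crossover with a shuffled mate, then Gaussian mutation.
        mask = rng.random((pop_size, 3)) < 0.5
        children = np.where(mask, parents, parents[rng.permutation(pop_size)])
        children += mut * (HI - LO) * rng.standard_normal((pop_size, 3))
        pop = np.clip(children, LO, HI)
    fit = np.array([worst_root_real(q) for q in pop])
    return pop[fit.argmax()], fit.max()

q_star, margin = ga()
print(q_star, margin,
      "robustly stable" if margin < 0 else "destabilizing point found")
```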

A summary of block schemes for reducing a general matrix to Hessenberg form

Description: Various strategies have been proposed for arriving at block algorithms for reducing a general matrix to Hessenberg form by means of orthogonal similarity transformations. This paper reviews and systematically categorizes the various strategies and discusses their computational characteristics.
Date: February 1, 1993
Creator: Bischof, C.H.
Partner: UNT Libraries Government Documents Department
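
Illustration: a minimal unblocked sketch of the reduction the surveyed block schemes reorganize: Householder similarity transformations zero each column below the first subdiagonal. The blocked variants aggregate these reflectors for matrix-matrix operations; that aggregation is omitted here.

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form via Householder similarity
    transformations (unblocked); eigenvalues are preserved."""
    H = A.astype(float).copy()
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if np.linalg.norm(v) == 0:
            continue
        v /= np.linalg.norm(v)
        # Apply P = I - 2 v v^T from the left and the right.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = hessenberg(A)
print(np.allclose(np.tril(H, -2), 0))    # Hessenberg structure achieved
print(np.allclose(np.sort(np.linalg.eigvals(H)),
                  np.sort(np.linalg.eigvals(A))))
```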

Optimal parameters for linear second-degree stationary iterative methods

Description: It is shown that the optimal parameters for linear second-degree stationary iterative methods applied to nonsymmetric linear systems can be found by solving the same minimax problem used to find optimal parameters for the Tchebychev iteration. In fact, the Tchebychev iteration is asymptotically equivalent to a linear second-degree stationary method. The method of finding optimal parameters for the Tchebychev iteration given by Manteuffel (Numer. Math., 28, 307-27 (1977)) can be used to find optimal parameters for the stationary method as well. 1 figure.
Date: November 1, 1980
Creator: Manteuffel, T. A.
Partner: UNT Libraries Government Documents Department
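
Illustration: a minimal sketch of the Tchebychev iteration the abstract refers to, for a symmetric positive definite system with known spectral bounds; its three-term recurrence is the second-degree form in question. The test matrix and the use of exact eigenvalue bounds are assumptions for illustration (the recurrence follows Saad, Iterative Methods for Sparse Linear Systems, Alg. 12.1).

```python
import numpy as np

def chebyshev_iteration(A, b, lam_min, lam_max, iters=60):
    """Chebyshev iteration: needs only the spectral bounds, no inner products."""
    theta = 0.5 * (lam_max + lam_min)    # center of the spectrum interval
    delta = 0.5 * (lam_max - lam_min)    # half-width of the interval
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b - A @ x
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # symmetric positive definite
lam = np.linalg.eigvalsh(A)
b = rng.standard_normal(50)
x = chebyshev_iteration(A, b, lam[0], lam[-1])
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # small residual
```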

Numerical considerations in computing invariant subspaces

Description: This paper describes two methods for computing the invariant subspace of a matrix. The first involves using transformations to interchange the eigenvalues; the second involves direct computation of the vectors. 10 refs.
Date: November 1, 1990
Creator: Dongarra, J.J. (Tennessee Univ., Knoxville, TN (USA). Dept. of Computer Science; Oak Ridge National Lab., TN (USA)); Hammarling, S. (Numerical Algorithms Group Ltd., Oxford (UK)) & Wilkinson, J.H. (Oak Ridge National Lab., TN (USA))
Partner: UNT Libraries Government Documents Department
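
Illustration: a minimal sketch of the first approach the abstract mentions, obtaining an invariant subspace by reordering eigenvalues in a Schur form: SciPy's sorted real Schur decomposition moves the eigenvalues with negative real part to the leading block, whose Schur vectors then span the corresponding invariant subspace. The example matrix and the left-half-plane selection are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

# Real Schur form with eigenvalues sorted so the 'lhp' ones (negative
# real part) come first; sdim is how many were selected.
T, Z, sdim = schur(A, output='real', sort='lhp')
Q = Z[:, :sdim]            # orthonormal basis for the invariant subspace

# Invariance check: A @ Q stays in span(Q), i.e. A Q = Q (Q^T A Q).
print(sdim, np.linalg.norm(A @ Q - Q @ (Q.T @ A @ Q)))
```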