73 Matching Results

Search Results

Entropy based comparison of neural networks for classification

Description: In recent years, multilayer feedforward neural networks (NNs) have been shown to be very effective tools in many different applications. A natural and essential step in continuing the diffusion of these tools into day-to-day use is their hardware implementation, which is by far the most cost-effective solution for large-scale use. When a hardware implementation is contemplated, the size of the NN becomes crucial because the size is directly proportional to the cost of the implementation. In this light, any theoretical result that establishes bounds on the size of a NN for a given problem is extremely important. In the same context, a particularly interesting case is that of neural networks using limited integer weights. These networks are particularly suitable for hardware implementation because they need less space for storing the weights, and fixed-point, limited-precision arithmetic is much cheaper to implement than its floating-point counterpart. This paper presents an entropy-based analysis which completes, unifies, and correlates results partially presented in [Beiu, 1996, 1997a] and [Draghici, 1997]. Tight bounds for real- and integer-weight neural networks are calculated.
Date: April 1, 1997
Creator: Draghici, S. & Beiu, V.
Partner: UNT Libraries Government Documents Department

Phase unwrapping using discontinuity optimization

Description: In SAR interferometry, the periodicity of the phase must be removed using two-dimensional phase unwrapping. The goal of the procedure is to find a smooth surface in which large spatial phase differences, called discontinuities, are restricted to places where their presence is reasonable. The pioneering work of Goldstein et al. identified points of local unwrap inconsistency called residues, which must be connected by discontinuities. This paper presents an overview of recent work that treats phase unwrapping as a discrete optimization problem with the constraint that residues must be connected. Several algorithms use heuristic methods to reduce the total number of discontinuities. Costantini has introduced the weighted sum of discontinuity magnitudes as a criterion of unwrap error and shown how algorithms from optimization theory are used to minimize it. Pixels of low quality are given low weight to guide discontinuities away from smooth, high-quality regions. This method is generally robust, but if noise is severe it underestimates the steepness of slopes and the heights of peaks. This problem is mitigated by subtracting (modulo 2π) a smooth estimate of the unwrapped phase from the data, then unwrapping the resulting residual phase. The unwrapped residual is added to the smooth estimate to produce the final unwrapped phase. The estimate can be computed by lowpass filtering of an existing unwrapped phase; this makes possible an iterative algorithm in which the result of each iteration provides the estimate for the next. An example illustrates the results of optimal discontinuity placement and the improvement from unwrapping of the residual phase.
Date: March 1, 1998
Creator: Flynn, T.J.
Partner: UNT Libraries Government Documents Department
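
The residual-phase procedure described in the entry above can be made concrete with a small sketch. This is not Flynn's algorithm: it is a one-dimensional toy that uses numpy.unwrap as a stand-in for the discontinuity-optimizing unwrapper and a moving-average lowpass filter as the smooth estimate; all names and parameters are illustrative.

```python
import numpy as np

def wrap(phase):
    """Wrap phase into [-pi, pi)."""
    return (phase + np.pi) % (2 * np.pi) - np.pi

def unwrap_with_residual(wrapped, estimate):
    """Unwrap a residual phase against a smooth estimate, then add it back.

    wrapped  : wrapped phase samples
    estimate : smooth estimate of the unwrapped phase (e.g. from lowpass
               filtering a previous unwrapped result)
    """
    residual_wrapped = wrap(wrapped - estimate)   # subtract modulo 2*pi
    residual = np.unwrap(residual_wrapped)        # toy 1-D unwrapper
    return estimate + residual                    # final unwrapped phase

# Toy example illustrating the subtract / unwrap / add-back sequence.
truth = np.linspace(0, 40 * np.pi, 500)
wrapped = wrap(truth + 0.1 * np.random.randn(500))
estimate = np.convolve(np.unwrap(wrapped), np.ones(25) / 25, mode="same")
unwrapped = unwrap_with_residual(wrapped, estimate)
```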

On generalized Hamming weights for Galois ring linear codes

Description: The definition of generalized Hamming weights (GHW) for linear codes over Galois rings is discussed. The properties of GHW for Galois ring linear codes are stated. Upper and existence bounds for GHW of Z_4-linear codes and a lower bound for GHW of the Kerdock code over Z_4 are derived. GHW of some Z_4-linear codes are determined.
Date: August 1, 1997
Creator: Ashikhmin, A.
Partner: UNT Libraries Government Documents Department
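
As background for the entry above (not a result taken from it), the r-th generalized Hamming weight of a linear code C is usually defined as the smallest support size of an r-dimensional subcode; the Galois ring setting replaces dimension with the appropriate rank notion. A sketch in LaTeX:

```latex
% r-th generalized Hamming weight of a linear code C (Wei's definition);
% supp(D) is the set of coordinate positions where some codeword of D is nonzero.
d_r(C) \;=\; \min\bigl\{\, \lvert \operatorname{supp}(D) \rvert \;:\;
                           D \text{ is an } r\text{-dimensional subcode of } C \,\bigr\},
\qquad
\operatorname{supp}(D) \;=\; \{\, i : x_i \neq 0 \text{ for some } x \in D \,\}.
```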

The impact of an observationally based surface emissivity dataset on the simulation of Microwave Sounding Unit Temperatures

Description: Relatively few studies have attempted to simulate synthetic MSU temperatures using a radiation model. Most employ the simpler and computationally less expensive method of applying a static, global-mean weighting function to three-dimensional profiles of atmospheric temperature. Both approaches require a number of key assumptions. One of the major assumptions relates to surface emissivity. To date, two different strategies have been used for prescribing surface emissivity values. The first assumes a fixed global surface emissivity, while the second specifies separate (time-invariant) emissivity values for land and ocean. In this research, we introduce space- and time-dependence to the specified emissivity fields, using recent observationally-based estimates of surface emissivity changes over 1988 to 2000. We use a radiative transfer code to explore the impact of this more complex treatment of surface emissivity. This sensitivity analysis is performed with monthly-mean fields of surface temperature, atmospheric temperature, and moisture taken from multiple reanalyses. Our goal is to quantify the possible impact of emissivity changes on global-scale estimates of tropospheric temperature trends (e.g., trends estimated from MSU channel 2 and MSU 2LT), and to document the sensitivity of synthetic MSU temperatures to a variety of input data and processing choices.
Date: August 15, 2005
Creator: Hnilo, J J; Litten, L; Santer, B D & Christy, J R
Partner: UNT Libraries Government Documents Department
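
A minimal sketch of the simpler method mentioned in the entry above (a static, global-mean weighting function applied to a temperature profile), not the authors' radiative transfer code; the layer temperatures and weights below are hypothetical.

```python
import numpy as np

def synthetic_msu_temperature(temps, weights):
    """Weighted vertical average of layer temperatures (static weighting function).

    temps   : layer-mean atmospheric temperatures [K], one value per layer
    weights : static global-mean MSU weighting function sampled on the same layers
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the weights integrate to 1
    return float(np.dot(w, temps))

# Hypothetical 5-layer profile (surface to upper troposphere) and weights.
layer_temps = np.array([288.0, 280.0, 268.0, 250.0, 230.0])   # K
layer_weights = np.array([0.10, 0.25, 0.30, 0.25, 0.10])      # peaks mid-troposphere
print(synthetic_msu_temperature(layer_temps, layer_weights))
```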

Three-dimensional gravity modeling and focusing inversion using rectangular meshes.

Description: Rectangular grid cells are commonly used for the geophysical modeling of gravity anomalies, owing to their flexibility in constructing complex models. The straightforward handling of cubic cells in gravity inversion algorithms allows for a flexible imposition of model regularization constraints, which are generally essential in the inversion of static potential field data. The first part of this paper provides a review of commonly used expressions for calculating the gravity of a right polygonal prism, both for gravity and gradiometry, where the formulas of Plouff and Forsberg are adapted. The formulas can be cast into general forms practical for implementation. In the second part, a weighting scheme for resolution enhancement at depth is presented. When the earth is modeled using highly digitized meshes, depth weighting schemes are typically applied to the model objective functional, subject to minimizing the data misfit. The scheme proposed here is a non-linear conjugate gradient inversion with a weighting function applied to the gradient vector of the objective functional. The low depth resolution due to the quick decay of the gravity kernel functions is counteracted by suppressing the search directions in parameter space that would lead to near-surface concentrations of gravity anomalies. Further, a density parameter transformation function enabling the imposition of lower and upper bounding constraints is employed. Using synthetic data from models of varying complexity and a field data set, it is demonstrated that, given an adequate depth weighting function, the gravity inversion in the transform space can recover geologically meaningful models while requiring a minimum of prior information and user interaction.
Date: March 1, 2011
Creator: Commer, M.
Partner: UNT Libraries Government Documents Department
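
A toy sketch of the depth-weighting idea described in the entry above, not Commer's inversion code: a weight that grows with depth is applied to the gradient of the data-misfit functional inside a plain gradient-descent loop, so updates are not concentrated near the surface. The power-law form of the weight, its parameters, and the use of simple gradient descent instead of non-linear conjugate gradients are all assumptions for illustration.

```python
import numpy as np

def depth_weights(depths, z0=1.0, beta=2.0):
    """Toy depth weighting: grows with depth to counteract the rapid decay of
    the gravity kernels, so deep cells are not ignored by the inversion."""
    return (depths + z0) ** (beta / 2.0)

def depth_weighted_descent(G, d_obs, depths, n_iter=500):
    """Minimize ||G m - d_obs||^2 by gradient descent with a depth-weighted
    gradient (sketch only; the paper uses non-linear conjugate gradients).

    G      : sensitivity matrix, shape (n_data, n_cells)
    d_obs  : observed gravity data, shape (n_data,)
    depths : depth of each model cell, shape (n_cells,)
    """
    w = depth_weights(depths)
    w = w / w.max()                            # keep the scaled step sizes stable
    step = 1.0 / np.linalg.norm(G, 2) ** 2     # safe step for the unweighted problem
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d_obs)           # gradient of the data misfit
        m -= step * w * grad                   # deeper cells get relatively larger updates
    return m
```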

When constants are important

Description: In this paper the authors discuss several complexity aspects pertaining to neural networks, commonly known as the curse of dimensionality. The focus is on: (1) size complexity and depth-size tradeoffs; (2) complexity of learning; and (3) precision and limited interconnectivity. Results have been obtained for each of these problems when dealt with separately, but little is known about the links among them. The authors start by presenting known results and try to establish connections between them. These show that very difficult problems are faced when resorting to neural networks for solving general problems: exponential growth in space (i.e., precision and size) and/or time (i.e., learning and depth). The paper presents a solution for lowering some of these constants by exploiting the depth-size tradeoff.
Date: April 1, 1997
Creator: Beiu, V.
Partner: UNT Libraries Government Documents Department

Constrained minimization for monotonic reconstruction

Description: The authors present several innovations in a method for monotonic reconstruction. The method is based on the application of constrained minimization techniques to impose monotonicity on a reconstruction. In addition, they present extensions of several classical TVD limiters to a genuinely multidimensional setting, and in this setting the linear least-squares reconstruction method is expanded upon. They also clarify the data-dependent weighting techniques used with the minimization process.
Date: August 20, 1996
Creator: Rider, W.J. & Kothe, D.B.
Partner: UNT Libraries Government Documents Department
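
As background for the classical TVD limiters mentioned in the entry above (not the authors' multidimensional constrained-minimization method), a minimal sketch of the standard minmod limiter applied to one-dimensional slope reconstruction:

```python
import numpy as np

def minmod(a, b):
    """Classical minmod limiter: pick the smaller slope when signs agree,
    zero otherwise, which keeps the reconstruction monotone."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def limited_slopes(u, dx=1.0):
    """Limited cell slopes for a piecewise-linear reconstruction of u."""
    fwd = (u[2:] - u[1:-1]) / dx          # forward differences
    bwd = (u[1:-1] - u[:-2]) / dx         # backward differences
    slopes = np.zeros_like(u)
    slopes[1:-1] = minmod(fwd, bwd)       # interior cells; boundaries stay flat
    return slopes

u = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])   # data with a jump and a ramp
print(limited_slopes(u))
```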

Compact location problems with budget and communication constraints

Description: We consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function, under a diameter or sum constraint with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal function value, violating the constraint with respect to the second weight function by a factor of at most β. We observe that, in general, obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. We present efficient approximation algorithms for the case when both edge-weight functions obey the triangle inequality. For the problem of minimizing the diameter under a diameter constraint with respect to the second weight function, we provide a (2, 2)-approximation algorithm. We also show that no polynomial-time algorithm can provide an (α, 2 − ε)- or (2 − ε, β)-approximation for any fixed ε > 0 and α, β ≥ 1, unless P = NP. This result is proved to remain true even if one fixes ε′ > 0 and allows the algorithm to place only 2p|V|^(1/(6 − ε′)) facilities. Our techniques can be extended to the case when either the objective or the constraint is of sum type, and also to handle additional weights on the nodes of the graph.
Date: May 1, 1995
Creator: Krumke, S.O.; Noltemeier, H.; Ravi, S.S. & Marathe, M.V.
Partner: UNT Libraries Government Documents Department
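
A brute-force toy that makes the bicriteria objective in the entry above concrete: for every p-node placement it computes the diameter under the first weight function and keeps the best placement whose diameter under the second weight function meets the bound. This is exponential-time and only for tiny instances; the paper's contribution is approximation algorithms, since the exact problem is NP-hard. The matrix representation and Floyd-Warshall step are implementation choices, not taken from the paper.

```python
import itertools
import numpy as np

def all_pairs_shortest(W):
    """Floyd-Warshall over a symmetric edge-weight matrix (np.inf where no edge)."""
    D = W.copy()
    np.fill_diagonal(D, 0.0)
    for k in range(D.shape[0]):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

def best_placement(W1, W2, p, bound2):
    """Brute-force the p-node placement minimizing the diameter under W1,
    subject to a diameter bound under W2 (toy version of the bicriteria problem)."""
    D1, D2 = all_pairs_shortest(W1), all_pairs_shortest(W2)
    n = D1.shape[0]
    best, best_set = np.inf, None
    for S in itertools.combinations(range(n), p):
        idx = np.ix_(S, S)
        if D2[idx].max() <= bound2 and D1[idx].max() < best:
            best, best_set = D1[idx].max(), S
    return best_set, best
```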

Enhanced lower entropy bounds with application to constructive learning

Description: In this paper the authors prove two new lower bounds for the number of bits required by neural networks for classification problems defined by m examples from R^n. Because they are obtained in a constructive way, they can be used for designing a constructive algorithm. These results rely on techniques used for determining tight upper bounds, which start by upper-bounding the space with an n-dimensional ball. Very recently, a better upper bound has been detailed by showing that the volume of the ball can always be replaced by the volume of the intersection of two balls. A first lower bound for the case of integer weights in the range [−p, p] has been detailed: it is based on computing the logarithm of the quotient between the volume of the ball containing all the examples (a rough approximation) and the maximum volume of a polyhedron. A first improvement over that bound comes from a tighter upper bound on the maximum volume of the polyhedron by two n-dimensional cones. An even tighter bound is obtained by upper-bounding the space by the intersection of two balls.
Date: April 1, 1997
Creator: Beiu, V.
Partner: UNT Libraries Government Documents Department
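
A schematic of the volume-ratio argument sketched in the entry above, with hypothetical numbers: if all examples fit inside an n-dimensional ball and each weight assignment can only carve out regions (polyhedra) of at most some maximum volume, then roughly log2(V_ball / V_max) bits are needed to distinguish the required regions. The cell volume below is made up, and the paper's actual bounds use tighter volume estimates (cones and intersections of balls).

```python
import math

def ball_volume(n, r):
    """Volume of an n-dimensional ball of radius r."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

def volume_ratio_bits(n, radius, max_cell_volume):
    """Schematic volume-ratio bound: log2 of (volume of the ball holding all
    examples) / (maximum volume of a single distinguishable cell)."""
    return math.log2(ball_volume(n, radius) / max_cell_volume)

# Hypothetical numbers: 10-dimensional examples in a unit ball, cells of 1e-6 volume.
print(volume_ratio_bits(n=10, radius=1.0, max_cell_volume=1e-6))
```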

Improving minimum cost spanning trees by upgrading nodes

Description: The authors study budget-constrained network upgrading problems. They are given an undirected edge-weighted graph G = (V, E) in which each node v ∈ V can be upgraded at a cost of c(v). This upgrade reduces the weight of each edge incident on v. The goal is to find a minimum-cost set of nodes to be upgraded so that the resulting network has a minimum spanning tree of weight no more than a given budget D. The results obtained in the paper include the following: (1) on the positive side, they provide a polynomial-time approximation algorithm for the above upgrading problem when the difference between the maximum and minimum edge weights is bounded by a polynomial in n, the number of nodes in the graph; the solution produced by the algorithm satisfies the budget constraint, and the cost of the upgrading set produced by the algorithm is O(log n) times the minimum upgrading cost needed to obtain a spanning tree of weight at most D; (2) in contrast, they show that, unless NP ⊆ DTIME(n^O(log log n)), there can be no polynomial-time approximation algorithm for the problem that produces a solution with upgrading cost at most α times the optimal upgrading cost for any α < ln n, even if the budget can be violated by a factor f(n), for any polynomial-time computable function f(n); this result continues to hold, with f(n) = n^k being any polynomial, even when the difference between the maximum and minimum edge weights is bounded by a polynomial in n; and (3) finally, they show that, using a simple binary search over the set of admissible values, the dual problem can be solved with an appropriate performance guarantee.
Date: November 1, 1998
Creator: Krumke, S.O.; Noltemeier, H.; Wirth, H.C.; Marathe, M.V.; Ravi, R.; Ravi, S.S. et al.
Partner: UNT Libraries Government Documents Department
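
A brute-force toy that makes the problem statement in the entry above concrete (not the paper's approximation algorithm): upgrading a node multiplies the weight of its incident edges by a hypothetical fixed factor, and the cheapest upgrade set whose MST weight meets the budget D is found by exhaustive search. The networkx dependency and the multiplicative reduction model are assumptions for illustration.

```python
import itertools
import networkx as nx

def upgraded_graph(G, upgraded, factor=0.5):
    """Copy of G in which every edge incident on an upgraded node has its
    weight multiplied by `factor` (a stand-in for the edge-weight reduction)."""
    H = G.copy()
    for u, v, data in H.edges(data=True):
        if u in upgraded or v in upgraded:
            data["weight"] *= factor
    return H

def cheapest_upgrade_set(G, node_cost, budget_D, factor=0.5):
    """Brute force the minimum-cost node set whose upgrade brings the MST
    weight down to at most budget_D (exponential; illustration only)."""
    best_cost, best_set = float("inf"), None
    nodes = list(G.nodes())
    for r in range(len(nodes) + 1):
        for S in itertools.combinations(nodes, r):
            H = upgraded_graph(G, set(S), factor)
            mst_weight = nx.minimum_spanning_tree(H).size(weight="weight")
            cost = sum(node_cost[v] for v in S)
            if mst_weight <= budget_D and cost < best_cost:
                best_cost, best_set = cost, set(S)
    return best_set, best_cost
```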

Constrained blackbox optimization: The SEARCH perspective

Description: The thesis of this work is search and optimization when both the objective function and the constraints are available only through blackbox evaluation. The SEARCH (Search Envisioned As Relation and Class Hierarchizing) framework introduced by Kargupta (1995) offered an alternate perspective of blackbox optimization in terms of relations, classes, and partial ordering. The primary motivation comes from the observation that sampling in blackbox optimization is essentially an inductive process, and in the absence of any relation among the members of the search space, induction is no better than enumeration. SEARCH also offers conditions for polynomial-complexity search and bounds on sample complexity using its ordinal, probabilistic, and approximate framework. In this work the authors extend the SEARCH framework to tackle constrained blackbox optimization problems. The methodology characterizes the search domain in terms of feasible and infeasible relations, among which the feasible relations can be explored further to optimize an objective function. Both the objective function and the constraints can be given as blackboxes. The authors derive results for bounds on sample complexity and demonstrate their methodology on several benchmark problems.
Date: February 2, 1996
Creator: Hanagandi, V.; Kargupta, H. & Buescher, K.
Partner: UNT Libraries Government Documents Department

Thermostatted delta f

Description: The delta f simulation method is revisited. Statistical coarse-graining is used to rigorously derive the equation for the fluctuation delta f in the particle distribution. It is argued that completely collisionless simulation is incompatible with the achievement of true statistically steady states with nonzero turbulent fluxes because the variance of the particle weights w grows with time. To ensure such steady states, it is shown that for dynamically collisionless situations a generalized thermostat or W-stat may be used in lieu of a full collision operator to absorb the flow of entropy to unresolved fine scales in velocity space. The simplest W-stat can be implemented as a self-consistently determined, time-dependent damping applied to w. A precise kinematic analogy to thermostatted nonequilibrium molecular dynamics (NEMD) is pointed out, and the justification of W-stats for simulations of turbulence is discussed. An extrapolation procedure is proposed such that the long-time, steady-state, collisionless flux can be deduced from several short W-statted runs with large effective collisionality, and a numerical demonstration is given.
Date: January 18, 2000
Creator: Krommes, J.A.
Partner: UNT Libraries Government Documents Department
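
A heavily simplified toy of the "simplest W-stat" described in the entry above (not Krommes's formulation): particle weights are driven by noise so their variance grows secularly, and a time-dependent damping rate, recomputed each step from the current variance, pulls the variance back toward a target. Every parameter and the form of the drive are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_weights(w, drive_amplitude, target_var, dt=0.01, relax_rate=1.0):
    """One toy time step for delta-f particle weights w.

    Without damping the random drive makes var(w) grow with time; the damping
    coefficient gamma is chosen from the current variance so that var(w) is
    pulled back toward target_var (a crude stand-in for a W-stat)."""
    drive = drive_amplitude * rng.standard_normal(w.size)
    var = np.var(w) + 1e-30
    gamma = relax_rate * max(0.0, var - target_var) / var   # self-consistent damping
    return w + dt * (drive - gamma * w)

w = np.zeros(10000)
for _ in range(5000):
    w = step_weights(w, drive_amplitude=1.0, target_var=0.05)
print(np.var(w))   # hovers near the target instead of growing without bound
```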

Tradeoffs between measurement residual and reconstruction error in inverse problems with prior information

Description: In many inverse problems with prior information, the measurement residual and the reconstruction error are two natural metrics for reconstruction quality, where the measurement residual is defined as the weighted sum of the squared differences between the data actually measured and the data predicted by the reconstructed model, and the reconstruction error is defined as the sum of the squared differences between the reconstruction and the truth, averaged over some a priori probability space of possible solutions. A reconstruction method that minimizes only one of these cost functions may produce unacceptable results on the other. This paper develops reconstruction methods that control both residual and error, achieving the minimum residual for any fixed error or vice versa. These jointly optimal estimators can be obtained by minimizing a weighted sum of the residual and the error; the weights are determined by the slope of the tradeoff curve at the desired point and may be determined iteratively. These results generalize to other cost functions, provided that the cost functions are quadratic and have unique minimizers; some results are obtained under the weaker assumption that the cost functions are convex. This paper applies these results to a model problem from biomagnetic source imaging and exhibits the tradeoff curve for this problem.
Date: June 1, 1995
Creator: Hughett, P.
Partner: UNT Libraries Government Documents Department
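
A small numerical sketch of the tradeoff described in the entry above, on a hypothetical linear-Gaussian toy rather than the paper's biomagnetic problem: estimators are obtained by minimizing a weighted sum of the measurement residual and the reconstruction error, and sweeping the weight traces out the residual-error tradeoff curve. In the toy the error term uses a simulated truth, whereas the paper averages it over a prior distribution of possible solutions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear forward model d = A x + noise with a known simulated truth.
n_data, n_model = 30, 20
A = rng.standard_normal((n_data, n_model))
x_true = rng.standard_normal(n_model)
d = A @ x_true + 0.3 * rng.standard_normal(n_data)

def joint_estimate(lam):
    """Minimizer of lam*||A x - d||^2 + (1 - lam)*||x - x_true||^2 (closed form)."""
    lhs = lam * A.T @ A + (1.0 - lam) * np.eye(n_model)
    rhs = lam * A.T @ d + (1.0 - lam) * x_true
    return np.linalg.solve(lhs, rhs)

# Sweeping the weight traces out the residual-vs-error tradeoff curve.
for lam in (0.05, 0.25, 0.5, 0.75, 0.95):
    x_hat = joint_estimate(lam)
    residual = np.sum((A @ x_hat - d) ** 2)
    error = np.sum((x_hat - x_true) ** 2)
    print(f"lambda={lam:.2f}  residual={residual:8.3f}  error={error:6.3f}")
```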

Robust estimates of location called compromise estimates. Progress report

Description: Two types of robust estimates of location, psi-compromised M-estimates and gap-compromised estimates, are discussed and compared via a simulation. The relationships between the two types of estimators are seen to depend on both the sample size and the underlying distribution of the data. 11 figures, 6 tables.
Date: January 1, 1980
Creator: Guarino, R.
Partner: UNT Libraries Government Documents Department
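
As background for the M-estimates discussed in the entry above, a minimal sketch of a standard location M-estimator (the Huber estimator, computed by iteratively reweighted averaging); the paper's psi-compromised and gap-compromised estimators are variants not reproduced here.

```python
import numpy as np

def huber_location(x, k=1.345, n_iter=50):
    """Huber M-estimate of location via iteratively reweighted means.

    Observations far from the current estimate (scaled by the MAD) get
    down-weighted, so a few outliers cannot drag the estimate around."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745 + 1e-12    # MAD-based scale
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))  # Huber weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])   # one gross outlier
print(huber_location(data))                            # close to 10, not pulled toward 25
```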

An angular weighting approach for calculating gradients and divergences

Description: There are several desirable properties for a free Lagrange algorithm: a Lagrangian nature, reciprocity, minimization of numerical noise, numerical efficiency, and the ability to extend the algorithms to 3-D. In addition, some integral hydro formulations allow the mass points to drift among the other points because the divergences and gradients do not depend explicitly on the position of a mass point between two other mass points. Therefore, another desirable property is a restoring force that keeps the mesh regular. An algorithm based on the angles subtended by the Voronoi polygon sides satisfies all the above criteria, except the fourth; this is because of the necessity of using trigonometric functions. Nevertheless, this loss of efficiency may be compensated by the avoidance of reconnection noise. 3 refs., 3 figs.
Date: January 1, 1990
Creator: Kirkpatrick, R. C.
Partner: UNT Libraries Government Documents Department

Cubature rules of prescribed merit

Description: We introduce a criterion for the evaluation of multidimensional quadrature, or cubature, rules for the hypercube: this is the merit of a rule, which is closely related to its trigonometric degree, and which reduces to the Zaremba figure of merit in the case of a lattice rule. We derive a family of rules Q_k^a having dimension s and merit 2^k. These rules seem to be competitive with lattice rules with respect to the merit that can be achieved with a given number of abscissas.
Date: March 1996
Creator: Lyness, J. N. & Sloan, I. H.
Partner: UNT Libraries Government Documents Department

On automatic synthesis of analog/digital circuits

Description: The paper builds on a recent explicit numerical algorithm for Kolmogorov's superpositions, and shows that in order to synthesize minimum-size (i.e., size-optimal) circuits for implementing any Boolean function, the nonlinear activation function of the gates has to be the identity function. Because classical AND-OR implementations, as well as threshold gate implementations, require exponential size, it follows that size-optimal solutions for implementing arbitrary Boolean functions can be obtained using analog (or mixed analog/digital) circuits. Conclusions and several comments close the paper.
Date: December 31, 1998
Creator: Beiu, V.
Partner: UNT Libraries Government Documents Department

An experimental investigation of the interaction of primary and secondary stresses in fuel plates

Description: If the load is not relieved as a structure starts to yield, the induced stress is defined as primary stress. If the load relaxes as the structure begins to yield, the induced stress is defined as secondary stress. In design it is not uncommon to give more weight to primary stresses than to secondary stresses. However, knowing when this is and is not good design practice represents a problem. In particular, the fuel plates in operating reactors contain both primary and secondary stresses, and to properly assess a design there is a need to assign design weights to the stresses. Tests were conducted on reactor fuel plates intended for the Advanced Neutron Source (ANS) to determine the potential of giving different design weights to the primary and secondary stresses. The results of these tests, and the conclusion that the stresses should be weighted the same, are given in this paper.
Date: February 1996
Creator: Swinson, W. F.; Battiste, R. L. & Yahr, G. T.
Partner: UNT Libraries Government Documents Department

The SI-Combiner: Making sense of results from multiple trained programs

Description: Many problems, such as Aroclor Interpretation, are ill-conditioned problems in which trained programs, or methods, must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to those scenarios. Importantly, when multiple trained methods fail divergently, their patterns of failure provide insights into the true results. The SI-Combiner solves this problem of Integrating Multiple Learned Models (IMLM) by automatically learning and using these insights to produce a solution more accurate than any single trained program. In application, the Aroclor Interpretation SI-Combiner improved on the accuracy of the most accurate individual trained program in the suite. This paper presents a new fuzzy IMLM method called the SI-Combiner and its application to Aroclor Interpretation. Additionally, this paper shows the improvement in accuracy of the SI-Combiner's components over Multicategory Classification (MCC), Dempster-Shafer (DS), and the best individual trained program in the Aroclor Interpretation suite (iMLR).
Date: December 31, 1998
Creator: Den Hartog, B.K.; Elling, J.W. & Kieckhafer, R.
Partner: UNT Libraries Government Documents Department

Hexahedron, wedge, tetrahedron, and pyramid diffusion operator discretization

Description: The diffusion equation for φ(x) is solved by finding the extrema of the functional Γ[φ] = ∫ (½ D ∇φ·∇φ + ½ σ_a φ² − Qφ) d³x. A matrix is derived that is investigated for hexahedron, wedge, tetrahedron, and pyramid cells. The first term of the diffusion integral was concentrated and the others dropped; these dropped terms are also considered. Results are presented for hexahedral meshes and three weighting methods.
Date: August 6, 1996
Creator: Roberts, R. M.
Partner: UNT Libraries Government Documents Department

Comparison of the constant and linear boundary element method for EEG and MEG forward modeling

Description: We present a comparison of boundary element methods for solving the forward problem in EEG and MEG. We use the method of weighted residuals and focus on the collocation and Galerkin forms for constant and linear basis functions. We also examine the effect of the isolated skull approach for reducing numerical errors due to the low conductivity of the skull. We demonstrate the improvement that a linear Galerkin approach may yield in solving the forward problem.
Date: July 1, 1996
Creator: Mosher, J. C.; Chang, C. H. & Leahy, R. M.
Partner: UNT Libraries Government Documents Department