Search Results

open access

A pipelined IC architecture for radon transform computations in a multiprocessor array

Description: The amount of data generated by CT scanners is enormous, making the reconstruction operation slow, especially for 3-D and limited-data scans requiring iterative algorithms. The Radon transform and its inverse, commonly used for CT image reconstruction from projections, are computationally burdensome for today's single-processor computer architectures. If the processing times for the forward and inverse Radon transforms were comparatively small, a large set of new CT algorithms would become feas… more
Date: May 25, 1990
Creator: Agi, I.; Hurst, P.J. & Current, K.W. (California Univ., Davis, CA (USA). Dept. of Electrical Engineering and Computer Science)
Partner: UNT Libraries Government Documents Department
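The forward Radon transform this record refers to is, per projection angle, a set of line integrals through the image. A toy nearest-neighbour NumPy version (an illustration only, not the pipelined IC architecture the paper proposes) can be sketched as:

```python
import numpy as np

def radon(image, angles_deg):
    """Minimal discrete Radon transform: for each angle, sample the
    image on a grid rotated about its centre (nearest-neighbour) and
    sum along one axis to form a projection."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    projections = []
    for a in np.deg2rad(angles_deg):
        # Rotate the sampling grid by -a about the image centre.
        xr = np.cos(a) * (xs - c) - np.sin(a) * (ys - c) + c
        yr = np.sin(a) * (xs - c) + np.cos(a) * (ys - c) + c
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        projections.append(image[yi, xi].sum(axis=0))
    return np.stack(projections)

# A 32x32 phantom with one bright square; the 0-degree projection is
# just the column sums of the image.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
sino = radon(img, [0.0, 45.0, 90.0])
```

Each projection angle is independent of the others, which is exactly what makes the transform a natural fit for the multiprocessor array the paper describes.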
open access

On the Effects of Migration on the Fitness Distribution of Parallel Evolutionary Algorithms

Description: Migration of individuals between populations may increase the selection pressure. This has the desirable consequence of speeding up convergence, but it may result in an excessively rapid loss of variation that may cause the search to fail. This paper investigates the effects of migration on the distribution of fitness. It considers arbitrary migration rates and topologies with different numbers of neighbors, and it compares algorithms that are configured to have the same selection intensity. The… more
Date: April 25, 2000
Creator: Cantu-Paz, E.
Partner: UNT Libraries Government Documents Department
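The migration mechanism the abstract analyzes can be illustrated with a toy island-model GA on OneMax (all parameter values below, including the ring topology and replace-worst policy, are assumptions for the sketch, not the paper's configuration):

```python
import random

def evolve_islands(n_islands=4, pop_size=20, genome_len=16,
                   generations=30, migration_interval=5, seed=1):
    """Toy island-model GA on OneMax (fitness = number of 1 bits).
    Every `migration_interval` generations each island sends its best
    individual to its ring neighbour, replacing that island's worst."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    islands = [[[rng.randint(0, 1) for _ in range(genome_len)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(1, generations + 1):
        for pop in islands:
            # Binary tournament selection, one-point crossover,
            # occasional single bit-flip mutation.
            new = []
            for _ in range(pop_size):
                parents = [max(rng.sample(pop, 2), key=fitness)
                           for _ in range(2)]
                cut = rng.randrange(1, genome_len)
                child = parents[0][:cut] + parents[1][cut:]
                if rng.random() < 0.2:
                    i = rng.randrange(genome_len)
                    child[i] ^= 1
                new.append(child)
            pop[:] = new
        if gen % migration_interval == 0:
            # Ring migration: best of island i-1 replaces worst of island i.
            bests = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                worst = min(range(pop_size), key=lambda j: fitness(pop[j]))
                pop[worst] = bests[i - 1][:]
    return max(fitness(ind) for pop in islands for ind in pop)

best = evolve_islands()
```

Raising the migration rate or the neighbour count spreads good genotypes faster (stronger selection pressure), which is precisely the variance-loss trade-off the paper quantifies.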
open access

Lattice QCD with commodity hardware and software

Description: Large scale QCD Monte Carlo calculations have typically been performed on either commercial supercomputers or specially built massively parallel computers such as Fermilab's ACPMAPS. Commodity computer systems offer impressive floating point performance-to-cost ratios which exceed those of commercial supercomputers. As high performance networking components approach commodity pricing, it becomes reasonable to assemble a massively parallel supercomputer from commodity parts. The authors describe… more
Date: January 25, 2000
Creator: Holmgren, D.J.
Partner: UNT Libraries Government Documents Department
open access

Predicting application run times using historical information.

Description: The authors present a technique for deriving predictions for the run times of parallel applications from the run times of similar applications that have executed in the past. The novel aspect of the work is the use of search techniques to determine those application characteristics that yield the best definition of similarity for the purpose of making predictions. They use four workloads recorded from parallel computers at Argonne National Laboratory, the Cornell Theory Center, and the San Dieg… more
Date: June 25, 1999
Creator: Foster, I.; Smith, W. & Taylor, V.
Partner: UNT Libraries Government Documents Department
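The core idea, predicting a job's runtime as the mean of historical jobs that match it on some set of attributes, is simple to sketch. The workload data and attribute names below are fabricated for illustration; the paper's contribution is searching over which attribute sets ("templates") define similarity best:

```python
from statistics import mean

# Toy historical workload: (user, executable, nodes, runtime_seconds).
history = [
    ("alice", "cfd", 64, 3600), ("alice", "cfd", 64, 3500),
    ("alice", "cfd", 128, 2000), ("bob", "qcd", 64, 7200),
    ("bob", "qcd", 64, 7000), ("alice", "md", 32, 900),
]

def predict(job, template):
    """Predict a job's runtime as the mean runtime of historical jobs
    matching it on the attributes named in `template`. Here the
    template is fixed by hand; the paper searches over templates."""
    fields = {"user": 0, "executable": 1, "nodes": 2}
    similar = [h[3] for h in history
               if all(h[fields[f]] == job[fields[f]] for f in template)]
    return mean(similar) if similar else None

print(predict(("alice", "cfd", 64, None), ("user", "executable", "nodes")))
```

A broader template (e.g. matching on user alone) yields more historical samples but a looser notion of similarity; the search balances that trade-off against prediction error.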
open access

Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers

Description: A set of linear and nonlinear stability analysis tools has been developed to analyze steady state incompressible flows in 3D geometries. The algorithms have been implemented to be scalable to hundreds of parallel processors. The linear stability of steady state flows is determined by calculating the rightmost eigenvalues of the associated generalized eigenvalue problem. Nonlinear stability is studied by bifurcation analysis techniques. The boundaries between desirable and undesirable operating… more
Date: October 25, 1999
Creator: Lehoucq, Richard B.; Romero, Louis & Salinger, Andrew G.
Partner: UNT Libraries Government Documents Department
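A miniature version of the linear-stability test the abstract describes: find the rightmost eigenvalues of a generalized problem J x = λ M x and check their sign. The 3x3 matrices below are invented stand-ins for the linearized Jacobian and mass matrix; at the paper's scale this is done with sparse iterative (ARPACK-style) eigensolvers, not a dense reduction:

```python
import numpy as np

# Toy generalized eigenvalue problem J x = lambda M x.
J = np.array([[-1.0, 5.0, 0.0],
              [-5.0, -2.0, 1.0],
              [0.0, 0.0, -4.0]])
M = np.diag([1.0, 1.0, 2.0])

# For a small dense, invertible M the generalized problem reduces to the
# standard eigenvalue problem for M^{-1} J.
vals = np.linalg.eigvals(np.linalg.solve(M, J))
rightmost = vals[np.argmax(vals.real)]
stable = rightmost.real < 0  # all eigenvalues in the left half-plane
```

If the rightmost eigenvalue crosses into the right half-plane as a flow parameter varies, the steady state loses linear stability, which is where the bifurcation analysis takes over.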
open access

Scalable rendering on PC clusters

Description: This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time vi… more
Date: April 25, 2000
Creator: Wylie, Brian N.; Lewis, Vasily; Shirley, David Noyes & Pavlakos, Constantine
Partner: UNT Libraries Government Documents Department
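Of the two techniques named, sort-last is the easier to sketch: each renderer draws its share of the polygons into a full-screen colour/depth buffer, and a compositing stage keeps, per pixel, the fragment nearest the viewer. This z-buffer-minimum composite is a generic illustration, not the libraries' actual design:

```python
import numpy as np

def sort_last_composite(framebuffers):
    """Sort-last compositing: combine per-renderer (color, depth) image
    pairs by keeping, at each pixel, the color with the smallest depth."""
    colors = np.stack([c for c, _ in framebuffers])  # (n, h, w)
    depths = np.stack([d for _, d in framebuffers])  # (n, h, w)
    winner = depths.argmin(axis=0)                   # nearest fragment per pixel
    h, w = winner.shape
    return colors[winner, np.arange(h)[:, None], np.arange(w)]

# Two 2x2 renderers: renderer 0 wins where its depth is smaller.
c0, d0 = np.full((2, 2), 1), np.array([[0.5, 2.0], [2.0, 0.5]])
c1, d1 = np.full((2, 2), 2), np.ones((2, 2))
out = sort_last_composite([(c0, d0), (c1, d1)])
```

Because the polygon count per renderer is independent of screen coverage, sort-last scales naturally with dataset size, which suits the >> 5 million polygon target.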
open access

Simulation of a Batch Chemical Process Using Parallel Computing with PVM and Speedup

Description: Speedup, a commercial software package for the dynamic modeling of chemical processes, has been coupled with the PVM software to allow a single process model to be distributed over several computers running in parallel. As an initial application, this coarse distribution technique was applied to a batch chemical plant containing 16 unit operations. Computation time for this problem was reduced by a factor of 4.7 using only three parallel processors in the UNIX computing environment. Better than… more
Date: March 25, 2003
Creator: Smith, F. G., III
Partner: UNT Libraries Government Documents Department
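The quoted factor-of-4.7 reduction on three processors is easy to put in the usual speedup/efficiency terms:

```python
def speedup_and_efficiency(t_serial, t_parallel, n_procs):
    """Classic parallel metrics: speedup S = T1/Tp, efficiency E = S/p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Figures from the abstract: computation time reduced by a factor of 4.7
# on three processors. That implies efficiency above 1 (super-linear);
# the abstract itself does not say why.
s, e = speedup_and_efficiency(4.7, 1.0, 3)
print(round(s, 2), round(e, 2))  # 4.7 1.57
```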