
Search Results

Application of a computationally efficient geostatistical approach to characterizing variably spaced water-table data

Description: Geostatistical analysis of hydraulic head data is useful in producing unbiased contour plots of head estimates and relative errors. However, at most sites being characterized, monitoring wells are generally present at different densities, with clusters of wells in some areas and few wells elsewhere. The problem that arises when kriging data at different densities is in achieving adequate resolution of the grid while maintaining computational efficiency and working within software limitations. For the site considered, 113 data points were available over a 14-mi² study area, including 57 monitoring wells within an area of concern of 1.5 mi². Variogram analyses of the data indicate a linear model with a negligible nugget effect. The geostatistical package used in the study allows a maximum grid of 100 by 100 cells. Two-dimensional kriging was performed for the entire study area with a 500-ft grid spacing, while the smaller zone was modeled separately with a 100-ft spacing. In this manner, grid cells for the dense area and the sparse area remained small relative to the well separation distances, and the maximum dimensions of the program were not exceeded. The spatial head results for the detailed zone were then nested into the regional output by use of a graphical, object-oriented database that performed the contouring of the geostatistical output. This study benefited from the two-scale approach and from very fine geostatistical grid spacings relative to typical data separation distances. Combining the sparse, regional results with those from the finer-resolution area of concern yielded contours that honored the actual data at every measurement location. The method applied in this study can also be used to generate reproducible, unbiased representations of other types of spatial data.
Date: February 1, 1996
Creator: Quinn, J.J.
Partner: UNT Libraries Government Documents Department
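
The record above describes a two-scale ordinary-kriging workflow clearly enough to sketch it with open-source tools. The snippet below is a minimal illustration, assuming a linear variogram with a negligible nugget and using pykrige in place of the unnamed geostatistical package from the report; the input file, area-of-concern bounds, and variogram slope are placeholders.

```python
# Minimal two-scale ordinary-kriging sketch (assumed workflow, not the report's code).
import numpy as np
from pykrige.ok import OrdinaryKriging  # open-source stand-in for the package in the report

# x, y in feet; head = water-table elevation in feet (hypothetical input file).
x, y, head = np.loadtxt("wells.csv", delimiter=",", unpack=True)

def krige_grid(x, y, z, xmin, xmax, ymin, ymax, spacing_ft):
    """Ordinary kriging on a regular grid with a linear variogram and negligible nugget."""
    gridx = np.arange(xmin, xmax + spacing_ft, spacing_ft)
    gridy = np.arange(ymin, ymax + spacing_ft, spacing_ft)
    ok = OrdinaryKriging(x, y, z, variogram_model="linear",
                         variogram_parameters={"slope": 1.0e-3, "nugget": 0.0})  # placeholder slope
    z_hat, var = ok.execute("grid", gridx, gridy)
    return gridx, gridy, z_hat, var

# Regional grid: 500-ft spacing over the full study area.
regional = krige_grid(x, y, head, x.min(), x.max(), y.min(), y.max(), 500.0)

# Detailed grid: 100-ft spacing over the area of concern, using only the wells inside it.
in_aoc = (x > 5_000) & (x < 13_000) & (y > 2_000) & (y < 8_000)   # placeholder bounds
detailed = krige_grid(x[in_aoc], y[in_aoc], head[in_aoc], 5_000, 13_000, 2_000, 8_000, 100.0)

# The report nested the detailed surface into the regional output at the contouring stage;
# here the two grids would simply be exported and overlaid in the mapping package.
```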

Traffic characterization and modeling of wavelet-based VBR encoded video

Description: Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.
Date: July 1, 1997
Creator: Kuo, Yu; Jabbari, B. & Zafar, S.
Partner: UNT Libraries Government Documents Department
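
A minimal sketch of the kind of N-state Markov traffic model described above, assuming the top-level wavelet's bit rate is quantized into three states with a made-up transition matrix and per-state rate distributions; the numbers are placeholders, not the paper's fitted values.

```python
# Three-state Markov sketch of VBR traffic for the top-level wavelet (placeholder parameters).
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix P[i, j] = Pr(next state j | current state i); rows sum to 1.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

# Mean bit rate (kbit/frame) emitted in each state, with assumed lognormal scatter.
state_rate = np.array([40.0, 90.0, 160.0])

def simulate_frames(n_frames, state=0):
    """Generate a synthetic per-frame bit-rate trace from the Markov model."""
    rates = np.empty(n_frames)
    for t in range(n_frames):
        rates[t] = state_rate[state] * rng.lognormal(mean=0.0, sigma=0.2)
        state = rng.choice(3, p=P[state])
    return rates

trace = simulate_frames(10_000)
# Validate against a measured trace via moments and correlations, as the paper does.
print(trace.mean(), trace.std(), np.corrcoef(trace[:-1], trace[1:])[0, 1])
```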

The Compact Muon Solenoid Heavy Ion program

Description: The Pb-Pb center of mass energy at the LHC will exceed that of Au-Au collisions at RHIC (Relativistic Heavy Ion Collider) by nearly a factor of 30, providing exciting opportunities for addressing unique physics issues in a completely new energy domain. The interest of Heavy Ion (HI) physics at the LHC is discussed in more detail in the LHC-USA white paper and the Compact Muon Solenoid (CMS) Heavy Ion proposal. A few highlights are presented in this document. Heavy ion collisions at LHC energies will explore regions of energy and particle density significantly beyond those reachable at RHIC. The energy density of the thermalized matter created at the LHC is estimated to be 20 times higher than at RHIC, implying an initial temperature greater than at RHIC by more than a factor of two. The higher density of produced partons also allows a faster thermalization. As a consequence, the ratio of the quark-gluon plasma lifetime to the thermalization time increases by a factor of 10 over RHIC. Thus the hot, dense systems created in HI collisions at the LHC spend most of their time in a purely partonic state. The longer lifetime of the quark-gluon plasma state significantly widens the time window available to probe it experimentally. RHIC experiments have reported evidence for jet production in HI collisions and for suppression of high p_T particle production. Those results open a new field of exploration of hot and dense nuclear matter. Even though RHIC has already broken ground, the production rates for jets with p_T > 30 GeV are several orders of magnitude larger at the LHC than at RHIC, allowing for systematic studies with high statistics in a clean kinematic region. High p_T quark and gluon jets can be used to study the hot hadronic ...
Date: December 15, 2005
Creator: Yepes, Dr. Pablo
Partner: UNT Libraries Government Documents Department

Data Processing Procedures and Methodology for Estimating Trip Distances for the 1995 American Travel Survey (ATS)

Description: The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.
Date: May 1, 2000
Creator: Hwang, H.-L. & Rollow, J.
Partner: UNT Libraries Government Documents Department

Bayesian stratified sampling to assess corpus utility

Description: This paper describes a method for asking statistical questions about a large text corpus. The authors exemplify the method by addressing the question, "What percentage of Federal Register documents are real documents, of possible interest to a text researcher or analyst?" They estimate an answer to this question by evaluating 200 documents selected from a corpus of 45,820 Federal Register documents. Bayesian analysis and stratified sampling are used to reduce the sampling uncertainty of the estimate from over 3,100 documents to fewer than 1,000. A possible application of the method is to establish baseline statistics used to estimate recall rates for information retrieval systems.
Date: December 1998
Creator: Hochberg, J.; Scovel, C.; Thomas, T. & Hall, S.
Partner: UNT Libraries Government Documents Department
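
A minimal sketch of the Bayesian stratified estimate described above, assuming a Beta-Binomial model within each stratum and made-up stratum sizes and evaluation counts; the actual strata and priors used in the paper are not reproduced here.

```python
# Bayesian stratified estimate of the fraction of "real" documents (placeholder numbers).
import numpy as np

rng = np.random.default_rng(1)

# Per stratum: total documents in the corpus, documents evaluated, evaluated docs judged "real".
strata = [
    {"N": 30_000, "n": 80, "k": 72},   # hypothetical stratum counts
    {"N": 10_000, "n": 60, "k": 30},
    {"N": 5_820,  "n": 60, "k": 12},
]
corpus_size = sum(s["N"] for s in strata)

# Beta(1, 1) prior in each stratum; the posterior is Beta(1 + k, 1 + n - k).
draws = np.zeros(20_000)
for s in strata:
    p_s = rng.beta(1 + s["k"], 1 + s["n"] - s["k"], size=draws.shape)
    draws += s["N"] * p_s                      # posterior draws of real docs in this stratum

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"Estimated real documents: {draws.mean():,.0f} "
      f"(95% interval {lo:,.0f} to {hi:,.0f} of {corpus_size:,})")
```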

A brief overview of kurtosis

Description: The standardized fourth central moment (standardized according to the variance) is often regarded as the definition of kurtosis and has a history of usage for testing normality and multivariate normality. Its relationship to other standard central moments is also documented. However, the poor performance of the standardized central moments skewness and kurtosis as discriminators for normality has led to seeking alternative statistics and definitions to capture distribution shape characteristics. Included in these kurtosis statistics are quantile-based statistics and L-kurtosis, which comes from L-moments. Balanda and MacGillivray provide a more general definition of kurtosis as the location-free and scale-free movement of the probability mass from the shoulders of the distribution to its center and tails. Different scaling techniques and positionings of the shoulders result in different forms for kurtosis related to peakedness and tail weight. Counter to the popular view, kurtosis reflects both tailedness and peakedness simultaneously. It is related to skewness, spread, tail weight, quantiles, and influence functions. Various kurtosis statistics have been developed as indicators for bimodality, tail weight, peakedness and normality. These are also useful for distribution comparisons, such as goodness-of-fit, distribution crossings and distribution orderings. Kurtosis measures are related to Box-Cox transformations and Kullback-Leibler information. In the following sections, the authors examine various kurtosis measures and illustrate their performance using a case study where understanding distribution shape is important for comparing simulated distributions and mixtures of distributions. The concluding remarks include drawing attention to areas for further study.
Date: December 1, 1998
Creator: Booker, J.M. & Ticknor, L.O.
Partner: UNT Libraries Government Documents Department
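
A minimal sketch of two of the measures discussed above: the standardized fourth central moment and L-kurtosis computed from sample L-moments via probability-weighted moments. This is an assumed illustration, not the authors' case-study code; the samples are synthetic.

```python
# Moment-based kurtosis vs. L-kurtosis for a sample (illustrative, not the paper's case study).
import numpy as np
from scipy.stats import kurtosis

def l_kurtosis(x):
    """Sample L-kurtosis tau_4 = l4 / l2 from probability-weighted moments (Hosking's estimators)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)                                    # ranks 1..n of the sorted sample
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l2 = 2 * b1 - b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l4 / l2

rng = np.random.default_rng(2)
for name, sample in [("normal", rng.normal(size=5_000)),
                     ("mixture", np.concatenate([rng.normal(size=4_500),
                                                 rng.normal(scale=5.0, size=500)]))]:
    print(name,
          "moment kurtosis:", round(float(kurtosis(sample, fisher=False)), 2),   # 3.0 for a normal
          "L-kurtosis:", round(l_kurtosis(sample), 3))                           # ~0.123 for a normal
```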

Compact location problems with budget and communication constraints

Description: The authors consider the problem of placing a specified number p of facilities on the nodes of a given network with two nonnegative edge-weight functions so as to minimize the diameter of the placement with respect to the first weight function subject to a diameter or sum-constraint with respect to the second weight function. Define an (α, β)-approximation algorithm as a polynomial-time algorithm that produces a solution within α times the optimal value with respect to the first weight function, violating the constraint with respect to the second weight function by a factor of at most β. They show that in general obtaining an (α, β)-approximation for any fixed α, β ≥ 1 is NP-hard for any of these problems. They also present efficient approximation algorithms for several of the problems studied, when both edge-weight functions obey the triangle inequality.
Date: July 1, 1995
Creator: Krumke, S.O.; Noltemeier, H.; Ravi, S.S. & Marathe, M.V.
Partner: UNT Libraries Government Documents Department
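
The abstract above states the bicriteria placement problem precisely enough to write a brute-force reference solver for tiny instances, which is useful when checking approximation heuristics. The sketch below assumes the "diameter" of a placement is the largest pairwise distance among the chosen nodes, and that all-pairs shortest-path distance matrices for the two weight functions have been precomputed; the matrices shown are made up.

```python
# Brute-force reference solver for tiny instances of the bicriteria placement problem.
from itertools import combinations
import numpy as np

def best_placement(D1, D2, p, budget):
    """Min D1-diameter placement of p facilities whose D2-diameter stays within the budget."""
    n = D1.shape[0]
    best, best_set = np.inf, None
    for nodes in combinations(range(n), p):
        idx = np.ix_(nodes, nodes)
        if D2[idx].max() > budget:          # violates the second-weight constraint
            continue
        diam1 = D1[idx].max()
        if diam1 < best:
            best, best_set = diam1, nodes
    return best, best_set

# Tiny 5-node example with made-up symmetric distance matrices.
D1 = np.array([[0, 2, 9, 4, 7], [2, 0, 6, 3, 8], [9, 6, 0, 5, 2],
               [4, 3, 5, 0, 6], [7, 8, 2, 6, 0]], dtype=float)
D2 = np.array([[0, 5, 1, 6, 2], [5, 0, 4, 2, 7], [1, 4, 0, 3, 5],
               [6, 2, 3, 0, 4], [2, 7, 5, 4, 0]], dtype=float)
print(best_placement(D1, D2, p=3, budget=5.0))
```

The enumeration is exponential in p, so this only serves as ground truth on small networks; the paper's point is that polynomial-time algorithms can only approximate both criteria simultaneously.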

A comparative study of minimum norm inverse methods for MEG imaging

Description: The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well-known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
Date: July 1, 1996
Creator: Leahy, R.M.; Mosher, J.C. & Phillips, J.W.
Partner: UNT Libraries Government Documents Department
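
A minimal sketch of a Tikhonov-regularized (weighted) minimum norm solution of the kind unified above, assuming a known lead-field matrix G that maps source amplitudes to sensor measurements; the weighting, regularization parameter, and dimensions are placeholders.

```python
# Tikhonov-regularized minimum norm estimate (illustrative; not the paper's implementation).
import numpy as np

def min_norm_estimate(G, b, lam, w=None):
    """
    Solve min_x ||G x - b||^2 + lam * ||W^(1/2) x||^2 in the underdetermined form
    x = W^-1 G^T (G W^-1 G^T + lam I)^-1 b, with W = diag(w) (identity if w is None).
    """
    m, n = G.shape
    Winv = np.eye(n) if w is None else np.diag(1.0 / np.asarray(w))
    gram = G @ Winv @ G.T + lam * np.eye(m)
    return Winv @ G.T @ np.linalg.solve(gram, b)

rng = np.random.default_rng(3)
m, n = 64, 2_000                      # few sensors, many source voxels
G = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
b = G @ x_true + 0.05 * rng.normal(size=m)

x_hat = min_norm_estimate(G, b, lam=1.0)   # lam trades data fit against solution norm
print("residual:", np.linalg.norm(G @ x_hat - b), "solution norm:", np.linalg.norm(x_hat))
```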

Including uncertainty in hazard analysis through fuzzy measures

Description: This paper presents a method for capturing the uncertainty expressed by a Hazard Analysis (HA) expert team when estimating the frequencies and consequences of accident sequences and provides a sound mathematical framework for propagating this uncertainty to the risk estimates for these accident sequences. The uncertainty is readily expressed as distributions that can visually aid the analyst in determining the extent and source of risk uncertainty in HA accident sequences. The results can also be expressed as single statistics of the distribution in a manner analogous to expressing a probabilistic distribution as a point-value statistic such as a mean or median. The study discussed here used data collected during the elicitation portion of an HA on a high-level waste transfer process to demonstrate the techniques for capturing uncertainty. These data came from observations of the uncertainty that HA team members expressed in assigning frequencies and consequences to accident sequences during an actual HA. This uncertainty was captured and manipulated using ideas from possibility theory. The result of this study is a practical method for displaying and assessing the uncertainty in the HA team estimates of the frequency and consequences for accident sequences. This uncertainty provides potentially valuable information about accident sequences that typically is lost in the HA process.
Date: December 1, 1997
Creator: Bott, T.F. & Eisenhawer, S.W.
Partner: UNT Libraries Government Documents Department
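
A minimal sketch of the possibility-theory flavor of the approach above: triangular possibility distributions elicited for frequency and consequence are combined by interval arithmetic on alpha-cuts to give a possibility distribution for risk. The triangular numbers here are placeholders, not elicited HA data.

```python
# Alpha-cut interval arithmetic on triangular possibility distributions (placeholder values).
import numpy as np

def alpha_cut(tri, alpha):
    """Interval of a triangular possibility distribution (low, mode, high) at level alpha."""
    lo, mode, hi = tri
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

# Hypothetical elicited uncertainty: accident frequency in 1/yr, consequence in rem.
frequency = (1e-5, 5e-5, 3e-4)
consequence = (0.5, 2.0, 10.0)

for a in np.linspace(0.0, 1.0, 11):
    f_lo, f_hi = alpha_cut(frequency, a)
    c_lo, c_hi = alpha_cut(consequence, a)
    # Both quantities are positive, so the product interval is [lo*lo, hi*hi].
    print(f"alpha={a:.1f}  risk in [{f_lo * c_lo:.2e}, {f_hi * c_hi:.2e}] rem/yr")
# The alpha = 1 cut gives a point estimate, analogous to reporting a single statistic
# of a probability distribution such as the mean or median.
```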

Efficient maximum entropy algorithms for electronic structure

Description: Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). If limited statistical accuracy and energy resolution are acceptable, they provide linear scaling methods for the calculation of physical properties involving large numbers of eigenstates such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for molecular dynamics. KPM provides a uniform approximation to a DOS, with resolution inversely proportional to the number of Chebyshev moments, while MEM can achieve significantly higher, but non-uniform, resolution at the risk of possible artifacts. This paper emphasizes efficient algorithms.
Date: April 1, 1996
Creator: Silver, R.N.; Roeder, H.; Voter, A.F. & Kress, J.D.
Partner: UNT Libraries Government Documents Department
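
A minimal sketch of the kernel polynomial method named above: Chebyshev moments of a rescaled sparse Hamiltonian are estimated stochastically, and a Jackson-damped Chebyshev series reconstructs the density of states with resolution inversely proportional to the number of moments. The Hamiltonian and parameters here are toy placeholders.

```python
# Kernel polynomial method (KPM) density-of-states sketch for a toy sparse Hamiltonian.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(4)
n, n_moments, n_vectors = 2_000, 200, 10

# Toy 1-D tight-binding Hamiltonian; rescale so the spectrum lies inside [-1, 1].
H = sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1], format="csr")
H_scaled = H / 2.1                                     # |eigenvalues| <= 2, so /2.1 is safe

# Stochastic estimate of the Chebyshev moments mu_m = Tr[T_m(H_scaled)] / n.
mu = np.zeros(n_moments)
for _ in range(n_vectors):
    r = rng.choice([-1.0, 1.0], size=n)                # random +/-1 probe vector
    v_prev, v = r.copy(), H_scaled @ r
    mu[0] += r @ v_prev
    mu[1] += r @ v
    for m in range(2, n_moments):
        v_prev, v = v, 2.0 * (H_scaled @ v) - v_prev   # Chebyshev recursion
        mu[m] += r @ v
mu /= n_vectors * n

# Jackson kernel damps Gibbs oscillations (uniform resolution ~ 1/n_moments).
ms = np.arange(n_moments)
g = ((n_moments - ms + 1) * np.cos(np.pi * ms / (n_moments + 1))
     + np.sin(np.pi * ms / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))) / (n_moments + 1)

x = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(np.arccos(x), ms))                 # T_m(x) on the rescaled energy grid
dos = (g[0] * mu[0] + 2.0 * (T[:, 1:] * (g[1:] * mu[1:])).sum(axis=1)) / (np.pi * np.sqrt(1 - x**2))
print("normalized DOS integrates to ~1:", np.trapz(dos, x))
```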

Force-limited vibration tests applied to the FORTE satellite

Description: A force-limited random vibration test was conducted on a small satellite called FORTE. This type of vibration test reduces the over-testing that can occur in a conventional vibration test. Two vibration specifications were used in the test: the conventional base acceleration specification and an interface force specification. The vibration level of the shaker was controlled such that neither the table acceleration nor the force transmitted to the test item exceeded its specification. The effect of limiting the shake-table vibration to the force specification was to reduce (or "notch") the shaker acceleration near some of the satellite's resonance frequencies. This paper describes the force-limited test conducted for the FORTE satellite. The satellite and its dynamic properties are discussed, and the concepts of force-limiting theory are summarized. The hardware and setup of the test are then described, and the results of the force-limited vibration test are discussed.
Date: February 1, 1996
Creator: Stevens, R.R. & Butler, T.A.
Partner: UNT Libraries Government Documents Department
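
A minimal sketch of the dual-control idea described above: at each frequency the drive is limited to whichever is lower, the base acceleration specification or the acceleration implied by the force specification and the apparent mass of the test item. The spectra and apparent-mass curve below are placeholders, not FORTE test data.

```python
# Notched acceleration profile from dual acceleration/force specifications (placeholder data).
import numpy as np

freq = np.linspace(20.0, 2000.0, 500)                  # Hz

accel_spec = np.full_like(freq, 0.04)                  # g^2/Hz, flat random-vibration spec (assumed)
force_spec = np.full_like(freq, 6.0e5)                 # N^2/Hz, interface force spec (assumed)

# Assumed apparent-mass magnitude |M(f)| in kg: rigid mass plus a resonant peak near 120 Hz.
m0, fn, q = 200.0, 120.0, 15.0
apparent_mass = m0 * (1.0 + (q - 1.0) / (1.0 + ((freq - fn) / 5.0) ** 2))

# Interface force PSD ~ |M|^2 * acceleration PSD, so the force spec caps the drive at
# force_spec / |M|^2; the controller takes whichever limit is lower ("notching").
g_to_ms2 = 9.81
accel_limit_from_force = force_spec / (apparent_mass * g_to_ms2) ** 2   # back to g^2/Hz
notched = np.minimum(accel_spec, accel_limit_from_force)

print("deepest notch: %.1f dB at %.0f Hz" %
      (10.0 * np.log10(notched.min() / accel_spec.max()), freq[np.argmin(notched)]))
```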

A hazards analysis of a nuclear explosives dismantlement

Description: This paper describes the methodology used in a quantitative hazard assessment of a nuclear weapon disassembly process. Potential accident sequences were identified using an accident-sequence fault tree based on operational history, weapon safety studies, a hazard analysis team composed of weapons experts, and walkthroughs of the process. The experts provided an initial screening of the accident sequences to reduce the number of accident sequences that would be quantified. The accident sequences that survived the screening process were developed further using event trees. Spreadsheets were constructed for each event tree, the accident sequences associated with that event tree were entered as rows on the spreadsheet, and that spreadsheet was linked to spreadsheets with initiating-event frequencies, enabling event probabilities, and weapon response probabilities. The probability and frequency distribution estimates used in these spreadsheets were gathered from weapon process operational data, surrogate industrial data, expert judgment, and probability models. Frequency distributions were calculated for the sequences whose point-value frequency represented 99% of the total point-value frequency using a Monte Carlo simulation. Partial differential importances of events and distributions of accident frequency by weapon configuration, location, process, and other parameters were calculated.
Date: July 1, 1995
Creator: Bott, T.F. & Eisenhawer, S.W.
Partner: UNT Libraries Government Documents Department
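
A minimal sketch of the Monte Carlo quantification step described above: an accident-sequence frequency is the product of an initiating-event frequency and several enabling-event and response probabilities, each sampled from an assumed distribution. The distributions and parameters are placeholders, not weapon-process data.

```python
# Monte Carlo propagation of one accident-sequence frequency (placeholder distributions).
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Initiating-event frequency (per year): lognormal with an assumed median and error factor.
median, error_factor = 1.0e-3, 10.0
sigma = np.log(error_factor) / 1.645                   # 95th percentile = median * error factor
init_freq = rng.lognormal(mean=np.log(median), sigma=sigma, size=n)

# Enabling-event and weapon-response probabilities: beta distributions (assumed parameters).
p_enable = rng.beta(2.0, 50.0, size=n)
p_response = rng.beta(1.0, 200.0, size=n)

seq_freq = init_freq * p_enable * p_response           # sequence frequency per year

p5, p50, p95 = np.percentile(seq_freq, [5, 50, 95])
print(f"mean {seq_freq.mean():.2e}/yr, median {p50:.2e}/yr, 90% interval [{p5:.2e}, {p95:.2e}]")
```

In the study, one such spreadsheet row exists per surviving accident sequence, and the point-value frequencies are used to decide which sequences are worth simulating in this way.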

Argonne National Laboratory institutional plan FY 2002 - FY 2007.

Description: The national laboratory system provides a unique resource for addressing the national needs inherent in the mission of the Department of Energy. Argonne, which grew out of Enrico Fermi's pioneering work on the development of nuclear power, was the first national laboratory and, in many ways, has set the standard for those that followed. As the Laboratory's new director, I am pleased to present the Argonne National Laboratory Institutional Plan for FY 2002 through FY 2007 on behalf of the extraordinary group of scientists, engineers, technicians, administrators, and others who are responsible for the Laboratory's distinguished record of achievement. Like our sister DOE laboratories, Argonne uses a multifaceted approach to advance U.S. R and D priorities. First, we assemble interdisciplinary teams of scientists and engineers to address complex problems. For example, our initiative in Functional Genomics will bring together biologists, computer scientists, environmental scientists, and staff of the Advanced Photon Source to develop complete maps of cellular function. Second, we cultivate specific core competencies in science and technology; this Institutional Plan discusses the many ways in which our core competencies support DOE's four mission areas. Third, we serve the scientific community by designing, building, and operating world-class user facilities, such as the Advanced Photon Source, the Intense Pulsed Neutron Source, and the Argonne Tandem-Linac Accelerator System. This Plan summarizes the visions, missions, and strategic plans for the Laboratory's existing major user facilities, and it explains our approach to the planned Rare Isotope Accelerator. Fourth, we help develop the next generation of scientists and engineers through educational programs, many of which involve bright young people in research. This Plan summarizes our vision, objectives, and strategies in the education area, and it gives statistics on student and faculty participation. Finally, we collaborate with other national laboratories, academia, and industry, both on scientific ...
Date: November 29, 2001
Creator: Beggs, S. D.
Partner: UNT Libraries Government Documents Department

Applying the LANL Statistical Pattern Recognition Paradigm for Structural Health Monitoring to Data from a Surface-Effect Fast Patrol Boat

Description: This report summarizes the analysis of fiber-optic strain gauge data obtained from a surface-effect fast patrol boat being studied by the staff at the Norwegian Defense Research Establishment (NDRE) in Norway and the Naval Research Laboratory (NRL) in Washington D.C. Data from two different structural conditions were provided to the staff at Los Alamos National Laboratory. The problem was then approached from a statistical pattern recognition paradigm. This paradigm can be described as a four-part process: (1) operational evaluation, (2) data acquisition & cleansing, (3) feature extraction and data reduction, and (4) statistical model development for feature discrimination. Given that the first two portions of this paradigm were mostly completed by the NDRE and NRL staff, this study focused on data normalization, feature extraction, and statistical modeling for feature discrimination. The feature extraction process began by looking at relatively simple statistics of the signals and progressed to using the residual errors from auto-regressive (AR) models fit to the measured data as the damage-sensitive features. Data normalization proved to be the most challenging portion of this investigation. A novel approach to data normalization, where the residual errors in the AR model are considered to be an unmeasured input and an auto-regressive model with exogenous inputs (ARX) is then fit to portions of the data exhibiting similar waveforms, was successfully applied to this problem. With this normalization procedure, a clear distinction between the two different structural conditions was obtained. A false-positive study was also run, and the procedure developed herein did not yield any false-positive indications of damage. Finally, the results must be qualified by the fact that this procedure has only been applied to very limited data samples. A more complete analysis of additional data taken under various operational and environmental conditions as well as other structural conditions is necessary before ...
Date: January 1, 2001
Creator: Sohn, Hoon; Farrar, Charles; Hunter, Norman & Worden, Keith
Partner: UNT Libraries Government Documents Department
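
A minimal sketch of the AR-residual feature extraction described above: an AR(p) model is fit by least squares to a baseline-condition time series, and the residuals from applying that model to new records serve as the damage-sensitive feature. The model order, signals, and threshold are placeholders, and the ARX normalization step is omitted.

```python
# AR(p) residual features for damage detection (illustrative; order and data are placeholders).
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) coefficients: x[t] ~ sum_k a[k] * x[t-k-1]."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def ar_residuals(x, a):
    """Residuals from applying previously fitted AR coefficients to a new record."""
    p = len(a)
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    return x[p:] - X @ a

rng = np.random.default_rng(6)
baseline = np.sin(0.2 * np.arange(4_000)) + 0.1 * rng.normal(size=4_000)   # stand-in strain record
altered = np.sin(0.2 * np.arange(4_000)) + 0.3 * rng.normal(size=4_000)    # stand-in altered condition

a = fit_ar(baseline, p=10)
print("baseline residual std %.3f, test residual std %.3f"
      % (ar_residuals(baseline, a).std(), ar_residuals(altered, a).std()))
# A statistical model (e.g., a control chart on the residual std) would set the alarm threshold.
```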

Atmospheric Radiation Measurement Program facilities newsletter, November 2000.

Description: Winter Weather Outlook--With the chill of colder temperatures in the air, we can rest assured that the icy grips of winter are just around the corner. The Climate Prediction Center (CPC), a specialized part of the National Weather Service (NWS), has issued its annual winter outlook for the 2000-2001 winter season. The CPC, located in Camp Springs, Maryland, is a government agency that focuses its predictions on Earth's climate. In comparison to the NWS forecasts of short-term weather events, the CPC goes farther into the future (from a week to seasons). The CPC conducts real-time monitoring of Earth's climate and makes predictions of climate variability over land and ocean and in the atmosphere. The CPC also evaluates the sources of major climate anomalies. The operations branch of the CPC prepares long-range forecasts by applying dynamical, empirical, and statistical techniques. The analysis branch performs applied research to identify physical factors responsible for climate fluctuations. The two branches work jointly to test new forecast methods and models, with the goal of improving model output. The CPC also evaluates the outlook for floods, droughts, hurricanes, ozone depletion, and El Niño and La Niña environments. So, what is the CPC outlook for winter 2000-2001? For the most part, winter weather will return to "normal" this season, because the El Niño and La Niña anomalies that shaped our past three winters have dissipated. Normal winter weather statistics are based on data for 1961-1990. The strong influence of the sea surface temperature in the tropical Pacific Ocean during an El Niño or La Niña episode, which makes it easier for forecasters to predict the trend for weather events, has given way to more neutral conditions. This winter, we should be prepared for swings in temperature and precipitation. The CPC is forecasting a more normal winter in ...
Date: December 1, 2000
Creator: Sisterson, D. L.
Partner: UNT Libraries Government Documents Department

Experiences of fitting isotherms to data from batch sorption experiments for radionuclides on tuffs

Description: Laboratory experiments have been performed on the sorption of radionuclides on tuff as site characterization information for the Yucca Mountain Project. This paper presents general observations on the results of curve-fitting of sorption data by isotherm equations and the effects of experimental variables on their regressional analysis. Observations are specific to the effectiveness and problems associated with fitting isotherms, the calculation and value of isotherm parameters, and the significance of experimental variables such as replication, particle size, mode of sorption, and mineralogy. These observations are important in the design of laboratory experiments to ensure that collected data are adequate for effectively characterizing sorption of radionuclides on tuffs or other materials. 13 refs., 2 figs., 4 tabs.
Date: November 1989
Creator: Polzer, W.L. & Fuentes, H.R.
Partner: UNT Libraries Government Documents Department
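
A minimal sketch of isotherm curve-fitting of the kind discussed above, using scipy to fit Langmuir and Freundlich forms to batch sorption data. These two equations are common choices assumed here for illustration, and the data points and units are placeholders, not the Yucca Mountain measurements.

```python
# Nonlinear fits of Langmuir and Freundlich isotherms to batch sorption data (placeholder data).
import numpy as np
from scipy.optimize import curve_fit

# C: equilibrium solution concentration; S: sorbed concentration (hypothetical units and values).
C = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
S = np.array([0.9, 1.6, 3.1, 4.8, 6.9, 9.4, 10.8, 11.7])

def langmuir(C, S_max, K):
    return S_max * K * C / (1.0 + K * C)

def freundlich(C, Kf, n):
    return Kf * C ** (1.0 / n)

for name, model, p0 in [("Langmuir", langmuir, (12.0, 1.0)),
                        ("Freundlich", freundlich, (5.0, 2.0))]:
    params, cov = curve_fit(model, C, S, p0=p0)
    resid = S - model(C, *params)
    print(name, "params:", np.round(params, 3),
          "RSS:", round(float(resid @ resid), 3),
          "std errors:", np.round(np.sqrt(np.diag(cov)), 3))
```

Replication, particle size, and the other experimental variables the report discusses enter through the scatter of the (C, S) points and hence through the parameter standard errors.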

Rethinking the learning of belief network probabilities

Description: Belief networks are a powerful tool for knowledge discovery that provide concise, understandable probabilistic models of data. There are methods grounded in probability theory to incrementally update the relationships described by the belief network when new information is seen, to perform complex inferences over any set of variables in the data, to incorporate domain expertise and prior knowledge into the model, and to automatically learn the model from data. This paper concentrates on part of the belief network induction problem, that of learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, the current practice of rote learning the probabilities in belief networks can be significantly improved upon. We advance the idea of applying any learning algorithm to the task of conditional probability learning in belief networks, discuss potential benefits, and show results of applying neural networks and other algorithms to a medium sized car insurance belief network. The results demonstrate from 10 to 100% improvements in model error rates over the current approaches.
Date: March 1, 1996
Creator: Musick, R.
Partner: UNT Libraries Government Documents Department
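
A minimal sketch contrasting rote (frequency-count) estimation of a conditional probability table with a learned model of the same conditional distribution, which is the substitution the paper advocates. The network fragment, the synthetic data, and the choice of a scikit-learn neural network are assumptions for illustration.

```python
# Rote CPT counting vs. a learned conditional model for one node given its parents (assumed setup).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n = 5_000

# Synthetic data for a fragment: binary parents A, B and child C with the true P(C=1|A,B) below.
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)
p_true = np.array([[0.1, 0.6], [0.7, 0.9]])              # rows indexed by A, columns by B
C = (rng.random(n) < p_true[A, B]).astype(int)

# Rote learning: relative frequencies with Laplace smoothing.
cpt = np.zeros((2, 2))
for a in (0, 1):
    for b in (0, 1):
        mask = (A == a) & (B == b)
        cpt[a, b] = (C[mask].sum() + 1) / (mask.sum() + 2)

# Learned model: a small neural network predicting P(C=1 | A, B) from the parent states.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2_000, random_state=0)
clf.fit(np.column_stack([A, B]), C)
grid = np.array([[a, b] for a in (0, 1) for b in (0, 1)])
learned = clf.predict_proba(grid)[:, 1].reshape(2, 2)

print("counted CPT:\n", np.round(cpt, 3), "\nlearned CPT:\n", np.round(learned, 3))
```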

The quantitative failure of human reliability analysis

Description: This philosophical treatise argues the merits of Human Reliability Analysis (HRA) in the context of the nuclear power industry. Actually, the author attacks historic and current HRA as having failed in informing policy makers who make decisions based on risk that humans contribute to systems performance. He argues for an HRA based on Bayesian (fact-based) inferential statistics, which advocates a systems analysis process that employs cogent heuristics when using opinion, and tempers itself with a rational debate over the weight given subjective and empirical probabilities.
Date: July 1, 1995
Creator: Bennett, C.T.
Partner: UNT Libraries Government Documents Department

Target induced, turbulence-modulated speckle noise

Description: Many papers on DIAL for remote sensing have been devoted to the averaging properties of speckle noise from diffuse-target returns; i.e., how many (N) return pulses can be averaged before the 1/N reduction in signal variance expected from uncorrelated noise fails. An apparent limit of about 100 pulses or fewer has been the most important factor in determining the accuracy of DIAL measurements using diffusely-scattering targets in the field. The relevant literature is briefly reviewed, and various explanations for the apparent limit are summarized. Recent speckle experiments at LLNL's Site 300 may suggest that the limit of approximately 100 pulses is not fundamental. The speckle experiments very clearly show that the limit on signal averaging in this data was the result of long-term (approximately 1 minute) drifts in the signal returns rather than of any more subtle statistical properties. The long-term drifts are completely removed to the useful limits of the data sets by working with the log-ratio of adjacent pulses. This procedure is analogous (but not identical) to processing the log-ratios of the powers at different wavelengths in a multi-line DIAL system. We think the Site 300 data therefore suggests that as long as the laser system is constructed to ensure that any long-term drifts are identical among the transmitted wavelengths, the log-ratio of the individual returns will provide a data set that does usefully average over a large number of pulses.
Date: July 1, 1994
Creator: Scharlemann, E.T.
Partner: UNT Libraries Government Documents Department
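
A minimal numerical sketch of the log-ratio processing described above: a synthetic return with multiplicative speckle and a slow drift is averaged directly and via the log-ratio of adjacent pulses, showing that the drift, not the speckle statistics, is what breaks the 1/N averaging. All parameters are placeholders.

```python
# Averaging speckle returns with and without the log-ratio of adjacent pulses (synthetic data).
import numpy as np

rng = np.random.default_rng(8)
n_pulses = 4_000
t = np.arange(n_pulses)

drift = 1.0 + 0.2 * np.sin(2 * np.pi * t / 1_500)          # slow, minute-scale signal drift
speckle = rng.exponential(scale=1.0, size=n_pulses)        # fully developed speckle intensity
returns = drift * speckle

def scatter_of_block_means(x, block):
    """Scatter of block-averaged values; shrinks like 1/sqrt(block) only for uncorrelated samples."""
    m = len(x) // block
    return x[: m * block].reshape(m, block).mean(axis=1).std()

log_ratio = np.log(returns[1:] / returns[:-1])             # slow drift cancels between adjacent pulses

for block in (10, 100, 1_000):
    print(f"N={block:5d}  raw scatter {scatter_of_block_means(returns, block):.4f}"
          f"  log-ratio scatter {scatter_of_block_means(log_ratio, block):.4f}")
```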

A comparison of risk assessment techniques from qualitative to quantitative

Description: Risk assessment techniques vary from purely qualitative approaches, through a regime of semi-qualitative to the more traditional quantitative. Constraints such as time, money, manpower, skills, management perceptions, risk result communication to the public, and political pressures all affect the manner in which risk assessments are carried out. This paper surveys some risk matrix techniques, examining the uses and applicability for each. Limitations and problems for each technique are presented and compared to the others. Risk matrix approaches vary from purely qualitative axis descriptions of accident frequency vs consequences, to fully quantitative axis definitions using multi-attribute utility theory to equate different types of risk from the same operation.
Date: February 13, 1995
Creator: Altenbach, T.J.
Partner: UNT Libraries Government Documents Department
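
A minimal sketch of a purely qualitative risk matrix of the kind surveyed above, mapping frequency and consequence categories to a risk ranking; the category labels and the matrix entries are placeholders, not taken from the paper.

```python
# Qualitative risk matrix lookup (placeholder categories and rankings).
FREQUENCY = ["incredible", "unlikely", "occasional", "frequent"]
CONSEQUENCE = ["negligible", "marginal", "critical", "catastrophic"]

# RISK[f][c]: ranking for frequency category f and consequence category c.
RISK = [
    ["low",    "low",    "low",    "medium"],
    ["low",    "low",    "medium", "high"],
    ["low",    "medium", "high",   "high"],
    ["medium", "high",   "high",   "high"],
]

def rank(frequency: str, consequence: str) -> str:
    return RISK[FREQUENCY.index(frequency)][CONSEQUENCE.index(consequence)]

print(rank("occasional", "critical"))   # -> "high"
```

Semi-qualitative and quantitative variants replace the category labels with numeric bins or utility values; the lookup structure stays the same.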

Analysis of hyper-spectral data derived from an imaging Fourier transform: A statistical perspective

Description: Fourier transform spectrometers (FTS) using optical sensors are increasingly being used in various branches of science. Typically, a FTS generates a three-dimensional data cube with two spatial dimensions and one frequency/wavelength dimension. The number of frequency dimensions in such data cubes is generally very large, often in the hundreds, making data analytical procedures extremely complex. In the present report, the problem is viewed from a statistical perspective. A set of procedures based on the high degree of inter-channel correlation structure often present in such hyper-spectral data has been identified and applied to an example data set of dimension 100 x 128 x 128 comprising 128 spectral bands. It is shown that in this case, the special eigen-structure of the correlation matrix has allowed the authors to extract just a few linear combinations of the channels (the significant principal vectors) that effectively contain almost all of the spectral information contained in the data set analyzed. This, in turn, enables them to segment the objects in the given spatial frame using, in a parsimonious yet highly effective way, most of the information contained in the data set.
Date: January 10, 1996
Creator: Sengupta, S.K.; Clark, G.A. & Fields, D.J.
Partner: UNT Libraries Government Documents Department
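
A minimal sketch of the correlation-based principal component reduction described above: the cube is reshaped to pixels by bands, the eigen-structure of the inter-band correlation matrix is computed, and only the leading principal vectors are kept as inputs to segmentation. The cube here is random placeholder data of the stated dimensions, not the FTS data set.

```python
# Principal-component reduction of a hyper-spectral cube via the inter-band correlation matrix.
import numpy as np

rng = np.random.default_rng(9)
bands, ny, nx = 128, 128, 100                      # dimensions as in the report; random stand-in data
cube = (rng.normal(size=(bands, ny, nx))
        + np.linspace(0, 3, bands)[:, None, None] * rng.normal(size=(ny, nx)))

X = cube.reshape(bands, -1).T                      # pixels x bands
X = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize each band

corr = np.corrcoef(X, rowvar=False)                # 128 x 128 inter-band correlation matrix
eigval, eigvec = np.linalg.eigh(corr)              # eigenvalues in ascending order
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

explained = np.cumsum(eigval) / eigval.sum()
k = int(np.searchsorted(explained, 0.99) + 1)      # components covering 99% of the variance
scores = X @ eigvec[:, :k]                         # k images that would feed the segmentation step
print(f"{k} principal vectors explain {explained[k - 1]:.3f} of the variance; scores {scores.shape}")
```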

STATTHERM: a statistical thermodynamics program for calculating thermochemical information

Description: A statistical thermodynamics program is presented which computes the thermochemical properties of a polyatomic molecule using statistical thermodynamic formulas. Thermodynamic data for substances involving the C, H, O, N, and Cl elements are fitted into NASA polynomial form for use in combustion research or other research where thermodynamic information is important.
Date: March 1, 1997
Creator: Marinov, N.M.
Partner: UNT Libraries Government Documents Department
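
A minimal sketch of the rigid-rotor/harmonic-oscillator statistical-thermodynamics formulas a program of this kind evaluates: the vibrational contribution to the heat capacity is summed over the molecule's normal-mode frequencies and added to the translational and rotational terms. The frequencies below are placeholder values for a nonlinear triatomic, and the NASA-polynomial fitting step is not shown.

```python
# Ideal-gas heat capacity from statistical thermodynamics (rigid rotor + harmonic oscillator).
import numpy as np

H_PLANCK = 6.62607015e-34      # J s
K_BOLTZ = 1.380649e-23         # J/K
C_LIGHT = 2.99792458e10        # cm/s
R_GAS = 8.314462618            # J/(mol K)

def cp_over_R(T, wavenumbers_cm1, linear=False):
    """Cp/R for an ideal gas: translation + rotation + harmonic vibrations (no electronic term)."""
    x = H_PLANCK * C_LIGHT * np.asarray(wavenumbers_cm1) / (K_BOLTZ * T)
    cv_vib = np.sum(x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2)
    cv_trans_rot = 1.5 + (1.0 if linear else 1.5)
    return cv_trans_rot + cv_vib + 1.0            # Cp = Cv + R for an ideal gas

# Placeholder: the three fundamental wavenumbers of a water-like nonlinear triatomic (cm^-1).
modes = [1595.0, 3657.0, 3756.0]
for T in (300.0, 1000.0, 2000.0):
    print(f"T = {T:6.0f} K   Cp = {cp_over_R(T, modes) * R_GAS:6.2f} J/(mol K)")
```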

Detection and track of a stochastic target using multiple measurements

Description: The authors are interested in search and tracking problems. In a search, the target might be located among a number of hiding places. Multiple measurements from various locations might be used to determine the likelihood that a particular hiding place is occupied. An obvious example would be a search for a weak radiation source in a building. Search teams might make many measurements with radiation detectors and analyze this data to determine likely areas for further searching. In this paper the authors present a statistical interpretation of the implications of measurements made on a stochastic system, one which makes random state transitions with known average rates. Knowledge of the system is represented as a statistical ensemble of instances which accord with measurements and prior information. The evolution of ratios of populations in this ensemble due to measurements and stochastic transitions may be calculated efficiently. Applied to target detection and tracking, this approach allows a rigorous definition of probability of detection and probability of false alarm and reveals a computationally useful functional relationship between the two. An example of a linear array of simple counters is considered in detail. For it, accurate analytic approximations are developed for detection and tracking statistics as functions of system parameters. A single measure of effectiveness for individual sensors is found which is a major determinant of system performance and which would be useful for initial sensor design.
Date: November 1, 1995
Creator: Cunningham, C.T.
Partner: UNT Libraries Government Documents Department
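
A minimal sketch of the measurement-update idea described above for a linear array of simple counters: each hiding place carries a posterior probability of holding the source, updated from Poisson count likelihoods under background-only versus background-plus-source rates. Rates, geometry, and priors are placeholders, and the stochastic state transitions of a moving target are omitted.

```python
# Bayesian update of source-location probabilities from counter measurements (placeholder rates).
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(10)

n_cells = 10                                   # hiding places along a linear array
background = 5.0                               # mean background counts per measurement
source_extra = 8.0                             # extra mean counts when the counter faces the source
true_cell = 7

# Prior: source equally likely in any cell, plus a "no source present" hypothesis.
log_post = np.log(np.full(n_cells + 1, 1.0 / (n_cells + 1)))

def poisson_logpmf(k, lam):
    return k * np.log(lam) - lam - gammaln(k + 1)

for cell_measured in range(n_cells):
    lam_true = background + (source_extra if cell_measured == true_cell else 0.0)
    k = rng.poisson(lam_true)
    # Likelihood of this count under each hypothesis: source in cell j, or no source (last index).
    lam_hyp = np.full(n_cells + 1, background)
    lam_hyp[cell_measured] += source_extra      # only the measured cell's hypothesis adds the source
    log_post += poisson_logpmf(k, lam_hyp)
    log_post -= np.logaddexp.reduce(log_post)   # renormalize

post = np.exp(log_post)
print("P(source in cell j):", np.round(post[:-1], 3), " P(no source):", round(float(post[-1]), 3))
# Probability of detection and of false alarm follow from thresholding these posteriors.
```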