Search Results

Characterization of a mixed salt of 1-hydroxy-pyridin-2-one Pu(IV) complexes

Description: Most expert analyses of projected world energy needs show that utilization of nuclear energy will be essential for the next few decades, and hence the need to support this technology grows. As one measure of the supporting science base of this field, as of December 2006 only 25 Pu-containing structures were in the Cambridge Structural Database, as compared to 21,807 for Fe. A comparison of the rate of addition to this knowledge base reveals that approximately 500 Fe structures are registered with the Cambridge Structural Database every year, while in the same period only two or three Pu crystal structures are published. A continuing objective of this laboratory has been the development of new sequestering agents for actinide decorporation and selective extraction. This effort has been based on similarities between the properties of Pu(IV) and Fe(III), and on the chelating groups in microbial Fe(III) sequestering agents, siderophores. The HOPO ligands (Figure 1) are one such class of chelating group that has been investigated as selective actinide extractants.
Date: January 9, 2007
Creator: Gorden, Anne E.V.; Xu, Jide; Szigethy, Geza; Oliver, Allen; Shuh, David K. & Raymond, Kenneth N.
Partner: UNT Libraries Government Documents Department

On the Inclusion of the Interfacial Area Between Phases in the Physical and Mathematical Description of Subsurface Multiphase Flow

Description: The scientific motivation for conducting this research lies in an assessment of the current state of modeling in the subsurface, where "modeling" refers to any systematic framework used to gain understanding of a system. At present, subsurface modeling addresses only the phases present, and even there considers the modeling process to be an extension of single-phase flow. The shortcoming of this approach is that the governing flow equations do not account for some of the important physical phenomena; therefore, accurate simulation is more of an art than a scientific exercise. Experimental and field programs designed to measure data in support of these equations may actually be seeking curve-fitting coefficients rather than information characteristic of physical phenomena. By providing a more general framework, we can contribute to improving the knowledge base related to important processes in the subsurface.
Date: March 17, 2000
Creator: Gray, William G.; Soll, Wendy E. & Tompson, Andrew F.B.
Partner: UNT Libraries Government Documents Department

A Comparative Analysis of Guided vs. Query-Based Intelligent Tutoring Systems (ITS) Using a Class-Entity-Relationship-Attribute (CERA) Knowledge Base

Description: One of the greatest problems facing researchers in the subfield of Artificial Intelligence known as Intelligent Tutoring Systems (ITS) is the selection of a knowledge base design that will facilitate the modification of the knowledge base. The Class-Entity-Relationship-Attribute (CERA) model, proposed by R. P. Brazile, holds certain promise as a more generic knowledge base design framework upon which robust and efficient ITS can be built. This study has a twofold purpose. The first is to demonstrate that a CERA knowledge base can be constructed for an ITS on a subset of the domain of Cretaceous paleontology and function as the "expert module" of the ITS. The second is to test the validity of the ideas that students guided through a lesson learn more factual knowledge, while those who explore the knowledge base underlying the lesson through query, at their own pace, will be able to formulate their own integrative knowledge from the knowledge gained in their explorations and will spend more time on the system. This study concludes that a CERA-based system can be constructed as an effective teaching tool. However, while an ITS treatment provides statistically significant gains in achievement test scores, the type of treatment seems not to matter as much as time spent on task. This would seem to indicate that a query-based system which allows users to progress at their own pace would be a better type of system for the presentation of material, due to the greater amount of on-line computer time exhibited by the users.
Date: August 1987
Creator: Hall, Douglas Lee
Partner: UNT Libraries

The DOE Model for Improving Seismic Event Locations Using Travel Time Corrections: Description and Demonstration

Description: The U.S. National Laboratories, under the auspices of the Department of Energy, have been tasked with improving the capability of the United States National Data Center (USNDC) to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the most important services which the USNDC must provide is to locate suspicious events, preferably as accurately as possible, to help identify their origin and to ensure the success of on-site inspections if they are deemed necessary. The seismic location algorithm used by the USNDC has the capability to generate accurate locations by applying geographically dependent travel time corrections, but to date none of the means proposed for generating and representing these corrections has proven to be entirely satisfactory. In this presentation, we detail the complete DOE model for how regional calibration travel time information gathered by the National Labs will be used to improve event locations and provide more realistic location error estimates. We begin with residual data and error estimates from ground truth events. Our model consists of three parts: data processing, data storage, and data retrieval. The former two are effectively one-time processes, executed in advance before the system is made operational. The last step is required every time an accurate event location is needed. Data processing involves applying non-stationary Bayesian kriging to the residual data to densify them, and iterating to find the optimal tessellation representation for fast interpolation in the data retrieval task. Both the kriging and the iterative re-tessellation are slow, computationally expensive processes, but this is acceptable because they are performed off-line, before any events are to be located. In the data storage task, the densified data set is stored in a database and spatially indexed.
Spatial indexing improves the access efficiency of the geographically-oriented data requests associated with event ...
Date: October 20, 1998
Creator: Hipp, J.R.; Moore, S.G.; Shepherd, E. & Young, C.J.
Partner: UNT Libraries Government Documents Department

Test of the generalizability of "KBIT" (an artificial intelligence-derived assessment instrument) across medical problems

Description: The purpose of this study was to determine if KBIT's (Knowledge Base Inference Tool) psychometric properties and the relationships between coarse and fine cognitive constructs could be shown to be generalizable across medical problems. Subsequently, this study represents the initial testing of KBIT's generalizability via the use of the problem space called neurological "weakness".
Date: May 1991
Creator: Papa, Frank J.
Partner: UNT Libraries

The Knowledge Base Interface for Parametric Grid Information

Description: The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results should that be necessary.
Date: August 3, 1999
Creator: Hipp, James R.; Simons, Randall W. & Young, Chris J.
Partner: UNT Libraries Government Documents Department

Migration of Research Results into Operational Monitoring Systems

Description: For the Department of Energy (DOE) Knowledge Base to support activities for monitoring nuclear explosions consistent with eventual verification activities under the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a process is defined to ensure the integrity and utility of research results during the migration into information products for use in operational monitoring systems. The process of validating, verifying, and managing the information products ensures deliveries of high-quality Knowledge Base releases to the United States National Data Center (USNDC). These activities are critical to the successful integration of scientific research to support operational monitoring systems at the USNDC. A by-product of this process is that datasets, or components of information products, that have undergone the validation and verification process may be distributed as operational calibration products to the International Data Centre. All contributors to information products, whether DOE-funded or not, will benefit from transparency of the integration process to effect successful participation in the process. As an information product passes through the steps necessary to become part of a delivery to the USNDC, domain experts, including the end users, will provide validation -- a determination of relevance and scientific quality. The integration process continues with verification -- an assessment of completeness and correctness, provided by the Knowledge Base integrator, the information product coordinator, and the contributing organization. The information products and their constituent datasets are systematically tracked through the integration portion of their life cycle (Moore et al., 2000; Carr et al., 2000). Finally, the proposed delivery of the Knowledge Base and its constituent information products is reviewed by an Integration Board. The integration process is presented in this paper, with details described in Moore et al. (2000).
Date: August 7, 2000
Partner: UNT Libraries Government Documents Department

An object-based methodology for knowledge representation

Description: An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.
Date: November 1, 1997
Creator: Kelsey, R.L.; Hartley, R.T. & Webster, R.B.
Partner: UNT Libraries Government Documents Department

Knowledge base interpolation of path-dependent data using irregularly spaced natural neighbors

Description: This paper summarizes the requirements for the interpolation scheme needed for the CTBT Knowledge Base and discusses interpolation issues relative to the requirements. Based on these requirements, a methodology for providing an accurate and robust interpolation scheme for the CTBT Knowledge Base is proposed. The method utilizes a Delaunay triangle tessellation to mesh the Earth's surface and employs the natural-neighbor interpolation technique to provide accurate evaluation of geophysical data that is important for CTBT verification. The natural-neighbor interpolation method is a local weighted-average technique capable of modeling the sparse, irregular data sets commonly found in the geophysical sciences. This is particularly true of the data to be contained in the CTBT Knowledge Base. Furthermore, natural-neighbor interpolation is first-order continuous everywhere except at the data points. The non-linear form of the natural-neighbor interpolation method can provide continuous first- and second-order derivatives throughout the entire data domain. Since one of the primary support functions of the Knowledge Base is to provide event location capabilities, and the seismic event location algorithms typically require first- and second-order continuity, this is a prime requirement of any interpolation methodology chosen for use by the CTBT Knowledge Base.
Date: August 1, 1996
Creator: Hipp, J.; Keyser, R.; Young, C.; Shepard-Dombroski, E. & Chael, E.
Partner: UNT Libraries Government Documents Department
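The Delaunay-plus-natural-neighbor approach described in the entry above can be illustrated with a short sketch. Note the hedge: SciPy does not ship true natural-neighbor (Sibson) interpolation, so this sketch uses LinearNDInterpolator, which performs barycentric interpolation over the same Delaunay tessellation, as a simpler stand-in; the point coordinates and values are invented for illustration.

```python
# Sketch of Delaunay-based interpolation over irregularly spaced data,
# in the spirit of the natural-neighbor scheme described above.
# LinearNDInterpolator (barycentric, not Sibson) is used as a stand-in.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Irregularly spaced sample points (e.g., station locations) and values
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
values = 2.0 * points[:, 0] + 3.0 * points[:, 1]  # a known affine surface

interp = LinearNDInterpolator(points, values)

# Inside the convex hull, a triangulation-based interpolant reproduces
# affine surfaces exactly; queries outside the hull return NaN.
print(float(interp(0.25, 0.25)))  # -> 1.25
```

Like the natural-neighbor method in the abstract, this interpolant is local (only the vertices of the containing triangle contribute), though it lacks the smoothness properties claimed for the non-linear natural-neighbor form.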

Optimization of Surfactant Mixtures and Their Interfacial Behavior for Advanced Oil Recovery

Description: The objective of this project was to develop a knowledge base that is helpful for the design of improved processes for mobilizing and producing oil left behind by conventional techniques. The main goal was to develop and evaluate mixtures of new or modified surfactants for improved oil recovery. In this regard, interfacial properties of novel biodegradable n-alkyl pyrrolidones and sugar-based surfactants have been studied systematically. Emphasis was on designing cost-effective processes compatible with existing conditions and operations, in addition to ensuring minimal reagent loss.
Date: March 4, 2002
Creator: Somasundaran, Prof. P.
Partner: UNT Libraries Government Documents Department

Microstructural Evolution in the 2219 Aluminum Alloy During Severe Plastic Deformation

Description: Numerous investigations have demonstrated that intense plastic deformation is an attractive procedure for producing an ultrafine grain size in metallic materials. Torsional deformation under high pressure and equal-channel angular extrusion are two techniques that can produce microstructures with grain sizes in the submicrometer and nanometer range. Materials with these microstructures have many attractive properties. The microstructures formed by these two processing techniques are essentially the same and thus the processes occurring during deformation should be the same. Most previous studies have examined the final microstructures produced as a result of severe plastic deformation and the resulting properties. Only a limited number of studies have examined the evolution of microstructure. As a result, some important aspects of ultra-fine grain formation during severe plastic deformation remain unknown. There is also limited data on the influence of the initial state of the material on the microstructural evolution and mechanisms of ultra-fine grain formation. This limited knowledge base makes optimization of processing routes difficult and retards commercial application of these techniques. The objective of the present work is to examine the microstructure evolution during severe plastic deformation of a 2219 aluminum alloy. Specific attention is given to the mechanism of ultrafine grain formation as a result of severe plastic deformation.
Date: March 29, 2000
Creator: Kaibyshev, R.O.; Safarov, I.M. & Lesuen, D.R.
Partner: UNT Libraries Government Documents Department

Technical Integration of Defense Threat Reduction Agency (DTRA) Location Related Funded Projects into the DOE Knowledge Base

Description: This document directly reviews the current Defense Threat Reduction Agency (DTRA) PRDA contracts and describes how they can best be integrated with the DOE CTBT R&D Knowledge Base. Contract descriptions and numbers listed below are based on the DOE CTBT R&D Web Site - http://www.ctbt.rnd.doe.gov. More detailed information on the nature of each contract can be found through this web site. In general, the location related PRDA contracts provide products over a set of categories. These categories can be divided into five areas, namely: Contextual map information; Reference event data; Velocity models; Phase detection/picking algorithms; and Location techniques.
Date: March 14, 2000
Creator: Schultz, C.A.; Bhattacharyya, J.; Flanaga, M.; Goldstein, P.; Myers, S. & Swenson, J.
Partner: UNT Libraries Government Documents Department

Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage and Access.

Description: The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for ...
Date: October 1, 1999
Creator: Hipp, J. R.; Young, C. J.; Moore, S. G.; Shepherd, E. R.; Schultz, C. A. & Myers, S. C.
Partner: UNT Libraries Government Documents Department
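The data-access step in the entry above hinges on point location: given a query point, find the triangle of the tessellation that contains it, so the node values can be interpolated. A minimal sketch of that step, using SciPy's Delaunay.find_simplex in place of the walking-triangle search named in the abstract (the node coordinates are invented for illustration):

```python
# Point location in a Delaunay tessellation: which triangle holds a query?
# find_simplex stands in for the walking-triangle search cited above.
import numpy as np
from scipy.spatial import Delaunay

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tess = Delaunay(nodes)  # two triangles covering the unit square

queries = np.array([[0.25, 0.25], [2.0, 2.0]])
simplices = tess.find_simplex(queries)
# First query lies inside the tessellation (index >= 0);
# the second is outside, signalled by -1.
print(simplices)
```

Once the containing triangle is known, its three node values (the "surfaces" of the abstract) can be combined by natural-neighbor or barycentric weights to produce the interpolated correction.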

LDRD final report on enhanced edge detection techniques for manufacturing quality control and materials characterization

Description: Detecting object boundaries in the presence of cast shadows is a difficult task for machine vision systems. A new edge detector is presented which responds to shadow penumbras and abrupt object edges with distinguishable signals. The detector requires the use of spatially extended light sources and sufficient video resolution to resolve the shadow penumbras of interest. Detection of high frequency noise is suppressed without requiring image-dependent adjustment of signal thresholds. The ability of the edge operator to distinguish shadow penumbras from abrupt object boundaries while suppressing responses to high frequency noise and texture is illustrated with idealized shadow and object edge intensity profiles. Selective detection of object boundaries in a video scene with a cast shadow has also been demonstrated with this operator.
Date: January 1, 1997
Creator: Osbourn, G.C.
Partner: UNT Libraries Government Documents Department

National Nuclear Security Administration Knowledge Base Core Table Schema Document

Description: The National Nuclear Security Administration is creating a Knowledge Base to store technical information to support the United States nuclear explosion monitoring mission. This document defines the core database tables that are used in the Knowledge Base. The purpose of this document is to present the ORACLE database tables in the NNSA Knowledge Base that are based on modifications to the CSS3.0 Database Schema developed in 1990 (Anderson et al., 1990). These modifications include additional columns in the affiliation table, an increase in the internal ORACLE format from 8 integers to 9 integers for thirteen IDs, and new primary and unique key definitions for six tables. It is intended to be used as a reference by researchers inside and outside of NNSA/DOE as they compile information to submit to the NNSA Knowledge Base. These "core" tables are separated into two groups. The Primary tables are dynamic and consist of information that can be used in automatic and interactive processing (e.g., arrivals, locations). The Lookup tables change infrequently and hold auxiliary information used by the processing. In general, the information stored in the core tables consists of: arrivals; events, origins, and associations of arrivals; magnitude information; station information (networks, site descriptions, instrument responses); pointers to waveform data; and comments pertaining to the information. This document is divided into four sections, the first being this introduction. Section two defines the sixteen tables that make up the core tables of the NNSA Knowledge Base database. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. In addition, the primary, unique, and foreign keys are defined. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams. The last section defines the columns or attributes of the various tables.
Information that ...
Date: September 1, 2002
Partner: UNT Libraries Government Documents Department

Development of a Model for Managing Organizational Knowledge

Description: We created three models to represent a comprehensive knowledge model:
· Stages of Knowledge Management Model (Forrester)
· Expanded Life-Cycle Information Management Model
· Organizational Knowledge Management Model
In building this series of models, we started with an attempt to create a graphical model that illustrates the ideas outlined in the Forrester article (Leadership Strategies, Vol. 3, No. 2, November/December 1997). We then expanded and detailed a life-cycle model. Neither of these effectively reflected how to manage the complexities involved in weaving local, enterprise, and global information into an easily navigated resource for end users. We finally began to synthesize these ideas into an Organizational Knowledge Management Model. This model acknowledges the relevance of life-cycle management for different granularities of information collections and places it in the context of the integrating infrastructure needed to assist end users.
Date: May 5, 1999
Creator: Ashdown, B. & Smith, K.
Partner: UNT Libraries Government Documents Department

Facilities Monitoring Project - AMISS. Annual report, March 14, 1997--March 13, 2000

Description: The objectives of this program are to design and develop knowledge-based systems for increasing safety and security in nuclear facilities, to implement a graphic interface in G2, and to assess the performance of the system using validation data. Integration of the systems with sensors and sensor fusion systems is also a goal of this project. This project was designed as a team effort among LANL, FSU, and contractors within Allied Signal, and the collaboration has been a successful venture. Each part of the team has brought very valuable contributions to the project during the year, and the cooperation level among the sub-teams has been phenomenal. The purpose of this report is to summarize the accomplishments of the FSU part of the team. Foundational work has been performed for all of the project goals during the past year at FSU. Susan Bassett (Hruska) spent the summer at LANL under this contract, picking up much of the administrative oversight of the project while Paul Argo was on extended vacation. Chris Lacher, Chair of Computer Science, took on the responsibility of on-site leadership and direction of FSU GRAs when Susan Hruska took a partial leave of absence from FSU beginning in January 1998. Kristin Adair spent one semester on site at LANL as a GRA, and all of the FSU team members have traveled at least once to LANL for team meetings during the year. The LANL and FSU teams also met at a technical conference in Orlando in the fall to present a special session on knowledge-based systems and to hold lengthy team meetings. A web page for the project has provided additional communication links and a forum for sharing information and reports of progress between the sub-teams.
Date: June 1, 1998
Creator: Hruska, S.I. & Lacher, R.C.
Partner: UNT Libraries Government Documents Department

Technical Status Report for US Wind Farmers Network

Description: The theme of the work in this quarter was community-based wind and locally owned wind projects. The work Windustry has done is just beginning to touch the heart of the matter for a hugely interested audience of rural landowners and rural communities. We revised and published a Windustry Newsletter article on two farmer-owned wind projects called Minwind I and Minwind II. This article was largely researched and written last quarter, but the principal individuals who organized the wind projects didn't want any more farmers calling them up than they already had, so they urged us to put a hold on the article or not publish it. This presented a unique problem for Windustry. Up to this point, we had not dealt with generating too much attention for a wind energy project. The story of a group of farmers and individuals pooling their resources for two locally owned commercial-scale wind projects is very compelling, and the organizers of the projects were getting a great deal of attention from other farmers who wanted more details on the project. However, the organizers committed a large amount of their own resources toward the setup of this project, which took many hours with their legal counsel, and they did not have the capacity or the desire to provide answers for all the other farmers and individuals who were requesting information. Windustry worked with the business entity and did not publish the newsletter until we resolved some of the problems with the high level of interest in this project. Windustry resolved to address this issue by creating a custom track in the state and regional wind energy conference held in Minneapolis, November 21-22, 2002. There were a few sessions in the Landowner and Citizen Workshops track that were specifically created to talk about the ''how-to's ...
Date: February 19, 2003
Creator: Daniels, Lisa
Partner: UNT Libraries Government Documents Department

Damage displacement phenomena in Si junction devices : mapping and interpreting a science and technology knowledge domain.

Description: As technical knowledge grows deeper, broader, and more interconnected, knowledge domains increasingly combine a number of sub-domains. More often than not, each of these sub-domains has its own community of specialists and forums for interaction. Hence, from a generalist's viewpoint, it is sometimes difficult to understand the relationships between the sub-domains within the larger domain; and, from a specialist's viewpoint, it may be difficult for those working in one sub-domain to keep abreast of knowledge gained in another sub-domain. These difficulties can be especially important in the initial stages of creating new projects aimed at adding knowledge either at the domain or sub-domain level. To circumvent these difficulties, one would ideally like to create a map of the knowledge domain--a map which would help clarify relationships between the various sub-domains, and a map which would help inform choices regarding investing in the production of knowledge either at the domain or sub-domain levels. In practice, creating such a map is non-trivial. First, relationships between knowledge subdomains are complex, and not likely to be easily simplified into a visualizable 2-or-few-dimensional map. Second, even if some of the relationships can be simplified, capturing them would require some degree of expert understanding of the knowledge domain, rendering impossible any fully automated method for creating the map. In this work, we accept these limitations, and within them, attempt to explore semi-automated methodologies for creating such a map. We chose as the knowledge domain for this case study 'displacement damage phenomena in Si junction devices'. This knowledge domain spans a particularly wide range of knowledge subdomains, and hence is a particularly challenging one.
Date: October 1, 2004
Creator: Tsao, Jeffrey Yeenien
Partner: UNT Libraries Government Documents Department

Development to Release of CTBT Knowledge Base Datasets

Description: For the CTBT Knowledge Base to be useful as a tool for improving U.S. monitoring capabilities, the contents of the Knowledge Base must be subjected to a well-defined set of procedures to ensure integrity and relevance of the constituent datasets. This paper proposes a possible set of procedures for datasets that are delivered to Sandia National Laboratories (SNL) for inclusion in the Knowledge Base. The proposed procedures include defining preliminary acceptance criteria, performing verification and validation activities, and subjecting the datasets to approval by domain experts. Preliminary acceptance criteria include receipt of the data, its metadata, and a proposal for its usability for U.S. National Data Center operations. Verification activities establish the correctness and completeness of the data, while validation activities establish the relevance of the data to its proposed use. Results from these activities are presented to domain experts, such as analysts and peers, for final approval of the datasets for release to the Knowledge Base. Formats and functionality will vary across datasets, so the procedures proposed herein define an overall plan for establishing integrity and relevance of the dataset. Specific procedures for verification, validation, and approval will be defined for each dataset, or for each type of dataset, as appropriate. Potential dataset sources, including Los Alamos National Laboratory and Lawrence Livermore National Laboratory, have contributed significantly to the development of this process.
Date: October 20, 1998
Creator: Moore, S.G. & Shepherd, E.R.
Partner: UNT Libraries Government Documents Department

Biofuel alternatives to ethanol: pumping the microbial well

Description: Engineered microorganisms are currently used for the production of food products, pharmaceuticals, ethanol fuel, and more. Even so, the enormous potential of this technology has yet to be fully exploited. The need for sustainable sources of transportation fuels has generated a tremendous interest in technologies that enable biofuel production. Decades of work have produced a considerable knowledge base for the physiology and pathway engineering of microbes, making microbial engineering an ideal strategy for producing biofuel. Although ethanol currently dominates the biofuel market, some of its inherent physical properties make it a less than ideal product. To highlight additional options, we review advances in microbial engineering for the production of other potential fuel molecules, using a variety of biosynthetic pathways.
Date: December 2, 2009
Creator: Fortman, J. L.; Chhabra, Swapnil; Mukhopadhyay, Aindrila; Chou, Howard; Lee, Taek Soon; Steen, Eric et al.
Partner: UNT Libraries Government Documents Department

Did you pave the road to project completion with gold, asphalt, or mud?

Description: The Simpson-Wilberg Technique for Expert Decision Tree Modeling (patent case number S-83,506) establishes and identifies the proper requirements governing effective business and operating practices through the use of knowledge-based engineering (KBE). The new application, the Project Control Expert System (ProCon X), was created with the Simpson-Wilberg technique as a tool for project definition, planning, and control. ProCon X guides the project manager through an interview session, asking questions about project characteristics, and establishes the proper project criteria. The criteria produced define the rigor applied to the project activities, i.e., level of work breakdown structure (WBS), type of scheduling, amount of cost estimating, etc. In this paper the use of the Simpson-Wilberg technique is illustrated by solving the problem of identifying proper project criteria for project management. This technique has been used for other applications outside the project management field and can be used in any field with a need for consistent application of the rules and procedures routinely used for business and operating practices.
Date: December 31, 1994
Creator: Wilberg, M. & Simpson, W.
Partner: UNT Libraries Government Documents Department

Rethinking the learning of belief network probabilities

Description: Belief networks are a powerful tool for knowledge discovery that provide concise, understandable probabilistic models of data. There are methods grounded in probability theory to incrementally update the relationships described by the belief network when new information is seen, to perform complex inferences over any set of variables in the data, to incorporate domain expertise and prior knowledge into the model, and to automatically learn the model from data. This paper concentrates on part of the belief network induction problem: learning the quantitative structure (the conditional probabilities), given the qualitative structure. In particular, the current practice of rote learning the probabilities in belief networks can be significantly improved upon. We advance the idea of applying any learning algorithm to the task of conditional probability learning in belief networks, discuss potential benefits, and show results of applying neural networks and other algorithms to a medium-sized car insurance belief network. The results demonstrate 10 to 100% improvements in model error rates over the current approaches.
Date: March 1, 1996
Creator: Musick, R.
Partner: UNT Libraries Government Documents Department
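The "rote learning" baseline this entry argues against is simply estimating each conditional probability table (CPT) by counting co-occurrences in the data. A minimal sketch of that baseline, with optional add-alpha smoothing; the variable names, the toy records, and the helper function are illustrative only and not from the paper:

```python
# Rote (frequency-count) learning of a CPT P(child | parents) -- the
# baseline the paper proposes to improve on. All names are illustrative.
from collections import Counter

def learn_cpt(records, parents, child, child_values, alpha=1.0):
    """Maximum-likelihood CPT estimate with add-alpha smoothing."""
    joint = Counter()     # counts of (parent-config, child-value) pairs
    marginal = Counter()  # counts of each parent-config
    for r in records:
        cfg = tuple(r[p] for p in parents)
        joint[(cfg, r[child])] += 1
        marginal[cfg] += 1
    k = len(child_values)
    def p(child_value, cfg):
        return (joint[(cfg, child_value)] + alpha) / (marginal[cfg] + alpha * k)
    return p

# Toy data loosely echoing the car-insurance setting: claim given age bracket.
data = [
    {"age": "young", "claim": "yes"},
    {"age": "young", "claim": "yes"},
    {"age": "young", "claim": "no"},
    {"age": "old", "claim": "no"},
]
p = learn_cpt(data, ["age"], "claim", ["yes", "no"], alpha=0.0)
print(p("yes", ("young",)))  # 2 of 3 young drivers filed a claim -> 2/3
```

The paper's point is that any learning algorithm (e.g., a neural network) can replace this counting step, trading the unbiased but high-variance frequency estimates for a smoother learned model of the conditional distributions.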