Search Results

The Application of Statistical Classification to Business Failure Prediction

Description: Bankruptcy is a costly event. Holders of publicly traded securities can rely on security prices to reflect their risk. Other stakeholders have no such mechanism. Hence, methods for accurately forecasting bankruptcy would be valuable to them. A large body of literature has arisen on bankruptcy forecasting with statistical classification since Beaver (1967) and Altman (1968). Reported total error rates typically are 10%-20%, suggesting that these models reveal information which otherwise is unavailable and has value after financial data is released. This conflicts with evidence on market efficiency, which indicates that securities markets adjust rapidly and actually anticipate announcements of financial data. Efforts to resolve this conflict with event study methodology have run afoul of market model specification difficulties. A different approach is taken here. Most extant criticism of research design in this literature concerns inferential techniques but not sampling design. This paper attempts to resolve major sampling design issues. The most important conclusion concerns the usual choice of the individual firm as the sampling unit. While this choice is logically inconsistent with how a forecaster observes financial data over time, no evidence of bias could be found. In this paper, prediction performance is evaluated in terms of expected loss. Most authors calculate total error rates, which fail to reflect documented asymmetries in misclassification costs and prior probabilities. Expected loss overcomes this weakness and also offers a formal means to evaluate forecasts from the perspective of stakeholders other than investors. This study shows that the cost of misclassifying bankruptcy must be at least an order of magnitude greater than the cost of misclassifying nonbankruptcy before discriminant analysis methods have value. This conclusion follows from both sampling experiments on historical financial data and Monte Carlo experiments on simulated data.
However, the Monte Carlo experiments reveal that as the cost ratio increases, robustness of linear ...
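The expected-loss criterion the abstract contrasts with the total error rate can be sketched in a few lines. The prior probability, per-class error rates, and cost ratio below are illustrative assumptions, not figures from the study.

```python
# Expected loss of a bankruptcy classifier under asymmetric
# misclassification costs, versus the naive total error rate.
# All numbers are illustrative, not taken from the study.

def total_error_rate(fn_rate, fp_rate, prior_bankrupt):
    """Overall probability of misclassification, ignoring costs."""
    return prior_bankrupt * fn_rate + (1 - prior_bankrupt) * fp_rate

def expected_loss(fn_rate, fp_rate, prior_bankrupt, cost_fn, cost_fp):
    """Expected cost per firm: each error rate weighted by its prior and cost."""
    return (prior_bankrupt * fn_rate * cost_fn
            + (1 - prior_bankrupt) * fp_rate * cost_fp)

prior = 0.05            # bankruptcies are rare (assumed prior)
fn, fp = 0.10, 0.10     # 10% error on each class

# Total error rate treats both mistakes alike...
print(total_error_rate(fn, fp, prior))

# ...but with a misclassified bankruptcy assumed 10x as costly,
# expected loss shows the rare class's errors dominating the cost.
print(expected_loss(fn, fp, prior, cost_fn=10.0, cost_fp=1.0))
```

Under these assumed numbers the cost-weighted loss is dominated by the rare bankruptcy class, which is why the abstract's order-of-magnitude cost ratio matters.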
Date: December 1994
Creator: Haensly, Paul J.
Partner: UNT Libraries

IMG ER: A System for Microbial Genome Annotation Expert Review and Curation

Description: A rapidly increasing number of microbial genomes are sequenced by organizations worldwide and are eventually included in various public genome data resources. The quality of the annotations depends largely on the original dataset providers, with erroneous or incomplete annotations often carried over into the public resources and difficult to correct. We have developed an Expert Review (ER) version of the Integrated Microbial Genomes (IMG) system, with the goal of supporting systematic and efficient revision of microbial genome annotations. IMG ER provides tools for the review and curation of annotations of both new and publicly available microbial genomes within IMG's rich integrated genome framework. New genome datasets are included in IMG ER prior to their public release, either with their native annotations or with annotations generated by IMG ER's annotation pipeline. IMG ER tools allow users to address annotation problems detected with IMG's comparative analysis tools, such as genes missed by gene prediction pipelines or genes without an associated function. Over the past year, IMG ER was used to improve the annotations of about 150 microbial genomes.
Date: May 25, 2009
Creator: Markowitz, Victor M.; Mavromatis, Konstantinos; Ivanova, Natalia N.; Chen, I-Min A.; Chu, Ken & Kyrpides, Nikos C.
Partner: UNT Libraries Government Documents Department

The Development and Evaluation of a Forecasting System that Incorporates ARIMA Modeling with Autoregression and Exponential Smoothing

Description: This research was designed to develop and evaluate an automated alternative to the Box-Jenkins method of forecasting. The study involved two major phases. The first phase was the formulation of an automated ARIMA method; the second was the combination of forecasts from the automated ARIMA with forecasts from two other automated methods, the Holt-Winters method and the Stepwise Autoregressive method. The development of the automated ARIMA, based on a decision criterion suggested by Akaike, borrows heavily from the work of Ang, Chua and Fatema. Seasonality and small data set handling were among the modifications made to the original method to make it suitable for use with a broad range of time series. Forecasts were combined by means of both the simple average and a weighted averaging scheme. Empirical and generated data were employed to perform the forecasting evaluation. The 111 sets of empirical data came from the M-Competition. The twenty-one sets of generated data arose from ARIMA models that Box, Tiao and Pack analyzed using the Box-Jenkins method. To compare the forecasting abilities of the Box-Jenkins and the automated ARIMA alone and in combination with the other two methods, two accuracy measures were used. These measures, which are free of magnitude bias, are the mean absolute percentage error (MAPE) and the median absolute percentage error (Md APE).
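The two accuracy measures and the simple-average combination scheme can be sketched as follows; the series and forecast values are invented for illustration and are not data from the study.

```python
# MAPE and MdAPE accuracy measures, plus simple-average combination
# of forecasts from two methods. All data below are made up.
import statistics

def ape(actual, forecast):
    """Absolute percentage error for each period."""
    return [abs(a - f) / abs(a) * 100 for a, f in zip(actual, forecast)]

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return statistics.mean(ape(actual, forecast))

def md_ape(actual, forecast):
    """Median absolute percentage error (robust to one bad period)."""
    return statistics.median(ape(actual, forecast))

def combine(*forecast_sets):
    """Simple average of several methods' forecasts, period by period."""
    return [statistics.mean(vals) for vals in zip(*forecast_sets)]

actual = [100, 110, 120, 130]
arima  = [ 98, 112, 118, 133]   # hypothetical automated-ARIMA forecasts
holt   = [104, 108, 125, 128]   # hypothetical Holt-Winters forecasts

combined = combine(arima, holt)
print(round(mape(actual, combined), 2), round(md_ape(actual, combined), 2))
```

Both measures are percentage-based, which is why the abstract calls them free of magnitude bias: scaling the series and forecasts by a constant leaves them unchanged.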
Date: May 1985
Creator: Simmons, Laurette Poulos
Partner: UNT Libraries

A Theoretical and Empirical Investigation into the Application of a Cash-Flow Accounting System

Description: The objective of this research is to make a theoretical and empirical investigation into the application of a cash-flow accounting system. The theoretical investigation provides a definition for cash-flow accounting; it also examines the major arguments for a cash-flow reporting system. Three hypotheses are proposed for testing. The first states that during periods of changing prices, performance indicators that are based on conventional accrual accounting will diverge from performance indicators that are based on cash-flow accounting and will continue to diverge over time. The second states that this divergence will disappear if the effects of inflation are partialled out. The third states that cash-flow statements, properly interpreted, will enable users to predict business failure.
Date: December 1983
Creator: Habib, Abo-El-Yazeed Tawfik
Partner: UNT Libraries

Affective Forecasting: the Effects of Immune Neglect and Surrogation

Description: Studies of affective forecasting examine people’s ability to predict (forecast) their emotional (affective) responses to future events. Affective forecasts underlie nearly all decisions people make and the actions they take. However, people engage in systematic cognitive errors when making affective forecasts and most often overestimate the intensity and duration of their emotional responses. Understanding the mechanisms that lead to affective forecasting errors (e.g., immune neglect) and examining the utility of methods for improving affective forecasting errors (e.g., surrogation) can provide highly valuable information for clinicians as they assist clients in determining their goals both for therapy and for life. The first purpose of the current study was to determine if affective forecasting errors due to immune neglect lead to misjudgments about the relative emotional impact of minor versus moderate negative experiences (i.e., trauma severity). The second purpose was to examine the utility of surrogation for improving affective forecasts. Potential interaction effects between these two variables were also examined. The current study utilized a 2 (Trauma Severity: minor, moderate) X 3 (Prediction Information: surrogation information only, simulation information only, both types of information) experimental design. Undergraduates were recruited via the SONA system and randomly assigned to one of the six experimental conditions. A preliminary study was conducted to obtain surrogation information for use in the main study. All participants in the main study predicted how they would feel 10 minutes after receiving negative personality feedback, using a 10-point scale ranging from (1) very unhappy to (10) very happy. These predictions constitute their affective forecasts. 
All participants then actually received the negative personality feedback (ostensibly from another participant, a peer, in a nearby room) and reported their actual affective states ten minutes later, using the same scale. These ratings constitute their affective reports. Affective forecasting error was calculated as the difference between ...
Date: August 2012
Creator: Burkman, Summer Dae
Partner: UNT Libraries

527 Organizations and Campaign Activity: Timing of Reporting Requirements under Tax and Campaign Finance Laws

Description: This report compares the timing of election activity reporting requirements under the Internal Revenue Code (IRC) and Federal Election Campaign Act (FECA), and discusses H.R. 1204, which would amend the timing of the IRC’s reporting requirements.
Date: July 25, 2008
Creator: Lunder, Erika & Whitaker, L. Paige
Partner: UNT Libraries Government Documents Department

Workshop on Monsoon Climate Systems: Toward Better Prediction of the Monsoon

Description: The Earth's monsoon systems are the lifeblood of more than two-thirds of the world's population through the rainfall they provide to the mainly agrarian societies they influence. More than 60 experts gathered to assess the current understanding of monsoon variability and to highlight outstanding problems in simulating the monsoon.
Date: December 20, 2005
Creator: Sperber, K R & Yasunari, T
Partner: UNT Libraries Government Documents Department

Forecasting Quarterly Sales Tax Revenues: A Comparative Study

Description: The purpose of this study is to determine which of three forecasting methods provides the most accurate short-term forecasts, in terms of absolute and mean absolute percentage error, for a unique set of data. The study applies three forecasting techniques--the Box-Jenkins or ARIMA method, cycle regression analysis, and multiple regression analysis--to quarterly sales tax revenue data. The final results show that, with varying success, each model identifies the direction of change in the future, but does not closely identify the period to period fluctuations. Indeed, each model overestimated revenues for every period forecasted. Cycle regression analysis, with a mean absolute percentage error of 7.21, is the most accurate model. Multiple regression analysis has the smallest absolute percentage error of 3.13.
Date: August 1986
Creator: Renner, Nancy A. (Nancy Ann)
Partner: UNT Libraries

Validation and Investigation of the Four Aspects of Cycle Regression: A New Algorithm for Extracting Cycles

Description: The cycle regression analysis algorithm is the most recent addition to a group of techniques developed to detect "hidden periodicities." This dissertation investigates four major aspects of the algorithm. The objectives of this research are: 1. To develop an objective method of obtaining an initial estimate of the cycle period; the present procedure of obtaining this estimate involves considerable subjective judgment; 2. To validate the algorithm's success in extracting cycles from multi-cyclical data; 3. To determine if a consistent relationship exists among the smallest amplitude, the error standard deviation, and the number of replications of a cycle contained in the data; 4. To investigate the behavior of the algorithm in the prediction of major drops.
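One objective way to seed a cycle-period estimate, the kind of procedure the first objective calls for, is to take the period whose discrete-Fourier power is largest. This is an illustrative sketch, not the dissertation's actual algorithm.

```python
# Periodogram-based initial estimate of a dominant cycle period:
# compute DFT power at each candidate frequency and return the
# period with the most power. Illustrative only.
import cmath
import math

def dominant_period(series):
    """Return the period (in samples) with the largest periodogram power."""
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]       # remove the DC component
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):              # candidate frequencies
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(centered))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k                           # frequency index -> period

# A noiseless sine with period 12 over five full cycles is recovered.
data = [math.sin(2 * math.pi * t / 12) for t in range(60)]
print(dominant_period(data))
```

For multi-cyclical data of the kind in objective 2, the same power spectrum can be scanned for several local maxima rather than the single largest one.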
Date: December 1982
Creator: Mehta, Mayur Ravishanker
Partner: UNT Libraries

Job embeddedness versus traditional models of voluntary turnover: A test of voluntary turnover prediction.

Description: Voluntary turnover has historically been a problem for today's organizations. Traditional models of turnover continue to be utilized in a number of ways in both academia and industry. A newer model of turnover, job embeddedness, has recently been developed in an attempt to predict voluntary turnover better than existing models. Job embeddedness consists of organizational fit, organizational sacrifice, and organizational links. The purpose of this study is twofold. First, psychometric analyses were conducted on the job embeddedness model. Exploratory factor analyses were conducted on the dimensions of job embeddedness, which revealed a combined model consisting of five factors. This structure was then analyzed using confirmatory factor analysis, assessing 1-, 3-, and 5-factor model structures. The confirmatory factor analysis established the use of the 5-factor model structure in subsequent analyses in this study. The second purpose of this study is to compare the predictive power of the job embeddedness model with that of traditional models of turnover. The traditional model of turnover comprises job satisfaction, organizational commitment, and perceived job alternatives. To compare the predictive power of the job embeddedness and traditional models of voluntary turnover, a series of structural equation model analyses were conducted using LISREL. The job embeddedness model, alone, was found to be the best fit with the sample data. This fit was an improvement over the other two models tested (the traditional model and the combination of the traditional and job embeddedness models). In addition to assessing which model better predicts voluntary turnover, the study examined which age group and gender fit the job embeddedness model better. It was found that the job embeddedness model better predicts turnover intention for older respondents and for males.
Date: December 2005
Creator: Besich, John
Partner: UNT Libraries

Quantitative Methods for Reservoir Characterization and Improved Recovery: Application to Heavy Oil Sands

Description: Improved prediction of interwell reservoir heterogeneity was needed to increase productivity and to reduce recovery cost for California's heavy oil sands, which contain approximately 2.3 billion barrels of remaining reserves in the Temblor Formation and in other formations of the San Joaquin Valley. This investigation involved application of advanced analytical property-distribution methods conditioned to continuous outcrop control for improved reservoir characterization and simulation.
Date: February 7, 2003
Creator: Castle, James W.; Molz, Fred J.; Brame, Scott & Current, Caitlin J.
Partner: UNT Libraries Government Documents Department

Using artificial neural networks and the genetic algorithm to optimize well-field design: Phase I

Description: Reservoir simulation is a well-established component of reservoir management throughout much of the petroleum industry. Black oil simulators and more complex compositional, thermal, and chemical models are used as forecasting tools in both the day-to-day operational management of production facilities and longer-term field development planning. As yet, however, little use has been made of reservoir simulation coupled with systematic optimization techniques. The main advantage of applying these mathematical tools to decision-making problems is that they are less restricted by human imagination than conventional case-by-case comparisons. As the number of competing engineering, economic, and environmental planning objectives and constraints increases, it becomes difficult for human planners to track complex interactions and select a manageable set of promising scenarios for examination. Using optimization techniques, the search can range over all possible combinations of variables, locating strategies whose effectiveness is not always obvious to planners. Optimization also generates large sets of promising scenarios from which planners can choose.
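A genetic-algorithm search of the kind the report couples to a reservoir simulator can be sketched with a toy fitness function standing in for a simulator run. Everything below (the grid size, the location of the optimum, and the GA parameters) is an illustrative assumption, not the report's setup.

```python
# Minimal genetic algorithm over candidate well locations on a grid.
# fitness() is a toy stand-in for an expensive reservoir-simulator run.
import random

random.seed(0)
GRID = 20  # candidate (x, y) well locations on a 20 x 20 grid (assumed)

def fitness(well):
    """Toy surrogate for simulated recovery: peaks at the assumed spot (12, 7)."""
    x, y = well
    return -((x - 12) ** 2 + (y - 7) ** 2)

def mutate(well, rate=0.2):
    """Reset each coordinate to a random cell with probability `rate`."""
    return tuple(random.randrange(GRID) if random.random() < rate else g
                 for g in well)

def evolve(generations=60, pop_size=30):
    pop = [(random.randrange(GRID), random.randrange(GRID))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            (x1, _), (_, y2) = random.sample(survivors, 2)
            children.append(mutate((x1, y2)))    # crossover, then mutation
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because every candidate evaluation here would be a full simulator run in practice, the population-based search is exactly where the report's interest in surrogates such as neural networks comes from.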
Date: March 1, 1998
Creator: Johnson, V. M. & Rogers, L. L.
Partner: UNT Libraries Government Documents Department

Predicting cancer outcome

Description: We read with interest the paper by Michiels et al on the prediction of cancer with microarrays and the commentary by Ioannidis listing the potential as well as the limitations of this approach (February 5, p 488 and 454). Cancer is a disease characterized by complex, heterogeneous mechanisms, and studies to define factors that can direct new drug discovery and use should be encouraged. However, this is easier said than done. Casti teaches that a better understanding does not necessarily translate into better prediction, and that useful prediction is possible without complete understanding (1). To attempt both explanation and prediction in a single nonmathematical construct is a tall order (Figure 1).
Date: March 24, 2005
Creator: Gardner, S N & Fernandes, M
Partner: UNT Libraries Government Documents Department

Quantitative Methods for Reservoir Characterization and Improved Recovery: Application to Heavy Oil Sands

Description: Improved prediction of interwell reservoir heterogeneity was needed to increase productivity and to reduce recovery cost for California's heavy oil sands, which contain approximately 2.3 billion barrels of remaining reserves in the Temblor Formation and in other formations of the San Joaquin Valley. This investigation involved application of advanced analytical property-distribution methods conditioned to continuous outcrop control for improved reservoir characterization and simulation.
Date: February 7, 2003
Creator: Castle, J.W.; Bridges, R.A.; Lorinovich, C.J.; Molz, Fred J.; Dinwiddie, C.L. & Lu, S.
Partner: UNT Libraries Government Documents Department

Systematic Model Error and Seasonal Predictability of the Asian Summer Monsoon

Description: The goals of this paper are to (1) ascertain the ability of atmospheric GCMs to hindcast the summer monsoons of 1987, 1988, and 1993, (2) to determine how well the models represent the dominant modes of subseasonal variability of the 850hPa flow, (3) to determine if the models can represent the strong link between the subseasonal modes of variability and the rainfall, (4) to determine if the models properly project these modes onto interannual timescales, (5) to determine if it is possible to objectively discriminate among the ensemble members to ascertain which members are most reliable. The results are based upon contributions to the seasonal prediction model intercomparison project (SMIP), which was initiated by the CLIVAR Working Group on Seasonal to Interannual Prediction (WGSIP). For June-September, ensembles of integrations were performed using observed initial conditions, and observed sea surface temperatures. Here, the results from a 4-member ensemble from the United Kingdom Met Office (UKMO) model are presented for the sake of brevity. The conclusions based on the analysis of this model are consistent with the behavior of the other models.
Date: August 22, 2000
Creator: Sperber, K.R.; Brankovic, C.; Deque, M.; Fredericksen, C.S.; Graham, R.; Kitoh, A. et al.
Partner: UNT Libraries Government Documents Department

Quantitative Methods for Reservoir Characterization and Improved Recovery: Application to Heavy Oil Sands

Description: Improved prediction of interwell reservoir heterogeneity was needed to increase productivity and to reduce recovery cost for California's heavy oil sands, which contain approximately 2.3 billion barrels of remaining reserves in the Temblor Formation and in other formations of the San Joaquin Valley. This investigation involved application of advanced analytical property-distribution methods conditioned to continuous outcrop control for improved reservoir characterization and simulation.
Date: February 7, 2003
Creator: Castle, J.W.; Molz, F.J.; Brame, S.E. & Falta, R.W.
Partner: UNT Libraries Government Documents Department

Quantitative Methods for Reservoir Characterization and Improved Recovery: Application to Heavy Oil Sands

Description: Improved prediction of interwell reservoir heterogeneity is needed to increase productivity and to reduce recovery cost for California's heavy oil sands, which contain approximately 2.3 billion barrels of remaining reserves in the Temblor Formation and in other formations of the San Joaquin Valley. This investigation involved application of advanced analytical property-distribution methods conditioned to continuous outcrop control for improved reservoir characterization and simulation.
Date: February 7, 2003
Creator: Castle, James W. & Molz, Fred J.
Partner: UNT Libraries Government Documents Department

Geographical and Temporal Dynamics of Chaetocnema Pulicaria Populations and Their Role in Stewart's Disease of Corn in Iowa

Description: This thesis is organized into five chapters. Chapter 1 is the introduction and justification, Chapters 2 and 3 are journal papers, Chapter 4 is a preliminary analysis of winter environmental variables and their use in forecasting Stewart's disease of corn, and Chapter 5 presents general conclusions and discussion. References can be found at the end of each chapter except Chapter 5 and are specific to that chapter.
Date: May 1, 2001
Creator: Esker, Paul David
Partner: UNT Libraries Government Documents Department

Current Economic Conditions and Selected Forecasts

Description: This report begins with a comprehensive presentation of current economic conditions focusing on income growth, unemployment, and inflation. The posture of monetary and fiscal policy is surveyed as are the forecasts of economic activity. It concludes with data on the factors important for economic growth.
Date: August 21, 2006
Creator: Makinen, Gail
Partner: UNT Libraries Government Documents Department