UNT Theses and Dissertations - 25 Matching Results

Search Results

Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications

Description: With advances in computer technology, organizations are able to store large amounts of data in data warehouses. There are two fundamental issues researchers must address: the dimensionality of data and the interpretation of multiple statistical tests. The first issue addressed by this research is the determination of the number of components to retain in principal components analysis. This research establishes regression, asymptotic theory, and neural network approaches for estimating mean and 95th percentile eigenvalues for implementing Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and numbers of variables. The adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs the best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique using a density function with a mixture of betas to model p-values is viable for p-values from central and non-central t distributions. The simulation study shows that final estimates obtained in the proposed mixture approach reliably estimate the true proportion of the distributions associated with the null and nonnull hypotheses. Modeling the density of p-values allows for better control of the true experimentwise error rate and is used to provide insight into grouping hypothesis tests for clustering purposes. Future research will expand the simulation to include p-values generated from additional distributions. The techniques presented are applied to data from Lake Texoma where the size of the database and the number of hypotheses of interest call for nontraditional data mining techniques. The issue is to determine if information technology can be used to monitor the chlorophyll levels in the lake as chloride is removed upstream. 
A relationship established between chlorophyll and the energy reflectance, which can be measured by satellites, enables ...
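Horn's parallel analysis, the retention rule at the center of the first issue, can be sketched in a few lines of Python. This is a minimal Monte Carlo illustration on made-up data (the three-factor structure, loadings, and seeds below are invented for the example), not the regression, asymptotic, or neural network estimators the dissertation develops:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis: retain components whose observed
    eigenvalues exceed the chosen percentile of eigenvalues obtained
    from uncorrelated random data of the same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(sims, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Synthetic data with three underlying components among nine variables
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 3))
loadings = np.zeros((3, 9))
for k in range(3):
    loadings[k, 3 * k : 3 * k + 3] = 0.9   # each factor drives 3 variables
data = factors @ loadings + 0.5 * rng.standard_normal((500, 9))
print("components retained:", parallel_analysis(data))
```

The simulated percentile thresholds here stand in for the mean and 95th percentile eigenvalue estimates that the dissertation's methods approximate without simulation.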
Date: August 1999
Creator: Keeling, Kellie Bliss
Partner: UNT Libraries

Reliable Prediction Intervals and Bayesian Estimation for Demand Rates of Slow-Moving Inventory

Description: Application of multisource feedback (MSF) increased dramatically and became widespread globally in the past two decades, but there was little conceptual work regarding self-other agreement and few empirical studies investigated self-other agreement in other cultural settings. This study developed a new conceptual framework of self-other agreement and used three samples to illustrate how national culture affected self-other agreement. These three samples included 428 participants from China, 818 participants from the US, and 871 participants from globally dispersed teams (GDTs). An EQS procedure and a polynomial regression procedure were used to examine whether the covariance matrices were equal across samples and whether the relationships between self-other agreement and performance would be different across cultures, respectively. The results indicated MSF could be applied to China and GDTs, but the pattern of relationships between self-other agreement and performance was different across samples, suggesting that the results found in the U.S. sample were the exception rather than the rule. Demographics also affected self-other agreement disparately across perspectives and cultures, indicating self-concept was susceptible to cultural influences. The proposed framework only received partial support but showed great promise to guide future studies. This study contributed to the literature by: (a) developing a new framework of self-other agreement that could be used to study various contextual factors; (b) examining the relationship between self-other agreement and performance in three vastly different samples; (c) providing some important insights about consensus between raters and self-other agreement; (d) offering some practical guidelines regarding how to apply MSF to other cultures more effectively.
Date: August 2007
Creator: Lindsey, Matthew Douglas
Partner: UNT Libraries

Investigating the relationship between the business performance management framework and the Malcolm Baldrige National Quality Award framework.

Description: The business performance management (BPM) framework helps an organization continuously adjust and successfully execute its strategies. BPM helps increase flexibility by providing managers with an early alert about changes and, as a result, allows faster response to such changes. The Malcolm Baldrige National Quality Award (MBNQA) framework provides a basis for self-assessment and a systems perspective for managing an organization's key processes for achieving business results. The MBNQA framework is a more comprehensive framework and encapsulates the underlying constructs in the BPM framework. The objectives of this dissertation are fourfold: (1) to validate the underlying relationships presented in the 2008 MBNQA framework, (2) to explore the MBNQA framework at the dimension level, and develop and test constructs measured at that level in a causal model, (3) to validate and create a common general framework for the business performance model by integrating the practitioner literature with basic theory including existing MBNQA theory, and (4) to integrate the BPM framework and the MBNQA framework into a new framework (BPM-MBNQA framework) that can guide organizations in their journey toward achieving and sustaining competitive and strategic advantages. The purpose of this study is to achieve these objectives by means of a combination of methodologies including literature reviews, expert opinions, interviews, presentation feedback, content analysis, and latent semantic analysis. An initial BPM framework was developed based on the reviews of literature and expert opinions. There is a paucity of academic research on business performance management. Therefore, this study reviewed the practitioner literature on BPM and from the numerous organization-specific BPM models developed a generic, conceptual BPM framework. 
With the intent of obtaining valuable feedback, this initial BPM framework was presented to Baldrige Award recipients (BARs) and selected academicians from across the United States who participated in the Fall Summit 2007 held at Caterpillar Financial Headquarters ...
Date: August 2009
Creator: Hossain, Muhammad Muazzem
Partner: UNT Libraries

Links among perceived service quality, patient satisfaction and behavioral intentions in the urgent care industry: Empirical evidence from college students.

Description: Patient perceptions of health care quality are critical to a health care service provider's long-term success because of the significant influence perceptions have on customer satisfaction and consequently organization financial performance. Patient satisfaction affects not only the outcome of the health care process such as patient compliance with physician advice and treatment, but also patient retention and favorable word-of-mouth. Accordingly, it is a critical strategy for health care organizations to provide quality service and address patient satisfaction. The urgent care (UC) industry is an integral part of the health care system in the United States that has been experiencing rapid growth. UC provides a wide range of medical services for a large group of patients and now serves an increasing population. UC is becoming popular because of the convenient locations, extended hours, walk-in policy, short waiting times, and accessibility. A closer examination of the current health care research, however, indicates that there is a paucity of research on urgent care providers. Confronted with the emergence of the urgent care industry and the increasing demand for urgent care, it is necessary to understand how patients perceive urgent care providers and what influences patient satisfaction and retention. This dissertation addresses four areas relevant to the above-mentioned issues: (1) development of an instrument to measure perceived service quality in the urgent care industry; (2) identification of the determinants of patient satisfaction and behavioral intentions; (3) empirical examination of the relationships among perceived service quality, patient satisfaction and behavioral intentions; and (4) comparison of the perceived service quality across several primary urgent care providers, such as urgent care centers, hospital emergency departments, and primary care physicians' offices. 
To validate this new instrument and examine the hypothesized relationships proposed in this study, an electronic web based survey was designed and administered to college ...
Date: August 2009
Creator: Qin, Hong
Partner: UNT Libraries

Impact of Forecasting Method Selection and Information Sharing on Supply Chain Performance.

Description: Effective supply chain management gains much attention from industry and academia because it helps firms across a supply chain to reduce cost and improve customer service level efficiently. Focusing on one of the key challenges of the supply chains, namely, demand uncertainty, this dissertation extends the work of Zhao, Xie, and Leung so as to examine the effects of forecasting method selection coupled with information sharing on supply chain performance in a dynamic business environment. The results of this study showed that under various scenarios, advanced forecasting methods such as neural network and GARCH models play a more significant role when capacity tightness increases and are more important to the retailers than to the supplier under certain circumstances in terms of supply chain costs. Thus, advanced forecasting models should be promoted in supply chain management. However, this study also demonstrated that forecasting methods not capable of modeling features of certain demand patterns significantly impact a supply chain's performance. That is, a forecasting method misspecified for characteristics of the demand pattern usually results in higher supply chain costs. Thus, in practice, supply chain managers should be cognizant of the cost impact of selecting commonly used traditional forecasting methods, such as moving average and exponential smoothing, in conjunction with various operational and environmental factors, to keep supply chain cost under control. This study demonstrated that when capacity tightness is high for the supplier, information sharing plays a more important role in effective supply chain management. In addition, this study also showed that retailers benefit directly from information sharing when advanced forecasting methods are employed under certain conditions.
Date: December 2009
Creator: Pan, Youqin
Partner: UNT Libraries

Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

Description: In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed. Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) are two text data computer algorithms that have received much attention individually in the text data literature for topic extraction studies but not for document classification or comparison studies. Since classification is considered an important human function and has been studied in the areas of cognitive science and information science, in this dissertation a research study was performed to compare LDA, LSA and humans as document classifiers. The research questions posed in this study are: R1: How accurate are LDA and LSA in classifying documents in a corpus of textual data over a known set of topics? R2: How accurate are humans in performing the same classification task? R3: How does LDA classification performance compare to LSA classification performance? To address these questions, a classification study involving human subjects was designed where humans were asked to generate and classify documents (customer comments) at two levels of abstraction for a quality assurance setting. Then two computer algorithms, LSA and LDA, were used to perform classification on these documents. The results indicate that humans outperformed all computer algorithms and had an accuracy rate of 94% at the higher level of abstraction and 76% at the lower level of abstraction. At the high level of abstraction, the accuracy rates were 84% for both LSA and LDA and at the lower level, the accuracy rates were 67% for LSA and 64% for LDA. The findings of this research have many strong implications for the ...
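As a toy illustration of the LSA/LDA comparison, both representations can feed the same downstream classifier. The corpus, labels, and scikit-learn components below are stand-ins invented for the example, not the study's customer comments or its implementations:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy corpus with two known topics (shipping vs. billing complaints)
docs = [
    "package arrived late and the box was damaged",
    "shipment lost in transit and tracking never updated",
    "delivery driver left the parcel at the wrong address",
    "box crushed during shipping and items broken",
    "charged twice on my credit card this month",
    "invoice shows the wrong amount and I need a refund",
    "billing statement has an unexpected fee",
    "refund never posted to my card after the return",
] * 3                                  # repeat so cross-validation has data
labels = ([0] * 4 + [1] * 4) * 3

X = CountVectorizer().fit_transform(docs)

# LSA: truncated SVD of the term-document matrix -> continuous topic scores
lsa_scores = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
# LDA: probabilistic topic mixture per document
lda_scores = LatentDirichletAllocation(n_components=2,
                                       random_state=0).fit_transform(X)

clf = LogisticRegression(max_iter=1000)
results = {}
for name, feats in [("LSA", lsa_scores), ("LDA", lda_scores)]:
    results[name] = cross_val_score(clf, feats, labels, cv=3).mean()
    print(name, "accuracy:", round(results[name], 2))
```

On a real corpus, topic-space accuracy for each method would be compared against the human classification rates the study reports.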
Date: December 2011
Creator: Anaya, Leticia H.
Partner: UNT Libraries

The Impact of Quality on Customer Behavioral Intentions Based on the Consumer Decision Making Process As Applied in E-commerce

Description: Perceived quality in the context of e-commerce has been defined and examined in numerous studies, but, to date, there are no consistent definitions and measurement scales. Instruments that measure quality in e-commerce industries primarily focus on website quality or service quality during the transaction and delivery phases. Even though some scholars have proposed instruments from different perspectives, these scales do not fully evaluate the level of quality perceived by customers during the entire decision-making process. This dissertation aims to provide five main contributions for the e-commerce, service quality, and decision science literature: (1) development of a comprehensive instrument to measure how online customers perceive the quality of the shopping channel, website, transaction and recovery based on the customer decision making process; (2) identification of the determinants of customer satisfaction and the key dimensions of customer behavioral intentions in e-commerce; (3) examination of the relationships among perceived quality, customer satisfaction and loyalty intention using empirical data; (4) application of different statistical packages (LISREL and PLS-Graph) for data analysis and comparison of how these methods impact the results; and (5) examination of the moderating effects of control variables. A survey was designed and distributed to a total of 1126 college students at a large southwestern university in the U.S. Exploratory factor analysis, confirmatory factor analysis, and structural equation modeling with both LISREL and PLS-Graph are used to validate the comprehensive instrument and test the research hypotheses. The results provide theoretical and normative guidelines for researchers and practitioners in the e-commerce domain. 
The research results will also help e-commerce platform providers or e-retailers to improve their business and marketing strategies by providing a better understanding of the most important factors influencing customer behavioral intentions.
Date: August 2012
Creator: Wen, Chao
Partner: UNT Libraries

Supply Chain Network Planning for Humanitarian Operations During Seasonal Disasters

Description: To prevent loss of lives during seasonal disasters, relief agencies distribute critical supplies and provide lifesaving services to the affected populations. Despite agencies' efforts, frequently occurring disasters increase the cost of relief operations. The purpose of our study is to minimize the cost of relief operations, considering that such disasters cause random demand. To achieve this, we have formulated a series of models, which are distinct from the current studies in three ways. First, to the best of our knowledge, we are the first to capture both perishable and durable products together. Second, we have aggregated multiple products in a different way than current studies do. This unique aggregation requires less data than that of other types of aggregation. Finally, our models are compatible with the practical data generated by FEMA. Our models offer insights on the impacts of various parameters on optimum cost and order size. The analyses of correlation of demand and quality of information offer interesting insights; for instance, under certain cases, the quality of information does not influence cost. Our study has considered both risk-averse and risk-neutral approaches and provided insights. The insights obtained from our models are expected to help agencies reduce the cost of operations by choosing cost-effective suppliers.
Date: May 2013
Creator: Ponnaiyan, Subramaniam
Partner: UNT Libraries

Robustness of Parametric and Nonparametric Tests When Distances between Points Change on an Ordinal Measurement Scale

Description: The purpose of this research was to evaluate the effect on parametric and nonparametric tests using ordinal data when the distances between points changed on the measurement scale. The research examined the performance of Type I and Type II error rates using selected parametric and nonparametric tests.
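A miniature version of such a robustness check can be run in a few lines. The unequally spaced scale values and sample sizes below are hypothetical; both samples come from the same population, so every rejection is a Type I error:

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
scale = np.array([1, 2, 3, 7, 15])     # ordinal categories, unequal spacing
reps, n, alpha = 5000, 30, 0.05
rej_t = rej_u = 0
for _ in range(reps):
    a = rng.choice(scale, n)           # both samples from one population,
    b = rng.choice(scale, n)           # so the null hypothesis is true
    rej_t += ttest_ind(a, b).pvalue < alpha
    rej_u += mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
rate_t, rate_u = rej_t / reps, rej_u / reps
print("t test Type I rate:", rate_t)
print("Mann-Whitney Type I rate:", rate_u)
```

Repeating this across different spacings and sample sizes is the basic design the dissertation pursues at much larger scale, with Type II error rates examined under shifted alternatives.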
Date: August 1994
Creator: Chen, Andrew H. (Andrew Hwa-Fen)
Partner: UNT Libraries

Comparing the Powers of Several Proposed Tests for Testing the Equality of the Means of Two Populations When Some Data Are Missing

Description: In comparing the means of two normally distributed populations with unknown variance, two tests very often used are the two independent sample and the paired sample t tests. There is a possible gain in the power of the significance test by using the paired sample design instead of the two independent samples design.
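The potential power gain from pairing is easy to demonstrate by simulation when the paired measurements are positively correlated. The shift, correlation, and sample size below are illustrative choices, not the dissertation's design:

```python
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(0)
reps, n, shift, rho = 4000, 25, 0.5, 0.7
hits_ind = hits_pair = 0
for _ in range(reps):
    z, e = rng.normal(size=n), rng.normal(size=n)
    x = z                                            # first measurement
    y = rho * z + np.sqrt(1 - rho**2) * e + shift    # correlated, shifted
    hits_ind += ttest_ind(x, y).pvalue < 0.05
    hits_pair += ttest_rel(x, y).pvalue < 0.05
print("independent-samples power:", hits_ind / reps)
print("paired-samples power:", hits_pair / reps)
```

Pairing removes the shared between-subject variance from the difference scores, which is why the paired test rejects the false null far more often here.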
Date: May 1994
Creator: Dunu, Emeka Samuel
Partner: UNT Libraries

Robustness of the One-Sample Kolmogorov Test to Sampling from a Finite Discrete Population

Description: One of the most useful and best known goodness of fit tests is the Kolmogorov one-sample test. The assumptions for the Kolmogorov one-sample test are: 1. A random sample; 2. A continuous random variable; 3. F(x) is a completely specified hypothesized cumulative distribution function. The Kolmogorov one-sample test has a wide range of applications. Knowing the effect from using the test when an assumption is not met is of practical importance. The purpose of this research is to analyze the robustness of the Kolmogorov one-sample test to sampling from a finite discrete distribution. The standard tables for the Kolmogorov test are derived based on sampling from a theoretical continuous distribution. As such, the theoretical distribution is infinite. The standard tables do not include a method or adjustment factor to estimate the effect on table values for statistical experiments where the sample stems from a finite discrete distribution without replacement. This research provides an extension of the Kolmogorov test when the hypothesized distribution function is finite and discrete, and the sampling distribution is based on sampling without replacement. An investigative study has been conducted to explore possible tendencies and relationships in the distribution of Dn when sampling with and without replacement for various parameter settings. In all, 96 sampling distributions were derived. Results show the standard Kolmogorov table values are conservative, particularly when the sample sizes are small or the sample represents 10% or more of the population.
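The flavor of the investigation can be reproduced with a small simulation: draw without replacement from a finite discrete population and compare Dn with the standard critical value derived from continuous-distribution theory. The toy population below (values 1-20, five copies each) is invented for the example; the dissertation derives exact sampling distributions instead:

```python
import numpy as np
from scipy.stats import kstwo

rng = np.random.default_rng(0)
# Finite discrete population: values 1..20, each appearing 5 times (N = 100)
support = np.arange(1, 21)
population = np.repeat(support, 5)
n, reps = 10, 5000
hyp_cdf = support / 20.0                 # hypothesized discrete uniform CDF

d_stats = np.empty(reps)
for i in range(reps):
    sample = np.sort(rng.choice(population, size=n, replace=False))
    # Both the ECDF and the hypothesized CDF only jump at support points,
    # so the supremum distance is attained there
    ecdf = np.searchsorted(sample, support, side="right") / n
    d_stats[i] = np.max(np.abs(ecdf - hyp_cdf))

crit = kstwo.ppf(0.95, n)                # standard continuous-theory value
rate = np.mean(d_stats > crit)
print("rejection rate at nominal 5%:", rate)
```

Consistent with the finding quoted above, the empirical rejection rate falls below the nominal level: sampling 10% of a finite population without replacement, from a discrete hypothesized CDF, makes the standard table conservative.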
Date: December 1996
Creator: Tucker, Joanne M. (Joanne Morris)
Partner: UNT Libraries

Mathematical Programming Approaches to the Three-Group Classification Problem

Description: In the last twelve years there has been considerable research interest in mathematical programming approaches to the statistical classification problem, primarily because they are not based on the assumptions of the parametric methods (Fisher's linear discriminant function, Smith's quadratic discriminant function) for optimality. This dissertation focuses on the development of mathematical programming models for the three-group classification problem and examines the computational efficiency and classificatory performance of proposed and existing models. The classificatory performance of these models is compared with that of Fisher's linear discriminant function and Smith's quadratic discriminant function. Additionally, this dissertation investigates theoretical characteristics of mathematical programming models for the classification problem with three or more groups. A computationally efficient model for the three-group classification problem is developed. This model minimizes directly the number of misclassifications in the training sample. Furthermore, the classificatory performance of the proposed model is enhanced by the introduction of a two-phase algorithm. The same algorithm can be used to improve the classificatory performance of any interval-based mathematical programming model for the classification problem with three or more groups. A modification to improve the computational efficiency of an existing model is also proposed. In addition, a multiple-group extension of a mathematical programming model for the two-group classification problem is introduced. A simulation study on classificatory performance reveals that the proposed models yield lower misclassification rates than Fisher's linear discriminant function and Smith's quadratic discriminant function under certain data configurations. Data configurations, where the parametric methods outperform the proposed models, are also identified. 
A number of theoretical characteristics of mathematical programming models for the classification problem are identified. These include conditions for the existence of feasible solutions, as well as conditions for the avoidance of degenerate solutions. Additionally, conditions are identified that guarantee the classificatory non-inferiority of one model over another in the training ...
Date: August 1993
Creator: Loucopoulos, Constantine
Partner: UNT Libraries

The Fixed v. Variable Sampling Interval Shewhart X-Bar Control Chart in the Presence of Positively Autocorrelated Data

Description: This study uses simulation to examine differences between fixed sampling interval (FSI) and variable sampling interval (VSI) Shewhart X-bar control charts for processes that produce positively autocorrelated data. The influence of sample size (1 and 5), autocorrelation parameter, shift in process mean, and length of time between samples is investigated by comparing the average time to signal (ATS) and the average number of samples to signal (ANSS) for FSI and VSI Shewhart X-bar charts. These comparisons are conducted in two ways: control chart limits pre-set at ±3σ_x / √n and limits computed from the sampling process. Proper interpretation of the Shewhart X-bar chart requires the assumption that observations are statistically independent; however, process data are often autocorrelated over time. Results of this study indicate that increasing the time between samples decreases the effect of positive autocorrelation between samples. Thus, with sufficient time between samples the assumption of independence is essentially not violated. Samples of size 5 produce a faster signal than samples of size 1 with both the FSI and VSI Shewhart X-bar chart when positive autocorrelation is present. However, samples of size 5 require the same time when the data are independent, indicating that this effect is a result of autocorrelation. This research determined that the VSI Shewhart X-bar chart signals increasingly faster than the corresponding FSI chart as the shift in the process mean increases. If the process is likely to exhibit a large shift in the mean, then the VSI technique is recommended. But the faster signaling time of the VSI chart is undesirable when the process is operating on target. However, if the control limits are estimated from process samples, results show that when the process is in control the ARL for the FSI and the ANSS for the VSI are approximately the same, and ...
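The underlying independence issue is easy to demonstrate: positive within-sample autocorrelation inflates the variance of the sample mean, so the usual ±3σ/√n limits signal too often even when the process is in control. This sketch uses an AR(1) process with an assumed parameter value and does not reproduce the study's FSI/VSI comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

def xbar_false_alarm_rate(phi, n=5, reps=50_000):
    """Share of in-control X-bar points beyond ±3σ/√n limits when
    consecutive observations follow an AR(1) with parameter phi."""
    x = np.empty((reps, n))
    x[:, 0] = rng.normal(size=reps)                 # marginal variance 1
    noise = rng.normal(scale=np.sqrt(1 - phi**2), size=(reps, n))
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + noise[:, t]
    limit = 3.0 / np.sqrt(n)                        # assumes independence
    return np.mean(np.abs(x.mean(axis=1)) > limit)

r_indep = xbar_false_alarm_rate(0.0)    # ≈ 0.0027 by design
r_ar = xbar_false_alarm_rate(0.8)       # autocorrelation inflates alarms
print("independent data:", r_indep)
print("phi = 0.8:", r_ar)
```

Spacing samples further apart in time, as the study's results suggest, reduces the correlation carried between and within samples and pulls the false alarm rate back toward the nominal 0.0027.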
Date: May 1993
Creator: Harvey, Martha M. (Martha Mattern)
Partner: UNT Libraries

A Heuristic Procedure for Specifying Parameters in Neural Network Models for Shewhart X-bar Control Chart Applications

Description: This study develops a heuristic procedure for specifying parameters for a neural network configuration (learning rate, momentum, and the number of neurons in a single hidden layer) in Shewhart X-bar control chart applications. Also, this study examines the replicability of the neural network solution when the neural network is retrained several times with different initial weights.
Date: December 1993
Creator: Nam, Kyungdoo T.
Partner: UNT Libraries

The Effect of Certain Modifications to Mathematical Programming Models for the Two-Group Classification Problem

Description: This research examines certain modifications of the mathematical programming models to improve their classificatory performance. These modifications involve the inclusion of second-order terms and secondary goals in mathematical programming models. A Monte Carlo simulation study is conducted to investigate the performance of two standard parametric models and various mathematical programming models, including the MSD (minimize sum of deviations) model, the MIP (mixed integer programming) model and the hybrid linear programming model.
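A minimal MSD model can be written as a linear program: minimize the total deviation of points that fall on the wrong side of a cutoff, with a unit gap as the normalization. The synthetic two-group data and the SciPy formulation below are illustrative, not the dissertation's models or its second-order and secondary-goal modifications:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
g1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(30, 2))
g2 = rng.normal(loc=[2.5, 2.5], scale=1.0, size=(30, 2))
X = np.vstack([g1, g2])
n, p = X.shape
sign = np.r_[np.ones(30), -np.ones(30)]   # +1: group 1 lies below the cutoff

# Variables: [w (p weights), c (cutoff), d (n deviations >= 0)]
# Group 1: w.x_i <= c - 1 + d_i ; Group 2: w.x_i >= c + 1 - d_i
A = np.hstack([sign[:, None] * X, -sign[:, None], -np.eye(n)])
b = -np.ones(n)
cost = np.r_[np.zeros(p + 1), np.ones(n)]          # minimize sum of deviations
bounds = [(None, None)] * (p + 1) + [(0, None)] * n
res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds)
w, cut = res.x[:p], res.x[p]

pred_group2 = X @ w > cut
accuracy = np.mean(pred_group2 == (sign < 0))
print("LP status:", res.status, " training accuracy:", round(accuracy, 3))
```

Note the distinction the abstract draws: the MSD model minimizes the sum of boundary deviations, whereas the MIP model penalizes the count of misclassifications directly at a higher computational cost.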
Date: May 1994
Creator: Wanarat, Pradit
Partner: UNT Libraries

A Simulation Study Comparing Various Confidence Intervals for the Mean of Voucher Populations in Accounting

Description: This research examined the performance of three parametric methods for confidence intervals: the classical, the Bonferroni, and the bootstrap-t method, as applied to estimating the mean of voucher populations in accounting. Usually auditing populations do not follow standard models. The population for accounting audits generally is a nonstandard mixture distribution in which the audit data set contains a large number of zero values and a comparatively small number of nonzero errors. This study assumed a situation in which only overstatement errors exist. The nonzero errors were assumed to be normally, exponentially, and uniformly distributed. Five indicators of performance were used. The classical method was found to be unreliable. The Bonferroni method was conservative for all population conditions. The bootstrap-t method was excellent in terms of reliability, but the lower limit of the confidence intervals produced by this method was unstable for all population conditions. The classical method provided the shortest average width of the confidence intervals among the three methods. This study provided initial evidence as to how the parametric bootstrap-t method performs when applied to the nonstandard distribution of audit populations of line items. Further research should provide a reliable confidence interval for a wider variety of accounting populations.
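The bootstrap-t method examined here studentizes each resampled mean and inverts the resulting percentiles. The zero-inflated "audit population" below is simulated for illustration (10% exponential overstatements, parameters invented), not drawn from real voucher data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonstandard audit population: ~90% zero errors plus a few overstatements
n = 200
errors = np.where(rng.random(n) < 0.1,
                  rng.exponential(scale=50.0, size=n), 0.0)

def bootstrap_t_ci(x, alpha=0.05, B=2000):
    """Bootstrap-t (studentized) confidence interval for the mean."""
    m = len(x)
    xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(m)
    t_stats = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=m, replace=True)
        t_stats[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(m))
    hi_q, lo_q = np.percentile(t_stats, [100 * (1 - alpha / 2),
                                         100 * alpha / 2])
    return xbar - hi_q * se, xbar - lo_q * se

lo, hi = bootstrap_t_ci(errors)
print("95% bootstrap-t CI for the mean error:", round(lo, 2), "to", round(hi, 2))
```

Because the studentized resampling distribution is skewed for this kind of population, the resulting interval is asymmetric around the sample mean, which is the behavior that gives bootstrap-t its reliability advantage over the classical interval in the study.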
Date: December 1992
Creator: Lee, Ihn Shik
Partner: UNT Libraries

Classification by Neural Network and Statistical Models in Tandem: Does Integration Enhance Performance?

Description: The major purposes of the current research are twofold. The first purpose is to present a composite approach to the general classification problem by using outputs from various parametric statistical procedures and neural networks. The second purpose is to compare several parametric and neural network models on a transportation planning related classification problem and five simulated classification problems.
Date: December 1998
Creator: Mitchell, David
Partner: UNT Libraries

Accuracy and Interpretability Testing of Text Mining Methods

Description: Extracting meaningful information from large collections of text data is problematic because of the sheer size of the database. However, automated analytic methods capable of processing such data have emerged. These methods, collectively called text mining, first began to appear in 1988. A number of additional text mining methods quickly developed in independent research silos with each based on unique mathematical algorithms. How good each of these methods is at analyzing text is unclear. Method development typically evolves from some research silo centric requirement with the success of the method measured by a custom requirement-based metric. Results of the new method are then compared to another method that was similarly developed. The proposed research introduces an experimentally designed testing method to text mining that eliminates research silo bias and simultaneously evaluates methods from all of the major context-region text mining method families. The proposed research method follows a random block factorial design with two treatments consisting of three and five levels (RBF-35) with repeated measures. The contribution of the research is threefold. First, the users perceived a difference in the effectiveness of the various methods. Second, while still not clear, there are characteristics within the text collection that affect an algorithm's ability to extract meaningful results. Third, this research develops an experimental design process for testing the algorithms that is adaptable into other areas of software development and algorithm testing. This design eliminates the bias-based practices historically employed by algorithm developers.
Date: August 2013
Creator: Ashton, Triss A.
Partner: UNT Libraries

A Relationship-based Cross National Customer Decision-making Model in the Service Industry

Description: In 2012, the CIA World Fact Book showed that the service sector contributed about 76.6% and 51.4% of the 2010 gross national products of the United States and Ghana, respectively. Research in the services area shows that a firm's success in today's competitive business environment is dependent upon its ability to deliver superior service quality. However, these studies have yet to address factors that influence customers to remain committed to a mass service in economically diverse countries. In addition, there is little research on established service quality measures pertaining to the mass service domain. This dissertation applies Rusbult's investment model of relationship commitment and examines its psychological impact on the commitment level of a customer towards a service in two economically diverse countries. In addition, service quality is conceptualized as a hierarchical construct in the mass service (banking) and specific dimensions are developed on which customers assess their quality evaluations. Using PLS path modeling, a structural equation modeling approach to data analysis, service quality as a hierarchical third-order construct was found to have three primary dimensions and six sub-dimensions. The results also established that a country's national economy has a moderating effect on the relationship between service quality and investment size, and service satisfaction on investment size. This study is the first to conceptualize and use the hierarchical approach to service quality in mass services. Not only does this study build upon the investment model to provide a comprehensive decision model for service organizations to increase their return on investment, but it also provides a congruence of work between service quality and the investment model in the management and decision sciences discipline.
Date: August 2013
Creator: Boakye, Kwabena G.
Partner: UNT Libraries

Economic Statistical Design of Inverse Gaussian Distribution Control Charts

Description: Statistical quality control (SQC) is one technique companies are using in the development of a Total Quality Management (TQM) culture. Shewhart control charts, a widely used SQC tool, rely on an underlying normal distribution of the data. Often data are skewed. The inverse Gaussian distribution is a probability distribution that is well-suited to handling skewed data. This analysis develops models and a set of tools usable by practitioners for the constrained economic statistical design of control charts for inverse Gaussian distribution process centrality and process dispersion. The use of this methodology is illustrated by the design of an x-bar chart and a V chart for an inverse Gaussian distributed process.
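Because sample means of inverse Gaussian observations are themselves inverse Gaussian, exact probability limits can replace the usual normal-theory 3-sigma limits. The parameters below are illustrative and the sketch ignores the economic design layer of the dissertation; note SciPy's parameterization, where IG(mean m, shape λ) maps to invgauss(mu=m/λ, scale=λ):

```python
import numpy as np
from scipy.stats import invgauss

# Skewed process: individual observations ~ IG(mean=m, shape=lam)
m, lam, n = 10.0, 40.0, 4

# The mean of n IG(m, lam) observations is IG(m, n*lam), so exact
# probability limits can match the 0.27% total tail area of 3-sigma limits
dist = invgauss(mu=m / (n * lam), scale=n * lam)
lcl, ucl = dist.ppf([0.00135, 0.99865])
print("probability limits:", round(lcl, 3), round(ucl, 3))

# Sanity check by simulation
rng = np.random.default_rng(0)
draws = invgauss(mu=m / lam, scale=lam).rvs(size=(20000, n), random_state=rng)
means = draws.mean(axis=1)
oob = np.mean((means < lcl) | (means > ucl))
print("simulated out-of-limit rate:", oob)
```

The asymmetry of the limits around the process mean reflects the skewness that a symmetric normal-theory chart would ignore.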
Date: August 1990
Creator: Grayson, James M. (James Morris)
Partner: UNT Libraries
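
Control limits for an x-bar chart on inverse Gaussian data can be set from tail quantiles of the subgroup mean's distribution, since the mean of n iid IG(m, lam) observations is itself IG(m, n·lam). The sketch below is illustrative only, with made-up parameter values rather than the dissertation's economic-statistical design; it uses the Chhikara-Folks closed-form CDF and simple bisection for quantiles.

```python
# Hedged sketch: probability-based control limits for the subgroup mean of
# inverse Gaussian data. The parameter values (m, lam, n) are illustrative
# assumptions, not the dissertation's optimized economic design.
import math

def ig_cdf(x, m, lam):
    """CDF of IG(mean=m, shape=lam), Chhikara-Folks form.

    Uses erfc for the far tail so the second term does not underflow
    to zero at control-chart tail probabilities.
    """
    z1 = math.sqrt(lam / x) * (x / m - 1.0)
    z2 = -math.sqrt(lam / x) * (x / m + 1.0)
    phi1 = 0.5 * math.erfc(-z1 / math.sqrt(2.0))
    phi2 = 0.5 * math.erfc(-z2 / math.sqrt(2.0))
    return phi1 + math.exp(2.0 * lam / m) * phi2

def ig_ppf(q, m, lam, lo=1e-9, hi=None):
    """Quantile by bisection on the monotone CDF."""
    hi = hi if hi is not None else 100.0 * m
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ig_cdf(mid, m, lam) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ig_xbar_limits(m, lam, n, alpha=0.0027):
    """The mean of n iid IG(m, lam) draws is IG(m, n*lam), so the
    probability limits are tail quantiles of that distribution."""
    return (ig_ppf(alpha / 2.0, m, n * lam),
            ig_ppf(1.0 - alpha / 2.0, m, n * lam))

lcl, ucl = ig_xbar_limits(m=10.0, lam=40.0, n=5)
```

Because the IG distribution is right-skewed, these limits are asymmetric about the process mean, unlike the usual 3-sigma Shewhart limits.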

Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time

Description: In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
Date: August 1988
Creator: Bunger, R. C. (Robert Charles)
Partner: UNT Libraries
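
A "mixture of two different halves of normal distributions," as this abstract describes, is commonly written as a two-piece (split) normal: one spread parameter below the mean and another at or above it, with a shared normalizing constant so the halves integrate to 1 together. The sketch below is a generic illustration of that form; the sigma values are placeholders, not the dissertation's fitted optimal values.

```python
# Hedged sketch of a two-piece ("split") normal density: two half-normal
# pieces sharing a center m, with separate spreads for moves below and
# above it. Sigmas here are illustrative, not the dissertation's estimates.
import math

def split_normal_pdf(x, m, sig_lo, sig_hi):
    """Density using sig_lo below m and sig_hi at or above m.

    The common factor 2/(sqrt(2*pi)*(sig_lo+sig_hi)) makes the two
    halves integrate to 1 together.
    """
    norm = 2.0 / (math.sqrt(2.0 * math.pi) * (sig_lo + sig_hi))
    sig = sig_lo if x < m else sig_hi
    return norm * math.exp(-((x - m) ** 2) / (2.0 * sig ** 2))

def prob_move_below(c, m, sig_lo, sig_hi):
    """P(X < c) for c <= m, via the half-normal CDF -- e.g. the chance
    of an index change more negative than c over the interval."""
    z = (c - m) / sig_lo
    return (sig_lo / (sig_lo + sig_hi)) * 0.5 * math.erfc(-z / math.sqrt(2.0)) * 2.0
```

A trader could call `prob_move_below` with a strike-implied price change to estimate the chance a move of that magnitude occurs within the chosen interval, which is the use the abstract describes.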

The Development and Evaluation of a Forecasting System that Incorporates ARIMA Modeling with Autoregression and Exponential Smoothing

Description: This research was designed to develop and evaluate an automated alternative to the Box-Jenkins method of forecasting. The study involved two major phases. The first phase was the formulation of an automated ARIMA method; the second was the combination of forecasts from the automated ARIMA with forecasts from two other automated methods, the Holt-Winters method and the Stepwise Autoregressive method. The development of the automated ARIMA, based on a decision criterion suggested by Akaike, borrows heavily from the work of Ang, Chuaa and Fatema. Seasonality and small data set handling were some of the modifications made to the original method to make it suitable for use with a broad range of time series. Forecasts were combined by means of both the simple average and a weighted averaging scheme. Empirical and generated data were employed to perform the forecasting evaluation. The 111 sets of empirical data came from the M-Competition. The twenty-one sets of generated data arose from ARIMA models that Box, Tiao and Pack analyzed using the Box-Jenkins method. To compare the forecasting abilities of the Box-Jenkins and the automated ARIMA alone and in combination with the other two methods, two accuracy measures were used. These measures, which are free of magnitude bias, are the mean absolute percentage error (MAPE) and the median absolute percentage error (Md APE).
Date: May 1985
Creator: Simmons, Laurette Poulos
Partner: UNT Libraries
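
The two accuracy measures the abstract names, along with the simple-average combination scheme, are short enough to sketch directly. The series below are made up for illustration; the weighted averaging scheme is omitted because the abstract does not specify its weights.

```python
# Hedged sketch of MAPE, Md APE, and the simple-average forecast
# combination named in the abstract. The data are illustrative only.
import statistics

def ape(actual, forecast):
    """Absolute percentage errors, one per period."""
    return [abs(a - f) / abs(a) * 100.0 for a, f in zip(actual, forecast)]

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return statistics.mean(ape(actual, forecast))

def md_ape(actual, forecast):
    """Median absolute percentage error, less sensitive to one bad period."""
    return statistics.median(ape(actual, forecast))

def combine_simple(*forecast_sets):
    """Equal-weight average of several methods' forecasts, per period."""
    return [statistics.mean(vals) for vals in zip(*forecast_sets)]

actual = [100.0, 110.0, 120.0]
arima_fc = [98.0, 112.0, 118.0]       # hypothetical automated-ARIMA output
holt_winters_fc = [102.0, 108.0, 123.0]  # hypothetical Holt-Winters output
combined = combine_simple(arima_fc, holt_winters_fc)
```

Both measures divide by the actual value, which is what makes them free of magnitude bias across series of very different scales, as the abstract notes.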

A Model for the Efficient Investment of Temporary Funds by Corporate Money Managers

Description: In this study seventeen relationships between yields of three-month, six-month, and twelve-month maturity negotiable CDs and U.S. Government T-Bills were analyzed to find a leading indicator of short-term interest rates. Each of the seventeen relationships was tested for correlation with actual three-, six-, and twelve-month yields from zero to twenty-six weeks in the future. Only one relationship was found to be significant as a leading indicator. This was the twelve-month yield minus the six-month yield adjusted for scale and accumulated where the result was positive. This indicator (variable nineteen in the study) was further tested for usefulness as a trend indicator by transforming it into a function consisting of +1 (when its slope was positive), 0 (when its slope was zero), and -1 (when its slope was negative). Stage II of the study consisted of constructing a computer-aided model employing variable nineteen as a forecasting device. The model accepts a week-by-week minimum cash balance forecast, and the past thirteen weeks' yields of three-, six-, and twelve-month CDs as input. The output of the model consists of a cash time availability schedule, a numerical listing of variable nineteen values, the thirteen-week history of three-, six-, and twelve-month CD yields, a plot of variable nineteen for the next thirteen weeks, and a suggested investment strategy for cash available for investment in the current period.
Date: August 1974
Creator: McWilliams, Donald B., 1936-
Partner: UNT Libraries
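
One plausible reading of "variable nineteen" is a running sum of the scaled 12-month minus 6-month spread that adds only the weeks where the spread is positive, reduced to a +1/0/-1 signal from its week-to-week slope. The sketch below follows that reading; the scale factor, the accumulation rule, and the sample yields are all assumptions for illustration, since the abstract does not fully specify them.

```python
# Hedged reconstruction of the abstract's "variable nineteen" indicator:
# the 12m-6m yield spread, scaled, accumulated only in weeks where it is
# positive, then turned into a +1/0/-1 trend signal. The scale factor,
# accumulation rule, and sample yields are illustrative assumptions.
def variable_nineteen(y12, y6, scale=1.0):
    """Running sum of the scaled 12m-6m spread, adding only the weeks
    where that spread is positive (one yield reading per week)."""
    out, total = [], 0.0
    for a, b in zip(y12, y6):
        spread = (a - b) * scale
        if spread > 0:
            total += spread
        out.append(total)
    return out

def trend_signal(series):
    """+1 when the indicator rose from the prior week, -1 when it fell,
    0 when flat -- one value per week-to-week change."""
    signs = []
    for prev, cur in zip(series, series[1:]):
        signs.append(1 if cur > prev else -1 if cur < prev else 0)
    return signs

# Four hypothetical weekly yield readings (percent).
v19 = variable_nineteen([7.2, 7.1, 7.0, 7.3], [6.8, 7.1, 7.2, 7.0])
signal = trend_signal(v19)
```

Because the accumulator never decreases, a flat stretch of the signal marks weeks where the six-month yield met or exceeded the twelve-month yield, which is the condition the study treated as informative.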

The Impact of Culture on the Decision Making Process in Restaurants

Description: Understanding consumers' process at key purchasing decision points is the margin between success and failure for any business. The cultural differences in the factors that affect consumers' decision-making process are the motivation for this research. The purpose of this research is to extend the current body of knowledge about decision-making factors by developing and testing a new theoretical model to measure how culture may affect the attitudes and behaviors of consumers in restaurants. This study has its theoretical foundation in the theory of service quality, the theory of planned behavior, and rational choice theory. To understand how culture affects the decision-making process and perceived satisfaction, it is necessary to analyze the relationships among the decision factors and attitudes. The findings of this study contribute by building theory and have practical implications for restaurant owners and managers. This study employs a mixed methodology of qualitative and quantitative research. More specifically, the methodologies employed include the development of a framework and the testing of that framework via data collected through semi-structured interviews and a survey instrument. Within this framework, we test culture as a moderating relationship by using respondents’ birth country, parents’ birth country, and ethnic identity. The results of this study conclude that, in the restaurant context, culture significantly moderates consumers’ perception of service quality, overall satisfaction, and behavioral intention.
Date: August 2015
Creator: Boonme, Kittipong
Partner: UNT Libraries