122 Matching Results

Public School Educators' Use of Computer-Mediated Communication

Description: This study examined the uses of computer-mediated communication (CMC) by educators in selected public schools, using Rogers' Diffusion of Innovation Theory as its theoretical underpinning. CMC refers to any exchange of information that involves the use of computers for communication between individuals, or between an individual and a machine. The study explored the difficulties users confront, the services they access, and the tasks they accomplish when using CMC, and investigated the factors that affect CMC use. The sample population was drawn from registered users on TENET, the Texas Education Network, as of December 1997. The educators were described using frequencies and percentages to analyze the demographic data. Eight indices were selected to test how strongly user and environmental attributes were associated with the use of CMC. These variables were (1) education, (2) position, (3) place of employment, (4) geographic location, (5) district size, (6) organization vitality, (7) adopter resources, and (8) instrumentality. Two dependent variables were used to test for usage: (1) depth, or frequency of CMC usage and amount of time spent online, and (2) breadth, or variety of Internet utilities used. Additionally, the users' perception of network benefits was measured. Network benefits were correlated with social interaction and perception of CMC to investigate what tasks educators were accomplishing with CMC. Correlations, crosstabulations, and ANOVAs were used to analyze the data for testing the four hypotheses. The major findings, based on the hypotheses tested, were that the socioeconomic variables of education and position influenced the use of CMC. A significant finding is that teachers used e-mail and Internet resources less frequently than those in other positions. An interesting finding was that frequency of use was more significant for usage than amount of ...
Date: December 2000
Creator: Urias-Barker, Zelina
Partner: UNT Libraries

Identifying At-Risk Students: An Assessment Instrument for Distributed Learning Courses in Higher Education

Description: The current period of rapid technological change, particularly in the area of mediated communication, has combined with new philosophies of education and market forces to bring upheaval to the realm of higher education. Technical capabilities exceed our knowledge of whether expenditures on hardware and software lead to corresponding gains in student learning. Educators do not yet possess sophisticated assessments of what we may be gaining or losing as we widen the scope of distributed learning. The purpose of this study was not to draw sweeping conclusions with respect to the costs or benefits of technology in education. The researcher focused on a single issue involved in educational quality: assessing the ability of a student to complete a course. Previous research in this area indicates that attrition rates are often higher in distributed learning environments. Educators and students may benefit from a reliable instrument to identify those students who may encounter difficulty in these learning situations. This study is aligned with research focused on the individual engaged in seeking information, assisted or hindered by the capabilities of the computer information systems that create and provide access to information. Specifically, the study focused on the indicators of completion for students enrolled in video conferencing and Web-based courses. In the final version, the Distributed Learning Survey encompassed thirteen indicators of completion. The results of this study of 396 students indicated that the Distributed Learning Survey represented a reliable and valid instrument for identifying at-risk students in video conferencing and Web-based courses where the student population is similar to the study participants. Educational level, GPA, credit hours taken in the semester, study environment, motivation, computer confidence, and the number of previous distributed learning courses accounted for most of the predictive power in the discriminant function based on student scores from the survey.
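
A minimal sketch of the kind of discriminant analysis this abstract describes, assuming scikit-learn and synthetic data; the indicator names and values are illustrative stand-ins, not the actual Distributed Learning Survey:

```python
# Hypothetical sketch: predict course completion from survey indicators
# via a discriminant function. All data and variable choices are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 396
X = np.column_stack([
    rng.integers(1, 5, n),      # educational level (illustrative coding)
    rng.uniform(1.5, 4.0, n),   # GPA
    rng.integers(3, 18, n),     # credit hours taken in the semester
    rng.uniform(1, 5, n),       # motivation score
    rng.uniform(1, 5, n),       # computer confidence
    rng.integers(0, 6, n),      # previous distributed-learning courses
])
y = rng.integers(0, 2, n)       # 1 = completed, 0 = did not complete (toy labels)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)                # each indicator's weight in the discriminant function
print(lda.score(X, y))          # classification accuracy on the training data
```
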
Date: May 2000
Creator: Osborn, Viola
Partner: UNT Libraries

Creating a Criterion-Based Information Agent Through Data Mining for Automated Identification of Scholarly Research on the World Wide Web

Description: This dissertation creates an information agent that correctly identifies Web pages containing scholarly research approximately 96% of the time. It does this by analyzing the Web page with a set of criteria, and then uses a classification tree to arrive at a decision. The criteria were gathered from the literature on selecting print and electronic materials for academic libraries. A Delphi study was done with an international panel of librarians to expand and refine the criteria until a list of 41 operationalizable criteria was agreed upon. A Perl program was then designed to analyze a Web page and determine a numerical value for each criterion. A large collection of Web pages was gathered comprising 5,000 pages that contain the full work of scholarly research and 5,000 random pages, representative of user searches, which do not contain scholarly research. Datasets were built by running the Perl program on these Web pages. The datasets were split into model building and testing sets. Data mining was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. The models were created with the model datasets and then tested against the test dataset. Precision and recall were used to judge the effectiveness of each model. In addition, a set of pages that were difficult to classify because of their similarity to scholarly research was gathered and classified with the models. The classification tree created the most effective classification model, with a precision ratio of 96% and a recall ratio of 95.6%. However, logistic regression created a model that was able to correctly classify more of the problematic pages. This agent can be used to create a database of scholarly research published on the Web. In addition, the technique can be used to create a ...
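
As a rough illustration of the pipeline described above (criterion scores feeding a classification tree judged by precision and recall), here is a hedged scikit-learn sketch; the 41 synthetic feature scores and the toy labeling rule are assumptions, not the dissertation's Perl-derived data:

```python
# Sketch: score pages on numeric criteria, train a classification tree,
# and evaluate with precision and recall. Data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((10_000, 41))                    # 41 criterion scores per page
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)    # toy "scholarly" label

X_model, X_test, y_model, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)        # model-building vs. test sets
tree = DecisionTreeClassifier(max_depth=5).fit(X_model, y_model)
pred = tree.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```
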
Date: May 2000
Creator: Nicholson, Scott
Partner: UNT Libraries

An Analysis of the Ability of an Instrument to Measure Quality of Library Service and Library Success

Description: This study consisted of an examination of how service quality should be measured within libraries and how library service quality relates to library success. A modified version of the SERVQUAL instrument was evaluated to determine how effectively it measures library service quality. Instruments designed to measure information center success and information system success were evaluated to determine how effectively they measure library success and how they relate to SERVQUAL. A model of library success was developed to examine how library service quality relates to other variables associated with library success. Responses from 385 end users at two U.S. Army Corps of Engineers libraries were obtained through a mail survey. Results indicate that library service quality is best measured with a performance-based version of SERVQUAL, and that measuring importance may be as critical as measuring expectations for management purposes. Results also indicate that library service quality is an important factor in library success and that library success is best measured with a combination of SERVQUAL and library success instruments. The findings have implications for the development of new instruments to more effectively measure library service quality and library success as well as for the development of new models of library service quality and library success.
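
A small worked sketch of the two scoring schemes at issue, under the assumption that SERVQUAL gap scoring subtracts expectation from performance while performance-based scoring uses performance ratings alone; all item values and the importance weights below are invented for illustration:

```python
# Illustrative SERVQUAL-style scoring on a 7-point scale, per service item.
expectations = [6.5, 6.8, 6.2, 6.9, 6.4]
performance  = [5.9, 6.1, 6.3, 5.5, 6.0]
importance   = [0.30, 0.25, 0.15, 0.20, 0.10]  # hypothetical weights summing to 1

gap_scores = [p - e for p, e in zip(performance, expectations)]
print("gap score (P - E):   ", sum(gap_scores) / len(gap_scores))
print("performance-only:    ", sum(performance) / len(performance))
print("importance-weighted: ", sum(w * p for w, p in zip(importance, performance)))
```
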
Date: December 1999
Creator: Landrum, Hollis T.
Partner: UNT Libraries

A Personal Documentation System for Scholars: A Tool for Thinking

Description: This exploratory research focused on a problem stated years ago by Vannevar Bush: "The problem is how creative men think, and what can be done to help them think." The study explored the scholarly work process and the use of computer tools to augment thinking. Based on a review of several related literatures, a framework of 7 major categories and 28 subcategories of scholarly thinking was proposed. The literature was used to predict problems scholars have in organizing their information, potential solutions, and specific computer tool features to augment scholarly thinking. Info Select, a personal information manager with most of these features (text and outline processing, sophisticated searching and organizing), was chosen as a potential tool for thinking. The study looked at how six scholars (faculty and doctoral students in social science fields at three universities) organized information using Info Select as a personal documentation system for scholarly work. These multiple case studies involved four in-depth, focused interviews, written evaluations, direct observation, and analysis of computer logs and files collected over a 3- to 6-month period. A content analysis of interviews and journals supported the proposed AfFORD-W taxonomy: Scholarly work activities consisted of Adding, Filing, Finding, Organizing, Reminding, and Displaying information to produce a Written product. Very few activities fell outside this framework, and activities were distributed evenly across all categories. Problems, needs, and likes mentioned by scholars, however, clustered mainly in the filing, finding, and organizing categories. All problems were related to human memory. Both predictions and research findings imply a need for tools that support information storage and retrieval in personal documentation systems, for references and notes, with fast and easy input of source material. A computer tool for thinking should support categorizing and organizing, reorganizing and transporting information. It should provide a simple search engine and support ...
Date: December 1999
Creator: Burkett, Leslie Stewart
Partner: UNT Libraries

Korean Studies in North America 1977-1996: A Bibliometric Study

Description: This research is a descriptive bibliometric study of the literature of the field of Korean studies. Its goal is to quantitatively describe the literature and serve as a model for such research in other area studies fields. This study analyzed 193 source articles and the 7,166 citations in those articles in four representative Korean and Asian studies journals published in North America from 1977 to 1996. The journals included in this study were Korean Studies (KS), the Journal of Korean Studies (JKS), the Journal of Asian Studies (JAS), and the Harvard Journal of Asiatic Studies (HJAS). The subject matter and author characteristics of the source articles were examined, along with characteristics such as the form, date, language, country of origin, subject, key authors, and key titles of the literature cited in the source articles. Research in Korean studies falls within fourteen broad disciplines but is concentrated in a few of them. Americans have been the most active authors in Korean studies, followed closely by authors of Korean ethnicity. Monographic literature was used most. The mean age of publications cited was 20.87 years and the median age was 12 years. The Price Index of Korean studies as a whole is 21.9 percent. Sources written in English were most cited (47.1%), and references to Korean-language sources amounted to only 34.9% of all sources. In general, authors preferred sources published in their own countries. Sources on history were cited most by other disciplines. No significant core authors were identified, nor was any significant core literature. This study indicates that Korean studies is still evolving. Ways of promoting research in less studied disciplines and of facilitating formal communication between Korean scholars in Korea and Koreanists in North America need to be sought in order to promote well-balanced development in the field. This study ...
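
For readers unfamiliar with the measures reported above, this is a minimal sketch of how mean and median citation age and the Price Index (here taken as the share of citations no more than five years older than the citing article) are computed; the citation years are invented:

```python
# Illustrative bibliometric age measures for one citing article.
import statistics

citing_year = 1990
cited_years = [1988, 1985, 1979, 1970, 1989, 1962, 1987, 1981]

ages = [citing_year - y for y in cited_years]
print("mean age:  ", statistics.mean(ages))
print("median age:", statistics.median(ages))

# Price Index: percentage of citations to works five years old or less.
price_index = sum(a <= 5 for a in ages) / len(ages) * 100
print(f"Price Index: {price_index:.1f}%")
```
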
Date: December 1999
Creator: Chun, Kyungmi
Partner: UNT Libraries

MEDLINE Metric: A method to assess medical students' MEDLINE search effectiveness

Description: Medical educators advocate the need for medical students to acquire information management skills, including the ability to search the MEDLINE database. No validated method for assessing medical students' MEDLINE information retrieval skills has previously been published. This research proposes and evaluates such a method, the MEDLINE Metric, for assessing medical students' search skills. The MEDLINE Metric consists of: (a) the development, by experts, of realistic clinical scenarios that include carefully constructed search questions designed to test defined search skills; (b) timed tasks (searches) completed by subjects; (c) the evaluation of search results; and (d) instructive feedback. A goal is to offer medical educators a valid, reliable, and feasible way to judge mastery of information-searching skill by measuring results (search retrieval) rather than process (search behavior) or cognition (knowledge about searching). Following a documented procedure for test development, search specialists and medical content experts formulated six clinical search scenarios and questions. One hundred forty-five subjects completed the six-item test under timed conditions. Subjects represented a wide range of MEDLINE search expertise. One hundred twenty complete cases were used, representing 53 second-year medical students (44%), 47 fourth-year medical students (39%), and 20 medical librarians (17%). Data related to educational level, search training, search experience, confidence in retrieval, difficulty of search, and score were analyzed. Evidence supporting the validity of the method includes the agreement by experts about the skills and knowledge necessary to successfully retrieve information relevant to a clinical question from the MEDLINE database. Also, the test discriminated among different performance levels. There were statistically significant, positive relationships between test score and level of education, self-reported previous MEDLINE training, and self-reported previous search experience. The findings from this study suggest that the MEDLINE Metric is a valid method for constructing and administering a performance-based test to identify ...
Date: May 2000
Creator: Hannigan, Gale G.
Partner: UNT Libraries

Empowering Agent for Oklahoma School Learning Communities: An Examination of the Oklahoma Library Improvement Program

Description: The purposes of this study were to determine the initial impact of the Oklahoma Library Media Improvement Grants on Oklahoma school library media programs; assess whether the grants continue to contribute to Oklahoma school learning communities; and examine possible relationships between school library media programs and student academic success. The study also sought to document the history of the Oklahoma Library Media Improvement Program (1978-1994) and increase awareness of its influence upon Oklahoma school library media programs. Methods of data collection included examining Oklahoma Library Media Improvement Program archival materials, sending a survey to 1,703 school principals in Oklahoma, and interviewing program participants. Data collection took place over a one-year period. Data analyses were conducted in three primary phases. In the first phase, descriptive statistics and frequencies were disaggregated to examine mean scores as they related to money spent on school library media programs, opinions of school library media programs, and possible relationships between school library media programs and student academic achievement. Analysis of variance was used in the second phase to determine whether any variation between means was significant as related to Oklahoma Library Improvement Grants, time spent in the library media center by library media specialists, principal gender, opinions of library media programs, student achievement indicators, and the region of the state in which the respondent was located. The third phase compared longitudinal data collected in the 2000 survey with past data. The primary results indicated that students in Oklahoma from schools with a centralized library media center, served by a full-time library media specialist, and having received one or more Library Media Improvement Grants scored significantly higher academically than students in schools not having a centralized library media center, not served by a ...
Date: August 2000
Creator: Jenkins, Carolyn Sue Ottinger
Partner: UNT Libraries

An Experimental Study of Teachers' Verbal and Nonverbal Immediacy, Student Motivation, and Cognitive Learning in Video Instruction

Description: This study used an experimental design and a direct test of recall to provide data about teacher immediacy and student cognitive learning. Four hypotheses and a research question addressed two research problems: first, how verbal and nonverbal immediacy function together and/or separately to enhance learning; and second, how immediacy affects cognitive learning in relation to student motivation. These questions were examined in the context of video instruction to provide insight into distance learning processes and to ensure maximum control over experimental manipulations. Participants (N = 347) were drawn from university students in an undergraduate communication course. Students were randomly assigned to groups, completed a measure of state motivation, and viewed a 15-minute video lecture containing part of the usual course content delivered by a guest instructor. Participants were unaware that the video instructor was actually performing one of four scripted manipulations reflecting higher and lower combinations of specific verbal and nonverbal cues, representing the four cells of the 2x2 research design. Immediately after the lecture, students completed a recall measure, consisting of portions of the video text with blanks in the place of key words. Participants were to fill in the blanks with exact words they recalled from the videotape. Findings strengthened previous research associating teacher nonverbal immediacy with enhanced cognitive learning outcomes. However, higher verbal immediacy, in the presence of higher and lower nonverbal immediacy, was not shown to produce greater learning among participants in this experiment. No interaction effects were found between higher and lower levels of verbal and nonverbal immediacy. Recall scores were comparatively low in the presence of higher verbal and lower nonverbal immediacy, suggesting that nonverbal expectancy violations may have hindered cognitive learning. Student motivation was not found to be a significant source of error in measuring immediacy's effects, and no interaction effects were detected ...
Date: May 2000
Creator: Witt, Paul L.
Partner: UNT Libraries

The Second Vatican Council and American Catholic Theological Research: A Bibliometric Analysis of Theological Studies: 1940-1995

Description: A descriptive analysis was made of the characteristics of the authors and citations of the articles in the journal Theological Studies from 1940-1995. Data were gathered on the institutional affiliation, geographic location, occupation, gender, and personal characteristics of the authors. The citation characteristics examined were the cited authors, date and age of the citations, format, language, place of publication, and journal titles. These characteristics were compared across the periods before and after the Second Vatican Council in order to detect any changes that might have occurred after certain recommendations were made by the council to theologians. Subject dispersion of the literature was also analyzed, and analyses based on Lotka's Law of author productivity and Bradford's Law of title dispersion were performed for this literature. The profile of author characteristics showed that the proportion of articles published by women and laypersons has increased since the recommendations of the council. The data fit Lotka's Law well for the pre-Vatican II period but not for the period after Vatican II. The data fit Bradford's Law for the predicted number of journals in the nucleus and Zone 2, but the observed number of journals in Zone 3 was higher than predicted for all time periods. Subject dispersion of research from disciplines other than theology is low, but citation of works from the fields of education, psychology, the social sciences, and science has increased since Vatican II. Analysis of the citation characteristics showed no significant change in the age, format, languages used, or geographic location of the publisher of the cited works after Vatican II. Citation characteristics showed that authors prefer research from monographs published in English and in U.S. locations for all time periods. Research ...
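
A brief sketch of the sort of Lotka's Law check described above, assuming the classical inverse-square form in which the number of authors with n publications is expected to be y1/n²; the observed counts are invented, and a full analysis would also fit the exponent and apply a goodness-of-fit test:

```python
# Compare observed author-productivity counts against Lotka's inverse-square
# prediction y_n = y_1 / n**2. Counts here are illustrative only.
observed = {1: 210, 2: 55, 3: 24, 4: 12, 5: 9}  # n papers -> number of authors

y1 = observed[1]
for n, count in observed.items():
    expected = y1 / n**2
    print(f"{n} papers: observed {count:3d}, Lotka expects {expected:6.1f}")
```
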
Date: August 2000
Creator: Phelps, Helen Stegall
Partner: UNT Libraries

An Examination Of The Variation In Information Systems Project Cost Estimates: The Case Of Year 2000 Compliance Projects

Description: The year 2000 (Y2K) problem presented a fortuitous opportunity to explore the relationship between estimated costs of software projects and five cost influence dimensions described by the Year 2000 Enterprise Cost Model (Kappelman et al., 1998): organization, problem, solution, resources, and stage of completion. This research was a field study survey of Y2K project managers in industry, government, and education, and was part of a joint project that began in 1996 between the University of North Texas and the Y2K Working Group of the Society for Information Management (SIM). Evidence was found to support relationships between estimated costs and organization, problem, resources, and project stage, but not for the solution dimension. Project stage appears to moderate the relationships for organization, particularly IS practices, and resources. A history of superior IS practices appears to mean lower estimated costs, especially for projects in larger IS organizations. Acquiring resources, especially external skills, appears to increase costs. Moreover, projects apparently have many individual differences, many related to size and to project stage, and their influences on costs appear to operate at the sub-dimension or even the individual-variable level. A Revised Year 2000 Enterprise Model incorporating this granularity is presented. Two primary conclusions can be drawn from this research: (1) large software projects are very complex, and thus cost estimating is also; and (2) the devil of cost estimating is in the details of knowing which of the many possible variables are the important ones for each particular enterprise and project. This points to the importance of organizations keeping software project metrics and historically calibrating their cost-estimating practices. Project managers must understand the relevant details and their interaction and importance in order to successfully develop a cost estimate for a particular project, even when rational cost models are used. This research also indicates ...
Date: May 2000
Creator: Fent, Darla
Partner: UNT Libraries

A Study of Graphically Chosen Features for Representation of TREC Topic-Document Sets

Description: Document representation is important for computer-based text processing. Good document representations must include at least the most salient concepts of the document. Documents exist in a multidimensional space that makes it difficult to identify which concepts to include. A current problem is to measure the effectiveness of the different strategies that have been proposed to accomplish this task. As a contribution toward this goal, this dissertation studied visual inter-document relationships in a dimensionally reduced space. The same treatment was applied to full text and to three document representations. Two of the representations were based on the assumption that the salient features in a document set follow the chi-distribution across the whole document set. The third document representation identified features through a novel method: a Coefficient of Variability was calculated by normalizing the Cartesian distance of the discriminating value in the relevant and the non-relevant document subsets. The local dictionary method was also used. Cosine similarity values measured the inter-document distances in the information space and formed a matrix to serve as input to the multidimensional scaling (MDS) procedure. A precision-recall procedure was averaged across all treatments to compare them statistically. The treatments were not found to be statistically the same, and the null hypotheses were rejected.
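
A minimal sketch of the similarity-to-MDS pipeline this abstract describes, assuming scikit-learn and random stand-in document vectors rather than the study's actual representations:

```python
# Cosine similarities among document vectors form a matrix, which is
# converted to dissimilarities and fed to MDS for a 2-D map.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
docs = rng.random((20, 300))                 # 20 documents, 300 term features

sim = cosine_similarity(docs)                # inter-document similarity matrix
dist = 1.0 - sim                             # dissimilarity for MDS input
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=2).fit_transform(dist)
print(coords[:5])                            # 2-D positions, ready for plotting
```
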
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2000
Creator: Oyarce, Guillermo Alfredo
Partner: UNT Libraries

A Theory for the Measurement of Internet Information Retrieval

Description: The purpose of this study was to develop and evaluate a measurement model for Internet information retrieval strategy performance evaluation whose theoretical basis is a modification of the classical measurement model embodied in the Cranfield studies and their progeny. Though not the first, the Cranfield studies were the most influential of the early evaluation experiments. The general problem with this model was, and continues to be, the subjectivity of the concept of relevance. In cyberspace, information scientists are using quantitative measurement models for evaluating information retrieval performance that are based on the Cranfield model. This research modified that model by incorporating end-user relevance judgments rather than objective relevance judgments, and by adopting a fundamental unit of measure developed for the cyberspace of Internet information retrieval rather than recall- and precision-type measures. The proposed measure, the Content-bearing Click (CBC) Ratio, was developed as a quantitative measure reflecting the performance of an Internet IR strategy. Since the hypertext "click" is common to many Internet IR strategies, it was chosen as the fundamental unit of measure rather than the "document." The CBC Ratio is a ratio of hypertext click counts that can be viewed as a false-drop measure determining the average number of irrelevant content-bearing clicks that an end user checks before retrieving relevant information. After measurement data were collected, they were used to evaluate the reliability of several methods for aggregating relevance judgments. After reliability coefficients were calculated, the measurement model was used to compare Web catalog and Web database performance in an experimental setting. Conclusions were then reached concerning the reliability of the proposed measurement model and its ability to measure Internet IR performance, as well as implications for clinical use of the Internet and for future research in information science.
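
A hedged reconstruction of what a Content-bearing Click Ratio computation might look like; the session format and the averaging shown are assumptions for illustration, not the dissertation's exact formulation:

```python
# Hypothetical CBC Ratio sketch: from click logs, the average number of
# irrelevant content-bearing clicks made before reaching relevant content.
def cbc_ratio(sessions):
    """sessions: lists of clicks, each tagged 'irrelevant' or 'relevant'."""
    wasted = [sum(1 for click in s if click == "irrelevant") for s in sessions]
    return sum(wasted) / len(sessions)

log = [
    ["irrelevant", "irrelevant", "relevant"],
    ["irrelevant", "relevant"],
    ["relevant"],
]
print(cbc_ratio(log))  # average irrelevant clicks per retrieval: 1.0
```
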
Date: May 1999
Creator: MacCall, Steven Leonard
Partner: UNT Libraries

Information Seeking in a Virtual Learning Environment

Description: Duplicating a time series study done by Kuhlthau and associates in 1989, this study examines the applicability of the Information Search Process (ISP) Model in the context of a virtual learning environment. This study confirms that students given an information seeking task in a virtual learning environment do exhibit the stages indicated by the ISP Model. The six-phase ISP Model is shown to be valid for describing the different stages of cognitive, affective, and physical tasks individuals progress through when facing a situation where they must search for information to complete an academic task in a virtual learning environment. The findings in this study further indicate there is no relationship between the amount of computer experience subjects possess and demonstrating the patterns of thoughts, feelings, and actions described by the ISP Model. The study demonstrates the ISP Model to be independent of the original physical library environments where the model was developed. An attempt is made to represent the ISP model in a slightly different manner that provides more of the sense of motion and interaction among the components of thoughts, feelings, and action than is currently provided for in the model. The study suggests that the development of non-self-reporting data collection techniques would be useful in complementing and furthering research to enhance and refine the representation of the ISP Model. Additionally, expanding the research to include the examination of group interaction is called for to enhance the ISP Model and develop further applications that could potentially aid educational delivery in all types of learning environments.
Date: August 1999
Creator: Byron, Suzanne M.
Partner: UNT Libraries

The Effects of Task-Based Documentation Versus Online Help Menu Documentation on the Acceptance of Information Technology

Description: The objectives of this study were (1) to identify and describe task-based documentation; (2) to identify and describe any purported changes in users' attitudes when IT migration was preceded by task-based documentation; and (3) to suggest implications of task-based documentation for users' attitudes toward IT acceptance. Questionnaires were given to 150 university students, all of whom participated in the study. The study determined the following: (1) whether favorable pre-implementation attitudes toward a new e-mail system increase, as a result of training, if users expect it to be easy to learn and use; (2) whether user acceptance of an e-mail program increases as perceived usefulness increases, as delineated by task-based documentation; (3) whether task-based documentation is more effective than standard help menus while learning a new application program; and (4) whether training that requires active student participation increases the acceptance of a new e-mail system. The following conclusions were reached: (1) Positive pre-implementation attitudes toward a new e-mail system are not affected by training, even if users expect it to be easy to learn and use. (2) User acceptance of an e-mail program does not increase as perceived usefulness increases when aided by task-based documentation. (3) Task-based documentation is not more effective than standard help menus when learning a new application program. (4) Training that requires active student participation does not increase the acceptance of a new e-mail system.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 1999
Creator: Bell, Thomas
Partner: UNT Libraries

The Validity of Health Claims on the World Wide Web: A Case Study of the Herbal Remedy Opuntia

Description: The World Wide Web has become a significant source of medical information for the public, but there is concern that much of the information is inaccurate, misleading, and unsupported by scientific evidence. This study analyzes the validity of health claims on the World Wide Web for the herbal remedy Opuntia using an evidence-based approach, and supports the observation that individuals must critically assess health information in this relatively new medium of communication. A systematic search of Web sites relating to herbal remedies was conducted by means of nine search engines and online resources, and specific sites providing information on the cactus herbal remedy from the genus Opuntia were retrieved. The validity of therapeutic health claims on the Web sites was checked by comparison with reports in the scientific literature subjected to two established quality assessment rating instruments. A total of 184 Web sites from a variety of sources were retrieved and evaluated, and 98 distinct health claims were identified. Fifty-three scientific reports were retrieved to validate the claims; 25 involved human subjects, and 28 involved animal or laboratory models. Only 33 (34%) of the claims were addressed in the scientific literature. For 3% of the claims, evidence from the scientific reports was conflicting or contradictory. Of the scientific reports involving human subjects, none met the predefined criteria for high quality as determined by the quality assessment rating instruments. Two-thirds of the claims were unsupported by scientific evidence and were based on folklore or indirect evidence from related sources. Information on herbal remedies such as Opuntia is well represented on the World Wide Web. Health claims on Web sites were numerous and varied widely in subject matter. Determining the validity of information about claims made for herbals on the Web would help individuals assess their value in medical treatment. However, the Web is conducive to dubious ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2000
Creator: Veronin, Michael A.
Partner: UNT Libraries

The Cluster Hypothesis: A Visual/Statistical Analysis

Description: By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in which inter-item similarity coefficients for full text documents are scaled into a single dimension and then rendered as a two-dimensional visualization; the clustering behavior of relevant documents within these unidimensionally scaled representations is examined via visual and statistical methods. Taken as a whole, results of both experiments lend strong though not unqualified support to the Cluster Hypothesis. In Experiment 1, semantically meaningful 6.6-word document surrogates systematically conform to the predictions of the Cluster Hypothesis. In Experiment 2, the majority of the unidimensionally scaled datasets exhibit a marked nonuniformity of distribution of relevant documents, further supporting the Cluster Hypothesis. Results of ...
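
An illustrative check in the spirit of Experiment 2, asking whether relevant documents sit more closely together on a unidimensionally scaled axis than random subsets do; the permutation-style comparison and all data here are assumptions, not the study's actual visual/statistical method:

```python
# Do relevant documents cluster on a 1-D scaled axis more than chance predicts?
import numpy as np

rng = np.random.default_rng(3)
positions = rng.random(100)                  # unidimensionally scaled documents
relevant = rng.choice(100, size=10, replace=False)

def spread(idx):
    pts = np.sort(positions[idx])
    return np.mean(np.diff(pts))             # mean gap between neighbors

observed = spread(relevant)
random_spreads = [spread(rng.choice(100, size=10, replace=False))
                  for _ in range(10_000)]
p = np.mean([s <= observed for s in random_spreads])
print(f"observed spread {observed:.3f}, p = {p:.3f}")
```
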
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2000
Creator: Sullivan, Terry
Partner: UNT Libraries

Modeling Utilization of Planned Information Technology

Description: Implementations of information technology solutions to address specific information problems are only successful when the technology is utilized. The antecedents of technology use involve user, system, task and organization characteristics as well as externalities which can affect all of these entities. However, measurement of the interaction effects between these entities can act as a proxy for individual attribute values. A model is proposed which based upon evaluation of these interaction effects can predict technology utilization. This model was tested with systems being implemented at a pediatric health care facility. Results from this study provide insight into the relationship between the antecedents of technology utilization. Specifically, task time provided significant direct causal effects on utilization. Indirect causal effects were identified in task value and perceived utility constructs. Perceived utility, along with organizational support also provided direct causal effects on user satisfaction. Task value also impacted user satisfaction in an indirect fashion. Also, results provide a predictive model and taxonomy of variables which can be applied to predict or manipulate the likelihood of utilization for planned technology.
Date: May 2000
Creator: Stettheimer, Timothy Dwight
Partner: UNT Libraries

Evaluation of Text-Based and Image-Based Representations for Moving Image Documents

Description: Document representation is a fundamental concept in information retrieval (IR), and has been relied upon in textual IR systems since the advent of library catalogs. The reliance upon text-based representations of stored information has been perpetuated in conventional systems for the retrieval of moving images as well. Although newer systems have added image-based representations of moving image documents as aids to retrieval, there has been little research examining how humans interpret these different types of representations. Such basic research has the potential to inform IR system designers about how best to aid users of their systems in retrieving moving images. One key requirement for the effective use of document representations in either textual or image form is the degree to which these representations are congruent with the original documents. A measure of congruence is the degree to which human responses to representations are similar to responses produced by the document being represented. The aim of this study was to develop a model for the representation of moving images based upon human judgments of representativeness. The study measured the degree of congruence between moving image documents and their representations, both text- and image-based, in a non-retrieval environment with and without task constraints. Multidimensional scaling (MDS) was used to examine the dimensional dispersions of human judgments for the full moving images and their representations.
Date: August 1997
Creator: Goodrum, Abby A. (Abby Ann)
Partner: UNT Libraries

Perceived features and similarity of images: An investigation into their relationships and a test of Tversky's contrast model.

Description: The creation, storage, manipulation, and transmission of images have become less costly and more efficient. Consequently, the numbers of images and their users are growing rapidly. This poses challenges to those who organize and provide access to them. One of these challenges is similarity matching. Most current content-based image retrieval (CBIR) systems, which can extract only low-level visual features such as color, shape, and texture, use similarity measures based on geometric models of similarity. However, most human similarity judgment data violate the metric axioms of these models. Tversky's (1977) contrast model, which defines similarity as a feature contrast task and equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, explains human similarity judgments much better than the geometric models. This study tested the contrast model as a conceptual framework to investigate the nature of the relationships between features and similarity of images as perceived by human judges. Data were collected from 150 participants who performed two tasks: an image description task and a similarity judgment task. Qualitative (content analysis) and quantitative (correlational) methods were used to seek answers to four research questions related to the relationships between common and distinctive features and similarity judgments of images, as well as measures of their common and distinctive features. Structural equation modeling, correlation analysis, and regression analysis confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky (1977). Tversky's (1977) contrast model, based upon a combination of two methods for measuring common and distinctive features and two methods for measuring similarity, produced statistically significant structural coefficients between the independent latent variables (common and distinctive features) and the dependent latent variable (similarity). This model fit the data well for a sample of 30 (435 pairs of) images and 150 participants (χ² = 16.97, ...
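
For reference, a minimal sketch of Tversky's (1977) contrast model itself, in which similarity is θ·f(A∩B) − α·f(A−B) − β·f(B−A) over common and distinctive feature sets; the feature sets and weights below are illustrative:

```python
# Tversky's contrast model: similarity as a weighted combination of
# common features minus each stimulus's distinctive features.
def tversky_similarity(A, B, theta=1.0, alpha=0.5, beta=0.5, f=len):
    return theta * f(A & B) - alpha * f(A - B) - beta * f(B - A)

img1 = {"sky", "water", "boat", "sunset"}
img2 = {"sky", "water", "beach", "people"}
print(tversky_similarity(img1, img2))  # 2*1.0 - 2*0.5 - 2*0.5 = 0.0
```
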
Date: May 2005
Creator: Rorissa, Abebe
Partner: UNT Libraries

The Impact of Predisposition Towards Group Work on Intention to Use a CSCW System

Description: Groupware packages are increasingly being used to support content delivery, class discussion, student-to-student and student-to-faculty interactions, and group work on projects. This research focused on groupware packages that are used to support students who are located in different places but who are assigned group projects as part of their coursework requirements. In many cases, students are being asked to use unfamiliar technologies that are very different from those that support personal productivity. For example, computer-supported cooperative work (CSCW) technology is different from other, more traditional, stand-alone software applications because it requires the user to interact with the computer as well as with other users. However, familiarity with the technology is not the only requirement for successful completion of a group-assigned project. For a group to be successful, it must also have a desire to work together on the project. If this prerequisite is not present within the group, then the technology will only create additional communication and coordination barriers. How much of an impact does each of these factors have on the acceptance of CSCW technology? The significance of this study is threefold. First, this research contributed to understanding how a user's predisposition toward group work affects acceptance of CSCW technology. Second, it helped identify ways to overcome some of the obstacles associated with group work and the use of CSCW technology in an academic online environment. Finally, it helped identify early adopters of CSCW software and how these users can form the critical mass required to diffuse the technology. This dissertation reports the impact of predisposition toward group work and prior computer experience on the intention to use synchronous CSCW. It was found that predisposition toward group work was not only positively associated with perceived usefulness; it was also related to intention to use. It ...
Date: May 2005
Creator: Reyna, Josephine
Partner: UNT Libraries

A Comparative Analysis of Style of User Interface Look and Feel in a Synchronous Computer Supported Cooperative Work Environment

Description: The purpose of this study is to determine whether the style of a user interface (i.e., its look and feel) has an effect on the usability of a synchronous computer supported cooperative work (CSCW) environment for delivering Internet-based collaborative content. The problem motivating this study is that people who are located in different places need to be able to communicate with one another. One way to do this is by using complex computer tools that allow users to share information, documents, programs, etc. As an increasing number of business organizations require workers to use these types of complex communication tools, it is important to determine how users regard these types of tools and whether they are perceived to be useful. If a tool, or interface, is not perceived to be useful then it is often not used, or used ineffectively. As organizations strive to improve communication with and among users by providing more Internet-based collaborative environments, the users' experience in this form of delivery may be tied to a style of user interface look and feel that could negatively affect their overall acceptance and satisfaction of the collaborative environment. The significance of this study is that it applies the technology acceptance model (TAM) as a tool for evaluating style of user interface look and feel in a collaborative environment, and attempts to predict which factors of that model, perceived ease of use and/or perceived usefulness, could lead to better acceptance of collaborative tools within an organization.
Date: May 2005
Creator: Livingston, Alan
Partner: UNT Libraries

Makeshift Information Constructions: Information Flow and Undercover Police

Description: This dissertation presents the social virtual interface (SVI) model, which was born out of a need to develop a viable model of the complex interactions, information flow, and information-seeking behaviors among undercover officers. The SVI model was created from a combination of various philosophies and models in the literature of information seeking, communication, and philosophy. The questions this research answers are as follows: 1. Can we make use of models and concepts familiar to or drawn from Information Science to construct a model of undercover police work that effectively represents the large number of entities and relationships? and 2. Will undercover police officers recognize this model as realistic? This study used a descriptive qualitative research method to examine the research questions. An online survey and a hard-copy survey were distributed to police officers who had worked in an undercover capacity. In addition, groups of officers were interviewed about their opinion of the SVI model. The data gathered were analyzed, and the model was validated by the results of the survey and interviews.
Date: August 2005
Creator: Aksakal, Baris
Partner: UNT Libraries

Global response to cyberterrorism and cybercrime: A matrix for international cooperation and vulnerability assessment.

Description: Cyberterrorism and cybercrime present new challenges for law enforcement and policy makers. Due to its transnational nature, a real and sound response to such a threat requires international cooperation involving participation of all concerned parties in the international community. However, the vulnerability that emerges from increased reliance on technology, lack of legal measures, and lack of cooperation at the national and international levels represents a real obstacle to effective responses to these threats. In sum, the general problem is a lack of global consensus in responding to cyberterrorism and cybercrime. Terrorists and cyber criminals will exploit vulnerabilities of every kind: technical, legal, political, and cultural. Such a broad range of vulnerabilities can be dealt with only by comprehensive cooperation, which requires efforts at both the national and international levels. The "Vulnerability-Comprehensive Cooperation-Freedom Scale," or "Ozeren Scale," was constructed from variables identified on the basis of expert opinion. The study also presented a typology of cyberterrorism involving three general classifications: disruptive and destructive information attacks; facilitation of technology to support an ideology; and communication, fund raising, recruitment, and propaganda (C-F-R-P). Such a typology is expected to help decision-makers and investigators as well as academicians in the area of terrorism. The matrix for international cooperation and vulnerability assessment is expected to be used as a model for a global response to cyberterrorism and cybercrime.
Date: August 2005
Creator: Ozeren, Suleyman
Partner: UNT Libraries