You limited your search to:

  Partner: UNT Libraries
  Degree Discipline: Information Science
  Collection: UNT Theses and Dissertations
The Second Vatican Council and American Catholic theological research: A bibliometric analysis of Theological Studies: 1940-1995

Date: August 2000
Creator: Phelps, Helen Stegall
Description: This study presents a descriptive analysis of the authors and citations of articles published in the journal Theological Studies from 1940 to 1995. Data were gathered on each author's institutional affiliation, geographic location, occupation, gender, and personal characteristics. Citations were examined for cited authors, date and age of the citations, format, language, place of publication, and journal titles. These characteristics were compared across the periods before and after the Second Vatican Council to detect any changes that might have occurred after certain recommendations by the council were made to theologians. Subject dispersion of the literature was also analyzed, and Lotka's Law of author productivity and Bradford's Law of title dispersion were applied to this literature. The author profile showed that the share of articles published by women and laypersons has increased since the recommendations of the council. The data fit Lotka's Law well for the pre-Vatican II time period but not for the period after Vatican II. The data fit Bradford's Law for the predicted number of journals in the nucleus and Zone 2, but the observed number of ...
Contributing Partner: UNT Libraries
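Lotka's Law, applied in the study above, predicts that the proportion of authors contributing x papers falls off roughly as 1/x^c (classically c = 2). A minimal sketch of such a goodness-of-fit check, using invented author-productivity counts rather than the dissertation's Theological Studies data:

```python
# Sketch of a Lotka's Law check. The productivity list below is hypothetical,
# not data from the study.
from collections import Counter

def lotka_expected(n_max, c=2.0):
    """Expected proportion of authors with x papers under Lotka's Law,
    y_x proportional to 1/x^c, normalized to sum to 1 over x = 1..n_max."""
    raw = {x: 1.0 / x ** c for x in range(1, n_max + 1)}
    total = sum(raw.values())
    return {x: v / total for x, v in raw.items()}

def observed_proportions(papers_per_author):
    """Observed proportion of authors at each productivity level."""
    counts = Counter(papers_per_author)   # x -> number of authors with x papers
    n_authors = len(papers_per_author)
    return {x: k / n_authors for x, k in counts.items()}

# Hypothetical distribution: most authors publish once, a few publish often.
papers = [1] * 60 + [2] * 15 + [3] * 7 + [4] * 4 + [5] * 2
obs = observed_proportions(papers)
exp = lotka_expected(max(papers))
# Kolmogorov-Smirnov-style maximum deviation between observed and expected
max_dev = max(abs(obs.get(x, 0.0) - exp[x]) for x in exp)
```

A small `max_dev` relative to the test's critical value would indicate a good fit, as the study reports for the pre-Vatican II period.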
Students' criteria for course selection: Towards a metadata standard for distributed higher education

Date: August 2000
Creator: Murray, Kathleen R.
Description: By 2007, one half of higher education students are expected to enroll in distributed learning courses. Higher education institutions need to attract students searching the Internet for courses and to provide students with enough information to select courses. Internet resource discovery tools are readily available; however, users have difficulty selecting relevant resources, due in part to the lack of a standard for representing Internet resources. An emerging solution is metadata. In the educational domain, the IEEE Learning Technology Standards Committee (LTSC) has specified a Learning Object Metadata (LOM) standard. This exploratory study (a) determined criteria students think are important for selecting higher education courses, (b) discovered relationships between these criteria and students' demographic characteristics, educational status, and Internet experience, and (c) evaluated these criteria vis-à-vis the IEEE LTSC LOM standard. Web-based questionnaires (N = 209) measured (a) the criteria students think are important in the selection of higher education courses and (b) three factors that might influence students' selections. Respondents were principally female (66%), employed full time (57%), and located in the U.S. (89%). The chi-square goodness-of-fit test identified 40 criteria students think are important, and exploratory factor analysis identified five common factors among the top 21 ...
Contributing Partner: UNT Libraries
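The chi-square goodness-of-fit test used above compares observed response counts against counts expected under a null hypothesis. A hand-rolled sketch with hypothetical counts (not the study's data), testing whether respondents rate one criterion important more often than an even split would predict:

```python
# Chi-square goodness-of-fit, computed by hand. Counts are hypothetical.
def chi_square(observed, expected):
    """Sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: of 209 respondents, 150 rated a criterion "important".
observed = [150, 59]
expected = [209 / 2, 209 / 2]   # null hypothesis: ratings split evenly
stat = chi_square(observed, expected)

CRITICAL_DF1_05 = 3.84          # chi-square critical value, df = 1, alpha = .05
important = stat > CRITICAL_DF1_05
```

When `stat` exceeds the critical value, the criterion is judged important beyond chance, which is how a list such as the study's 40 criteria could be screened.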
The Cluster Hypothesis: A visual/statistical analysis

Access: Use of this item is restricted to the UNT Community.
Date: May 2000
Creator: Sullivan, Terry
Description: By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in ...
Contributing Partner: UNT Libraries
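The Cluster Hypothesis described above can be illustrated by comparing average similarity within a set of relevant documents against similarity between relevant and non-relevant documents. A toy sketch using invented documents and plain cosine similarity over term counts (the study itself used TREC news stories and multidimensionally scaled similarity maps):

```python
# Toy Cluster Hypothesis check on invented documents.
import math

def tf(text):
    """Term-frequency dictionary for a whitespace-tokenized text."""
    counts = {}
    for term in text.lower().split():
        counts[term] = counts.get(term, 0) + 1
    return counts

def cosine(a, b):
    """Cosine similarity between two term-frequency dictionaries."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-collection: two topically related docs, one off-topic.
rel = [tf("cluster based retrieval of documents"),
       tf("document retrieval with cluster methods")]
non = [tf("quarterly earnings rose sharply")]

within = cosine(rel[0], rel[1])
across = sum(cosine(r, n) for r in rel for n in non) / (len(rel) * len(non))
# The Cluster Hypothesis predicts within-relevant similarity > cross similarity.
```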
Creating a criterion-based information agent through data mining for automated identification of scholarly research on the World Wide Web

Date: May 2000
Creator: Nicholson, Scott
Description: This dissertation creates an information agent that correctly identifies Web pages containing scholarly research approximately 96% of the time. It does this by analyzing a Web page against a set of criteria and then using a classification tree to arrive at a decision. The criteria were gathered from the literature on selecting print and electronic materials for academic libraries. A Delphi study was done with an international panel of librarians to expand and refine the criteria until a list of 41 operationalizable criteria was agreed upon. A Perl program was then designed to analyze a Web page and determine a numerical value for each criterion. A large collection of Web pages was gathered, comprising 5,000 pages that contain the full work of scholarly research and 5,000 random pages, representative of user searches, that do not contain scholarly research. Datasets were built by running the Perl program on these Web pages. The datasets were split into model-building and testing sets. Data mining was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. The models were created with the model datasets and then tested against the test dataset. Precision ...
Contributing Partner: UNT Libraries
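A classification tree like the one described reduces, at prediction time, to a cascade of threshold tests on criterion values. A toy sketch with invented criteria and thresholds (the dissertation's actual 41 criteria and its learned tree are different and were built by data mining, not by hand):

```python
# Toy classification tree for "is this page scholarly research?".
# Criterion names and thresholds are hypothetical illustrations only.
def classify(page):
    """Return True if the page looks like scholarly research."""
    if page["reference_count"] >= 10:          # split 1: many references?
        if page["has_abstract"]:               # split 2: abstract present?
            return True
        return page["author_affiliation_present"]  # fallback split
    return False

scholarly = {"reference_count": 42, "has_abstract": True,
             "author_affiliation_present": True}
random_page = {"reference_count": 0, "has_abstract": False,
               "author_affiliation_present": False}
```

In the dissertation's pipeline, the numeric criterion values for each page came from the Perl analyzer, and the tree structure itself was induced from the training dataset.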
An Examination Of The Variation In Information Systems Project Cost Estimates: The Case Of Year 2000 Compliance Projects

Date: May 2000
Creator: Fent, Darla
Description: The year 2000 (Y2K) problem presented a fortuitous opportunity to explore the relationship between estimated costs of software projects and five cost influence dimensions described by the Year 2000 Enterprise Cost Model (Kappelman, et al., 1998) -- organization, problem, solution, resources, and stage of completion. This research was a field study survey of Y2K project managers in industry, government, and education, and part of a joint project begun in 1996 between the University of North Texas and the Y2K Working Group of the Society for Information Management (SIM). Evidence was found to support relationships between estimated costs and organization, problem, resources, and project stage, but not for the solution dimension. Project stage appears to moderate the relationships for organization, particularly IS practices, and for resources. A history of superior IS practices appears to mean lower estimated costs, especially for projects in larger IS organizations. Acquiring resources, especially external skills, appears to increase costs. Moreover, projects apparently have many individual differences, many related to size and to project stage, and their influences on costs appear to operate at the sub-dimension or even the individual variable level. A Revised Year 2000 Enterprise Model is presented incorporating this granularity. Two primary conclusions can ...
Contributing Partner: UNT Libraries
An experimental study of teachers' verbal and nonverbal immediacy, student motivation, and cognitive learning in video instruction

Date: May 2000
Creator: Witt, Paul L.
Description: This study used an experimental design and a direct test of recall to provide data about teacher immediacy and student cognitive learning. Four hypotheses and a research question addressed two research problems: first, how verbal and nonverbal immediacy function together and/or separately to enhance learning; and second, how immediacy affects cognitive learning in relation to student motivation. These questions were examined in the context of video instruction to provide insight into distance learning processes and to ensure maximum control over experimental manipulations. Participants (N = 347) were drawn from university students in an undergraduate communication course. Students were randomly assigned to groups, completed a measure of state motivation, and viewed a 15-minute video lecture containing part of the usual course content delivered by a guest instructor. Participants were unaware that the video instructor was actually performing one of four scripted manipulations reflecting higher and lower combinations of specific verbal and nonverbal cues, representing the four cells of the 2x2 research design. Immediately after the lecture, students completed a recall measure, consisting of portions of the video text with blanks in the place of key words. Participants were to fill in the blanks with exact words they recalled from the videotape. ...
Contributing Partner: UNT Libraries
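The cloze-style recall measure described above can be scored as the proportion of blanks a student fills with the exact key word. A minimal sketch with hypothetical key words (the study's actual lecture text and scoring rules may differ, e.g. in how near-misses are treated):

```python
# Scoring a fill-in-the-blank recall test. Key words are hypothetical.
def recall_score(answers, key_words):
    """Proportion of blanks filled with the exact key word (case-insensitive)."""
    hits = sum(1 for a, k in zip(answers, key_words)
               if a.strip().lower() == k.lower())
    return hits / len(key_words)

key = ["immediacy", "nonverbal", "motivation", "learning"]
student = ["Immediacy", "verbal", "motivation", "learning"]
score = recall_score(student, key)   # 3 of 4 blanks match exactly
```

Mean scores per cell of the 2x2 (verbal x nonverbal immediacy) design would then be compared to test the hypotheses.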
Identifying At-Risk Students: An Assessment Instrument for Distributed Learning Courses in Higher Education

Date: May 2000
Creator: Osborn, Viola
Description: The current period of rapid technological change, particularly in the area of mediated communication, has combined with new philosophies of education and market forces to bring upheaval to the realm of higher education. Technical capabilities exceed our knowledge of whether expenditures on hardware and software lead to corresponding gains in student learning. Educators do not yet possess sophisticated assessments of what we may be gaining or losing as we widen the scope of distributed learning. The purpose of this study was not to draw sweeping conclusions with respect to the costs or benefits of technology in education. The researcher focused on a single issue involved in educational quality: assessing the ability of a student to complete a course. Previous research in this area indicates that attrition rates are often higher in distributed learning environments. Educators and students may benefit from a reliable instrument to identify those students who may encounter difficulty in these learning situations. This study is aligned with research focused on the individual engaged in seeking information, assisted or hindered by the capabilities of the computer information systems that create and provide access to information. Specifically, the study focused on the indicators of completion for students enrolled in ...
Contributing Partner: UNT Libraries
MEDLINE Metric: A method to assess medical students' MEDLINE search effectiveness

Date: May 2000
Creator: Hannigan, Gale G.
Description: Medical educators advocate the need for medical students to acquire information management skills, including the ability to search the MEDLINE database. No validated method for assessing medical students' MEDLINE information retrieval skills has been published. This research proposes and evaluates such a method, designated the MEDLINE Metric, for assessing medical students' search skills. MEDLINE Metric consists of: (a) the development, by experts, of realistic clinical scenarios that include highly constructed search questions designed to test defined search skills; (b) timed tasks (searches) completed by subjects; (c) the evaluation of search results; and (d) instructive feedback. The goal is to offer medical educators a valid, reliable, and feasible way to judge mastery of information-searching skill by measuring results (search retrieval) rather than process (search behavior) or cognition (knowledge about searching). Following a documented procedure for test development, search specialists and medical content experts formulated six clinical search scenarios and questions. One hundred forty-five subjects completed the six-item test under timed conditions. Subjects represented a wide range of MEDLINE search expertise. One hundred twenty complete cases were used, representing 53 second-year medical students (44%), 47 fourth-year medical students (39%), and 20 medical librarians (17%). Data related ...
Contributing Partner: UNT Libraries
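Evaluating search results against an expert-defined set of relevant articles, as in the MEDLINE Metric's step (c), amounts to computing precision and recall of each subject's retrieval. A sketch with hypothetical article identifiers (not actual PMIDs or the study's scoring rubric):

```python
# Precision/recall of one timed search against an expert gold set.
def precision_recall(retrieved, relevant):
    """Precision and recall of a retrieved set versus a relevant (gold) set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical identifiers: four relevant articles, student found two plus noise.
gold = {"101", "102", "103", "104"}
student_results = {"101", "102", "999"}
p, r = precision_recall(student_results, gold)
```

Scoring retrieval this way measures results rather than search behavior or knowledge, which is the distinction the abstract draws.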
Modeling utilization of planned information technology

Date: May 2000
Creator: Stettheimer, Timothy Dwight
Description: Implementations of information technology solutions to address specific information problems are successful only when the technology is utilized. The antecedents of technology use involve user, system, task, and organization characteristics, as well as externalities that can affect all of these entities. However, measurement of the interaction effects between these entities can act as a proxy for individual attribute values. A model is proposed which, based upon evaluation of these interaction effects, can predict technology utilization. This model was tested with systems being implemented at a pediatric health care facility. Results from this study provide insight into the relationships among the antecedents of technology utilization. Specifically, task time had a significant direct causal effect on utilization. Indirect causal effects were identified in the task value and perceived utility constructs. Perceived utility, along with organizational support, also had direct causal effects on user satisfaction. Task value also affected user satisfaction indirectly. Finally, the results provide a predictive model and a taxonomy of variables that can be applied to predict or manipulate the likelihood of utilization for planned technology.
Contributing Partner: UNT Libraries
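In path models like the one described, a direct effect is a single coefficient, while an indirect effect is the product of the coefficients along the mediating chain (e.g. task value acting on utilization through perceived utility). A sketch with invented standardized coefficients, not the study's estimates:

```python
# Path-model effect arithmetic. All coefficient values are hypothetical.
direct = {
    ("task_time", "utilization"): 0.40,
    ("task_value", "perceived_utility"): 0.50,
    ("perceived_utility", "utilization"): 0.30,
}

def indirect_effect(path):
    """Product of direct-effect coefficients along a causal chain."""
    effect = 1.0
    for a, b in zip(path, path[1:]):
        effect *= direct[(a, b)]
    return effect

# task_value -> perceived_utility -> utilization: 0.50 * 0.30
ind = indirect_effect(["task_value", "perceived_utility", "utilization"])
```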
A Study of Graphically Chosen Features for Representation of TREC Topic-Document Sets

Access: Use of this item is restricted to the UNT Community.
Date: May 2000
Creator: Oyarce, Guillermo Alfredo
Description: Document representation is important for computer-based text processing. Good document representations must include at least the most salient concepts of the document. Documents exist in a multidimensional space, which makes it difficult to identify which concepts to include. A current problem is measuring the effectiveness of the different strategies that have been proposed to accomplish this task. As a contribution toward this goal, this dissertation studied visual inter-document relationships in a dimensionally reduced space. The same treatment was applied to full text and to three document representations. Two of the representations were based on the assumption that the salient features in a document set follow the chi-distribution across the whole document set. The third document representation identified features through a novel method: a Coefficient of Variability was calculated by normalizing the Cartesian distance of the discriminating value in the relevant and the non-relevant document subsets. The local dictionary method was also used. Cosine similarity values measured the inter-document distance in the information space and formed a matrix that served as input to the Multidimensional Scaling (MDS) procedure. A Precision-Recall procedure was averaged across all treatments to compare them statistically. Treatments were not found to be statistically the same and ...
Contributing Partner: UNT Libraries
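One conventional variability-based feature score is the coefficient of variation (standard deviation divided by mean), computed separately over a feature's frequencies in the relevant and non-relevant subsets; the dissertation's Coefficient of Variability is defined differently (via normalized Cartesian distances), so the following is only an assumed analogue with invented frequencies:

```python
# Coefficient of variation as a stand-in illustration of variability-based
# feature selection. The frequency lists are hypothetical.
import math

def coefficient_of_variation(values):
    """Population standard deviation over mean; 0.0 for a zero mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean if mean else 0.0

# Hypothetical per-document frequencies of one candidate term/feature.
relevant_freqs = [4, 5, 6, 5]      # stable within relevant documents
nonrelevant_freqs = [0, 7, 1, 8]   # erratic within non-relevant documents
cv_rel = coefficient_of_variation(relevant_freqs)
cv_non = coefficient_of_variation(nonrelevant_freqs)
# A feature stable where it matters and erratic elsewhere discriminates well.
```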