Search Results

Investigating Factors that Affect Faculty Attitudes towards Participation in Open Access Institutional Repositories
Open access institutional repositories (OA IRs) are electronic systems that capture, preserve, and provide access to the scholarly digital work of an institution. As a new channel of scholarly communication, IRs offer faculty a new way to disseminate their work to a wider audience, which in turn can increase the visibility and impact of their work while also increasing the institution's prestige and value. However, despite the growing number of IRs, research shows that they remain thinly populated, in large part because of faculty reluctance to participate. Prior studies of open access repositories have focused on external factors (the social or technological context) that affect faculty attitudes towards participation in IRs, and there is a lack of understanding of the internal factors and the psychology behind the reluctance. The goal of this mixed-methods study was to identify the overall factors that affect faculty attitudes towards participation in IRs and to examine the extent to which these factors influenced faculty willingness to participate. First, drawing on the literature review and the Model of Factors Affecting Faculty Self-Archiving, this study identified eleven factors that influence faculty members' intention to participate in OA repositories. The Theory of Planned Behavior (TPB) postulates that faculty intention to participate in an IR is determined by three categories of factors: five attitudinal, four external (social), and two individual factors. Within the framework of the TPB this study (1) confirmed the measurement scale for each factor using principal component analysis, (2) examined the influence of each factor on faculty likelihood to participate in an IR using logistic regression, and (3) weighed the relative importance of each factor on faculty intent to participate using relative weight analysis. Quantitative analysis revealed that four of the 11 factors proved to …
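As an illustration only, the following is a minimal sketch of the kind of quantitative pipeline described above (principal component analysis of survey items followed by logistic regression on a binary participation response). The data, the eleven-item grouping, and the variable names are hypothetical, and the dissertation's relative weight analysis is not reproduced; standardized coefficients are shown instead.

```python
# Sketch: inspect factor scales with PCA, then model participation intent with logistic regression.
# Hypothetical data; not the dissertation's instrument, sample, or results.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 11                                      # 11 hypothetical factors
X = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)   # Likert-type responses 1-5
y = rng.integers(0, 2, size=n_respondents)                            # 1 = intends to participate

# (1) Principal component analysis: how much variance do the first components capture?
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=3).fit(X_std)
print("variance explained by first 3 components:", pca.explained_variance_ratio_.round(2))

# (2) Logistic regression: influence of each factor on the likelihood of participation.
model = LogisticRegression(max_iter=1000).fit(X_std, y)
for i, coef in enumerate(model.coef_[0], start=1):
    print(f"factor {i}: odds ratio per 1 SD increase ~ {np.exp(coef):.2f}")
```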
The Role of Information in the Selection Process of a Primary Care Physician
There is a paucity of information about the various factors that influence the selection of primary care physicians. Also, the relative significance of these factors is not known, making it difficult to properly address ways to improve the information flow to patients when they select a primary care physician.
Diagnosing Learner Deficiencies in Algorithmic Reasoning
It is hypothesized that useful diagnostic information can reside in the wrong answers of multiple-choice tests, and that properly designed distractors can yield indications of misinformation and missing information in algorithmic reasoning on the part of the test taker. In addition to summarizing the literature regarding diagnostic research as opposed to scoring research, this study proposes a methodology for analyzing test results and compares the findings with those from the research of Birenbaum and Tatsuoka and others. The proposed method identifies the conditions of misinformation and missing information, and it contains a statistical compensation for careless errors. Strengths and weaknesses of the method are explored, and suggestions for further research are offered.
Library CD-ROM LAN Performance and Patron Use: a Computer Simulation Model
In this study, a computer simulation model of library CD-ROM LAN systems was created and used to examine system optimization problems. The simulation model imitated changes in the values of the actual decision variables and generated the corresponding results: under a given system environment, changing the values of the decision variables changes system performance, and this study investigated these relationships with the model. The system users' interarrival times, service times, and other relevant data were collected on randomly selected days in a university library, using both direct observation and the system's automatic metering software. Based on the collected data, a discrete-event simulation model was created with GPSS/H. The model was validated by a pilot test and by comparison with queuing-theory calculations, and statistical tests were used for data comparison and analysis. In addition, the simulation process was animated with Proof Animation so that it could be monitored on screen.
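As an illustration of the kind of queuing-theory check that can validate such a discrete-event model, the sketch below computes standard M/M/1 performance measures for a single workstation. The arrival and service rates are hypothetical placeholders, not the values collected in the study.

```python
# Sketch: M/M/1 queuing estimates used to sanity-check a discrete-event simulation.
# Hypothetical rates; the study's measured interarrival and service times would replace them.
arrival_rate = 4.0   # patrons arriving per hour (lambda)
service_rate = 6.0   # patrons served per hour (mu)

rho = arrival_rate / service_rate              # server utilization
L = rho / (1 - rho)                            # mean number of patrons in the system
W = 1 / (service_rate - arrival_rate)          # mean time in system (hours)
Wq = rho / (service_rate - arrival_rate)       # mean wait in queue (hours)

print(f"utilization: {rho:.2f}")
print(f"avg. patrons in system: {L:.2f}")
print(f"avg. time in system: {W*60:.1f} min, avg. wait in queue: {Wq*60:.1f} min")
```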
University Students and the Internet: Information Seeking Study
This study explored university students' information needs and seeking behaviors on the Internet. A Web-based survey was administered once; two hundred responses were received from the target sample within the two-week period of the study. Data were analyzed with descriptive statistics, factor analysis, and graphical representation. The study explored various issues related to Internet usability, preferences, and activities, such as searching tools, e-mail, search engines, and preferred primary sources of everyday-life information needs, as well as students' perceptions of the Internet and the traditional library. Kuhlthau's model of the information-seeking process, which includes six stages and affective components, was utilized and modified in the construction of the Web survey, as was a study by Presno (1998), which identifies four types of Internet anxiety. With regard to the six stages of Kuhlthau's model, the majority of respondents experienced stage 5, which concerns information gathering; stage 3 had the next highest number of respondents, and very few respondents experienced stages 1 and 2. There was a systematic pattern in which the earlier the stage respondents were in, the more negative adjectives they selected, and vice versa. The feeling-adjectives section showed a difference in behavior between males and females. The results indicated that most students had Internet time delay anxiety. In general, the study found that students have a great interest in the Internet and consider it an important source of information for their personal, educational, and communication activities.
The Effects of Computer Performance Assessment on Student Scores in a Computer Applications Course
The goal of this study was to determine if performance-based tests should be routinely administered to students in computer application courses. The purpose was to determine the most appropriate mode of testing for individuals taking a computer applications course. The study is divided into areas of assessment, personality traits, and computer attitudes.
Students' Criteria for Course Selection: Towards a Metadata Standard for Distributed Higher Education
By 2007, one half of higher education students are expected to enroll in distributed learning courses. Higher education institutions need to attract students searching the Internet for courses and to provide them with enough information to select courses. Internet resource discovery tools are readily available; however, users have difficulty selecting relevant resources, in part because of the lack of a standard for representing Internet resources. An emerging solution is metadata; in the educational domain, the IEEE Learning Technology Standards Committee (LTSC) has specified a Learning Object Metadata (LOM) standard. This exploratory study (a) determined the criteria students think are important for selecting higher education courses, (b) discovered relationships between these criteria and students' demographic characteristics, educational status, and Internet experience, and (c) evaluated these criteria vis-à-vis the IEEE LTSC LOM standard. Web-based questionnaires (N=209) measured (a) the criteria students think are important in the selection of higher education courses and (b) three factors that might influence students' selections. Respondents were principally female (66%), employed full time (57%), and located in the U.S. (89%). The chi-square goodness-of-fit test identified 40 criteria students think are important, and exploratory factor analysis identified five common factors among the top 21 criteria: three evaluative and two descriptive. Results indicated evaluation criteria are very important in course selection. Spearman correlation coefficients and chi-square tests of independence determined the relationships between the importance of selection criteria and demographic characteristics, educational status, and Internet experience; four profiles emerged, representing groups of students with unique concerns. A side-by-side analysis determined whether the IEEE LTSC LOM standard included the criteria of importance to students; the IEEE LOM by itself is not enough to meet students' course selection needs. Recommendations include development of a metadata standard for course evaluation and accommodation of group differences in …
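As an illustration of the chi-square goodness-of-fit test mentioned above, the sketch below tests whether one course-selection criterion is rated "important" more often than an equal-proportions null would predict. The observed counts are hypothetical, not the study's data.

```python
# Sketch: chi-square goodness-of-fit test for one course-selection criterion.
# Hypothetical counts; the study's observed ratings (N=209) would replace them.
from scipy.stats import chisquare

observed = [150, 59]        # "important" vs. "not important" responses (hypothetical)
expected = [104.5, 104.5]   # equal-proportions null hypothesis for N=209

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the criterion is rated "important" more often than chance.
```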
Public School Educators' Use of Computer-Mediated Communication
This study examined the uses of computer-mediated communication (CMC) by educators in selected public schools, using Rogers' Diffusion of Innovation Theory as its theoretical underpinning. CMC refers to any exchange of information that involves the use of computers for communication between individuals, or between individuals and a machine. The study explored the difficulties users confront, the services they access, and the tasks they accomplish when using CMC, and investigated the factors that affect its use. The sample population was drawn from registered users on TENET, the Texas Education Network, as of December 1997. The educators were described with frequencies and percentages derived from the demographic data. Eight indices were selected to test how strongly user and environmental attributes were associated with the use of CMC: (1) education, (2) position, (3) place of employment, (4) geographic location, (5) district size, (6) organization vitality, (7) adopter resources, and (8) instrumentality. Two dependent variables were used to test for usage: (1) depth, or frequency of CMC usage and amount of time spent online, and (2) breadth, or variety of Internet utilities used. Additionally, users' perception of network benefits was measured, and network benefits were correlated with social interaction and perception of CMC to investigate what tasks educators were accomplishing with CMC. Correlations, crosstabulations, and ANOVAs were used to analyze the data for the four hypotheses. The major findings, based on the hypotheses tested, were that the socioeconomic variables of education and position influenced the use of CMC. A significant finding is that teachers used e-mail and Internet resources less frequently than those in other positions. An interesting finding was that frequency of use was more significant for usage than amount of …
Information Seeking Behavior of Crime Scene Investigators in the Turkish National Police
This exploratory research is the first occupational information-seeking behavior study to focus on the information seeking behaviors of crime scene investigators. The data used in this dissertation were gathered via a self-administered survey instrument from 29 cities in Turkey. Findings from the data analyses show a strong positive relationship between crime scene investigators' experience and their use of personal knowledge and experience as a primary information source (experience is operationalized as age, service years in policing, and service years in crime scene investigation units). The findings also suggest that a higher level of education is negatively related to relying on immediate colleagues as an information source. These findings are consistent with related literature and theory. The data analysis also shows that crime scene investigators who work in more populous cities across Turkey have higher complaint scores than those who work in less populous cities. The findings suggest valuable implications for overcoming the barriers between crime scene investigators and information sources. The researcher proposed a theoretical framework for information behavior in the context of crime scene investigation that may help those interested in the phenomenon and its applications in other contexts.
The Electronic Ranch: the Information Environment of Cattle Breeders
The present study was a longitudinal analysis of the information needs of Red Angus cattle breeders and their use of networked information services. It was based on two surveys. The first, conducted in 1995--96, polled all 1067 ranches of the Red Angus Association of America. Responses from 192 Red Angus breeders were used to determine the value of different information types and to evaluate perceptions of the greatest barriers to the adoption of network information services. The second survey, mailed to 41 Red Angus breeders in 1998, focused on early adopters and likely users of network services. Responses from 15 breeders were used to evaluate perceptions of the greatest barriers to the effective use of Web-based information services.
The Applicability of SERVPERF in Judging Service Quality for Biomedical Information Professionals
The applicability of SERVPERF as a tool for judging the quality of services used by biomedical information professionals was tested using standard statistical procedures. Data were gathered nationally, via a combination of electronic and non-electronic forms, from Area Health Education Center (AHEC) information professionals, and the results were consolidated to provide information for the study. It was determined that SERVPERF was applicable in making judgments about service quality for AHEC information professionals. Their perceptions of service quality tended to have a greater influence than their level of actual satisfaction on whether or not they planned to use a particular service in the future. There is currently no validated tool available to ascertain the quality of services offered to these valuable members of the rural health care team; this dissertation proposes to provide such a tool and to serve as a guide or template for other professionals seeking a means to judge service quality in their own disciplines.
The Impact of Computer Instruction on the Near Transfer and Far Transfer of a General Problem Solving Strategy
The purpose of this study was to examine the impact of computer instruction on the near transfer and far transfer of a means-end analysis problem solving strategy.
Seeking Information After the 2010 Haiti Earthquake: a Case Study in Mass-fatality Management
The 2010 earthquake in Haiti, which killed an estimated 316,000 people, offered many lessons in mass-fatality management (MFM). The dissertation defined MFM in terms of seeking information and of the recovery, preservation, identification, and disposition of human remains. Specifically, it examined how mass fatalities were managed in Haiti, how affected individuals sought information about fatalities, and what needs motivated them. Data from 28 in-depth, partially structured interviews, conducted during two field visits ending 21 weeks after the earthquake, were included in a case study. The data analysis revealed that MFM was severely inadequate; one interviewee, a senior UN official, stated, "There was no fatality management." The analysis also indicated that the need to learn the whereabouts of the deceased motivated individuals to visit the places where the deceased were last seen. The study sought to illuminate information-seeking practices, as discussed in the works of J. David Johnson and others, by developing a new model of information flow in MFM. In addition, it reaffirmed Donald Case and Thomas Wilson's theoretical proposition – that need guides any seeking of information – in the case of Haiti. Finally, it produced recommendations regarding future directions in MFM for emergency managers and information scientists, including the possible use of unidentified body parts in organ transplants. Overall, the dissertation, which was supported by two grants from the National Science Foundation, attempted to add to the relatively scant literature on information seeking in MFM.
Graduate Students' Collaborative Information Seeking in a Group-based Learning Setting
Working with others within an organization can have a variety of positive effects, and the benefits of collaboration have been discussed in various disciplines. In information science, interest in collaborative information seeking, including collaborative information seeking by students in online learning environments, is expanding. This study was aimed at understanding graduate students' collaborative information seeking behaviors throughout a group project, including the factors that affected students' perceptions of collaborative work and their difficulties during the collaborative process. The research was based on Yue and He's model, which describes information users' collaborative communication and information behaviors, and Kuhlthau's model, which describes users' individual information seeking behaviors. The participants were 43 students enrolled in a master's level course delivered primarily online; the students were required to work together in groups to complete a research project. Data were collected through a background survey, a behavior survey, and online communication texts, and analyzed using descriptive statistics, statistical tests, and content analyses. The results showed significant changes in collaborative and information seeking behaviors and perceptions across the three stages of the project during the semester. Theoretical, practical, and methodological implications for future research are discussed.
Relevance Thresholds: A Conjunctive/Disjunctive Model of End-User Cognition as an Evaluative Process
This investigation identifies end-user cognitive heuristics that facilitate judgment and evaluation during information retrieval (IR) system interactions. The study extends previous research surrounding relevance as a key construct for representing the value end-users ascribe to items retrieved from IR systems and the perceived effectiveness of such systems. The Lens Model of user cognition serves as the foundation for the design and interpretation of the study; earlier research in problem solving, decision making, and attitude formation also contributes to the model and analysis. A self-report instrument collected evaluative responses from 32 end-users related to 1432 retrieved items, assessed against five characteristics of each item: topical, pertinence, utility, systematic, and motivational levels of relevance. The nominal nature of the data led to non-parametric statistical analyses, which indicated that end-user evaluation of retrieved items to resolve an information problem at hand is most likely a multi-stage process. That process appears to be a cognitive progression from topic to meaning (pertinence) to functionality (use). Each step in end-user evaluative processing engages a cognitive hierarchy of heuristics that includes consideration (of appropriate cues), differentiation (the positive or negative aspects of those cues considered), and aggregation (the combination of differentiated cue aspects needed to render an evaluative label of the item in relation to the information problem at hand). While individuals may differ in their judgments and evaluations of retrieved items, they appear to make those decisions by using consistent heuristic approaches.
Increasing Telecommunications Channel Capacity: Impacts on Firm Profitability
In calling for the deployment of high-capacity telecommunications infrastructures, the Clinton Administration is relying on market forces to drive demand toward self-sustaining development. There is little doubt that many firms will embrace the new telecommunications services for a variety of reasons including market differentiation, vertical market integration, and other organization-specific factors. However, there is little evidence at the firm level that adopting the use of increased-capacity telecommunications technologies is associated with improvements in firm profitability. This study seeks to identify the presence of impacts on firm income that can be associated with the adoption of T1 telecommunications services.
Information Censorship: A Comparative Analysis of Newspaper Coverage of the Jyllands-Posten Editorial Caricatures in Cross-Cultural Settings
The aim of this dissertation was to identify and examine the cultural information strategies and censorship patterns used to propagate the controversial issue of the caricatures in two separate cultural contexts. It explored the discourse used to cover this topic by one newspaper in a restrictive information context and by two newspapers in a liberal information context. Message propagation in a restrictive information environment was analyzed using the English daily Kuwait Times from the Middle East; the liberal information environment of the US was analyzed using two major dailies, the New York Times and the Philadelphia Inquirer. The study also identifies and elaborates on the themes and frames through which the discourse was presented, exposing the cultural ideologies and premises they represent. The topic was approached from an interdisciplinary position, testing the support for and applicability of Chatman's insider-outsider theory from information science, along with Noelle-Neumann's spiral of silence theory and Herman and Chomsky's propaganda model from mass communication. The study also presents a new model of information censorship, the circle of information censorship, which emphasizes conceptual issues that influence the selection and censorship of information.
Smoothing the information seeking path: Removing representational obstacles in the middle-school digital library.
Middle school students' interaction within a digital library is explored. Issues of interface features used, obstacles encountered, search strategies and techniques used, and representation obstacles are examined. A mechanism for evaluating users' descriptors is tested, and the effects on retrieval of augmenting the system's resource descriptions with these descriptors are explored. Transaction log data analysis (TLA) was used, with external corroborating achievement data provided by teachers, and the analysis was conducted using quantitative and qualitative methods. Coding schemes were developed for the failure analysis, the search strategies and techniques analysis, the extent-of-match analysis between terms in students' questions and their search terms, and the extent-of-match analysis between search terms and the controlled vocabulary. There are five chapters with twelve supporting appendixes. Chapter One presents an introduction to the problem and reviews the pilot study. Chapter Two presents the literature review and theoretical basis for the study. Chapter Three describes the research questions, hypotheses, and methods. Chapter Four presents findings. Chapter Five presents a summary of the findings and their support of the hypotheses, along with unanticipated findings, limitations, speculations, and areas of further research. Findings indicate that middle school users interact with the system in various sequences of patterns, and that user groups' interactions and scaffold use are influenced by the teacher's objectives for using the ADL. Users preferred single-word searches over Boolean, phrase, or natural language searches, and tended to repeat the same exact search instead of using the advanced scaffolds. A high percentage of users attempted at least one search that included spelling or typographical errors, punctuation, or sequentially repeated searches. Search terms matched the DQs in some instantiation in 54% of all searches. Terms used by the system to represent the resources do not adequately represent the user groups' information needs, however, …
Detecting the Presence of Disease by Unifying Two Methods of Remote Sensing.
There is currently no effective tool available to quickly and economically measure a change in landmass for biomedical professionals and environmental specialists. The purpose of this study is to structure and demonstrate a statistical change-detection method using remotely sensed data that can detect the presence of an infectious land-borne disease. Data sources included the Texas Department of Health database, which provided the types of infectious land-borne diseases and indicated the geographical area to study. Data collection included gathering images produced by digital orthophoto quadrangles, aerial videography, and Landsat. A method was also developed to statistically identify the severity of changes in the landmass over a three-year period. Data analysis used a unique statistical detection procedure to measure the severity of change in landmass when the disease was not present and when it was present. The statistical detection method was applied to two different remotely sensed platform types and again to two like remotely sensed platform types. The results indicated that when the statistical change detection method was used with two different types of remote sensing mediums (i.e., digital orthophoto quadrangle and aerial videography), the results were negative due to skewed and unreliable data. However, when two like remote sensing mediums were used (i.e., videography to videography and Landsat to Landsat), the results were positive and the data were reliable.
Information Literacy Skills in the Workplace: A Study of Police Officers
Information literacy has become more important as more information is produced and communication has become easier. Better information skills are vital for individuals working in governmental organizations as well as in the business sector. Employees are expected to be confident and competent in interacting with information in their workplaces in order to deliver better service to customers and to the public. This study examines differences in information literacy skills (ILS), computer literacy skills (CLS), and frequency of use of information sources (FIS) among police officers, based on their socio-demographic characteristics, namely education, departmental affiliation, rank, and experience. Information literacy process models developed in educational environments are combined to explore the information literacy process in the workplace. Bivariate and multivariate analyses indicated significant differences in ILS and CLS based on education, departmental affiliation, and rank, but no difference for experience. In addition, there were differences in FIS for all demographic variables except departmental affiliation. The findings may guide future researchers in developing new models of the information literacy process, and managers in police organizations in planning better training programs that take into account officers' information and computer literacy skills and use of information sources.
Using Financial Rankings to Identify Characteristics of Libraries Serving Highly Profitable Private Law Firms
The purpose of this study was to develop evidence of a relationship between law libraries and private law firm profitability that law library administrators can use when making strategic decisions that influence the value of their libraries. The highest-ranked administrator at each private law firm listed on the 2008 Am Law 200 was invited to complete an online benchmarking survey. The adjusted sample population totaled 179 firms; fifty-one valid surveys were completed, for a 28.5% response rate. Descriptive and statistical analyses were conducted using 26 independent variables (law library characteristics) and a single dependent variable, Revenue per Equity Partner, developed from data published for the Am Law 200. The most significant contributions of this study are the development of important law library financial and return-on-investment benchmarks, a listing of characteristics that have been empirically shown to impact law firm productivity, and identification of the optimum reporting structure for the law library administrator. Six characteristics positively impact Revenue per Equity Partner, including to whom the library administrator reports, the number of library staff per library, the range in hourly bill rates for library staff time, and the practice areas most often supported. Two monetary measures were also established: the cost benefit of an Am Law library to its firm is $1.00 : $1.68, and each Am Law library staff member is worth $295,000 in Revenue per Equity Partner to a firm. Law library practitioners can use the results to support evidence-based strategic decision making in the administration of any private law firm library. Faculty and students in law librarianship programs will have a greater understanding of how to manage law libraries and collections to provide maximum value to their law firms. Benefits to library and information science research include validation of the research design and benchmarking as a theoretical framework for …
Coyote Ugly Librarian: A Participant Observer Examination of Knowledge Construction in Reality TV.
Reality TV is the most popular genre of television programming today. The number of reality television shows has grown exponentially over the last fifteen years since the premiere of The Real World in 1992. Although reality TV uses styles similar to those used in documentary film, the “reality” of the shows is questioned by critics and viewers alike. The current study focuses on the “reality” that is presented to viewers and on how that “reality” is created and may differ from what the participants of the shows experience. I appeared on two reality shows, Faking It and That's Clever, and learned a great deal as a participant observer. Within the study, I outline my experience and demonstrate how editing changed the reality I experienced into what was presented to the viewers. O'Connor's (1996) representation context web serves as a model for the realities created through reality television. People derive various benefits from watching reality TV: besides its obvious entertainment value, viewers also gather information via this type of programming, and they want to see real people on television reacting to unusual circumstances without the use of scripts. By surveying reality TV show viewers and participants, this study gives insight into how real the viewers believe the shows are and how authentic they actually are. If these shows are presented as reality, viewers are probably taking what they see as historical fact. The results of the study indicate more must be done so that the “reality” of reality TV does not misinform viewers.
Supporting Computer-Mediated Collaboration through User Customized Agents
This research investigated a neglected problem: interruption of groups by agent advisory systems. The question was whether interruption by the agent advisory system was beneficial. A survey of the literature in four areas is included in this dissertation: agents, online help, computer-supported cooperative work (CSCW), and awareness in CSCW. Based on the review, a human-subjects experiment was conducted to investigate whether the style of agent advisory interface improved the performance of group members. There were three sets of groups: a control set that did not have advisory agents, a set that had system-provided advisory agents, and a set that had group-customized advisory agents. The groups worked together using a CSCW application developed with GroupKit, a CSCW toolkit. The groups with group-customized advisory agents used an Agent Manager application to define advisory agents that would give them advice as they worked in the CSCW application. The findings showed that the type of advisory agent did not significantly influence the performance of the groups: the groups with customized agents performed slightly better than the other groups, but the difference was not statistically significant. When notified that advice had been issued, groups with customized agents and groups with provided agents seldom accessed the agent's advice. General design guidelines for agent interruption remain an open problem, and future work is needed; the definitive solution may be some mixture of the three known individual design solutions.
Middle School Students in Virtual Learning Environments
This ethnographic study examined middle school students engaged in a virtual learning environment used in concert with face-to-face instruction in order to complete a collaborative research project. Thirty-eight students from three eighth grade classes participated in this study where data were collected through observation of student work within the virtual learning environment, an online survey, and focus group sessions with students involved in the project. Results indicated students found the virtual learning environment to be valuable as a platform to complete a collaborative research assignment because of portability, ease of use, and organization. Embedded resources within the environment were helpful because of the convenience. Other people, including peers and teachers, were the preferred source of help when problems navigating the environment or finding information arose. Students communicated within the virtual learning environment as a social outlet, a way to check in, and a means to offer content related comments. Ideally the study's findings will give insight into student experiences in a virtual learning environment in order to help educators design more effective learning experiences and incorporate useful supports within such environments.
An E-government Readiness Model
The purpose of this study is to develop an e-government readiness model and to test this model. Consistent with this model, several instruments, IS assessment (ISA), IT governance (ITG), and organization-IS alignment (IS-ALIGN), are examined for their ability to measure the readiness of one organization for e-government and to test the instruments' fit in the proposed e-government model. The ISA instrument used is the result of adapting and combining the IS-SERVQUAL instrument proposed by Van Dyke, Kappelman, and Prybutok (1997) and the IS-SUCCESS instrument developed by Kappelman and Chong (2001) for the City of Denton (COD) project at UNT. The IS Success Model was first proposed by DeLone and McLean (1992), but they did not validate it. The ITG instrument was based on the goals of the COD project for IT governance and was developed by Sanchez and Kappelman (2001) from UNT, who also developed the IS-ALIGN instrument for the COD project. It is an instrument based on the Malcolm Baldrige National Quality Award (MBNQA) that measures how effectively a government organization utilizes IT to support its various objectives. The EGOV instrument was adapted from the study of the Action-Audience Model developed by Koh and Balthazard (1997) to measure how well a government organization is prepared to usher in e-government in terms of various success factors at the planning, system, and data levels. An online survey was conducted with employees of the City of Denton, Texas. An invitation letter to participate in the survey was sent to the 1100 employees of the City of Denton via email; 339 responses were received, yielding a response rate of 31%. About 168 responses were discarded because they were incomplete and had missing values, leaving 171 usable surveys, for a usable set of responses that had a response …
Assessment of a Library Learning Theory by Measuring Library Skills of Students Completing an Online Library Instruction Tutorial
This study is designed to reveal whether students acquire the domains and levels of library skills discussed in a learning library skills theory after participating in an online library instruction tutorial. The acquisition of the library skills is demonstrated through a review of the scores on online tutorial quizzes, responses to a library skills questionnaire, and bibliographies of course research papers. Additional areas to be studied are the characteristics of the participants enrolled in traditional and online courses at a community college and the possible influence of these characteristics on the demonstrated learning of library skills. Multiple measurement methods, identified through assessment of library instruction literature, are used to verify the effectiveness of the library skills theory and to strengthen the validity and reliability of the study results.
Perceived attributes of diffusion of innovation theory as predictors of Internet adoption among faculty members of Imam Mohammed Bin Saud University.
The Internet is the most common communication and research tool worldwide, and perusal of the World Wide Web quickly reveals the variety of information available. Internet adoption can be considered one of the most important events of the late 20th century. In academic environments today, Internet use among faculty members has widely expanded, with professors now integrating Internet technology into classroom activities. Imam Muhammad Bin Saud Islamic University (IMSU) is a pioneering public university in Saudi Arabia. Until recently, some faculty members at IMSU were unable to access the Internet through the university, and it is important to study the effects of this delay on faculty members' research and academic activities. This study identified statistically significant differences in the demographic characteristics of Internet adopters and non-adopters among faculty members at IMSU, examined whether faculty members' perceptions of the Internet affected adoption, determined whether the university administration's decisions impacted faculty members' decisions to adopt the Internet, identified factors motivating faculty members to adopt the Internet, identified obstacles influencing faculty members' decisions to use the Internet, and determined whether innovation characteristics as perceived by faculty members predicted Internet adoption. Using Rogers' diffusion of innovation theory, the influence of eight attributes on Internet adoption among IMSU faculty members was examined. Multiple regression and chi-square techniques were used to analyze the data and answer the research questions. Statistically significant differences were identified between Internet adopters and non-adopters regarding gender, age, academic rank, discipline, and English proficiency. The data revealed that 54.7% of IMSU faculty members used the Internet for research and academic activities twice a month or less, indicating a low Internet adoption rate. Statistically significant differences were also noted between adopters and non-adopters relative to income level and English proficiency. Multiple regression analysis showed that all attributes of innovation individually predicted Internet adoption. The combination of all attributes indicated the …
User Acceptance of North Central Texas Fusion Center System by Law Enforcement Officers
The September 11 terrorist attacks pointed out the lack of information sharing between law enforcement agencies as a potential threat to sound law enforcement in the United States. Therefore, many law enforcement agencies, as well as the federal government, have been initiating information sharing systems to eradicate the problem. One of the systems established by Homeland Security is the North Central Texas Fusion Center (NCTFC). This study evaluates the NCTFC by utilizing user acceptance methodology, with the unified theory of acceptance and use of technology as its theoretical framework. Within the study, the user acceptance literature is examined and various models and theories are discussed. Furthermore, brief information regarding the intelligence work done by law enforcement agencies is provided, and, in addition to the NCTFC, several major law enforcement information systems are introduced. The data for this study come from users of the NCTFC across the north central Texas region; surveys and interviews are used to triangulate the data. The study finds that performance expectancy and effort expectancy are important indicators of system use, and that outreach and needs assessment are important factors in establishing such systems. The results offer valuable input for NCTFC administrators, law enforcement officials, and future researchers.
The Situational Small World of a Post-disaster Community: Insights into Information Behaviors after the Devastation of Hurricane Katrina in Slidell, Louisiana
Catastrophes like Katrina destroy a community's critical infrastructure, a situation that instigates several dilemmas. Immediately, the community experiences information disruption within the community, as well as between the community and the outside world. The inability to communicate because of physical or virtual barriers to information instigates instant isolation. Prolonged, this scarcity of information becomes an information poverty spell, placing hardship on a community accustomed to easily accessible and applicable information. Physical devastation causes a scarcity of what Abraham Maslow calls basic survival needs (physiological, security, and social), a regression from the need to self-actualize and to meet intellectual and aesthetic needs. Because needs regress, the type of information required to meet them also regresses, to information regarding survival needs. Regressed information needs require altered information behaviors: altered methods and means of meeting the information needs of the post-disaster situation. Situational information behavior follows new mores, altered norms constructed for the post-disaster situation. To justify the unconventional, situational social norms, residents must adjust their beliefs about appropriate behavior. Situational beliefs support situational social norms, and situational information behaviors prevail. Residents find they must trust strangers, create makeshift messaging systems, and, in some cases, disregard the law to meet their post-disaster survival needs.
The Effect of Personality Type on the Use of Relevance Criteria for Purposes of Selecting Information Sources.
Even though information scientists generally recognize that relevance judgments are multidimensional and dynamic, there is still discussion and debate regarding the degree to which certain internal (cognition, personality) and external (situation, social relationships) factors affect the use of criteria in reaching those judgments. Much of the debate centers on the relationship of those factors to the criteria and on reliable methods for measuring those relationships. This study researched the use of relevance criteria to select an information source by undergraduate students whose task was to create a course schedule for a semester. During registration periods, when creating their semester schedules, students filled out a two-part questionnaire; after completing it, they completed a Myers-Briggs Type Indicator instrument to determine their personality type. Data were analyzed using one-way ANOVAs and chi-square tests. A positive correlation exists between personality type, as expressed by the MBTI, and the information source selected as most important by the subject. A correlation also exists between personality type and relevance criteria use, and the correlation is stronger for some criteria than for others. Therefore, one can expect personality type to have an effect on the use of relevance criteria while selecting information sources.
CT3 as an Index of Knowledge Domain Structure: Distributions for Order Analysis and Information Hierarchies
The problem with which this study is concerned is articulating all possible CT3 and KR21 reliability measures for every case of a 5x5 binary matrix (32,996,500 possible matrices). The study has three purposes. The first purpose is to calculate CT3 for every matrix and compare the results to the proposed optimum range of .3 to .5. The second purpose is to compare the results from the calculation of KR21 and CT3 reliability measures. The third purpose is to calculate CT3 and KR21 on every strand of a class test whose item set has been reduced using the difficulty strata identified by Order Analysis. The study was conducted by writing a computer program to articulate all possible 5 x 5 matrices. The program also calculated CT3 and KR21 reliability measures for each matrix. The nonparametric technique of Order Analysis was applied to two sections of test items to stratify the items into difficulty levels. The difficulty levels were used to reduce the item set from 22 to 9 items. All possible strands or chains of these items were identified so that both reliability measures (CT3 and KR21) could be calculated. One major finding of this study indicates that .3 to .5 is a desirable range for CT3 (cumulative p=.86 to p=.98) if cumulative frequencies are measured. A second major finding is that the KR21 reliability measure produced an invalid result more than half the time. The last major finding is that CT3, rescaled to range between 0 and 1, supports De Vellis' guidelines for reliability measures. The major conclusion is that CT3 is a better measure of reliability since it considers both inter- and intra-item variances.
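For illustration, the sketch below computes the KR-21 reliability measure for a single hypothetical 5x5 binary (persons x items) score matrix of the kind the study enumerates. The matrix values are made up, and CT3, the study's focal index, is not reproduced here since its formula is not given in the abstract.

```python
# Sketch: KR-21 reliability for one hypothetical 5x5 binary score matrix (rows = persons, cols = items).
# CT3 is not computed here; only the standard KR-21 formula is shown.
import numpy as np

scores = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

k = scores.shape[1]            # number of items
totals = scores.sum(axis=1)    # each person's total score
M = totals.mean()              # mean total score
var = totals.var(ddof=1)       # sample variance of total scores

kr21 = (k / (k - 1)) * (1 - (M * (k - M)) / (k * var))
print(f"KR-21 = {kr21:.3f}")   # ~0.65 for this hypothetical matrix
```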
A Comparison of Communication Motives of On-Site and Off-Site Students in Videoconference-Based Courses
The objective of this investigation is to determine whether student site location in an instructional videoconference is related to students' motives for communicating with their instructor. The study is based, in part, on the work of Martin et al., who identify five separate student-teacher communication motives. These motives, or dimensions, are termed relational, functional, excuse, participation, and sycophancy, and are measured by a 30-item questionnaire. Several communication-related theories were used to predict differences between on-site and off-site students: media richness theory was used, foundationally, to explain differences between mediated and face-to-face communication, and other theories such as uncertainty reduction theory were used in conjunction with it to predict specific differences. Two hundred eighty-one completed questionnaires were obtained from Education and Library and Information Science students in 17 separate course sections employing interactive video at the University of North Texas during the Spring and Summer semesters of the 2001/2002 school year. This study concludes that off-site students in an instructional videoconference are more likely than their on-site peers to report being motivated to communicate with their instructor for participation reasons. If off-site students are more motivated than on-site students to communicate as a means to participate, then it may be important for instructors to watch for actual differences in participation levels, and instructors may need to be well versed in pedagogical methods that attempt to increase participation. The study also suggests that current teaching methods employed in interactive video environments may be adequate with regard to functional, excuse-making, relational, and sycophantic communication.
The Refusal Problem and Nonresponse in On-Line Organizational Surveys
Although the primary role of the computer has been in the processing and analysis of survey data, it has increasingly been used in data collection. Computer surveys are not exempt from a common problem: some people refuse to participate. Many researchers and practitioners indicate the refusal problem is smaller for computer surveys, perhaps due to the novelty of the method; what has not been investigated is the refusal problem when on-line surveys are no longer novel. This research study examines the use of one form of computer-assisted data collection, the electronic or on-line survey, as an organizational research tool. The study utilized historical response data and administered an on-line survey to individuals known to be cooperative or uncooperative in other on-line surveys. It investigated nonresponse bias and response effects of typical responders, periodic participants, and typical refusers within a sample of corporate employees in a computer-interactive interviewing environment utilizing on-line surveys. The items measured included participation, respondent characteristics, response speed, interview length, perceived versus actual interview length, quantity of data, item nonresponse, item response bias, consistency of response, extremity of response, and early and late response. It also evaluated factors reported as important when deciding to participate, preferred data collection method, and preferred time of display. Past participation, attitudes toward on-line organizational surveys, response burden, and response error were assessed. An overall completion rate of 55.7% was achieved. Every effort was made to encourage the cooperation of all groups, including an invitation to participate, a token incentive, on-line pre-notification, 800-number support, two on-line reminders, support for temporary exit, and a paper follow-up survey. A significant difference in the participation of the three groups was found. Only three demographic variables were found to be significant. No significant differences were found in speed of response, interview length, quantity, item nonresponse, item response bias, …
Effects of a Selective Dissemination of Information Service on the Environmental Scanning Process of an Academic Institution
A case study was conducted to document the changes in the attitudes of academic administrators at Langston University with regard to the use of various types of information sources for strategic planning. Environmental scanning of external factors was accomplished for six months through the use of a selective dissemination of information (SDI) service. Pre- and post-assessments of the perceived reliance on, satisfaction with, and adequacy of personal and library-type information sources were conducted. Findings indicated continued reliance on personal sources. No statistically significant changes were found in perceived adequacy levels in the use of library-type materials, while the overall satisfaction level for the use of library-type information sources and retrieval methods showed a significant increase. Further study is recommended that would utilize additional information technology and other academic institutions.
A Common Representation Format for Multimedia Documents
Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text, and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems, and they have progressed quickly due in part to rapid progress in computer processing speed. Retrieval of multimedia documents formerly was divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow the retrieval process to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multidimensional scaling, which allows multimedia objects to be represented on a two-dimensional graph as vectors; the distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features. This effect appears to hold true even when the available …
Solutions for Dynamic Channel Assignment and Synchronization Problem for Distributed Wireless Multimedia System
Recent advances in mobile computing and distributed multimedia systems allow mobile hosts (clients) to access wireless multimedia data anywhere and at any time. In accessing multimedia information on distributed multimedia servers from wireless personal communication service systems, a channel assignment problem and synchronization problems must be solved efficiently. Demand for mobile telephone service has been growing rapidly, while the electromagnetic spectrum allocated for this purpose remains limited. Any solution to the channel assignment problem is subject to this limitation, as well as to the interference constraint between adjacent channels in the spectrum. Channel allocation schemes provide flexible and efficient access to bandwidth in wireless and mobile communication systems. This dissertation proposes both an efficient distributed algorithm for dynamic channel allocation based on a mutual exclusion model and an efficient distributed synchronization algorithm using a quasi-sink for wireless and mobile multimedia systems, to ensure and facilitate mobile client access to multimedia objects. The algorithms' performance with several channel systems using different types of call arrival patterns is determined analytically, and a set of simulation experiments evaluates the performance of the scheme using message complexity and buffer usage at each frame arrival time.
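The dissertation's mutual-exclusion-based allocation algorithm is not reproduced here. As a generic illustration of the interference constraint it addresses, the sketch below greedily assigns channels to cells so that neighboring cells never share a channel; the cell adjacency graph and channel count are hypothetical.

```python
# Sketch: greedy channel assignment honoring the adjacent-cell interference constraint.
# Generic illustration only; not the dissertation's distributed mutual-exclusion algorithm.
neighbors = {               # hypothetical cell adjacency graph
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}
channels = [1, 2, 3]        # limited spectrum: three available channels

assignment = {}
for cell in sorted(neighbors):
    used = {assignment[n] for n in neighbors[cell] if n in assignment}
    free = [ch for ch in channels if ch not in used]
    assignment[cell] = free[0] if free else None   # None would mean a blocked call
print(assignment)           # e.g. {'A': 1, 'B': 2, 'C': 3, 'D': 1}
```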
Empowering Agent for Oklahoma School Learning Communities: An Examination of the Oklahoma Library Improvement Program
The purposes of this study were to determine the initial impact of the Oklahoma Library Media Improvement Grants on Oklahoma school library media programs; assess whether the Oklahoma Library Media Improvement Grants continue to contribute to Oklahoma school learning communities; and examine possible relationships between school library media programs and student academic success. It also seeks to document the history of the Oklahoma Library Media Improvement Program 1978 - 1994 and increase awareness of its influence upon the Oklahoma school library media programs. Methods of data collection included: examining Oklahoma Library Media Improvement Program archival materials; sending a survey to 1703 school principals in Oklahoma; and interviewing Oklahoma Library Media Improvement Program participants. Data collection took place over a one year period. Data analyses were conducted in three primary phases: descriptive statistics and frequencies were disaggregated to examine mean scores as they related to money spent on school library media programs; opinions of school library media programs; and possible relationships between school library media programs and student academic achievement. Analysis of variance was used in the second phase of data analysis to determine if any variation between means was significant as related to Oklahoma Library Improvement Grants, time spent in the library media center by library media specialists, principal gender, opinions of library media programs, student achievement indicators, and the region of the state in which the respondent was located. The third phase of data analysis compared longitudinal data collected in the 2000 survey with past data. The primary results indicated students in Oklahoma from schools with a centralized library media center, served by a full-time library media specialist, and the school having received one or more Library Media Improvement Grants scored significantly higher academically than students in schools not having a centralized library media center, not served by a …
The Validity of Health Claims on the World Wide Web: A Case Study of the Herbal Remedy Opuntia
The World Wide Web has become a significant source of medical information for the public, but there is concern that much of the information is inaccurate, misleading, and unsupported by scientific evidence. This study analyzes the validity of health claims on the World Wide Web for the herbal remedy Opuntia using an evidence-based approach, and supports the observation that individuals must critically assess health information in this relatively new medium of communication. A systematic search of Web sites relating to herbal remedies was conducted by means of nine search engines and online resources, and specific sites providing information on the cactus herbal remedy from the genus Opuntia were retrieved. The validity of therapeutic health claims on the Web sites was checked by comparison with reports in the scientific literature, which were subjected to two established quality assessment rating instruments. A total of 184 Web sites from a variety of sources were retrieved and evaluated, and 98 distinct health claims were identified. Fifty-three scientific reports were retrieved to validate claims; 25 involved human subjects, and 28 involved animal or laboratory models. Only 33 (34%) of the claims were addressed in the scientific literature. For 3% of the claims, evidence from the scientific reports was conflicting or contradictory. Of the scientific reports involving human subjects, none met the predefined criteria for high quality as determined by the quality assessment rating instruments. Two-thirds of the claims were unsupported by scientific evidence and were based on folklore or indirect evidence from related sources. Information on herbal remedies such as Opuntia is well represented on the World Wide Web. Health claims on Web sites were numerous and varied widely in subject matter. The determination of the validity of information about claims made for herbals on the Web would help individuals assess their value in medical treatment. However, the Web is conducive to dubious …
The Cluster Hypothesis: A Visual/Statistical Analysis
By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in which inter-item similarity coefficients for full text documents are scaled into a single dimension and then rendered as a two-dimensional visualization; the clustering behavior of relevant documents within these unidimensionally scaled representations is examined via visual and statistical methods. Taken as a whole, results of both experiments lend strong though not unqualified support to the Cluster Hypothesis. In Experiment 1, semantically meaningful 6.6-word document surrogates systematically conform to the predictions of the Cluster Hypothesis. In Experiment 2, the majority of the unidimensionally scaled datasets exhibit a marked nonuniformity of distribution of relevant documents, further supporting the Cluster Hypothesis. Results of …
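Both experiments rest on multidimensional scaling of an interdocument similarity matrix into a low-dimensional map. As a rough, hypothetical illustration of that step only, the sketch below scales a small invented similarity matrix into two dimensions with scikit-learn; the six "documents" and their similarity values are fabricated, and the study's TREC/AP data and visual-statistical analyses are not reproduced.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy interdocument similarity matrix (invented values, two obvious clusters).
sim = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.2, 0.1],
    [0.8, 1.0, 0.9, 0.2, 0.1, 0.2],
    [0.7, 0.9, 1.0, 0.1, 0.2, 0.1],
    [0.1, 0.2, 0.1, 1.0, 0.8, 0.7],
    [0.2, 0.1, 0.2, 0.8, 1.0, 0.9],
    [0.1, 0.2, 0.1, 0.7, 0.9, 1.0],
])
dissim = 1.0 - sim   # convert similarity to dissimilarity for scaling

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
for i, (x, y) in enumerate(coords):
    print(f"doc{i}: ({x:+.2f}, {y:+.2f})")   # nearby points suggest a cluster
```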
Modeling Utilization of Planned Information Technology
Implementations of information technology solutions to address specific information problems are only successful when the technology is utilized. The antecedents of technology use involve user, system, task, and organization characteristics, as well as externalities which can affect all of these entities. However, measurement of the interaction effects between these entities can act as a proxy for individual attribute values. A model is proposed which, based upon evaluation of these interaction effects, can predict technology utilization. This model was tested with systems being implemented at a pediatric health care facility. Results from this study provide insight into the relationships between the antecedents of technology utilization. Specifically, task time provided significant direct causal effects on utilization. Indirect causal effects were identified in the task value and perceived utility constructs. Perceived utility, along with organizational support, also provided direct causal effects on user satisfaction. Task value also impacted user satisfaction in an indirect fashion. Finally, the results provide a predictive model and taxonomy of variables which can be applied to predict or manipulate the likelihood of utilization for planned technology.
Identifying At-Risk Students: An Assessment Instrument for Distributed Learning Courses in Higher Education
The current period of rapid technological change, particularly in the area of mediated communication, has combined with new philosophies of education and market forces to bring upheaval to the realm of higher education. Technical capabilities exceed our knowledge of whether expenditures on hardware and software lead to corresponding gains in student learning. Educators do not yet possess sophisticated assessments of what we may be gaining or losing as we widen the scope of distributed learning. The purpose of this study was not to draw sweeping conclusions with respect to the costs or benefits of technology in education. The researcher focused on a single issue involved in educational quality: assessing the ability of a student to complete a course. Previous research in this area indicates that attrition rates are often higher in distributed learning environments. Educators and students may benefit from a reliable instrument to identify those students who may encounter difficulty in these learning situations. This study is aligned with research focused on the individual engaged in seeking information, assisted or hindered by the capabilities of the computer information systems that create and provide access to information. Specifically, the study focused on the indicators of completion for students enrolled in video conferencing and Web-based courses. In the final version, the Distributed Learning Survey encompassed thirteen indicators of completion. The results of this study of 396 students indicated that the Distributed Learning Survey represented a reliable and valid instrument for identifying at-risk students in video conferencing and Web-based courses where the student population is similar to the study participants. Educational level, GPA, credit hours taken in the semester, study environment, motivation, computer confidence, and the number of previous distributed learning courses accounted for most of the predictive power in the discriminant function based on student scores from the survey.
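The analysis reported here builds a discriminant function from survey indicators to separate likely completers from at-risk students. As a minimal, hypothetical sketch of that kind of step, the code below fits a linear discriminant function on invented values for a few of the named indicators (GPA, credit hours, motivation, computer confidence); it is not the Distributed Learning Survey data or the study's exact variable set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Fabricated indicator values: [GPA, credit hours, motivation 1-5, confidence 1-5].
X = np.array([
    [3.5,  9, 4, 5], [2.1, 15, 2, 2], [3.8,  6, 5, 4],
    [2.5, 12, 3, 2], [3.2,  9, 4, 4], [1.9, 18, 1, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = completed the course, 0 = did not

lda = LinearDiscriminantAnalysis().fit(X, y)
new_student = [[2.4, 14, 2, 3]]
print("predicted completion:", lda.predict(new_student)[0])
print("completion probability:", lda.predict_proba(new_student)[0][1].round(2))
```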
MEDLINE Metric: A method to assess medical students' MEDLINE search effectiveness
Medical educators advocate the need for medical students to acquire information management skills, including the ability to search the MEDLINE database. There has been no published validated method available to use for assessing medical students' MEDLINE information retrieval skills. This research proposes and evaluates a method, designed as the MEDLINE Metric, for assessing medical students' search skills. MEDLINE Metric consists of: (a) the development, by experts, of realistic clinical scenarios that include highly constructed search questions designed to test defined search skills; (b) timed tasks (searches) completed by subjects; (c) the evaluation of search results; and (d) instructive feedback. A goal is to offer medical educators a valid, reliable, and feasible way to judge mastery of information searching skill by measuring results (search retrieval) rather than process (search behavior) or cognition (knowledge about searching). Following a documented procedure for test development, search specialists and medical content experts formulated six clinical search scenarios and questions. One hundred and forty-five subjects completed the six-item test under timed conditions. Subjects represented a wide range of MEDLINE search expertise. One hundred twenty complete cases were used, representing 53 second-year medical students (44%), 47 fourth-year medical students (39%), and 20 medical librarians (17%). Data related to educational level, search training, search experience, confidence in retrieval, difficulty of search, and score were analyzed. Evidence supporting the validity of the method includes the agreement by experts about the skills and knowledge necessary to successfully retrieve information relevant to a clinical question from the MEDLINE database. Also, the test discriminated among different performance levels. There were statistically significant, positive relationships between test score and level of education, self-reported previous MEDLINE training, and self-reported previous search experience. The findings from this study suggest that MEDLINE Metric is a valid method for constructing and administering a performance-based test to identify …
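One piece of validity evidence cited is a positive relationship between test score and level of education. The abstract does not name the statistical test, so the sketch below assumes a rank correlation on invented ordinal data (education coded 1 = second-year student, 2 = fourth-year student, 3 = librarian) purely to illustrate how such a relationship could be checked.

```python
from scipy import stats

# Fabricated data: education level versus MEDLINE Metric-style test score.
education = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
score     = [2, 3, 3, 4, 3, 4, 5, 4, 5, 6, 5, 6]

rho, p = stats.spearmanr(education, score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")   # positive rho -> higher levels score higher
```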
An Experimental Study of Teachers' Verbal and Nonverbal Immediacy, Student Motivation, and Cognitive Learning in Video Instruction
This study used an experimental design and a direct test of recall to provide data about teacher immediacy and student cognitive learning. Four hypotheses and a research question addressed two research problems: first, how verbal and nonverbal immediacy function together and/or separately to enhance learning; and second, how immediacy affects cognitive learning in relation to student motivation. These questions were examined in the context of video instruction to provide insight into distance learning processes and to ensure maximum control over experimental manipulations. Participants (N = 347) were drawn from university students in an undergraduate communication course. Students were randomly assigned to groups, completed a measure of state motivation, and viewed a 15-minute video lecture containing part of the usual course content delivered by a guest instructor. Participants were unaware that the video instructor was actually performing one of four scripted manipulations reflecting higher and lower combinations of specific verbal and nonverbal cues, representing the four cells of the 2x2 research design. Immediately after the lecture, students completed a recall measure, consisting of portions of the video text with blanks in the place of key words. Participants were to fill in the blanks with exact words they recalled from the videotape. Findings strengthened previous research associating teacher nonverbal immediacy with enhanced cognitive learning outcomes. However, higher verbal immediacy, in the presence of higher and lower nonverbal immediacy, was not shown to produce greater learning among participants in this experiment. No interaction effects were found between higher and lower levels of verbal and nonverbal immediacy. Recall scores were comparatively low in the presence of higher verbal and lower nonverbal immediacy, suggesting that nonverbal expectancy violations may have hindered cognitive learning. Student motivation was not found to be a significant source of error in measuring immediacy's effects, and no interaction effects were detected …
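The design is a 2x2 factorial (higher/lower verbal immediacy crossed with higher/lower nonverbal immediacy) with recall as the outcome, which is conventionally analyzed for main and interaction effects. The sketch below is a hypothetical two-way ANOVA on fabricated recall scores, intended only to show the shape of such an analysis, not the study's data or exact procedure.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Fabricated recall scores for the four cells of a 2x2 immediacy design.
data = pd.DataFrame({
    "verbal":    ["hi"] * 8 + ["lo"] * 8,
    "nonverbal": (["hi"] * 4 + ["lo"] * 4) * 2,
    "recall":    [18, 20, 19, 21, 12, 13, 11, 14,
                  17, 19, 18, 20, 10, 12, 11, 9],
})
model = ols("recall ~ C(verbal) * C(nonverbal)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction term
```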
Creating a Criterion-Based Information Agent Through Data Mining for Automated Identification of Scholarly Research on the World Wide Web
This dissertation creates an information agent that correctly identifies Web pages containing scholarly research approximately 96% of the time. It does this by analyzing each Web page against a set of criteria and then using a classification tree to arrive at a decision. The criteria were gathered from the literature on selecting print and electronic materials for academic libraries. A Delphi study was conducted with an international panel of librarians to expand and refine the criteria until a list of 41 operationalizable criteria was agreed upon. A Perl program was then designed to analyze a Web page and determine a numerical value for each criterion. A large collection of Web pages was gathered, comprising 5,000 pages that contain the full work of scholarly research and 5,000 random pages, representative of user searches, which do not contain scholarly research. Datasets were built by running the Perl program on these Web pages. The datasets were split into model-building and testing sets. Data mining was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. The models were created with the model datasets and then tested against the test dataset. Precision and recall were used to judge the effectiveness of each model. In addition, a set of pages that were difficult to classify because of their similarity to scholarly research was gathered and classified with the models. The classification tree created the most effective classification model, with a precision ratio of 96% and a recall ratio of 95.6%. However, logistic regression created a model that was able to correctly classify more of the problematic pages. This agent can be used to create a database of scholarly research published on the Web. In addition, the technique can be used to create a …
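The core modeling step, training a classification tree on criterion scores and judging it by precision and recall, can be sketched generically. The code below uses synthetic stand-in data (41 numeric features echoing the 41 criteria, but otherwise invented) rather than the Delphi criteria scores or the 10,000-page corpus, and scikit-learn's tree rather than the original tools.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-in: 2,000 "pages" described by 41 numeric criterion scores.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_train, y_train)
pred = tree.predict(X_test)
print("precision:", round(precision_score(y_test, pred), 3))
print("recall:   ", round(recall_score(y_test, pred), 3))
```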
Korean Studies in North America 1977-1996: A Bibliometric Study
This research is a descriptive bibliometric study of the literature of the field of Korean studies. Its goal is to quantitatively describe the literature and serve as a model for such research in other area studies fields. This study analyzed 193 source articles and the 7,166 citations in those articles in four representative Korean and Asian studies journals published in North America from 1977 to 1996. The journals included in this study were Korean Studies (KS), the Journal of Korean Studies (JKS), the Journal of Asian Studies (JAS), and the Harvard Journal of Asiatic Studies (HJAS). Subject matter and author characteristics of the source articles were examined, along with various characteristics such as the form, date, language, country of origin, subject, key authors, and key titles of the literature cited in the source articles. Research in Korean studies falls within fourteen broad disciplines but is concentrated in only a few of them. Americans have been the most active authors in Korean studies, followed closely by authors of Korean ethnicity. Monographic literature was used most. The mean age of publications cited was 20.87 years and the median age was 12 years. The Price Index of Korean studies as a whole is 21.9 percent. Sources written in English were cited most (47.1%), and references to Korean-language sources amounted to only 34.9% of all sources. In general, authors preferred sources published in their own countries. Sources on history were cited most by other disciplines. No significant core authors were identified, nor was any significant core literature. This study indicates that Korean studies is still evolving. Some ways of promoting research in less studied disciplines and of facilitating formal communication between Korean scholars in Korea and Koreanists in North America need to be sought in order to promote well-balanced development in the field. This study …
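Two of the reported measures, citation age and the Price Index, are simple to compute once citation years are known. The sketch below uses invented citation years and assumes a common definition of the Price Index (the share of references five years old or younger at the time of the citing article); it is an illustration of the arithmetic, not a recomputation of the study's 7,166 citations.

```python
from statistics import mean, median

# Fabricated example: one 1990 source article and the years of its references.
source_year = 1990
cited_years = [1988, 1987, 1985, 1978, 1970, 1989, 1960, 1983, 1986, 1950]

ages = [source_year - y for y in cited_years]
price_index = sum(1 for a in ages if a <= 5) / len(ages)   # assumed 5-year window

print("mean citation age:  ", round(mean(ages), 2))
print("median citation age:", median(ages))
print("Price Index:        ", f"{price_index:.1%}")
```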
Information Seeking in a Virtual Learning Environment
Duplicating a time series study done by Kuhlthau and associates in 1989, this study examines the applicability of the Information Search Process (ISP) Model in the context of a virtual learning environment. This study confirms that students given an information seeking task in a virtual learning environment do exhibit the stages indicated by the ISP Model. The six-phase ISP Model is shown to be valid for describing the different stages of cognitive, affective, and physical tasks individuals progress through when facing a situation where they must search for information to complete an academic task in a virtual learning environment. The findings in this study further indicate there is no relationship between the amount of computer experience subjects possess and demonstrating the patterns of thoughts, feelings, and actions described by the ISP Model. The study demonstrates the ISP Model to be independent of the original physical library environments where the model was developed. An attempt is made to represent the ISP model in a slightly different manner that provides more of the sense of motion and interaction among the components of thoughts, feelings, and action than is currently provided for in the model. The study suggests that the development of non-self-reporting data collection techniques would be useful in complementing and furthering research to enhance and refine the representation of the ISP Model. Additionally, expanding the research to include the examination of group interaction is called for to enhance the ISP Model and develop further applications that could potentially aid educational delivery in all types of learning environments.
A Personal Documentation System for Scholars: A Tool for Thinking
This exploratory research focused on a problem stated years ago by Vannevar Bush: "The problem is how creative men think, and what can be done to help them think." The study explored the scholarly work process and the use of computer tools to augment thinking. Based on a review of several related literatures, a framework of 7 major categories and 28 subcategories of scholarly thinking was proposed. The literature was used to predict problems scholars have in organizing their information, potential solutions, and specific computer tool features to augment scholarly thinking. Info Select, a personal information manager with most of these features (text and outline processing, sophisticated searching and organizing), was chosen as a potential tool for thinking. The study looked at how six scholars (faculty and doctoral students in social science fields at three universities) organized information using Info Select as a personal documentation system for scholarly work. These multiple case studies involved four in-depth, focused interviews, written evaluations, direct observation, and analysis of computer logs and files collected over a 3- to 6-month period. A content analysis of interviews and journals supported the proposed AfFORD-W taxonomy: Scholarly work activities consisted of Adding, Filing, Finding, Organizing, Reminding, and Displaying information to produce a Written product. Very few activities fell outside this framework, and activities were distributed evenly across all categories. Problems, needs, and likes mentioned by scholars, however, clustered mainly in the filing, finding, and organizing categories. All problems were related to human memory. Both predictions and research findings imply a need for tools that support information storage and retrieval in personal documentation systems, for references and notes, with fast and easy input of source material. A computer tool for thinking should support categorizing and organizing, reorganizing and transporting information. It should provide a simple search engine and support …
A Theory for the Measurement of Internet Information Retrieval
The purpose of this study was to develop and evaluate a measurement model for Internet information retrieval strategy performance evaluation whose theoretical basis is a modification of the classical measurement model embodied in the Cranfield studies and their progeny. Though not the first, the Cranfield studies were the most influential of the early evaluation experiments. The general problem with this model was, and continues to be, the subjectivity of the concept of relevance. In cyberspace, information scientists are using quantitative measurement models for evaluating information retrieval performance that are based on the Cranfield model. This research modified that model by incorporating end-user relevance judgments rather than objective relevance judgments, and by adopting a fundamental unit of measure developed for the cyberspace of Internet information retrieval rather than recall- and precision-type measures. The proposed measure, the Content-bearing Click (CBC) Ratio, was developed as a quantitative measure reflecting the performance of an Internet IR strategy. Since the hypertext "click" is common to many Internet IR strategies, it was chosen as the fundamental unit of measure rather than the "document." The CBC Ratio is a ratio of hypertext click counts that can be viewed as a false drop measure determining the average number of irrelevant content-bearing clicks that an end user makes before retrieving relevant information. After measurement data were collected, they were used to evaluate the reliability of several methods for aggregating relevance judgments. After reliability coefficients were calculated, the measurement model was used to compare Web catalog and Web database performance in an experimental setting. Conclusions were then reached concerning the reliability of the proposed measurement model and its ability to measure Internet IR performance, as well as implications for clinical use of the Internet and for future research in Information Science.
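The abstract characterizes the CBC Ratio only informally, as the average number of irrelevant content-bearing clicks made before relevant information is retrieved. The sketch below assumes that reading and computes it over a few invented search sessions; the actual formula and aggregation used in the dissertation may differ.

```python
# Fabricated click logs: each session is a sequence of content-bearing clicks,
# with 1 marking a click that retrieved relevant content and 0 an irrelevant one.
sessions = [
    [0, 0, 1],          # two irrelevant clicks before a relevant one
    [0, 1],
    [0, 0, 0, 0, 1],
    [1],                # relevant content on the first click
]

def irrelevant_before_relevant(clicks):
    """Count irrelevant clicks preceding the first relevant retrieval."""
    return clicks.index(1) if 1 in clicks else len(clicks)

ratio = sum(irrelevant_before_relevant(s) for s in sessions) / len(sessions)
print("average irrelevant clicks before relevant retrieval:", ratio)   # 1.75
```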
The Effects of Task-Based Documentation Versus Online Help Menu Documentation on the Acceptance of Information Technology
The objectives of this study were (1) to identify and describe task-based documentation; (2) to identify and describe any purported changes in users' attitudes when IT migration was preceded by task-based documentation; and (3) to suggest implications of task-based documentation for users' attitudes toward IT acceptance. Questionnaires were given to 150 university students, all of whom participated in the study. The study determined the following: (1) whether favorable pre-implementation attitudes toward a new e-mail system increase as a result of training when users expect it to be easy to learn and use; (2) whether user acceptance of an e-mail program increases as perceived usefulness increases, as delineated by task-based documentation; (3) whether task-based documentation is more effective than standard help menus while learning a new application program; and (4) whether training that requires active student participation increases the acceptance of a new e-mail system. The following conclusions were reached: (1) Positive pre-implementation attitudes toward a new e-mail system are not affected by training, even if users expect it to be easy to learn and use. (2) User acceptance of an e-mail program does not increase as perceived usefulness increases when aided by task-based documentation. (3) Task-based documentation is not more effective than standard help menus when learning a new application program. (4) Training that requires active student participation does not increase the acceptance of a new e-mail system.
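The abstract does not name the statistical procedure behind conclusion (1), so the sketch below shows one conventional way such a claim could be checked: a paired t-test on pre- and post-training attitude scores. All scores are invented and the choice of test is an assumption, not the study's reported method.

```python
from scipy import stats

# Fabricated pre- and post-training attitude ratings (1-5 scale) for 8 students.
pre_training  = [3.2, 4.0, 3.5, 2.8, 3.9, 3.1, 4.2, 3.6]
post_training = [3.3, 3.9, 3.6, 2.9, 3.8, 3.2, 4.1, 3.7]

t_stat, p_value = stats.ttest_rel(post_training, pre_training)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value would be consistent with attitudes being unaffected by training.
```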
An Analysis of the Ability of an Instrument to Measure Quality of Library Service and Library Success
This study consisted of an examination of how service quality should be measured within libraries and how library service quality relates to library success. A modified version of the SERVQUAL instrument was evaluated to determine how effectively it measures library service quality. Instruments designed to measure information center success and information system success were evaluated to determine how effectively they measure library success and how they relate to SERVQUAL. A model of library success was developed to examine how library service quality relates to other variables associated with library success. Responses from 385 end users at two U.S. Army Corps of Engineers libraries were obtained through a mail survey. Results indicate that library service quality is best measured with a performance-based version of SERVQUAL, and that measuring importance may be as critical as measuring expectations for management purposes. Results also indicate that library service quality is an important factor in library success and that library success is best measured with a combination of SERVQUAL and library success instruments. The findings have implications for the development of new instruments to more effectively measure library service quality and library success as well as for the development of new models of library service quality and library success.
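The comparison at the heart of this result, expectation-based gap scores versus performance-only scores (with importance as a possible weighting), reduces to straightforward arithmetic on item ratings. The sketch below uses invented ratings on a 1-7 scale to contrast the three scoring approaches; it is illustrative only and does not reflect the modified SERVQUAL items or survey responses used in the study.

```python
from statistics import mean

# Fabricated item ratings (1-7 scale) for five service-quality items.
expectation = [6.5, 6.8, 6.0, 5.5, 6.2]
performance = [5.9, 6.1, 5.8, 5.6, 6.0]
importance  = [0.30, 0.25, 0.20, 0.10, 0.15]   # assumed weights summing to 1

gap_score         = mean(p - e for p, e in zip(performance, expectation))  # classic SERVQUAL
performance_score = mean(performance)                                      # performance-based
weighted_score    = sum(w * p for w, p in zip(importance, performance))    # importance-weighted

print("gap (P - E):        ", round(gap_score, 2))
print("performance-only:   ", round(performance_score, 2))
print("importance-weighted:", round(weighted_score, 2))
```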