- The intersection of social networks in a public service model: A case study.
- Examining human interaction networks contributes to an understanding of factors that improve and constrain collaboration. This study examined multiple network levels of information exchanges within a public service model designed to strengthen community partnerships by connecting city services to the neighborhoods. The research setting was the Neighbourhood Integrated Service Teams (NIST) program in Vancouver, B.C., Canada. A literature review related information dimensions to the municipal structure, including social network theory, social network analysis, social capital, transactive memory theory, public goods theory, and the information environment of the public administration setting. The research method involved multiple instruments and included surveys of two bounded populations. First, the membership of the NIST program received a survey asking for identification of up to 20 people they contact for NIST-related work. Second, a network component of the NIST program, 23 community centre coordinators in the Parks and Recreation Department, completed a survey designed to identify their information exchanges relating to regular work responsibilities and the infusion of NIST issues. Additionally, 25 semi-structured interviews with the coordinators and other program members, collection of organization documents, field observation, and feedback sessions provided valuable insight into the complexity of the model. This research contributes to the application of social network theory and analysis in information environments and provides insight for public administrators into the operation of the model and reasons for the program's network effectiveness.
- An Examination Of The Variation In Information Systems Project Cost Estimates: The Case Of Year 2000 Compliance Projects
- The year 2000 (Y2K) problem presented a fortuitous opportunity to explore the relationship between estimated costs of software projects and five cost influence dimensions described by the Year 2000 Enterprise Cost Model (Kappelman et al., 1998) -- organization, problem, solution, resources, and stage of completion. This research was a field study survey of Y2K project managers in industry, government, and education and part of a joint project that began in 1996 between the University of North Texas and the Y2K Working Group of the Society for Information Management (SIM). Evidence was found to support relationships between estimated costs and organization, problem, resources, and project stage, but not for the solution dimension. Project stage appears to moderate the relationships for organization, particularly IS practices, and resources. A history of superior IS practices appears to mean lower estimated costs, especially for projects in larger IS organizations. Acquiring resources, especially external skills, appears to increase costs. Moreover, projects apparently have many individual differences, many related to size and to project stage, and their influences on costs appear to operate at the sub-dimension or even the individual-variable level. A Revised Year 2000 Enterprise Model is presented incorporating this granularity. Two primary conclusions can be drawn from this research: (1) large software projects are very complex, and thus cost estimating is as well; and (2) the devil of cost estimating is in the details of knowing which of the many possible variables are the important ones for each particular enterprise and project. This points to the importance of organizations keeping software project metrics and the historical calibration of cost-estimating practices.
Project managers must understand the relevant details and their interaction and importance in order to successfully develop a cost estimate for a particular project, even when rational cost models are used. This research also indicates that software cost estimating has political as well as rational influences at play.
- Creating a criterion-based information agent through data mining for automated identification of scholarly research on the World Wide Web
- This dissertation creates an information agent that correctly identifies Web pages containing scholarly research approximately 96% of the time. It does this by analyzing the Web page against a set of criteria and then using a classification tree to arrive at a decision. The criteria were gathered from the literature on selecting print and electronic materials for academic libraries. A Delphi study was conducted with an international panel of librarians to expand and refine the criteria until a list of 41 operationalizable criteria was agreed upon. A Perl program was then designed to analyze a Web page and determine a numerical value for each criterion. A large collection of Web pages was gathered, comprising 5,000 pages that contain the full work of scholarly research and 5,000 random pages, representative of user searches, that do not contain scholarly research. Datasets were built by running the Perl program on these Web pages. The datasets were split into model-building and testing sets. Data mining was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. The models were created with the model datasets and then tested against the test dataset. Precision and recall were used to judge the effectiveness of each model. In addition, a set of pages that were difficult to classify because of their similarity to scholarly research was gathered and classified with the models. The classification tree created the most effective classification model, with a precision ratio of 96% and a recall ratio of 95.6%. However, logistic regression created a model that was able to correctly classify more of the problematic pages. This agent can be used to create a database of scholarly research published on the Web. In addition, the technique can be used to create a database of any type of structured electronic information.
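The evaluation described above rests on precision and recall. A minimal sketch of how those two ratios are computed from predicted versus actual class labels (toy labels for illustration, not the study's data or its Perl/classification-tree pipeline):

```python
def precision_recall(actual, predicted, positive="scholarly"):
    """Compute precision and recall for one class label."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of pages flagged scholarly, how many truly are
    recall = tp / (tp + fn) if tp + fn else 0.0     # of truly scholarly pages, how many were found
    return precision, recall

# Toy example: 4 of 5 scholarly pages found, with 1 false alarm.
actual    = ["scholarly"] * 5 + ["random"] * 5
predicted = ["scholarly"] * 4 + ["random"] + ["scholarly"] + ["random"] * 4
p, r = precision_recall(actual, predicted)
print(round(p, 2), round(r, 2))  # 0.8 0.8
```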
- Factors Influencing How Students Value Asynchronous Web Based Courses
- This dissertation discovered the factors influencing how students value asynchronous Web-based courses through the use of qualitative methods. Data were collected through surveys, observations, interviews, email correspondence, and chat room and bulletin board transcripts. Instruments were tested in pilot studies in previous semesters. Factors were identified for two class formats: the asynchronous CD/Internet class format and the synchronous online Web-based class format. Factors were also uncovered for two of the instructional tools used in the course: the WebCT forum and WebCT testing. Factors were grouped as advantages or disadvantages under major categories. For the asynchronous CD/Internet class format the advantages were Convenience, Flexibility, Learning Enhancement, and Psychology; the disadvantages included Isolation, Learning Environment, and Technology. For the synchronous online Web-based class format the advantages were Convenience, Flexibility, Human Interaction, Learning Enhancement, and Psychology, whereas the disadvantages included Isolation, Learning Environment, and Technology. Concurrently, the study revealed the following factors as advantages of the WebCT forum: Help Each Other, Interaction, Socialization, Classroom News, and Time Independent. The disadvantages uncovered were Complaints, Technical Problems, and Isolation. Finally, the advantages specified for the WebCT testing tool were Convenience, Flexibility, and Innovations, and its disadvantages were Surroundings Not Conducive to Learning and Technical Problems. Results indicate that not only classroom preference, learning style, and personality type influence how students value a Web-based course but, most importantly, a student's lifestyle (number of personal commitments, how far they live, and life's priorities).
The WebCT forum (bulletin board) and WebCT testing (computerized testing) were seen by most students as good tools for encouraging classroom communication and testing because of the convenience and flexibility they offered. Still, further research is needed, both quantitative and qualitative, to ascertain the true weight of the factors discovered in this study.
- The Cluster Hypothesis: A visual/statistical analysis
Access: Use of this item is restricted to the UNT Community.
By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in which inter-item similarity coefficients for full text documents are scaled into a single dimension and then rendered as a two-dimensional visualization; the clustering behavior of relevant documents within these unidimensionally scaled representations is examined via visual and statistical methods. Taken as a whole, results of both experiments lend strong though not unqualified support to the Cluster Hypothesis. 
In Experiment 1, semantically meaningful 6.6-word document surrogates systematically conform to the predictions of the Cluster Hypothesis. In Experiment 2, the majority of the unidimensionally scaled datasets exhibit a marked nonuniformity of distribution of relevant documents, further supporting the Cluster Hypothesis. Results of the two experiments are profoundly question-specific. Post hoc analyses suggest that it may be possible to predict the success of clustered searching based on the lexical characteristics of users' natural-language expression of their information need.
- Computer support interactions: Verifying a process model of problem trajectory in an information technology support environment.
- Observations in the information technology (IT) support environment and generalizations from the literature regarding problem resolution behavior indicate that computer support staff seldom store reusable solution information effectively for IT problems. A comprehensive model of the processes encompassing problem arrival and assessment, expertise selection, problem resolution, and solution recording has not been available to facilitate research in this domain. This investigation employed the findings from a qualitative pilot study of IT support staff information behaviors to develop and explicate a detailed model of problem trajectory. Based on a model from clinical studies, this model encompassed a trajectory scheme that included the communication media, characteristics of the problem, decision points in the problem resolution process, and knowledge creation in the form of solution storage. The research design included the administration of an extensive scenario-based online survey to a purposive sample of IT support staff at a medium-sized state-supported university, with additional respondents from online communities of IT support managers and call-tracking software developers. The investigator analyzed 109 completed surveys and conducted email interviews of a stratified nonrandom sample of survey respondents to evaluate the suitability of the model. The investigation employed mixed methods including descriptive statistics, effects size analysis, and content analysis to interpret the results and verify the sufficiency of the problem trajectory model. The study found that expertise selection relied on the factors of credibility, responsibility, and responsiveness. Respondents referred severe new problems for resolution and recorded formal solutions more often than other types of problems, whereas they retained moderate recurring problems for resolution and seldom recorded those solutions. 
Work experience above and below the 5-year mark affected decisions to retain, refer, or defer problems, as well as solution storage and broadcasting behaviors. The problem trajectory model was verified and found to be an appropriate tool and explanatory device for research in the IT domain.
- A comparison of communication motives of on-site and off-site students in videoconference-based courses
- The objective of this investigation is to determine whether student site location in an instructional videoconference is related to students' motives for communicating with their instructor. The study is based, in part, on the work of Martin et al., who identify five separate student-teacher communication motives. These motives, or dimensions, are termed relational, functional, excuse, participation, and sycophancy, and are measured by a 30-item questionnaire. Several communication-related theories were used to predict differences between on-site and off-site students. Media richness theory was used, foundationally, to explain differences between mediated and face-to-face communication, and other theories, such as uncertainty reduction theory, were used in conjunction with media richness theory to predict specific differences. Two hundred eighty-one completed questionnaires were obtained from Education and Library and Information Science students in 17 separate course sections employing interactive video at the University of North Texas during the Spring and Summer semesters of the 2001/2002 school year. This study concludes that off-site students in an instructional videoconference are more likely than their on-site peers to report being motivated to communicate with their instructor for participation reasons. If off-site students are more motivated than on-site students to communicate as a means to participate, then it may be important for instructors to watch for actual differences in participation levels, and instructors may need to be well versed in pedagogical methods that attempt to increase participation. The study also suggests that current teaching methods employed in interactive video environments may be adequate with regard to functional, excuse-making, relational, and sycophantic communication.
- A Study of Graphically Chosen Features for Representation of TREC Topic-Document Sets
Access: Use of this item is restricted to the UNT Community.
Document representation is important for computer-based text processing. Good document representations must include at least the most salient concepts of the document. Documents exist in a multidimensional space that makes it difficult to identify which concepts to include. A current problem is measuring the effectiveness of the different strategies that have been proposed to accomplish this task. As a contribution toward this goal, this dissertation studied the visual inter-document relationships in a dimensionally reduced space. The same treatment was applied to full text and to three document representations. Two of the representations were based on the assumption that the salient features in a document set follow the chi-distribution across the whole document set. The third document representation identified features through a novel method: a Coefficient of Variability was calculated by normalizing the Cartesian distance of the discriminating value in the relevant and non-relevant document subsets. Also, the local dictionary method was used. Cosine similarity values measured the inter-document distance in the information space and formed a matrix that served as input to the multidimensional scaling (MDS) procedure. A precision-recall procedure was averaged across all treatments to compare them statistically. The treatments were not found to be statistically the same, and the null hypotheses were rejected.
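The inter-document matrix described above can be sketched with plain term-frequency vectors. The example below (illustrative documents, not the TREC topic-document sets) builds the symmetric cosine-similarity matrix of the kind fed to an MDS procedure:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy documents standing in for full text or a reduced representation.
docs = ["document representation salient concepts",
        "salient concepts in a document set",
        "multidimensional scaling of similarity values"]
vecs = [Counter(d.split()) for d in docs]

# Symmetric inter-document similarity matrix, suitable as MDS input.
matrix = [[cosine(u, v) for v in vecs] for u in vecs]
```

Documents sharing more terms score closer to 1.0; documents with no shared terms score 0.0, so the matrix encodes exactly the inter-document distances the visualization is built from.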
- The adoption and use of electronic information resources by a non-traditional user group: Automotive service technicians.
- The growing complexity of machines has led to a concomitant increase in the amount and complexity of the information needed by those charged with servicing them. This, in turn, has led to a need for more robust methods for storing and distributing information and for a workforce more sophisticated in its use of information resources. As a result, the service trades have "professionalized," adopting more rigorous academic standards and developing ongoing certification programs. The current paper deals with the acceptance of advanced electronic information technology by skilled service personnel, specifically, automotive service technicians. The theoretical basis of the study is Davis' technology acceptance model. The purpose of the study is to determine the effects of three external factors on the operation of the model: age, work experience, and education/certification level. The research design is in two parts, beginning with an onsite observation and interviews to establish the environment. During the second part of the research process a survey was administered to a sample of automotive service technicians. Results indicated significant inverse relationships between age and acceptance and between experience and acceptance. A significant positive relationship was shown between education, particularly certification, and acceptance.
- A Framework of Automatic Subject Term Assignment: An Indexing Conception-Based Approach
- The purpose of this dissertation is to examine whether an understanding of the subject indexing processes conducted by human indexers has a positive impact on the effectiveness of automatic subject term assignment through text categorization (TC). More specifically, human indexers' subject indexing approaches, or conceptions, in conjunction with semantic sources were explored in the context of a typical scientific journal article data set. Based on the premise that subject indexing approaches or conceptions together with semantic sources are important for automatic subject term assignment through TC, this study proposed an indexing conception-based framework. For the purpose of this study, three hypotheses were tested: 1) the effectiveness of semantic sources, 2) the effectiveness of an indexing conception-based framework, and 3) the effectiveness of each of three indexing conception-based approaches (the content-oriented, the document-oriented, and the domain-oriented approach). The experiments were conducted using a support vector machine implementation in WEKA (Witten & Frank, 2000). The experimental results showed that cited works, source title, and title were as effective as the full text, while keywords were found to be more effective than the full text. In addition, the findings showed that an indexing conception-based framework was more effective than the full text; in particular, the content-oriented and the document-oriented indexing approaches were more effective than the full text. Among the three indexing conception-based approaches, the content-oriented approach and the document-oriented approach were more effective than the domain-oriented approach. In other words, in the context of a typical scientific journal article data set, the objective contents and the authors' intentions were more in focus than the possible users' needs.
The research findings of this study support the conclusion that incorporating human indexers' indexing approaches or conceptions in conjunction with semantic sources has a positive impact on the effectiveness of automatic subject term assignment.
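The idea of treating each semantic source (title, keywords, full text, and so on) as its own feature set for the classifier can be sketched as follows. The record fields and the whitespace tokenizer here are hypothetical stand-ins for illustration, not the study's actual data set or its WEKA pipeline:

```python
# Hypothetical article record; field names are illustrative only.
article = {
    "title": "Automatic subject term assignment",
    "keywords": ["indexing", "text categorization"],
    "full_text": "Automatic subject term assignment assigns subject terms "
                 "to a document through text categorization",
}

def features(article, source):
    """Build a bag-of-words feature set from one semantic source."""
    value = article[source]
    text = " ".join(value) if isinstance(value, list) else value
    return set(text.lower().split())

# Each semantic source yields a separate feature set; training one
# classifier per source is what lets their effectiveness be compared.
for source in ("title", "keywords", "full_text"):
    print(source, sorted(features(article, source)))
```

The comparison reported above (e.g., keywords outperforming full text) amounts to training and evaluating the same classifier over each of these per-source feature sets.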
- Human concept cognition and semantic relations in the unified medical language system: A coherence analysis.
- There is almost universal agreement among scholars in information retrieval (IR) research that knowledge representation needs improvement. As a core component of an IR system, the knowledge representation system has so far been improved by manipulating this component according to principles such as the vector space model, the probabilistic approach, inference networks, and language modeling, yet the required improvement is still far from fruition. One promising approach, highly touted as a potential solution, lies in the cognitive paradigm, where knowledge representation practice should involve, or start from, modeling the human conceptual system. This study, based on two related cognitive theories (the theory-based approach to concept representation and the psychological theory of semantic relations), ventured to explore the connection between the human conceptual model and the knowledge representation model (represented by samples of concepts and relations from the Unified Medical Language System, UMLS). Guided by these cognitive theories and using appropriate data-analytic tools, such as nonmetric multidimensional scaling, hierarchical clustering, and content analysis, this study conducted an exploratory investigation to answer four related questions. Divided into two groups, a total of 89 research participants took part in two sets of cognitive tasks. The first group (49 participants) sorted 60 food names into categories and simultaneously described the derived categories to explain the rationale for their category judgments. The second group (40 participants) sorted 47 semantic relations (the nonhierarchical associative types) into 5 categories known a priori. Three datasets resulted from the cognitive tasks: food-sorting data, relation-sorting data, and free, unstructured text of category descriptions.
Using the data-analytic tools mentioned, data analysis was carried out, and important results and findings were obtained that offer plausible explanations for the four research questions. Major results include the following: (a) through discriminant analysis, category membership was predicted consistently 70% of the time; (b) the categorization bases are largely feature-based, consisting of simplified rules and naïve explanations; (c) individuals' theoretical explanations remain valid and stable across category members; (d) the human conceptual model can be fairly well reconstructed in a low-dimensional space, where 93% of the variance in the dimensional space is accounted for by the subjects' performance; (e) participants consistently classified 29 of the 47 semantic relations; and (f) individuals performed better in the functional and spatial dimensions of the semantic relations classification task and performed poorly in the conceptual dimension.
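A sorting task like the food-name study yields pairwise similarity data: the more participants who place two items in the same category, the more similar the items are taken to be. A minimal sketch (toy items, not the 60 food names) of the co-occurrence counting that typically precedes MDS or hierarchical clustering of sorting data:

```python
from itertools import combinations
from collections import defaultdict

def cooccurrence(sorts):
    """Count how often each pair of items lands in the same category
    across participants; dividing by len(sorts) yields a similarity
    matrix usable as input to MDS or hierarchical clustering."""
    counts = defaultdict(int)
    for categories in sorts:            # one sort per participant
        for category in categories:     # items grouped together
            # sorted() canonicalizes pair order, e.g. ("apple", "pear")
            for a, b in combinations(sorted(category), 2):
                counts[(a, b)] += 1
    return counts

# Toy data: two participants sorting four food names (illustrative only).
sorts = [
    [{"apple", "pear"}, {"beef", "pork"}],
    [{"apple", "pear", "beef"}, {"pork"}],
]
counts = cooccurrence(sorts)
print(counts[("apple", "pear")])  # 2
```

Here "apple" and "pear" co-occur in both sorts (similarity 2/2 = 1.0), while "beef" and "pork" co-occur in only one, which is the kind of graded structure the low-dimensional reconstruction reported above is built from.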
- An exploration of the diffusion of a new technology from communities of practice perspective: Web services technologies in digital libraries.
- This study explored and described decision factors related to technology adoption. The research used diffusion of innovations and communities of practice (CoP) theoretical frameworks and a case study of Web services technology in the digital library (DL) environment to develop an understanding of the decision-making process. A qualitative case study approach was used to investigate the research problems, and data were collected through semi-structured interviews, documentary evidence (e.g., meeting minutes), and a comprehensive member check. The researcher conducted face-to-face and phone interviews with seven respondents with different job titles (administrative vs. technical) from five DL programs selected for distinctive characteristics such as the size of the DL program. Findings of the research suggest that the decision-making process is a complex one in which a number of factors are weighed when making technology adoption decisions. These factors can be categorized as organizational, individual, and technology-specific factors. Further, the data showed that DL CoPs played an important role in enabling staff members of a DL program to access up-to-date and experience-based knowledge, provided a distributed problem-solving and learning environment, facilitated informal communication and collaborative activities, and informed the decision-making process.
- Knowledge management in times of change: Tacit and explicit knowledge transfers.
- This study proposed a look at the importance and challenges of knowledge management in times of great change. In order to understand the information phenomena of interest, the impacts on knowledge workers and knowledge documents in times of great organizational change, the study is positioned in a major consolidation of state agencies in Texas. It pays special attention to how the changes were perceived by the knowledge workers by interviewing those who were impacted by the changes resulting from the reorganization. The overall goal is to assess knowledge management in times of great organizational change by analyzing the impact of consolidation on knowledge management in Texas's Health and Human Services (HHS) agencies. The overarching research question is: What happened to the knowledge management structure during this time of great change? The first research question was: What was the knowledge worker environment during the time of change? The second research question was: What was the knowledge management environment of the agencies during the time of change? The last research question was: Did consolidation of the HHS agencies diminish the ability to transition from tacit to explicit knowledge? Additionally, the study investigates how the bill that mandated the consolidation was covered in the local media, as well as the actual budget and employee-loss impact of the consolidation, in order to better understand the impacts on knowledge workers and knowledge documents resulting from major organizational restructuring. The findings have both theoretical and practical implications for information science, knowledge management, and project management.
- Respect for human rights and the rise of democratic policing in Turkey: Adoption and diffusion of the European Union acquis in the Turkish National Police.
- This study is an exploration of European Union acquis adoption in the Turkish National Police (TNP). The research employed the Diffusion of Innovations, Democratic Policing, and historical background theoretical frameworks to study the TNP's decision-making regarding reforms after 2003, as a qualitative case study that triangulated the methodology with a less-dominant survey and several other analytic methods. The data were collected from several sources, including semi-structured interviews, archival records, documentary evidence, and the European Commission Regular Reports on Turkey. The research interest was the decision mechanisms of the TNP regarding reforms and the rise of democratic policing in Turkey. Throughout the study, internationally recognized human rights standards were given attention. As the data suggested, police forces are shaped by their ruling governments and societies: it is impossible to find a totally democratic police force in a violent society or a totally violent police force in a democratic society. The study findings suggested that reforming police agencies should not be a significant problem for determined governments and that human rights violations should not be attributed directly to the police in any country. The data suggested that democratic policing practices become common when democracy grows stronger and that police brutality increases when authoritarian governments stay in power. Democratic policing, on the other hand, is an excellent tool for improving the notion of democracy and providing legitimacy to governments. However, democratic policing is not a tool to bring about democracy, but a support mechanism for it.
- A study of on-line use and perceived effectiveness of compliance-gaining in health-related banner advertisements for senior citizens.
- This research investigated banner ads on the World Wide Web, specifically the types of messages used in those ads and the effectiveness of the ads as seen by their intended audience. The focus was on health-related banner advertisements targeting senior citizens. The study first sought to determine the frequency of appearance of those ads when classified into categories of compliance-gaining tactics provided by research scholars. Second, the study explored the relative perceived effectiveness among those categories. Two graduate students from a Central Texas university sorted text messages into predetermined compliance-gaining categories. Chi square tests looked for significant differences in the frequencies of banner ads in each category. Forty-five senior citizens from the Central Texas area completed surveys regarding the perceived effectiveness of a randomly ordered, randomly selected set of categorized banner ads. A repeated measures test attempted to determine whether some compliance-gaining strategies used in health-related banner ads were perceived as more effective than others. The hypothesis stated that there would be differences in frequencies of compliance-gaining strategies used among the compliance-gaining categories in health-related banner ads for senior citizens. The hypothesis was supported. The research question asked if some categories of compliance-gaining strategies used in health-related banner ads were perceived as more effective than others by senior citizens. There was no evidence that senior citizens perceived any compliance-gaining category as being more effective than any other. However, post hoc analyses revealed trends in the types of compliance-gaining messages senior citizens perceived as more effective. These trends provide a basis for directional predictions in future studies.
- Terrorism as a social information entity: A model for early intervention.
Access: Use of this item is restricted to the UNT Community.
This dissertation studies different social aspects of terrorists and terrorist organizations in an effort to deal better with terrorism, especially in the long run. The researcher, who also worked as a police captain in the Turkish National Police Anti-Terrorism Department, seeks solutions to today's global problem by studying both the literature and a Delphi examination of a survey of 1,070 imprisoned terrorists. The research questions include: "What are the reasons behind terrorism?", "Why does terrorism occur?", "What ideologies provide the framework for terrorist violence?", "Why do some individuals become terrorists and others do not?", and "Under what conditions will terrorists end their violence?" The results of the study present the complexity of terrorism as a social experience and the impossibility of a single solution or remedy for the global problem of terrorism. Through his examination of the data, the researcher found that terrorism is a social phenomenon with criminal consequences that needs to be dealt with through a two-dimensional approach: the social dimension of terrorism and the criminal dimension of terrorism. Based on this, the researcher constructed a conceptual model that addresses both of these dimensions under the headings of long-term solutions and short-term solutions. The long-term solutions deal with the social aspects of terrorism under the title Proactive Approach to Terrorism, and the short-term solutions deal with the criminal aspects of terrorism under the title The Immediate Fight against Terrorism. The researcher constructed this model because there seems to be a tendency not to ask the question "Why does terrorism occur?"; instead, the focus is usually on dealing with the consequences of terrorism and future terrorist threats.
While it is essential that governments provide the finest security measures for their societies, they also need to address the reasons behind terrorism. From this perspective, this research offered a conceptual model to address both aspects of terrorism for a more complete fight against today's most painful problem.
- Testing a model of the relationships among organizational performance, IT-business alignment and IT governance.
- Information Technology (IT) is often viewed as a resource that is capable of enhancing organizational performance. However, it is difficult for organizations to measure the actual contribution of IT investments. Despite an abundance of literature, there is a shortage of generally applicable frameworks and instruments to help organizations definitively assess the relationship among organizational performance, IT-business alignment, and IT governance. Previous studies have emphasized IT-business alignment as an important enabler of organizational effectiveness; however, the direct and indirect effects of IT governance have not been incorporated into these studies. The purpose of this study was (1) to propose a new model that defines the relationships among IT governance, IT-business alignment, and organizational performance, (2) to develop and validate measures for the IT governance and IT-business alignment constructs, and (3) to test this IT Governance-Alignment-Performance or "IT GAP" model. This study made some novel contributions to the understanding of the factors affecting organizational performance. The quest for IT-business alignment in the MIS literature has been based on the presumption that IT contributes directly to organizational performance. However, this study found that although IT-business alignment does contribute to organizational performance, IT governance is an important antecedent of both IT-business alignment and organizational performance. The major contributions of this work are the development and validation of uni-dimensional scales for both IT-business alignment and IT governance, and the confirmation of the validity of the IT GAP model to explain the hypothesized relationships among the three constructs. Future studies may improve upon this research by using different organizational settings, industries, and stakeholders.
This study indicates that in order for organizations to improve the value, contribution, and alignment of IT investments, they first need to improve the ways in which they govern their IT activities and the processes and mechanisms by which IT decisions are made.
- Widening the lens: An interdisciplinary approach to examining the effect of exposure therapy on public speaking state anxiety.
- This study used an interdisciplinary approach to examine an intervention for reducing public speaking state anxiety. A quasi-experiment was conducted to determine if a multiple-exposure treatment technique (TRIPLESPEAK) would help to attenuate public speaking anxiety. The treatment group reported experiencing significantly less state anxiety during their post-test presentation than did the control group. This led to the conclusion that exposure therapy can be used to help students enrolled in basic communication classes begin to overcome their fear of speaking in front of an audience. Follow-up analysis of the treatment group's reported anxiety levels during all five presentations (pre-test, Treatment Presentation 1, Treatment Presentation 2, Treatment Presentation 3, and post-test) revealed an increase in anxiety from the last treatment presentation to the post-test presentation. In order to explore this issue, Shannon's entropy was utilized to calculate the amount of information in each speaking environment. Anderson's functional ontology construction approach served as a model to explain the role of the environment in shaping speakers' current and future behaviors and reports of anxiety. The exploratory analysis revealed a functional relationship between information and anxiety. In addition, a qualitative study was conducted to determine which environmental stimuli speakers perceived contributed to their anxiety levels. Students reported experiencing anxiety based on four categories, which included speaker concerns, audience characteristics, contextual factors, and assignment criteria. Students' reports of anxiety were dependent upon their previous speaking experiences, and students suggested differences existed between the traditional presentations and the treatment presentations. Pedagogical and theoretical implications are discussed.
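The entropy calculation mentioned above can be sketched as follows. Treating each speaking environment as a distribution over observed stimulus types follows the study's framing, but the `shannon_entropy` helper and the sample observations are illustrative assumptions, not the study's actual data or code:

```python
from collections import Counter
from math import log2

def shannon_entropy(observations):
    """Shannon entropy H = -sum(p * log2(p)) over observed symbol frequencies."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A hypothetical environment with four equally likely stimulus types
# carries 2 bits of information per observation.
print(shannon_entropy(["a", "b", "c", "d"]))  # → 2.0
```

On this reading, a speaking environment with more distinct, evenly distributed stimuli carries more information, which is the quantity the study relates to reported anxiety.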
- Usability of a Keyphrase Browsing Tool Based on a Semantic Cloud Model
- The goal of this research was to facilitate the scrutiny and utilization of Web search engine retrieval results. I used a graphical keyphrase browsing interface to visualize the conceptual information space of the results, presenting document characteristics that make document relevance determinations easier.
- Accessing Information on the World Wide Web: Predicting Usage Based on Involvement
- Advice for Web designers often includes an admonition to use short, scannable, bullet-pointed text, reflecting the common belief that browsing the Web most often involves scanning rather than reading. Literature from several disciplines focuses on the myriad combinations of factors related to online reading, but studies of the users' interests and motivations appear to offer a more promising avenue for understanding how users utilize information on Web pages. This study utilized the modified Personal Involvement Inventory (PII), a ten-item instrument used primarily in the marketing and advertising fields, to measure interest and motivation toward a topic presented on the Web. Two sites were constructed from Reader's Digest Association, Inc. online articles, and a program was written to track students' use of the site. Behavior was measured by the initial choice of short versus longer versions of the main page, the number of pages visited, and the amount of time spent on the site. Data were gathered from students at a small, private university in the southwest part of the United States to answer six hypotheses which posited that subjects with higher involvement in a topic presented on the Web and a more positive attitude toward the Web would tend to select the longer text version, visit more pages, and spend more time on the site. While attitude toward the Web did not correlate significantly with any of the behavioral factors, the level of involvement was associated with the use of the sites in two of three hypotheses, but only partially in the manner hypothesized. Increased involvement with a Web topic did correlate with the choice of a longer, more detailed initial Web page, but was inversely related to the number of pages viewed so that the higher the involvement, the fewer pages visited. An additional indicator of usage, the average amount of time spent on each page, was measured and revealed that more involved users spent more time on each page.
- Officer attitudes toward organizational change in the Turkish National Police.
- This dissertation emphasizes the importance of the human factor in the organizational change process. Change - the only constant - is inevitable for organizations and no change program can be achieved without the support and acceptance of organization members. In this context, this study identifies officer attitudes toward organizational change in the Turkish National Police (TNP) and the factors affecting those attitudes. The Officer Attitude Model created by the researcher includes six main factors (receptivity to change, readiness for change, trust in management, commitment to organization, communication of change, and training for change) and five background factors (gender, age, rank, level of education, and work experience) to explain officer attitudes toward change. In order to test this model, an officer attitude survey was administered in Turkey among TNP members and the results of the gathered data validated this model.
- The Online and the Onsite Holocaust Museum Exhibition as an Informational Resource
- Museums today provide learning-rich experiences and quality informational resources through both physical and virtual environments. This study examined a Holocaust Museum traveling exhibition, Life in Shadows: Hidden Children and the Holocaust, that was on display at the Art Center of Battle Creek, Michigan, in fall 2005. The purpose of this mixed methods study was to assess the informational value of a Holocaust Museum exhibition in its onsite vs. online format by converging quantitative and qualitative data. Participants in the study included six eighth grade language arts classes that viewed various combinations or scenarios of the onsite and online Life in Shadows. Using student responses to questions in an online exhibition survey, an analysis of variance was performed to determine which scenario visit promoted the greatest content learning. Using student responses to additional questions on the same survey, data were analyzed qualitatively to discover the impact on students of each scenario visit. By means of an emotional empathy test, data were analyzed to determine differences among student responses according to scenario visit. A principal finding of the study (supporting Falk and Dierking's contextual model of learning) was that the use of the online exhibition provided a source of prior orientation and functioned as an advanced organizer for students who subsequently viewed the onsite exhibition. Students who viewed the online exhibition received higher topic assessment scores. Students in each scenario visit gave positive exhibition feedback and evidence of emotional empathy. Further longitudinal studies in museum informatics and Holocaust education involving a more diverse population are needed. Of particular importance would be research focusing on using museum exhibitions and Web-based technology in a compelling manner so that students can continue to hear the words of survivors who themselves bear witness and give voice to silenced victims. 
When perpetuity of access to informational resources is assured, future generations will continue to be connected to the primary documents of history and cultural heritage.
- A multi-dimensional entropy model of jazz improvisation for music information retrieval.
- Jazz improvisation provides a case context for examining information in music; entropy provides a means for representing music for retrieval. Entropy measures are shown to distinguish between different improvisations on the same theme, thus demonstrating their potential for representing jazz information for analysis and retrieval. The calculated entropy measures are calibrated against human representation by means of a case study of an advanced jazz improvisation course, in which synonyms for "entropy" are frequently used by the instructor. The data sets are examined for insights in music information retrieval, music information behavior, and music representation.
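A multi-dimensional entropy representation of the kind described might be sketched as one entropy value per musical dimension. The `pitch` and `duration` dimensions and the toy note sequences below are illustrative assumptions, not the study's actual feature set or data:

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy of a sequence of symbols, in bits."""
    counts = Counter(seq)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

def entropy_vector(notes):
    """One entropy value per dimension, so different improvisations on the
    same theme can be compared dimension by dimension."""
    return {dim: entropy([note[dim] for note in notes]) for dim in notes[0]}

# Two hypothetical takes on the same theme: identical pitches, but the
# second varies its rhythm, which shows up only in the duration entropy.
theme = [{"pitch": p, "duration": 1} for p in ("C", "E", "G", "E")]
varied = [{"pitch": p, "duration": d} for p, d in zip("CEGE", (1, 2, 1, 4))]
```

Comparing such vectors is one plausible way entropy measures could distinguish between different improvisations on the same theme, as the abstract reports.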
- Measuring the accuracy of four attributes of sound for conveying changes in a large data set.
Access: Use of this item is restricted to the UNT Community.
Human auditory perception is suited to receiving and interpreting information from the environment, but this knowledge has not been used extensively in designing computer-based information exploration tools. It is not known which aspects of sound are useful for accurately conveying information in an auditory display. An auditory display was created using PD, a graphical programming language used primarily to manipulate digital sound. The interface for the auditory display was a blank window. When the cursor was moved around in this window, the sound generated would change based on the underlying data value at any given point. An experiment was conducted to determine which attribute of sound most accurately represents data values in an auditory display. The four attributes of sound tested were frequency-sine waveform, frequency-sawtooth waveform, loudness, and tempo. Twenty-four subjects were given the task of finding the highest data point using sound alone, under each of the four sound treatments. Three dependent variables were measured: distance accuracy, numeric accuracy, and time on task. Repeated measures ANOVA procedures conducted on these variables did not yield statistically significant results (α = .05). None of the sound treatments was more accurate than the others at representing the underlying data values. 52% of the trials were accurate within 50 pixels of the highest data point (target). An interesting finding was the tendency for the frequency-sine waveform to be used in the least accurate trial attempts (38%). Loudness, on the other hand, accounted for very few (12.5%) of the least accurate trial attempts. In completing the experimental task, four different search techniques were employed by the subjects: perimeter, parallel sweep, sector, and quadrant. The perimeter technique was the most commonly used.
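The four sound treatments can be sketched as simple transfer functions from a normalized data value to a sound parameter. The specific frequency, amplitude, and tempo ranges below are illustrative assumptions, not the values used in the study's actual PD patch:

```python
def map_value(x, lo, hi):
    """Linearly rescale a normalized data value x in [0, 1] to [lo, hi]."""
    return lo + x * (hi - lo)

# Hypothetical mappings for the four treatments: the data value under the
# cursor drives oscillator pitch (sine or sawtooth), amplitude, or tempo.
def to_frequency_hz(x):   # frequency-sine / frequency-sawtooth treatments
    return map_value(x, 220.0, 880.0)

def to_loudness(x):       # amplitude from 0.0 (silent) to 1.0 (full scale)
    return map_value(x, 0.0, 1.0)

def to_tempo_bpm(x):      # pulse rate of a repeating tone
    return map_value(x, 60.0, 240.0)

print(to_frequency_hz(1.0))  # highest data point → 880.0
```

Under any such mapping, the subject's task reduces to moving the cursor until the mapped parameter reaches its extreme, which is why the choice of parameter could plausibly affect accuracy.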
- Modeling the role of blogging in librarianship
- This phenomenological study examines the motivations and experiences of librarians who author professionally-focused Weblogs. I constructed a model of librarianship based on Wilson and Buckland. The results show a close fit between librarian bloggers and the ideals of the field as expressed by two primary library and information science philosophers. A Web survey generated 239 responses to demographic and open-ended questions. Using the results of the survey, I analyzed demographic data and performed a phenomenological analysis of the open-ended questions. A list of category responses was generated from each set of answers via the coding of descriptive words and phrases. Results indicated the motivations of librarian bloggers are based around themes of sharing, participation in community, and enhanced professional development. Respondents reported feeling more connected to the profession and to colleagues across the world because of blogging. Respondents perceived the librarian blogosphere as a community with both positive aspects - feedback, discussion, and support - and negative aspects - insular voices, divides between technologists and librarians, and generational rifts. Respondents also reported an increased ability to keep current, improved writing skills, and opportunities to speak and contribute to professional journals.
- Functional Ontology Construction: A Pragmatic Approach to Addressing Problems Concerning the Individual and the Informing Environment
- Functional ontology construction (FOC) is an approach for modeling the relationships between a user and the informing environment by means of analysis of the user's behavior and the elements of the environment that have behavioral function. The FOC approach is an application of behavior analytic techniques and concepts to problems within information science. The FOC approach is both an alternative and a complement to the cognitive viewpoint commonly found in models of behavior in information science. The basis for the synthesis of behavior analysis and information science is a shared tradition of pragmatism between the fields. The application of behavior analytic concepts brings with it the notion of selection by consequence. Selection is examined on the biological, behavioral, and cultural levels. Two perspicuous examples of the application of the FOC modeling approach are included. The first example looks at the document functioning as a reinforcer in a human operant experimental setting. The second example is an examination of the verbal behavior of expert film analyst, Raymond Bellour, the structure of a film he analyzed, and the elements of the film's structure that had behavioral function for Bellour. The FOC approach is examined within the ontological space of information science.
- The Effects of Task-Based Documentation Versus Online Help Menu Documentation on the Acceptance of Information Technology
Access: Use of this item is restricted to the UNT Community.
The objectives of this study were (1) to identify and describe task-based documentation; (2) to identify and describe any purported changes in users' attitudes when IT migration was preceded by task-based documentation; and (3) to suggest implications of task-based documentation for users' attitudes toward IT acceptance. Questionnaires were given to 150 university students, all of whom participated in this study. The study determined the following: (1) whether favorable pre-implementation attitudes toward a new e-mail system increase, as a result of training, if users expect it to be easy to learn and use; (2) whether user acceptance of an e-mail program increases as perceived usefulness increases, as delineated by task-based documentation; (3) whether task-based documentation is more effective than standard help menus while learning a new application program; and (4) whether training that requires active student participation increases the acceptance of a new e-mail system. The following conclusions were reached: (1) Positive pre-implementation attitudes toward a new e-mail system are not affected by training, even if the users expect it to be easy to learn and use. (2) User acceptance of an e-mail program does not increase as perceived usefulness increases when aided by task-based documentation. (3) Task-based documentation is not more effective than standard help menus when learning a new application program. (4) Training that requires active student participation does not increase the acceptance of a new e-mail system.
- An Evaluation of the Effect of Learning Styles and Computer Competency on Students' Satisfaction on Web-Based Distance Learning Environments
Access: Use of this item is restricted to the UNT Community.
This study investigates the correlation between students' learning styles, computer competency, and student satisfaction in Web-based distance learning. Three hundred and one graduate students participated in the current study during the Summer and Fall semesters of 2002 at the University of North Texas. Participants took the courses 100% online and came to the campus only once for software training. Computer competency and student satisfaction were measured using the Computer Skill and Use Assessment and the Student Satisfaction Survey questionnaires. Kolb's Learning Style Inventory measured students' learning styles. The study concludes that there is a significant difference among the different learning styles with respect to student satisfaction level when the subjects differ with regard to computer competency. For accommodating and diverging styles, a higher level of computer competency results in a higher level of student satisfaction. But for converging and assimilating styles, a higher level of computer competency suggests a lower level of student satisfaction. A significant correlation was found between computer competency and student satisfaction level within Web-based courses for accommodating styles, and no significant results were found for the other learning styles.
- Perceived value of journals for academic prestige, general reading and classroom use: A study of journals in educational and instructional technology.
- Conducting research, evaluating research, and publishing scholarly works all play an extremely prominent role for university faculty members. Tenure and promotion decisions are greatly influenced by the perceived value of publications as viewed by members of faculty evaluation committees. Faculty members seeking tenure may be restricted to publishing in a limited group of journals perceived to be valuable by members of an academic committee. This study attempted to determine the value of various kinds of periodicals (journals, magazines, and e-journals), based on three principal criteria, as perceived by professionals (university faculty, K-12 practitioners, and corporate trainers) in the educational/instructional technology (E/IT) field. The criteria for journal evaluation were Academic Prestige, General Reading, and Classroom Use. The perceived value of journals based on each criterion was compared to determine any significant differences. Members of the Association for Educational Communications and Technology (AECT) were asked to rate 30 journals in the E/IT field using the three criteria. Statistically significant differences were found among ratings in 63% of the journals. The statistical analyses indicated that differences in the perceived value of journals among E/IT professionals across the three criteria (Academic Prestige, General Reading, and Classroom Use) were statistically significant. It is also noted that refereed journals were rated higher than nonrefereed journals for the Academic Prestige criterion. Survey respondents indicated that individual journals were not valued for the same reasons. This finding implies that the formation of any equitable measure for determining the value of faculty members' journal article publications would be best if based on definable criteria determined by colleagues. 
Lists of valued journals for each area of faculty assessment would provide standards of excellence both inside and outside the E/IT field for those who serve on tenure and promotion committees in educational institutions.
- The physiology of collaboration: An investigation of library-museum-university partnerships.
- Collaboration appears to be a magical solution for many problems when there is scarcity of resources, lack of knowledge or skills, and/or environmental threats. However, there is little knowledge about the nature of collaboration. A holistic conceptual framework was developed for the collaborative process, and the conceptualization process used a systems thinking approach. The author selectively chose conceptualizations and/or research by a limited subset of scholars whose ideas appeared to be the most relevant and useful to explore the type of collaboration studied here; in other words, the selection of the literature was eclectic. Multiple cases were used in this research to understand the factors that are components of collaborative effort among non-profit organizations and the relationships among those factors. This study also investigated the stages of the collaborative process. Data were collected from 54 participants who were partners in collaborative projects funded by the Institute of Museum and Library Services (IMLS). Among these 54 participants, 50 answered the online questionnaire and 38 took part in telephone interviews. The data collected were analyzed using cluster analysis, multidimensional scaling, internal consistency reliability, and descriptive statistics. The component factors of collaboration were grouped under the following seven concepts: trustworthiness, competence, dependency, misunderstanding and/or conflict, complexity, commitment, and mechanism of coordination. This study showed twelve relationships among these factors. For instance, different points of view and partners' capacity to maintain inter-organizational relationships were found to be opposite concepts. In addition, the findings in this study indicate that 84% of participants reported the presence of the five pre-defined stages: execution, networking, definition, relationship, and common evaluation.
- Relevance Thresholds: A Conjunctive/Disjunctive Model of End-User Cognition as an Evaluative Process
- This investigation identifies end-user cognitive heuristics that facilitate judgment and evaluation during information retrieval (IR) system interactions. The study extends previous research surrounding relevance as a key construct for representing the value end-users ascribe to items retrieved from IR systems and the perceived effectiveness of such systems. The Lens Model of user cognition serves as the foundation for design and interpretation of the study; earlier research in problem solving, decision making, and attitude formation also contributes to the model and analysis. A self-reporting instrument collected evaluative responses from 32 end-users related to 1432 retrieved items in relation to five characteristics of each item: topical, pertinence, utility, systematic, and motivational levels of relevance. The nominal nature of the data collected led to non-parametric statistical analyses that indicated that end-user evaluation of retrieved items to resolve an information problem at hand is most likely a multi-stage process. That process appears to be a cognitive progression from topic to meaning (pertinence) to functionality (use). Each step in end-user evaluative processing engages a cognitive hierarchy of heuristics that includes consideration (of appropriate cues), differentiation (the positive or negative aspects of those cues considered), and aggregation (the combination of differentiated cue aspects needed to render an evaluative label of the item in relation to the information problem at hand). While individuals may differ in their judgments and evaluations of retrieved items, they appear to make those decisions by using consistent heuristic approaches.
- Quality Management in Museum Information Systems: A Case Study of ISO 9001-2000 as an Evaluative Technique
- Museums are service-oriented information systems that provide access to information bearing materials contained in the museum's collections. Within museum environments, the primary vehicle for quality assurance and public accountability is the accreditation process of the American Association of Museums (AAM). Norbert Wiener founded the field of cybernetics, employing concepts of information feedback as a mechanism for system modification and control. W. Edwards Deming applied Wiener's principles to management theory, initiating the wave of change in manufacturing industries from production-driven to quality-driven systems. Today, the principles are embodied in the ISO 9000 International Standards for quality management systems (QMS), a globally-recognized set of standards, widely employed as a vehicle of quality management in manufacturing and service industries. The International Organization for Standardization defined a process for QMS registration against ISO 9001 that is similar in purpose to accreditation. This study's goals were to determine the degree of correspondence between elements of ISO 9001 and quality-related activities within museum environments, and to ascertain the relevance of ISO 9001-2000 as a technique of museum evaluation, parallel to accreditation. A content analysis compared museum activities to requirements specified in the ISO 9001-2000 International Standard. The study examined museum environment surrogates which consisted of (a) web sites of nine museum studies programs in the United States and (b) web sites of two museum professional associations, the AAM and the International Council of Museums (ICOM). Data items consisted of terms and phrases from the web sites and the associated context of each item. Affinity grouping of the data produced high degrees of correspondence to the categories and functional subcategories of ISO 9001. 
Many quality-related activities were found at the operational levels of museum environments, although not integrated as a QMS. If activities were unified as a QMS, the ISO 9001 Standard has potential for application as an evaluative technique in museum environments.
- Reading selection as information seeking behavior: A case study with adolescent girls.
- The aim of this research, Reading Selection as Information Seeking Behavior: A Case Study with Adolescent Girls, was to explore how the experience of reading fiction affects adolescent girls aged 13 through 15, and how that experience changes based upon four activities: journaling, blogging, a personal interview, and a focus group session. Each participant reflects upon works of her own choosing that she had recently read. The data are evaluated using content analysis with the goal of developing a relational analysis tool to be used and tested in future research projects. The goal of this research is to use the insights of the field of bibliotherapy together with the insights of the adolescent girls to provide a richer, more robust model of successful information behavior. That is, relevance is a matter of impact on life rather than just a match of subject heading. This work provides a thick description of a set of real-world relevancy judgments. This may serve to illuminate theories and practices for bringing each individual seeker together with appropriate documents. This research offers a new model for relevant information seeking behavior associated with selecting works of essential instructional fiction, as well as a new definition for terminology to describe the results of the therapeutic literary experience. The data from this study, as well as from previous research, suggest that literature (specifically young adult literature) brings the reader to a better understanding of herself and the world around her.
- Perceived features and similarity of images: An investigation into their relationships and a test of Tversky's contrast model.
- The creation, storage, manipulation, and transmission of images have become less costly and more efficient. Consequently, the numbers of images and their users are growing rapidly. This poses challenges to those who organize and provide access to them. One of these challenges is similarity matching. Most current content-based image retrieval (CBIR) systems, which can extract only low-level visual features such as color, shape, and texture, use similarity measures based on geometric models of similarity. However, most human similarity judgment data violate the metric axioms of these models. Tversky's (1977) contrast model, which defines similarity as a feature contrast task and equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, explains human similarity judgments much better than the geometric models. This study tested the contrast model as a conceptual framework to investigate the nature of the relationships between features and similarity of images as perceived by human judges. Data were collected from 150 participants who performed two tasks: an image description task and a similarity judgment task. Qualitative (content analysis) and quantitative (correlational) methods were used to seek answers to four research questions related to the relationships between common and distinctive features and similarity judgments of images, as well as measures of their common and distinctive features. Structural equation modeling, correlation analysis, and regression analysis confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky (1977). 
Tversky's (1977) contrast model, based upon a combination of two methods for measuring common and distinctive features and two methods for measuring similarity, produced statistically significant structural coefficients between the independent latent variables (common and distinctive features) and the dependent latent variable (similarity). This model fit the data well for a sample of 30 (435 pairs of) images and 150 participants (χ2 = 16.97, df = 10, p = .07508, RMSEA = .040, SRMR = .0205, GFI = .990, AGFI = .965). The goodness-of-fit indices showed the model did not significantly deviate from the actual sample data. This study is the first to test the contrast model in the context of information representation and retrieval. It is hoped that the results of the study will provide the foundations for future research that will attempt to further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
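Tversky's (1977) contrast model expresses similarity as s(a, b) = θf(A ∩ B) − αf(A − B) − βf(B − A), a weighted trade-off between common and distinctive features. A minimal sketch over feature sets follows; the weights and the use of simple set cardinality for the salience function f are illustrative assumptions:

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky (1977) contrast model with feature salience f approximated
    by set cardinality: theta*|A & B| - alpha*|A - B| - beta*|B - A|."""
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

# Two hypothetical image descriptions as feature sets: two common features
# raise similarity; three distinctive features lower it.
img1 = {"outdoor", "tree", "water", "person"}
img2 = {"outdoor", "tree", "building"}
print(contrast_similarity(img1, img2))  # 1.0*2 - 0.5*2 - 0.5*1 = 0.5
```

With unequal α and β the measure becomes asymmetric, one of the non-metric properties of human similarity judgments that geometric models cannot capture.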
- Smoothing the information seeking path: Removing representational obstacles in the middle-school digital library.
- Middle school students' interactions within a digital library are explored. Issues of interface features used, obstacles encountered, search strategies and techniques used, and representation obstacles are examined. A mechanism for evaluating users' descriptors is tested, and the effect of augmenting the system's resource descriptions with these descriptors on retrieval is explored. Transaction log data analysis (TLA) was used, with external corroborating achievement data provided by teachers. Analysis was conducted using quantitative and qualitative methods. Coding schemes were developed for the failure analysis, the search strategies and techniques analysis, the extent-of-match analysis between terms in students' questions and their search terms, and the extent-of-match analysis between search terms and the controlled vocabulary. There are five chapters with twelve supporting appendixes. Chapter One presents an introduction to the problem and reviews the pilot study. Chapter Two presents the literature review and theoretical basis for the study. Chapter Three describes the research questions, hypotheses, and methods. Chapter Four presents findings. Chapter Five presents a summary of the findings and their support of the hypotheses. Unanticipated findings, limitations, speculations, and areas of further research are indicated. Findings indicate that middle school users interact with the system in various sequences of patterns. User groups' interactions and scaffold use are influenced by the teacher's objectives for using the ADL. Users preferred single-word searches over Boolean, phrase, or natural language searches. Users tended to repeat the same exact search rather than use the advanced scaffolds. A high percentage of users attempted at least one search that included spelling or typographical errors, punctuation, or sequentially repeated searches. Search terms matched the DQs in some instantiation in 54% of all searches.
Terms used by the system to represent the resources do not adequately represent the user groups' information needs; however, using student-generated keywords to augment resource descriptions can have a positive effect on retrieval.
- Police officers' adoption of information technology: A case study of the Turkish POLNET system.
- Police agencies, important branches of government that are vital to the community, are organizations with high usage rates of information technology systems, since they operate in the intelligence sector and thus have strong information incentives. Information technologies can not only develop the intra- and inter-organizational relationships of law enforcement agencies but also improve the efficiency and effectiveness of police officers and agencies without adding costs. Thus, identifying the factors that influence police officers' adoption of information technology can help predict and determine how information technology will contribute to the social organization of policing in terms of effectiveness and efficiency gains. A research framework was developed by integrating three models, the theory of planned behavior (TPB), the technology acceptance model (TAM), and diffusion of innovation theory (DOI), and adding two other factors, facility and voluntariness, to better determine the factors affecting the implementation and adoption of the POLNET software system used by the Turkish National Police (TNP). The integrated model used in this study covers not only basic technology acceptance factors but also factors related to policing, and it attempts to account for cultural differences by considering important aspects of Turkish culture. A cross-sectional survey was conducted among TNP officers using the POLNET system. LISREL 8.5® analysis of the hypothesized model resulted in a good model fit; 13 of the 15 hypotheses were supported.
- Solutions for Dynamic Channel Assignment and Synchronization Problem for Distributed Wireless Multimedia System
- Recent advances in mobile computing and distributed multimedia systems allow mobile hosts (clients) to access wireless multimedia data anywhere and at any time. When accessing multimedia information on distributed multimedia servers from wireless personal communication service systems, channel assignment and synchronization problems must be solved efficiently. Demand for mobile telephone service has been growing rapidly, while the electromagnetic spectrum of frequencies allocated for this purpose remains limited. Any solution to the channel assignment problem is subject to this limitation, as well as to the interference constraint between adjacent channels in the spectrum. Channel allocation schemes provide flexible and efficient access to bandwidth in wireless and mobile communication systems. In this dissertation, two algorithms are proposed to ensure and facilitate mobile client access to multimedia objects: an efficient distributed algorithm for dynamic channel allocation based upon a mutual exclusion model, and an efficient distributed synchronization algorithm using a Quasi-sink for wireless and mobile multimedia systems. The algorithms' performance with several channel systems using different types of call arrival patterns is determined analytically, and a set of simulation experiments evaluates the performance of the scheme in terms of message complexity and buffer usage at each frame arrival time.
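The abstract does not reproduce the distributed algorithm itself, but the core channel assignment constraint it addresses can be illustrated with a deliberately simplified, centralized greedy sketch (the function, cell names, and neighbor structure below are hypothetical): a cell may not take a channel within `min_sep` of any channel already held by a neighboring cell, modeling adjacent-channel interference.

```python
def assign_channels(cells, neighbors, num_channels, min_sep=2):
    """Greedy channel assignment: neighboring cells must use channels
    separated by at least `min_sep` to avoid adjacent-channel interference."""
    assignment = {}
    for cell in cells:
        # Channels already claimed by this cell's neighbors
        used = [assignment[n] for n in neighbors[cell] if n in assignment]
        for ch in range(num_channels):
            if all(abs(ch - u) >= min_sep for u in used):
                assignment[cell] = ch
                break
        else:
            return None  # no channel satisfies the interference constraints
    return assignment

# Three cells in a line: A—B—C; B interferes with both A and C
print(assign_channels(["A", "B", "C"],
                      {"A": ["B"], "B": ["A", "C"], "C": ["B"]},
                      num_channels=5))  # → {'A': 0, 'B': 2, 'C': 0}
```

A distributed dynamic scheme, such as the mutual-exclusion-based one proposed in the dissertation, must reach equivalent conflict-free assignments without any such central view, coordinating through message passing among the cells.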
- The gathering and use of information by fifth grade students with access to Palm® handhelds.
- Handheld computers may make a one-to-one computer-to-student ratio possible. The impact of Palm® (Palm, Inc.) handhelds on information acquisition and use by 5th grade students in a North Texas school during a class research project was investigated. Five research questions were examined using observation, interviews, surveys, and document analysis. Are there differences in information gathering and use with the Palm among gifted, dyslexic, and regular learners? What relevance criteria do students use to evaluate a web site when deciding whether to download the site to the Palm and, afterwards, whether to use the downloaded site's information in the report? How do the Palms affect the writing process? Do the animations and concept maps produced on the Palm demonstrate understanding of the intended concepts? Are there significant differences in results (i.e., final product grades) between Palm users and non-Palm users? Three groups of learners in the class, gifted, dyslexic, and regular learners, participated in the study. The regular and dyslexic students reported using Web sites that had not been downloaded to the Palm. Students reported several factors used to decide whether to download Web sites, but the predominant deciding factor was the amount of information. The students used a combination of writing on paper and on the Palm in preparing the report. Many students flipped between two programs, FreeWrite and Fling-It, finding information and then writing the facts into the report. The peer review process was more difficult with the Palm. Most students made more grammatical errors in this research report than in previous research projects. By creating animated drawings on the Palm handheld, the students demonstrated their understanding of the invention, though sometimes the media or the students' drawing skills limited the quality of the final product.
Creating the animations was motivational and addressed different learning styles than a written report alone. No statistically significant difference was found in the scores of the three 6+1 Traits categories; however, the Palm users did not meet the page-length requirement for the research project, while the majority of the control class did.
- Global response to cyberterrorism and cybercrime: A matrix for international cooperation and vulnerability assessment.
- Cyberterrorism and cybercrime present new challenges for law enforcement and policy makers. Due to their transnational nature, a real and sound response to such threats requires international cooperation involving the participation of all concerned parties in the international community. However, the vulnerability that emerges from increased reliance on technology, the lack of legal measures, and the lack of cooperation at the national and international levels represents a real obstacle to an effective response to these threats. In sum, the general problem is the lack of global consensus on how to respond to cyberterrorism and cybercrime. Terrorists and cybercriminals will exploit vulnerabilities, whether technical, legal, political, or cultural. Such a broad range of vulnerabilities can be dealt with only by comprehensive cooperation, which requires efforts at both the national and international levels. The "Vulnerability-Comprehensive Cooperation-Freedom Scale," or "Ozeren Scale," was constructed from variables identified on the basis of expert opinions. The study also presents a typology of cyberterrorism comprising three general classifications: disruptive and destructive information attacks; facilitation of technology to support the ideology; and communication, fundraising, recruitment, and propaganda (C-F-R-P). Such a typology is expected to help those in decision-making and investigative positions, as well as academicians in the area of terrorism. The matrix for international cooperation and vulnerability assessment is expected to serve as a model for a global response to cyberterrorism and cybercrime.
- Information Needs of Art Museum Visitors: Real and Virtual
- Museums and libraries are large repositories of human knowledge and human culture, with similar missions and goals in distributing accumulated knowledge to society. Current digitization projects allow both museums and libraries to reach a broader audience and share their resources with a variety of users. While studies of information seeking behavior, retrieval systems, and metadata have a long history in library science, such research in museum environments is at an early, experimental stage. Few studies concern the information seeking behavior and needs of virtual museum visitors, especially with respect to the use of images from museum collections available on the Web. The current study identifies the preferences of a variety of user groups regarding specific information on current exhibits, metadata for museum collections, and the use of multimedia. Studying the information seeking behavior of user groups of museum digital collections, or cultural collections, allows examination and analysis of users' information needs and of the organization of cultural information, including descriptive metadata and the quantity of information that may be required. In addition, the study delineates information needs that different categories of users may have in common: teachers in high schools, students in colleges and universities, museum professionals, art historians and researchers, and the general public. This research also compares the informational and educational needs of real visitors with those of virtual visitors. Educational needs of real visitors are based on various studies conducted and summarized by Falk and Dierking (2000) and on an evaluation of art museum websites previously conducted to support the current study.
- Information systems assessment: development of a comprehensive framework and contingency theory to assess the effectiveness of the information systems function.
- The purpose of this research is to develop a comprehensive IS assessment framework, using existing IS assessment theory as a base and incorporating suggestions from other disciplines. To validate the framework and to begin the investigation of current IS assessment practice, a survey instrument was developed. A small group of subject matter experts evaluated and improved the instrument, which was then further evaluated using a small sample of IS representatives. Results of this research include a reexamination of the IS function measurement problem using new frameworks of analysis, yielding (a) guidance for the IS manager or executive on which IS measures might best fit their organization, (b) further verification of the important measures most widely used by IS executives, (c) a comprehensive, theoretically derived IS assessment framework, and (d) an enhancement of IS assessment theory through the incorporation of ideas from actual practice. The body of knowledge gains a comprehensive IS assessment framework that can be further tested for usefulness and applicability. Future research is recommended to substantiate and improve on these findings. Chapter 2 is a complete survey of prior research, subdivided by relevant literature divisions such as organizational effectiveness, quality management, and IS assessment. Chapter 3 includes development of and support for the research questions, the IS assessment framework, and the research model. Chapter 4 describes how the research was conducted; it includes a brief justification for the research approach, a description of how the framework was evaluated, a description of how the survey instrument was developed and evaluated, a description of the participants and how they were selected, a synopsis of the data collection procedures, a brief description of follow-up procedures, and a summary. Chapter 5 presents the results of the research. Chapter 6 is a summary and conclusion of the research.
Finally, the appendices include definitions of terms and copies of the original and improved survey instruments.
- E-Learning and In-Service Training: An Exploration of the Beliefs and Practices of Trainers and Trainees in the Turkish National Police
- This targeted research study, carried out by an officer of the Turkish National Police (TNP), investigated the perceptions and beliefs of TNP trainers and trainees toward the potential adoption and implementation of e-learning technology for in-service police training. Utilizing diffusion of innovation theory (DOI) (Rogers, 1995) and the conceptual technology integration process model (CTIM) (Nicolle, 2005), two surveys were administered: one to the trainers and one to the trainees. The factor analyses revealed three shared trainer and trainee perceptions: a positive perception of e-learning, both personally and for the TNP; a belief in the importance of administrative support for e-learning integration; and a belief in the importance of appropriate resources to facilitate integration and maintain implementation. Three major recommendations were made for the TNP. First, the research findings could be used as a road map by the TNP Education Department to provide a more flexible system for disseminating in-service training information. Second, two-way channels of communication should be established between the administration and TNP personnel to efficiently operationalize the adoption and integration of e-learning technology. Third, the administration should provide the necessary hardware, software, and technical support.
- Intangible Qualities of Rare Books: Toward a Decision-Making Framework for Preservation Management in Rare Book Collections, Based Upon the Concept of the Book as Object
- For rare book collections, a considerable challenge is involved in evaluating collection materials in terms of their inherent value, which includes the textual and intangible information the materials provide for the collection's users. Preservation management in rare book collections is a complex and costly process. As digitization and other advances in surrogate technology have provided new forms of representation, new dilemmas have arisen in weighing the rare book's inherently valuable characteristics against the possibly lower financial costs of surrogates. No model has been in wide use to guide preservation management decisions. An initial iteration of such a model is developed here, based on Delphi-like iterative questioning of a group of experts in the field of rare books. The results are used to synthesize a preservation management framework for rare book collections, and a small-scale test of the framework has been completed through two independent analyses of five rare books in a functioning collection. Utilizing a standardized template for making preservation decisions offers a variety of benefits. Preservation decisions may include prioritizing action upon the authentic objects or developing and maintaining surrogates in lieu of retaining costly original collection materials. The framework constructed in this study provides a method for reducing the subjectivity of preservation decision-making and facilitating the development of a standard of practice for preservation management within rare book collections.
- Makeshift Information Constructions: Information Flow and Undercover Police
- This dissertation presents the social virtual interface (SVI) model, which was born of the need for a viable model of the complex interactions, information flow, and information seeking behaviors among undercover officers. The SVI model was created from a combination of philosophies and models in the literature of information seeking, communication, and philosophy. The questions this research answers are as follows: (1) Can models and concepts familiar to or drawn from information science be used to construct a model of undercover police work that effectively represents the large number of entities and relationships? (2) Will undercover police officers recognize this model as realistic? The study used a descriptive qualitative research method to examine these questions. An online survey and a hard copy survey were distributed to police officers who had worked in an undercover capacity. In addition, groups of officers were interviewed about their opinions of the SVI model. The data gathered were analyzed, and the model was validated by the results of the survey and interviews.
- Assessment of a Library Learning Theory by Measuring Library Skills of Students Completing an Online Library Instruction Tutorial
- This study is designed to reveal whether students acquire the domains and levels of library skills discussed in a library skills learning theory after participating in an online library instruction tutorial. The acquisition of the library skills is demonstrated through a review of scores on the online tutorial quizzes, responses to a library skills questionnaire, and bibliographies of course research papers. Additional areas studied are the characteristics of the participants enrolled in traditional and online courses at a community college and the possible influence of these characteristics on the demonstrated learning of library skills. Multiple measurement methods, identified through an assessment of the library instruction literature, are used to verify the effectiveness of the library skills theory and to strengthen the validity and reliability of the study results.
- CT3 as an Index of Knowledge Domain Structure: Distributions for Order Analysis and Information Hierarchies
- The problem with which this study is concerned is articulating all possible CT3 and KR21 reliability measures for every case of a 5x5 binary matrix (32,996,500 possible matrices). The study has three purposes. The first is to calculate CT3 for every matrix and compare the results to the proposed optimum range of .3 to .5. The second is to compare the results of the CT3 and KR21 reliability calculations. The third is to calculate CT3 and KR21 on every strand of a class test whose item set has been reduced using the difficulty strata identified by Order Analysis. The study was conducted by writing a computer program to articulate all possible 5x5 matrices and to calculate the CT3 and KR21 reliability measures for each. The nonparametric technique of Order Analysis was applied to two sections of test items to stratify the items into difficulty levels, which were used to reduce the item set from 22 to 9 items. All possible strands, or chains, of these items were identified so that both reliability measures (CT3 and KR21) could be calculated. One major finding indicates that .3 to .5 is a desirable range for CT3 (cumulative p=.86 to p=.98) if cumulative frequencies are measured. A second major finding is that the KR21 reliability measure produced an invalid result more than half the time. The last major finding is that CT3, rescaled to range between 0 and 1, supports DeVellis's guidelines for reliability measures. The major conclusion is that CT3 is the better measure of reliability, since it considers both inter- and intra-item variances.
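The KR-21 computation run over each matrix can be sketched as follows (a minimal illustration; the abstract's CT3 formula and the exhaustive enumeration program are not reproduced, and the function name and sample matrices are assumptions). KR-21 needs only the item count and the mean and variance of total scores, which is also why it can produce out-of-range, invalid values of the kind the study observed:

```python
def kr21(matrix):
    """KR-21 reliability for a binary response matrix (rows = people, cols = items)."""
    k = len(matrix[0])                     # number of items
    totals = [sum(row) for row in matrix]  # each person's total score
    n = len(totals)
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance of totals
    if var == 0:
        return None                        # undefined when all totals are equal
    return (k / (k - 1)) * (1 - mean * (k - mean) / (k * var))

responses = [[1, 1, 1, 1, 1],
             [1, 1, 1, 0, 0],
             [1, 0, 0, 0, 0],
             [0, 0, 0, 0, 0],
             [1, 1, 0, 0, 0]]
print(kr21(responses))  # → 0.7297... for this well-spread score pattern
```

With a poorly spread pattern the same formula drops far below zero (for totals of 3 and 2 across five items it yields -5.0), illustrating how easily KR-21 returns an invalid reliability value.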
- The effectiveness of using LEGO Mindstorms robotics activities to influence self-regulated learning in a university introductory computer programming course.
- The research described in this dissertation examines the possible link between self-regulated learning and LEGO Mindstorms robotics activities in teaching concepts in an introductory university computer programming course. The areas of student motivation, learning strategies, and mastery of course objectives are investigated. In all three cases, analysis failed to reveal any statistically significant differences between the traditional control group and the experimental LEGO Mindstorms group, as measured by the Motivated Strategies for Learning Questionnaire and course exams. Possible reasons for the lack of positive results include technical problems and limitations of the LEGO Mindstorms systems, the limited number and availability of robots outside of class, the limited amount of time during the semester for the robotics activities, and a possible difference in effectiveness based on gender. Responses to student follow-up questions, however, suggest that at least some of the students really enjoyed the LEGO activities. As with any teaching tool or activity, there are numerous ways in which LEGO Mindstorms can be incorporated into learning. This study explores whether LEGO Mindstorms are an effective tool for teaching introductory computer programming at the university level and how these systems can best be utilized.
- Empowering agent for Oklahoma school learning communities: An examination of the Oklahoma Library Improvement Program
- The purposes of this study were to determine the initial impact of the Oklahoma Library Media Improvement Grants on Oklahoma school library media programs; to assess whether the grants continue to contribute to Oklahoma school learning communities; and to examine possible relationships between school library media programs and student academic success. It also sought to document the history of the Oklahoma Library Media Improvement Program, 1978-1994, and to increase awareness of its influence upon Oklahoma school library media programs. Methods of data collection included examining Oklahoma Library Media Improvement Program archival materials, surveying 1703 school principals in Oklahoma, and interviewing program participants. Data collection took place over a one-year period. Data analyses were conducted in three primary phases. In the first, descriptive statistics and frequencies were disaggregated to examine mean scores as they related to money spent on school library media programs, opinions of school library media programs, and possible relationships between school library media programs and student academic achievement. Analysis of variance was used in the second phase to determine whether any variation between means was significant as related to Oklahoma Library Improvement Grants, time spent in the library media center by library media specialists, principal gender, opinions of library media programs, student achievement indicators, and the region of the state in which the respondent was located. The third phase compared longitudinal data collected in the 2000 survey with past data.
The primary results indicated that students in Oklahoma schools having a centralized library media center, served by a full-time library media specialist, and having received one or more Library Media Improvement Grants scored significantly higher academically than students in schools with none of these characteristics. Students in schools having even one of these components scored higher academically than students in schools with none of them.
- Children's color association for digital image retrieval.
- In the field of information sciences, attention has focused on developing mature information retrieval systems that automatically abstract information from the contents of information resources, such as books, images, and films. As a subset of information retrieval research, content-based image retrieval systems automatically abstract elementary information from images in terms of colors, shapes, and texture. Color is the feature most commonly used in similarity measurement for content-based image retrieval systems. Human-computer interface design and image retrieval methods benefit from studies grounded in an understanding of their potential users. Today's children are exposed to digital technology at a very young age, and they will be the major technology users in five to ten years. This study focuses on children's color perception and color association with a controlled set of digital images. Survey research was used to gather data for this exploratory study of color association in a population of children, third to sixth graders. An online questionnaire with fifteen images was used to collect quantitative data on children's color selections. Face-to-face interviews investigated the rationale and factors affecting the color choices and the children's interpretations of the images. The findings indicate that the color children associated with an image was the one that took up the most space, or the biggest part, of the image. Another powerful factor in color selection was the vividness or saturation of the color. Colors that stood out the most generally attracted the greatest attention. Preferences for a color, character, or subject matter in an image also strongly affected children's color associations with images. One of the most unexpected findings was that children would choose a color to replace a color in an image. In general, children saw more things than were actually represented in the images.
However, the children's interpretation of the images had little effect on their color selections.
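The "largest area wins" finding has a simple computational analogue in content-based retrieval. As an illustrative sketch (the function name, bucket size, and pixel data below are assumptions, not part of the study), an image's dominant color can be taken as its most frequent quantized pixel value:

```python
from collections import Counter

def dominant_color(pixels, bucket=64):
    """Return the most frequent coarse color in a list of (r, g, b) pixels.
    Each channel is quantized into `bucket`-wide bins so that near-identical
    shades count as the same color; the center of the winning bin is returned."""
    quantized = [(r // bucket, g // bucket, b // bucket) for r, g, b in pixels]
    (top_bin, _), = Counter(quantized).most_common(1)
    return tuple(c * bucket + bucket // 2 for c in top_bin)

# Three mostly-red pixels outnumber two mostly-blue ones
sample = [(250, 10, 10)] * 3 + [(10, 10, 250)] * 2
print(dominant_color(sample))  # → (224, 32, 32), the red bin's center
```

A system tuned to children might weight such an area-based dominant color more heavily than average color histograms, reflecting the judgment pattern the interviews revealed.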
- A Common Representation Format for Multimedia Documents
- Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text, and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have progressed quickly, due in part to rapid progress in computer processing speed. Retrieval of multimedia documents was formerly divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow retrieval to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, no prior methods can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extracting text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed using multi-dimensional scaling, which allows multimedia objects to be represented as vectors on a two-dimensional graph. The distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features.
This effect appears to hold true even when the available text is diffuse and not uniform with the image features. The retrieval system works by representing a multimedia document as a single data structure. CRFMD is applicable to other areas of multimedia document retrieval and processing, such as medical image retrieval, World Wide Web searching, and museum collection retrieval.