UNT Theses and Dissertations - 211 Matching Results

Search Results

Cognitive Playfulness, Innovativeness, and Belief of Essentialness: Characteristics of Educators who have the Ability to Make Enduring Changes in the Integration of Technology into the Classroom Environment.

Description: Research on the adoption of innovation is largely limited to factors affecting immediate change with few studies focusing on enduring or lasting change. The purpose of the study was to examine the personality characteristics of cognitive playfulness, innovativeness, and essentialness beliefs in educators who were able to make an enduring change in pedagogy based on the use of technology in the curriculum within their assigned classroom settings. The study utilized teachers from 33 school districts and one private school in Texas who were first-year participants in the Intel® Teach to the Future program. The research design focused on how cognitive playfulness, innovativeness, and essentialness beliefs relate to a sustained high level of information technology use in the classroom. The research questions were: 1) Are individuals who are highly playful more likely to continue to demonstrate an ability to integrate technology use in the classroom at a high level than those who are less playful? 2) Are individuals who are highly innovative more likely to continue to demonstrate an ability to integrate technology use in the classroom at a high level than those who are less innovative? 3) Are individuals who believe information technology use is critical and indispensable to their teaching more likely to continue to demonstrate an ability to integrate technology use in the classroom at a high level than those who believe it is supplemental and not essential? The findings of the current study indicated that playfulness, innovativeness, and essentialness scores as defined by the scales used were significantly correlated to an individual's sustained ability to use technology at a high level. Playfulness was related to the educator's level of innovativeness, as well. Also, educators who believed the use of technology was critical and indispensable to their instruction were more likely to be able to demonstrate a sustained ...
Date: August 2004
Creator: Dunn, Lemoyne Luette Scott
Partner: UNT Libraries

A Common Representation Format for Multimedia Documents

Description: Multimedia documents are composed of multiple file format combinations, such as image and text, image and sound, or image, text and sound. The type of multimedia document determines the form of analysis for knowledge architecture design and retrieval methods. Over the last few decades, theories of text analysis have been proposed and applied effectively. In recent years, theories of image and sound analysis have been proposed to work with text retrieval systems and have advanced quickly, due in part to rapid gains in computer processing speed. Retrieval of multimedia documents formerly was divided into the categories of image and text, and image and sound. While the standard retrieval process begins from text only, methods are developing that allow the retrieval process to be accomplished simultaneously using text and image. Although image processing for feature extraction and text processing for term extraction are well understood, there are no prior methods that can combine these two features into a single data structure. This dissertation introduces a common representation format for multimedia documents (CRFMD) composed of both images and text. For image and text analysis, two techniques are used: the Lorenz Information Measurement and the Word Code. A new process named Jeong's Transform is demonstrated for extraction of text and image features, combining the two previous measurements to form a single data structure. Finally, this single data structure is analyzed by using multi-dimensional scaling. This allows multimedia objects to be represented on a two-dimensional graph as vectors. The distance between vectors represents the magnitude of the difference between multimedia documents. This study shows that image classification on a given test set is dramatically improved when text features are encoded together with image features.
This effect appears to hold true even when the available ...
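The final analysis step described above (multi-dimensional scaling of the combined feature structure onto a two-dimensional graph, with inter-vector distances standing in for document differences) can be illustrated with classical (Torgerson) MDS. The feature vectors below are invented toy data, not the dissertation's actual Lorenz Information Measurement or Word Code values:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Embed items in k dimensions so Euclidean distances approximate `dist`."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J           # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # take the k largest
    scale = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * scale              # n x k coordinates

# Toy "combined feature" vectors standing in for four multimedia documents.
feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
D = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
coords = classical_mds(D, k=2)
# Distances between the recovered 2-D vectors reproduce the originals.
D2 = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(np.allclose(D, D2))  # True
```

Because the toy features are already two-dimensional, the embedding recovers the distance matrix exactly; with higher-dimensional features the 2-D map is only an approximation.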
Date: December 2002
Creator: Jeong, Ki Tai
Partner: UNT Libraries

A Comparative Analysis of Style of User Interface Look and Feel in a Synchronous Computer Supported Cooperative Work Environment

Description: The purpose of this study is to determine whether the style of a user interface (i.e., its look and feel) has an effect on the usability of a synchronous computer supported cooperative work (CSCW) environment for delivering Internet-based collaborative content. The problem motivating this study is that people who are located in different places need to be able to communicate with one another. One way to do this is by using complex computer tools that allow users to share information, documents, programs, etc. As an increasing number of business organizations require workers to use these types of complex communication tools, it is important to determine how users regard these types of tools and whether they are perceived to be useful. If a tool, or interface, is not perceived to be useful then it is often not used, or used ineffectively. As organizations strive to improve communication with and among users by providing more Internet-based collaborative environments, the users' experience in this form of delivery may be tied to a style of user interface look and feel that could negatively affect their overall acceptance and satisfaction of the collaborative environment. The significance of this study is that it applies the technology acceptance model (TAM) as a tool for evaluating style of user interface look and feel in a collaborative environment, and attempts to predict which factors of that model, perceived ease of use and/or perceived usefulness, could lead to better acceptance of collaborative tools within an organization.
Date: May 2005
Creator: Livingston, Alan
Partner: UNT Libraries

Comparing the Readability of Text Displays on Paper, E-Book Readers, and Small Screen Devices

Description: Science fiction has long promised the digitalization of books. Characters in films and television routinely check their palm-sized (or smaller) electronic displays for fast-scrolling information. However, this very technology, increasingly prevalent in today's world, has not been embraced universally. While the convenience of pocket-sized information pieces has the techno-savvy entranced, the general public still greets the advent of the e-book with a curious reluctance. This lack of enthusiasm seems strange in the face of the many advantages offered by the new medium: vastly superior storage capacity, searchability, portability, lower cost, and instantaneous access. This dissertation addresses the need for research examining reading comprehension and the role emotional response plays in perceived performance on e-document formats as compared to the traditional paper format. This study compares relative reading comprehension on three formats (Kindle, iTouch, and paper) and examines subjects' emotional responses and relative technology exposure as factors that affect how subjects perceive they have performed on those formats. This study demonstrates that, for basic reading comprehension, the medium does not matter. Furthermore, it shows that the less comfortable a person is with technology and with the requested task (in this case, reading), the more they cling to the belief that they will do better on traditional (paper) media, regardless of how well they actually do.
Date: May 2010
Creator: Baker, Rebecca Dawn
Partner: UNT Libraries

A Comparison of Communication Motives of On-Site and Off-Site Students in Videoconference-Based Courses

Description: The objective of this investigation is to determine whether student site location in an instructional videoconference is related to students' motives for communicating with their instructor. The study is based, in part, on the work of Martin et al., who identify five separate student-teacher communication motives. These motives, or dimensions, are termed relational, functional, excuse, participation, and sycophancy, and are measured by a 30-item questionnaire. Several communication-related theories were used to predict differences between on-site and off-site students. Media richness theory was used, foundationally, to explain differences between mediated and face-to-face communication, and other theories, such as uncertainty reduction theory, were used in conjunction with media richness theory to predict specific differences. Two hundred eighty-one completed questionnaires were obtained from Education and Library and Information Science students in 17 separate course-sections employing interactive video at the University of North Texas during the Spring and Summer semesters of the 2001/2002 school year. This study concludes that off-site students in an instructional videoconference are more likely than their on-site peers to report being motivated to communicate with their instructor for participation reasons. If off-site students are more motivated than on-site students to communicate as a means to participate, then it may be important for instructors to watch for actual differences in participation levels, and instructors may need to be well versed in pedagogical methods that attempt to increase participation. The study also suggests that current teaching methods being employed in interactive video environments may be adequate with regard to functional, excuse-making, relational, and sycophantic communication.
Date: August 2002
Creator: Massingill, K.B.
Partner: UNT Libraries

A Complex Systems Model for Understanding the Causes of Corruption: Case Study - Turkey

Description: This dissertation attempts to draw an explanatory interdisciplinary framework that clarifies the causes of systemic corruption. Following an extensive review of the political science, economics, and sociology literatures on the issue, a complex systems theoretical model is constructed. A political system consists of five main components: society, interest aggregators, the legislature, the executive, and the private sector, along with the human actors in these domains. It is hypothesized that when the legitimacy level of the system is low and the morality of the systemic actors is flawed, certain political, social, and economic incentives and opportunities within the structure of the systemic components may, individually or as a group, trigger corrupt transactions between the actors of the system. If left untouched, corruption may spread through the system by repetition and social learning, eventually becoming a source of further corruption itself. By eroding the already weak legitimacy and morality, it may increase the risk of corruption even further. This theoretical explanation is used to study the causes of systemic corruption in the Turkish political system. Under the guidance of complex systems theory, the initial systemic conditions (the legacy of Turkey's predecessor, the Ottoman Empire) are evaluated first, and then the political, social, and economic factors presumed to breed corruption in contemporary Turkey are investigated. In this section, special focus is given to the formation and operation of amoral social networks and their contribution to the entrenchment of corruption within the system. Based upon the findings of the case study, the theoretical model informed by the literature is revised: thirty-five system- and actor-level variables are identified as related to systemic corruption, and the nature of the causality between them and corruption is explained.
Although the results of this study cannot be academically generalized for obvious reasons, the analytical framework ...
Date: August 2005
Creator: Yasar, Muhammet Murat
Partner: UNT Libraries

Computer Support Interactions: Verifying a Process Model of Problem Trajectory in an Information Technology Support Environment.

Description: Observations in the information technology (IT) support environment and generalizations from the literature regarding problem resolution behavior indicate that computer support staff seldom store reusable solution information effectively for IT problems. A comprehensive model of the processes encompassing problem arrival and assessment, expertise selection, problem resolution, and solution recording has not been available to facilitate research in this domain. This investigation employed the findings from a qualitative pilot study of IT support staff information behaviors to develop and explicate a detailed model of problem trajectory. Based on a model from clinical studies, this model encompassed a trajectory scheme that included the communication media, characteristics of the problem, decision points in the problem resolution process, and knowledge creation in the form of solution storage. The research design included the administration of an extensive scenario-based online survey to a purposive sample of IT support staff at a medium-sized state-supported university, with additional respondents from online communities of IT support managers and call-tracking software developers. The investigator analyzed 109 completed surveys and conducted email interviews of a stratified nonrandom sample of survey respondents to evaluate the suitability of the model. The investigation employed mixed methods including descriptive statistics, effects size analysis, and content analysis to interpret the results and verify the sufficiency of the problem trajectory model. The study found that expertise selection relied on the factors of credibility, responsibility, and responsiveness. Respondents referred severe new problems for resolution and recorded formal solutions more often than other types of problems, whereas they retained moderate recurring problems for resolution and seldom recorded those solutions. 
Work experience above and below the 5-year mark affected decisions to retain, refer, or defer problems, as well as solution storage and broadcasting behaviors. The veracity of the problem trajectory model was verified and it was found to be an ...
Date: December 2006
Creator: Strauss, Christopher Eric
Partner: UNT Libraries

A Conceptual Map for Understanding the Terrorist Recruitment Process: Observation and Analysis of Turkish Hezbollah Terrorist Organizations.

Description: Terrorism is a historical problem; however, it has become one of the biggest problems of the 21st century. September 11 and the subsequent Madrid, Istanbul, and London attacks showed that it is the most significant problem threatening world peace and security. Governments have started to deal with terrorism by improving security measures and making new investments to stop terrorism. Most of the focus of governments and scholars is on the immediate threats and causes of terrorism, instead of on long-term solutions such as the root causes and underlying reasons of terrorism and the recruitment style of terrorist organizations. If terrorist recruitment does not stop, then it is safe to say terrorist activities cannot be stopped. This study focused on the recruitment process by observing two different terrorist organizations, DHKP/C and Turkish Hezbollah. The researcher brings 13 years of field experience and first-person data gathered from inside the terrorist organizations. The research questions of this study were: (i) How can an individual be prevented from joining or carrying out terrorist activities?; (ii) What factors are correlated with joining a terrorist organization?; (iii) What are the recruitment processes of the DHKP/C, PKK, and Turkish Hezbollah?; (iv) Is there any common process of becoming a member of these three terrorist organizations?; and (v) What are the similarities and differences among these terrorist organizations? As a result of this analysis, a terrorist recruitment process map was created. With the help of this map, social organizations such as families and schools may be able to identify ways to prevent individuals from joining terrorist organizations. This map will also be helpful for government organizations, such as counterterrorism and intelligence agencies, in achieving the same goal.
Date: August 2007
Creator: Teymur, Samih
Partner: UNT Libraries

Constraints on Adoption of Innovations: Internet Availability in the Developing World.

Description: In a world that is increasingly united in time and distance, I examine why the world is increasingly divided socially, economically, and digitally. Using data for 35 variables from 93 countries, I separate the countries into groups of 31 each by gross domestic product per capita. These groups of developed, lesser developed, and least developed countries are used in comparative analysis. Through a review of relevant literature and tests of bivariate correlation, I select eight key variables that are significantly related to information communication technology development and to human development. For this research, adoption of the Internet in the developing world is the innovation of particular interest. Thus, for comparative purposes, I chose Internet users per 1,000 persons per country and the Human Development Index as the dependent variables upon which the independent variables are regressed. Although Internet users are few in number in the least developed countries, I find Internet use to be the most powerful influence on human development for the poorest countries. The research focuses on key obstacles as well as variables of opportunity for Internet usage in developing countries. The greatest obstacles are in fact related to Internet availability and the cost/need ratio for infrastructure expansion. However, innovations for expanded Internet usage in developing countries are expected to show positive results for increased Internet usage, as well as for greater human development and human capital. In addition to the diffusion of innovations in terms of the Internet, the diffusion of cultures through migration is also discussed in terms of the effect on social capital and the drain on human capital from developing countries.
Access: This item is restricted to the UNT Community Members at a UNT Libraries Location.
Date: December 2006
Creator: Stedman, Joseph B.
Partner: UNT Libraries

Convenience to the Cataloger or Convenience to the User?: An Exploratory Study of Catalogers’ Judgment

Description: This mixed-method study explored cataloger’s judgment through the presence of text as entered by catalogers for the 11 electronic resource items in the National Libraries test for Resource Description and Access (RDA). Although the literature discusses cataloger’s judgment and suggests that cataloging practice based on the new cataloging code, RDA, will rely more heavily on cataloger’s judgment, the topic of cataloger’s judgment in RDA cataloging had not been formally studied. The purpose of this study was to examine the differences and similarities in the MARC records created as part of the RDA National Test and to determine whether the theory of bounded rationality could explain cataloger’s judgment based on the constructs of cognitive and temporal limits. This goal was addressed through a content analysis of the MARC records and various statistical tests (Pearson’s chi-square, Fisher’s exact, and Cramér’s V). Analysis of 217 MARC records was performed on seven elements of the bibliographic record. This study found both similarities and differences among the various groups of participants, and there are indications that both support and refute the assertion that catalogers make decisions based on the constructs of time and cognitive ability. Future research is needed to determine whether bounded rationality can explain cataloger’s judgment. The findings from this research have implications for the cataloging community through the provision of training opportunities for catalogers, evaluating workflows, ensuring the proper indexing of bibliographic records for discovery, and recommended edits to RDA.
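The chi-square and Cramér's V statistics named above can be computed directly from a contingency table; a minimal sketch, using an invented 2x2 table of counts (the numbers are hypothetical, not the study's data, and no Yates continuity correction is applied):

```python
import numpy as np

def cramers_v(table):
    """Pearson chi-square statistic and Cramér's V effect size for an r x c table."""
    n = table.sum()
    # Expected counts under independence: (row total x column total) / n
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, c = table.shape
    return chi2, np.sqrt(chi2 / (n * (min(r, c) - 1)))

# Hypothetical counts: two groups of catalogers (rows) vs. whether an
# optional element was present in the bibliographic record (columns).
table = np.array([[40, 10],
                  [25, 25]])
chi2, v = cramers_v(table)
print(round(chi2, 2), round(v, 2))  # 9.89 0.31
```

Cramér's V rescales chi-square to a 0-1 range, which is what makes it usable as an effect-size measure alongside the significance tests.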
Date: May 2015
Creator: Hasenyager, Richard Lee, Jr.
Partner: UNT Libraries

Conversational Use of Photographic Images on Facebook: Modeling Visual Thinking on Social Media

Description: Modeling the "thick description" of photographs began at the intersection of personal and institutional descriptions. Comparing institutional descriptions of particular photos that were also used in personal online conversations was the initial phase. Analyzing conversations that started with a photographic image from the collection of the Library of Congress (LC) or the collection of the Manchester Historic Association (MHA) provided insights into how cultural heritage institutions could enrich the description of photographs by using informal descriptions such as those applied by Facebook users. Taking photos of family members, friends, places, and interesting objects is something people do often in their daily lives. Some photographic images are stored, and some are shared with others at gatherings, occasions, and holidays. Face-to-face conversations about remembering some of the details of photographs and the events they record are themselves rarely recorded. Digital cameras make it easy to share personal photos in Web conversations and to duplicate old photos and share them on the Internet. The World Wide Web even makes it simple to insert images from cultural heritage institutions in order to enhance conversations. Images have been used as tokens within conversations along with the sharing of information and background knowledge about them. The recorded knowledge from conversations using photographic images on Social Media (SM) has resulted in a repository of rich descriptions of photographs that often include information of a type that does not result from standard archival practices. Closed group conversations on Facebook among members of a community of interest/practice often involve the use of photographs to start conversations, convey details, and initiate storytelling about objects, events, and people.
Modeling of the conversational use of photographic images on SM developed from the exploratory analyses of the historical photographic images of the Manchester, NH group on Facebook. The model was influenced by the ...
Date: May 2016
Creator: Albannai, Talal N.
Partner: UNT Libraries

Costly Ignorance: The Denial of Relevance by Job Seekers: A Case Study in Saudi Arabia

Description: Job centers aid businesses seeking qualified employees and assist job seekers in selecting and contacting employment and training services. Job seekers are also offered the opportunity to assess their skills, abilities, qualifications, and readiness. Furthermore, job centers ensure that job seekers comply with the requirements they must meet to benefit from job assistance programs such as unemployment insurance. Yet claimants often procrastinate and/or suspend their job search efforts even though such actions can cost them their free time and entitlements; more importantly, they may lose the opportunity to take advantage of the free information, services, training, and financial assistance for getting a job for which they have already made a claim. The current work looks to Chatman's "small worlds" work, Johnson's comprehensive model of information seeking, and Wilson's "costly ignorance" construct for contributions to understanding such behavior. Identification of a particular trait or set of traits of job seekers during periods of unemployment will inform a new Job Seeking Activities Model (JSAM). This study purposely examines job seeker information behavior and the factors which influence job seekers' behavior, in particular, family tangible support as a social norm effect. A mixed method, using questionnaires for job hunting completers and non-completers and interviews for experts, was employed for data collection. Quantitative data analysis provided the Cronbach α coefficient, Pearson's product-moment correlation, an independent-sample t-test, effect sizes, and binary logit regression. The qualitative data generated from the interview transcripts were color coded by theme and subtheme. Finally, simultaneous triangulation was carried out to confirm or contradict the results from each method.
The findings show that social norms, particularly uncontrolled social support provided by their families, are more likely to make job seekers ignore the relevant information about jobs available to them in favor ...
Date: December 2016
Creator: Alahmad, Badr Suleman
Partner: UNT Libraries

Coyote Ugly Librarian: A Participant Observer Examination of Knowledge Construction in Reality TV.

Description: Reality TV is the most popular genre of television programming today. The number of reality television shows has grown exponentially over the last fifteen years since the premiere of The Real World in 1992. Although reality TV uses styles similar to those used in documentary film, the “reality” of the shows is questioned by critics and viewers alike. The current study focuses on the “reality” that is presented to viewers and how that “reality” is created and may differ from what the participants of the shows experience. I appeared on two reality shows, Faking It and That's Clever, and learned a great deal as a participant observer. Within the study, I outline my experience and demonstrate how editing changed the reality I experienced into what was presented to the viewers. O'Connor's (1996) representation context web serves as a model for the realities created through reality television. People derive various benefits from watching reality TV. Besides the obvious entertainment value of reality TV, viewers also gather information via this type of programming. Viewers want to see real people on television reacting to unusual circumstances without the use of scripts. By surveying reality TV show viewers and participants, this study gives insight into how real the viewers believe the shows are and how authentic they actually are. If these shows are presented as reality, viewers are probably taking what they see as historical fact. The results of the study indicate more must be done so that the “reality” of reality TV does not misinform viewers.
Date: May 2007
Creator: Holmes, Haley K.
Partner: UNT Libraries

Creating a Criterion-Based Information Agent Through Data Mining for Automated Identification of Scholarly Research on the World Wide Web

Description: This dissertation creates an information agent that correctly identifies Web pages containing scholarly research approximately 96% of the time. It does this by analyzing the Web page with a set of criteria, and then uses a classification tree to arrive at a decision. The criteria were gathered from the literature on selecting print and electronic materials for academic libraries. A Delphi study was done with an international panel of librarians to expand and refine the criteria until a list of 41 operationalizable criteria was agreed upon. A Perl program was then designed to analyze a Web page and determine a numerical value for each criterion. A large collection of Web pages was gathered comprising 5,000 pages that contain the full work of scholarly research and 5,000 random pages, representative of user searches, which do not contain scholarly research. Datasets were built by running the Perl program on these Web pages. The datasets were split into model building and testing sets. Data mining was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. The models were created with the model datasets and then tested against the test dataset. Precision and recall were used to judge the effectiveness of each model. In addition, a set of pages that were difficult to classify because of their similarity to scholarly research was gathered and classified with the models. The classification tree created the most effective classification model, with a precision ratio of 96% and a recall ratio of 95.6%. However, logistic regression created a model that was able to correctly classify more of the problematic pages. This agent can be used to create a database of scholarly research published on the Web. In addition, the technique can be used to create a ...
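The precision and recall ratios used above to judge the models reduce to simple counts over the classifier's decisions; a minimal sketch with made-up labels (1 = scholarly research page), not the dissertation's actual test set:

```python
def precision_recall(predicted, actual):
    """Precision and recall for binary labels (1 = scholarly page)."""
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))  # true positives
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))  # false positives
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))  # false negatives
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical classifier output over 10 test pages.
actual    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
p, r = precision_recall(predicted, actual)
print(p, r)  # 0.8 0.8
```

Precision penalizes non-scholarly pages that slip into the results; recall penalizes scholarly pages the agent misses, which is why the study reports both.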
Date: May 2000
Creator: Nicholson, Scott
Partner: UNT Libraries

CT3 as an Index of Knowledge Domain Structure: Distributions for Order Analysis and Information Hierarchies

Description: The problem with which this study is concerned is articulating all possible CT3 and KR21 reliability measures for every case of a 5x5 binary matrix (32,996,500 possible matrices). The study has three purposes. The first purpose is to calculate CT3 for every matrix and compare the results to the proposed optimum range of .3 to .5. The second purpose is to compare the results from the calculation of KR21 and CT3 reliability measures. The third purpose is to calculate CT3 and KR21 on every strand of a class test whose item set has been reduced using the difficulty strata identified by Order Analysis. The study was conducted by writing a computer program to articulate all possible 5 x 5 matrices. The program also calculated CT3 and KR21 reliability measures for each matrix. The nonparametric technique of Order Analysis was applied to two sections of test items to stratify the items into difficulty levels. The difficulty levels were used to reduce the item set from 22 to 9 items. All possible strands or chains of these items were identified so that both reliability measures (CT3 and KR21) could be calculated. One major finding of this study indicates that .3 to .5 is a desirable range for CT3 (cumulative p=.86 to p=.98) if cumulative frequencies are measured. A second major finding is that the KR21 reliability measure produced an invalid result more than half the time. The last major finding is that CT3, rescaled to range between 0 and 1, supports De Vellis' guidelines for reliability measures. The major conclusion is that CT3 is a better measure of reliability since it considers both inter- and intra-item variances.
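The KR21 measure referenced above has a standard closed form, (k/(k-1))(1 - M(k-M)/(k*s^2)), where k is the number of items, M the mean total score, and s^2 the variance of total scores. A sketch computing it for one sample 5x5 binary matrix follows; the matrix is illustrative, and the CT3 index itself is a specialized statistic not reproduced here:

```python
import numpy as np

def kr21(matrix):
    """KR-21 reliability for a persons x items binary score matrix.

    Undefined when all total scores are equal (zero variance).
    """
    k = matrix.shape[1]              # number of items
    totals = matrix.sum(axis=1)      # each person's total score
    m = totals.mean()
    var = totals.var()               # population variance of totals
    return (k / (k - 1)) * (1 - m * (k - m) / (k * var))

# One sample 5x5 binary matrix (a perfect Guttman pattern): rows are
# persons, columns are items, 1 = correct response.
scores = np.array([[1, 1, 1, 1, 1],
                   [1, 1, 1, 1, 0],
                   [1, 1, 1, 0, 0],
                   [1, 1, 0, 0, 0],
                   [1, 0, 0, 0, 0]])
print(kr21(scores))  # 0.5
```

Exhaustively articulating every 5x5 matrix, as the study does, amounts to running this computation (and the CT3 computation) over each possible 0/1 fill of the grid and tabulating the resulting distributions.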
Date: December 2002
Creator: Swartz Horn, Rebecca
Partner: UNT Libraries

Customers' Attitudes toward Mobile Banking Applications in Saudi Arabia

Description: Mobile banking services have changed the design and delivery of financial services and the whole banking sector. Financial service companies employ mobile banking applications as new alternative channels to increase customers' convenience and to reduce costs and maintain profitability. The primary focus of this study was to explore Saudi bank customers' perceptions of the adoption of mobile banking applications and to test the relationships between the factors that influence mobile banking adoption as independent variables and the action to adopt them as the dependent variable. Saudi customers' perceptions were tested based on extended versions of IDT, TAM, and other diffusion of innovation theories and frameworks to generate a model of constructs that can be used to study the use and adoption of mobile technology by users. Koenig-Lewis, Palmer, & Moll's (2010) model was used to test its constructs of (1) perceived usefulness, (2) perceived ease of use, (3) perceived compatibility, (4) perceived credibility, (5) perceived trust, (6) perceived risk, and (7) perceived cost; these were the independent variables in the current study. This study revealed a high level of adoption: 82.7% of Saudi respondents had adopted mobile banking applications. The findings also identified statistically significant relationships between each of the demographic variables (gender, education level, monthly income, and profession) and the adoption of mobile banking services among adopters and non-adopters. Seven attributes relating to the adoption of mobile banking applications were evaluated in this study to assess which variables affected Saudi bank customers in their adoption of mobile banking services. The findings indicated that the attributes that significantly affected the adoption of mobile banking applications among Saudis were perceived trust, perceived cost, and perceived risk.
These three predictors, as a result, explained more than 60% of variance in intention to adopt mobile banking technology in Saudi Arabia. ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2016
Creator: Alshara, Mohammed Ali
Partner: UNT Libraries

The Denial of Relevance: Biography of a Quest(ion) Amidst the Min(d)fields—Groping and Stumbling

Description: Early research on just why it might be the case that “the mass of men lead lives of quiet desperation” suggested that denial of relevance was a significant factor. Asking why denial of relevance would be significant and how it might be resolved began to raise issues of the very nature of questions. Pursuing the nature of questions, in light of denial of relevance and Thoreau’s “quiet desperation,” provoked a journey of modeling questions and constructing a biography of the initial question of this research and its evolution. Engaging literature from philosophy, neuroscience, and information retrieval, combined with in-depth interviews of successful lawyers, rendered a thick, biographical model of questioning.
Date: August 2014
Creator: VanBebber, Marion Turner
Partner: UNT Libraries

Detecting the Presence of Disease by Unifying Two Methods of Remote Sensing.

Description: There is currently no effective tool available for biomedical professionals and environmental specialists to quickly and economically measure a change in landmass. The purpose of this study is to structure and demonstrate a statistical change-detection method using remotely sensed data that can detect the presence of an infectious land-borne disease. Data sources included the Texas Department of Health database, which provided the types of infectious land-borne diseases and indicated the geographical area to study. Methods of data collection included gathering images produced by digital orthophoto quadrangles, aerial videography, and Landsat. Also, a method was developed to statistically identify the severity of changes in the landmass over a three-year period. Data analysis included using a unique statistical detection procedure to measure the severity of change in landmass when a disease was not present and when the disease was present. The statistical detection method was applied to two different remotely sensed platform types and again to two like remotely sensed platform types. The results indicated that when the statistical change-detection method was used with two different types of remote sensing mediums (i.e., digital orthophoto quadrangle and aerial videography), the results were negative due to skewed and unreliable data. However, when two like remote sensing mediums were used (i.e., videography to videography and Landsat to Landsat), the results were positive and the data were reliable.
Date: May 2002
Creator: Reames, Steve
Partner: UNT Libraries

Development and Validation of an Instrument to Operationalize Information System Requirements Capabilities

Description: As a discipline, information systems (IS) has struggled with the challenge of aligning its product (primarily software and the infrastructure needed to run it) with the needs of the organization it supports. This has been characterized as the pursuit of alignment of information technology (IT) with the business or organization, which begins with gathering the requirements of the organization; these guide the creation of the IS requirements, which in turn guide the creation of the IT solution itself. This research is primarily focused on developing and validating an instrument to operationalize such requirements capabilities. Requirements capabilities at the point of software development or the implementation of a specific IT solution are referred to as software requirements capabilities or, more commonly, systems analysis and design (SA&D) capabilities. This research describes and validates an instrument for SA&D capabilities, assessing content validity, construct validity, and internal consistency, and conducting an exploratory factor analysis. SA&D capabilities were expected to coalesce strongly around a single dimension. Yet in validating the SA&D capabilities instrument, it became apparent that SA&D capabilities are not the unidimensional construct traditionally perceived. Instead, it appears that four dimensions underlie SA&D capabilities, and these are associated with alignment maturity (governance, partnership, communications, and value). These sub-factors of requirements capabilities are described in this research and represent distinct capabilities critical to the successful alignment of IT with the business.
Date: May 2014
Creator: Pettit, Alex Z.
Partner: UNT Libraries

Development of an Instrument to Measure the Level of Acceptability and Tolerability of Cyber Aggression: Mixed-Methods Research on Saudi Arabian Social Media Users

Description: Cyber aggression came about as a result of advances in information communication technology and the aggressive use of that technology in everyday life. Cyber aggression can take on many forms and facets. However, the main focus of this study is cyberbullying and cyberstalking through information sharing practices that might constitute digitally aggressive acts. Human aggression has been extensively investigated. Studies focusing on understanding the causes and effects that can lead to physical and digital aggression have shown the prevalence of cyber aggression in different settings. Moreover, these studies have shown a strong relationship between cyber aggression and the psychological and physical trauma experienced by both perpetrators and their victims. Nevertheless, the literature shows a lack of studies that could measure the level of acceptance and tolerance of these dangerous digital acts. This study is divided into two main stages. Stage one is a qualitative pilot study carried out to explore the concept of cyber aggression and its existence in Saudi Arabia. In-depth interviews were conducted with 14 Saudi social media users to gather their understandings and meanings of cyber aggression. The researcher followed Colaizzi’s method to analyze the descriptive data. A proposed model was generated to describe cyber aggression in social media applications. The results showed that there is a level of acceptance of some cyber aggression acts due to a number of factors. The second stage of the study focused on developing scales with reliable items that could determine the acceptability and tolerability of cyber aggression. In this second stage, the researcher used the factors discovered during the first stage as the source for the scales’ items. The proposed methods and scales were analyzed and tested to increase reliability, as indicated by Cronbach’s alpha values.
The scales were designed to measure how acceptable and tolerable cyberbullying and cyberstalking are in Saudi ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Albar, Ali A
Partner: UNT Libraries

Diagnosing Learner Deficiencies in Algorithmic Reasoning

Description: It is hypothesized that useful diagnostic information can reside in the wrong answers of multiple-choice tests, and that properly designed distractors can yield indications of misinformation and missing information in algorithmic reasoning on the part of the test taker. In addition to summarizing the literature regarding diagnostic research as opposed to scoring research, this study proposes a methodology for analyzing test results and compares the findings with those from the research of Birenbaum and Tatsuoka and others. The proposed method identifies the conditions of misinformation and missing information, and it contains a statistical compensation for careless errors. Strengths and weaknesses of the method are explored, and suggestions for further research are offered.
Date: May 1995
Creator: Hubbard, George U.
Partner: UNT Libraries

Diffusion across the digital divide: Assessing use of the Connecticut Digital Library (ICONN) in K-12 schools in Connecticut.

Description: State digital libraries are manifestations of the diffusion of technology that has provided both access to and delivery of digital content. Whether the content is being accessed and used equitably in K-12 schools has not been assessed. Determining patterns of the diffusion of use across socioeconomic groups in K-12 schools may help measure the success of existing efforts to provide equitable access and use of digital content, and help guide policies and implementation to more effectively address remaining disparities. This study examined use of the Connecticut Digital Library (ICONN) in K-12 schools in Connecticut by determining annual patterns of use per school/district over a four-year period, using transaction log search statistics. The data were analyzed in the paradigm that Rogers (2003) describes as the first and second dimensions of the consequences of an innovation, namely the overall growth and the equality of the diffusion to individuals within an intended audience, in this case students in K-12 schools. Data were compared by school district and the established socioeconomic District Reference Groups (DRGs) defined by the Connecticut State Board of Education. At the time of this study, ICONN used aggregate data (total searches) for K-12 schools, but did not have relevant data on diffusion within the public schools in Connecticut related to districts or DRGs.
Date: December 2008
Creator: Bogel, Gayle
Partner: UNT Libraries

Discovering a Descriptive Taxonomy of Attributes of Exemplary School Library Websites

Description: This descriptive study examines effective online school library practice. A Delphi panel selected a sample of 10 exemplary sites and helped to create two research tools--taxonomies designed to analyze the features and characteristics of school library Websites. Using the expert-identified sites as a sample, a content analysis was conducted to systematically identify site features and characteristics. Anne Clyde's longitudinal content analysis of school library Websites was used as a baseline to examine trends in practice; in addition, the national guidelines document, Information Power: Building Partnerships for Learning, was examined to explore ways in which the traditional mission and roles of school library programs are currently translated online. Results indicated great variation in depth and coverage even among Websites considered exemplary. Sites in the sample are growing more interactive and student-centered, using blogs as supplemental communication strategies. Nevertheless, even these exemplary sites were slow to adopt advances in technology to meet the learning needs and interests of young adult users. Ideally, the study's findings will contribute to an understanding of the state of the art and will serve to identify trends, as well as serve as a guide to practitioners in planning, developing, and maintaining school library Websites.
Date: August 2007
Creator: Valenza, Joyce Kasman
Partner: UNT Libraries

An E-government Readiness Model

Description: The purpose of this study is to develop an e-government readiness model and to test this model. Consistent with this model, several instruments, IS assessment (ISA), IT governance (ITG), and organization-IS alignment (IS-ALIGN), are examined for their ability to measure the readiness of one organization for e-government and to test the instruments' fit in the proposed e-government model. The ISA instrument used is the result of adapting and combining the IS-SERVQUAL instrument proposed by Van Dyke, Kappelman, and Prybutok (1997) and the IS-SUCCESS instrument developed by Kappelman and Chong (2001) for the City of Denton (COD) project at UNT. The IS Success Model was first proposed by DeLone and McLean (1992), but they did not validate it. The ITG instrument was based on the goals of the COD project for IT governance and was developed by Sanchez and Kappelman (2001) of UNT. The IS-ALIGN instrument was also developed by Sanchez and Kappelman (2001) for the COD project. It is an instrument based on the Malcolm Baldrige National Quality Award (MBNQA) that measures how effectively a government organization utilizes IT to support its various objectives. The EGOV instrument was adapted from the study of the Action-Audience Model developed by Koh and Balthazard (1997) to measure how well a government organization is prepared to usher in e-government in terms of various success factors at the planning, system, and data levels. An online survey was conducted with employees of the City of Denton, Texas. An invitation letter to participate in the survey was sent to the 1,100 employees of the City of Denton via email; 339 responses were received, yielding a response rate of 31%. About 168 responses were discarded because they were incomplete and had missing values, leaving 171 usable surveys, for a usable set of responses that had a response ...
Date: December 2001
Creator: Liu, Shin-Ping
Partner: UNT Libraries