UNT Libraries - 18 Matching Results

Application-Specific Things Architectures for IoT-Based Smart Healthcare Solutions

Description: The human body is a complex system organized at multiple levels, from cells to tissues to organs, which together form 11 major organ systems. The functional efficiency of this complex system is evaluated as health. Traditional healthcare cannot accommodate everyone's needs due to the ever-increasing population and rising medical costs. With advancements in technology and medical research, traditional healthcare applications are evolving into smart healthcare solutions. Smart healthcare continuously monitors body parameters, keeping people aware of their health, and enables remote assistance, which helps utilize available resources to their maximum potential. The backbone of smart healthcare solutions is the Internet of Things (IoT), which increases the computing capacity of real-world components by using cloud-based solutions. The basic elements of these IoT-based smart healthcare solutions are called "things": simple sensors or actuators that can wirelessly connect with each other and with the internet. The research for this dissertation aims to develop architectures for these things, focusing on IoT-based smart healthcare solutions. The core of this dissertation is to contribute to research in smart healthcare by identifying applications that can be monitored remotely. To this end, application-specific thing architectures were proposed for monitoring a specific body parameter; monitoring the physical health of family and friends; and optimizing the power budget of an IoT body sensor network using human body communications. The experimental results show promise for improving quality of life through needle-less and cost-effective smart healthcare solutions.
Date: May 2018
Creator: Sundaravadivel, Prabha

Accurate Joint Detection from Depth Videos towards Pose Analysis

Description: Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints in the human body for pose analysis. The first method detects joints by combining a body model with automatic feature point detection. The human body model maps the detected extreme points to the corresponding body parts of the model and detects the positions of implicit joints. The dominant joints are detected after the implicit joints and extreme points are located by a shortest-path-based method. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution is the idea of using geodesic features of the human body to build a model that guides human pose detection and estimation. The second method first segments the human body into parts and then detects joints by focusing the detection algorithm on each limb. The advantage of applying body part segmentation first is that it narrows the search area for each joint, so the joint detection method can provide more stable and accurate results.
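
The abstract leaves the algorithmic details to the dissertation itself, but the shortest-path/geodesic idea can be sketched: on a graph over the body surface (built, e.g., from a depth frame), the extremities such as head, hands, and feet tend to be the points geodesically farthest from the body centroid. The graph encoding and the choice of extreme-point count below are illustrative assumptions, not the dissertation's method.

```python
import heapq

def geodesic_distances(adj, source):
    # Dijkstra over the body-surface graph; adj: node -> [(neighbor, edge_length)].
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def extreme_points(adj, centroid, k=5):
    # Candidate extremities: the k nodes geodesically farthest from the centroid.
    dist = geodesic_distances(adj, centroid)
    return sorted(dist, key=dist.get, reverse=True)[:k]

# Tiny 4-node chain as a stand-in for a real surface mesh.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
print(extreme_points(adj, centroid=1, k=2))   # e.g. [3, 0]
```
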
Date: May 2018
Creator: Kong, Longbo

Validation and Evaluation of Emergency Response Plans through Agent-Based Modeling and Simulation

Description: Biological emergency response planning plays a critical role in protecting the public from the potentially devastating results of sudden disease outbreaks. These plans describe the distribution of medical countermeasures across a region using limited resources within a restricted time window. Thus, the ability to determine that such a plan will be feasible, i.e., successfully provide service to affected populations within the time limit, is crucial. Many current efforts to validate plans take the form of live drills and training, but those may not test plan activation at the appropriate scale or with sufficient numbers of participants. This necessitates the use of computational resources to aid emergency managers and planners in developing and evaluating plans before they must be used. Current emergency response plan generation software packages, such as RE-PLAN or RealOpt, provide rate-based validation analyses. However, these types of analysis may neglect details of real-world traffic dynamics. Therefore, this dissertation presents Validating Emergency Response Plan Execution Through Simulation (VERPETS), a novel computational system for the agent-based simulation of biological emergency response plan activation. This system converts raw road network, population distribution, and emergency response plan data into a format suitable for simulation, and then performs these simulations using SUMO (Simulation of Urban Mobility) to model realistic traffic dynamics. Additionally, high performance computing methodologies were utilized to decrease agent load on simulations and improve performance. Further strategies, such as agent scaling and a time limit on simulation execution, were also examined. Experimental results indicate that the time to plan completion, i.e., the time when all individuals of the population have received medication, as determined by VERPETS aligned well with current alternate methodologies. It was determined that the dynamics of traffic congestion at the POD (point of dispensing) itself were one of the major factors affecting the completion time of ...
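
VERPETS itself is not detailed in the abstract, but SUMO's Python control interface (TraCI) gives a feel for the simulation loop it builds on: run the scenario step by step until every agent has completed its route, then read off the completion time. The configuration file name below is a placeholder, and this is only a minimal sketch, not VERPETS.

```python
import traci  # ships with SUMO's Python tools

# "scenario.sumocfg" is a placeholder for a generated network + route config.
traci.start(["sumo", "-c", "scenario.sumocfg"])
step = 0
while traci.simulation.getMinExpectedNumber() > 0:  # vehicles pending or en route
    traci.simulationStep()
    step += 1
traci.close()
print(f"plan completed after {step} simulated seconds")
```
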
Date: May 2018
Creator: Helsing, Joseph

Computational Approaches for Analyzing Social Support in Online Health Communities

Description: Online health communities (OHCs) have become a medium for patients to share their personal experiences and interact with peers on topics related to a disease, medication, side effects, and therapeutic processes. Many studies show that using OHCs regularly decreases mortality and improves patients' mental health. As a result of these benefits, OHCs have become a popular destination for patients, especially those with severe diseases, to seek emotional and informational support. The main reasons for developing OHCs are to present valid, high-quality information and to understand the mechanism by which social support changes patients' mental health. Yet despite the goals of OHC moderators in developing these applications and of patients in using them, OHCs offer no facility, feature, or sub-application that satisfies patient and moderator goals: they are equipped only with a basic keyword-based search engine. In other words, if a patient wants to obtain information about a side effect, he or she must browse many threads in the hope of finding several related comments. Likewise, OHC moderators cannot browse all the information exchanged among patients to validate its accuracy. Thus, it is critical for OHCs to be equipped with computational tools, supported by sophisticated computational models, that provide moderators and patients with the collections of messages they need for making decisions or predictions. We present multiple computational models to alleviate this problem by providing specific types of messages in response to specific moderator and patient needs. Specifically, we focused on computational models for the following tasks: identifying emotional support, which offers OHC moderators, psychologists, and sociologists insightful views on the emotional states of individuals and groups, and identifying informational support, which provides patients with ...
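
The dissertation's models are doubtless more sophisticated than this, but as a baseline illustration, classifying a message as emotional versus informational support can be framed as standard supervised text classification; the toy examples and labels below are invented for the sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled messages; a real model would train on annotated OHC threads.
messages = [
    "Stay strong, we are all thinking of you!",          # emotional support
    "Sending hugs, you are not alone in this.",          # emotional support
    "Taking the tablet with food reduced my nausea.",    # informational support
    "Ask your oncologist about adjusting the dosage.",   # informational support
]
labels = ["emotional", "emotional", "informational", "informational"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)
print(clf.predict(["My doctor suggested splitting the dose in half."]))
```
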
Date: May 2018
Creator: Khan Pour, Hamed

Hybrid Approaches in Test Suite Prioritization

Description: The rapid advancement of web and mobile application technologies has posed numerous challenges to the Software Engineering community, including how to cost-effectively test applications that have complex event spaces. Many software testing techniques attempt to cost-effectively improve the quality of such software. This dissertation focuses primarily on hybrid test suite prioritization, in which two or more criteria are used to prioritize a test suite, since a single criterion is often insufficient. The dissertation makes the following contributions: (1) a weighted test suite prioritization technique that employs the distance between criteria as a weighting factor, (2) a coarse-to-fine-grained test suite prioritization technique that uses a multilevel approach to increase the granularity of the criteria at each subsequent iteration, (3) the Caret-HM tool for Android user-session-based testing that allows testers to record, replay, and create heat maps from user interactions with Android applications via a web browser, and (4) Android user-session-based test suite prioritization techniques that utilize heuristics developed from user sessions created by Caret-HM. Each chapter empirically evaluates its respective techniques. The proposed techniques generally show improved or equally good performance compared to the baselines, depending on the application under test. Further, this dissertation provides guidance to testers on the use of the proposed hybrid techniques.
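
As a simplified illustration of hybrid (multi-criteria) prioritization, contribution (1) can be approximated by a weighted combination of per-test scores; note that the dissertation derives its weights from the distance between criteria, whereas the weights below are fixed by hand.

```python
def prioritize(tests, criteria, weights):
    # criteria: {name: {test_id: score normalized to [0, 1]}}; higher runs earlier.
    combined = lambda t: sum(weights[c] * criteria[c][t] for c in criteria)
    return sorted(tests, key=combined, reverse=True)

criteria = {
    "coverage":      {"t1": 0.9, "t2": 0.6, "t3": 0.3},
    "fault_history": {"t1": 0.1, "t2": 0.8, "t3": 0.5},
}
weights = {"coverage": 0.6, "fault_history": 0.4}
print(prioritize(["t1", "t2", "t3"], criteria, weights))  # ['t2', 't1', 't3']
```
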
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2018
Creator: Nurmuradov, Dmitriy

Multi-Modal Insider Threat Detection and Prevention based on Users' Behaviors

Description: Insider threat is one of the greatest concerns in information security, capable of causing more significant financial losses and damage than any other attack. However, implementing an efficient detection system is a very challenging task. It has long been recognized that solutions to insider threats are mainly user-centric, and several psychological and psychosocial models have been proposed. A user's psychophysiological measures can provide an excellent source of information for detecting malicious behaviors and mitigating insider threats. In this dissertation, we propose a multi-modal framework based on a user's psychophysiological measures and computer-based behaviors to distinguish between behavior during regular activities and behavior during malicious activities. We utilize psychophysiological measures such as electroencephalogram (EEG), electrocardiogram (ECG), and eye movement and pupil behaviors, along with computer-based behaviors such as mouse movement dynamics and keystroke dynamics, to build a framework for detecting malicious insiders. We conducted human subject experiments to capture these psychophysiological measures and computer-based behaviors for a group of participants performing several computer-based activities in different scenarios. We analyze the behavioral measures, extract useful features, and evaluate their capability to detect insider threats. We investigate each measure separately, then use data fusion techniques to build two modules and a comprehensive multi-modal framework. The first module combines the synchronized EEG and ECG psychophysiological measures, and the second module combines the eye movement and pupil behaviors with the computer-based behaviors to detect malicious insiders. The multi-modal framework utilizes all measures and behaviors in one model to achieve better detection accuracy. Our findings demonstrate that psychophysiological measures can reveal valuable knowledge about a user's malicious intent and can serve as an effective indicator in designing insider threat monitoring and detection frameworks. Our work lays out the necessary foundation to establish a new generation ...
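
Feature-level fusion of this kind is commonly implemented by concatenating per-modality feature vectors before classification. The sketch below uses random synthetic data purely to show the mechanics (so accuracy hovers at chance); the real framework extracts meaningful features from EEG, ECG, eye/pupil, mouse, and keystroke streams.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-session feature vectors, one row per session.
n = 200
eeg   = rng.normal(size=(n, 8))
ecg   = rng.normal(size=(n, 4))
mouse = rng.normal(size=(n, 6))
keys  = rng.normal(size=(n, 5))
y = rng.integers(0, 2, size=n)   # 0 = benign session, 1 = malicious session

# Feature-level fusion: concatenate modality features into one vector.
X = np.hstack([eeg, ecg, mouse, keys])
clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))  # ~chance on random labels
```
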
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2018
Creator: Hashem, Yassir

Computational Methods to Optimize High-Consequence Variants of the Vehicle Routing Problem for Relief Networks in Humanitarian Logistics

Description: Optimization of relief networks in humanitarian logistics often exemplifies the need for solutions that are feasible given a hard constraint on time. For instance, the distribution of medical countermeasures immediately following a biological disaster event must be completed within a short time-frame. When these supplies are not distributed within the maximum time allowed, the severity of the disaster is quickly exacerbated. Therefore, emergency response plans that fail to facilitate the transportation of these supplies in the time allowed are simply not acceptable, and all optimization solutions that fail to satisfy this criterion are deemed infeasible. This creates a conflict with the primary optimization objective in most variants of the generic vehicle routing problem (VRP): instead of maximizing usage of the vehicle resources available to construct a feasible solution, these variants ordinarily prioritize constructing a minimum-cost set of vehicle routes. Research presented in this dissertation focuses on the design and analysis of efficient computational methods for optimizing high-consequence variants of the VRP for relief networks. The conflict between prioritizing the minimization of the number of vehicles required and the minimization of total travel time is demonstrated. The optimization of the time and capacity constraints in the context of minimizing the required vehicles is examined independently. An efficient meta-heuristic algorithm based on a continuous spatial partitioning scheme is presented for constructing a minimized set of vehicle routes in practical instances of the VRP that include critically high cost penalties. Multiple optimization priority strategies that extend this algorithm are examined and compared in a large-scale bio-emergency case study. The algorithms designed in this research are implemented and integrated into an existing computational framework currently used by public health officials. These computational tools enhance an emergency response planner's ability to derive a set of vehicle routes specifically ...
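
The continuous spatial partitioning scheme is the dissertation's own contribution; for intuition only, here is the classic sweep heuristic, which partitions stops into contiguous angular sectors around the depot, one route per vehicle.

```python
import math

def sweep_partition(depot, stops, vehicles):
    # Sort stops by polar angle around the depot and split the circle into
    # contiguous sectors. Only an illustration of spatial partitioning; the
    # dissertation's continuous scheme is more sophisticated.
    dx, dy = depot
    angle = lambda p: math.atan2(p[1] - dy, p[0] - dx)
    ordered = sorted(stops, key=angle)
    size = math.ceil(len(ordered) / vehicles)
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

routes = sweep_partition((0, 0), [(1, 2), (-2, 1), (2, -1), (-1, -2), (3, 3)], 2)
print(routes)   # two sectors of stops, one per vehicle
```
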
Date: August 2018
Creator: Urbanovsky, Joshua C

Dataflow Processing in Memory Achieves Significant Energy Efficiency

Description: The large difference between processor CPU cycle time and memory access time, often referred to as the memory wall, severely limits the performance of streaming applications. Some data centers have shown servers sitting idle three out of every four clock cycles. High-performance instruction-sequenced systems are not energy efficient: the execute stage of even a simple pipelined processor uses only 9% of the pipeline's total energy. A hybrid dataflow system within a memory module is shown to deliver 7.2 times the performance with 368 times better energy efficiency than an Intel Xeon server processor on the analyzed benchmarks. The dataflow implementation exploits the inherent parallelism and pipelining of the application to improve performance without the overhead of caching, instruction fetch, instruction decode, instruction scheduling, reorder buffers, and speculative execution used by high-performance out-of-order processors. Coarse-grained reconfigurable logic in an energy-efficient silicon process provides the flexibility to implement multiple algorithms in a low-energy solution. Integrating the logic within a 3D-stacked memory module provides lower latency and higher bandwidth access to memory while operating independently of the host system processor.
Date: August 2018
Creator: Shelor, Charles F.

Simulation of Dengue Outbreak in Thailand

Description: The dengue virus has become widespread worldwide in recent decades. It has no specific treatment and affects more than 40% of the world's population. In Thailand, dengue has been a health concern for more than half a century. The highest number of cases in one year was 174,285 in 1987, leading to 1,007 deaths. Today, dengue is distributed throughout the entire country, making it a major public health challenge in terms of both prevention and outbreak control. Researchers have put forward different methodologies and ways of dealing with dengue outbreaks. Computational models and simulations play an important role, as they can help researchers and public health officers gain a greater understanding of the virus's epidemic activity. In this context, this dissertation presents a new framework, Modified Agent-Based Modeling (mABM), a hybrid platform between a mathematical model and a computational model, to simulate a dengue outbreak in human and mosquito populations. This framework improves on the realism of former models by utilizing reported data from several Thai government organizations, such as the Thai Ministry of Public Health (MoPH), the National Statistical Office, and others. Its implementation also takes into account the geography of Thailand, as well as synthetic mosquito and synthetic human populations. mABM can represent human behavior in a large population across varying distances by specifying demographic factors and assigning mobility patterns for weekdays, weekends, and holidays to the synthetic human population. The mosquito dynamic population model (MDP), a component of the mABM framework, represents the synthetic mosquito population's dynamics and ecology by integrating the regional model to capture the effects of a dengue outbreak. The two synthetic populations can be linked to each other ...
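
At its core, an agent-based outbreak simulation advances individual disease states day by day. A minimal susceptible-infectious-recovered sketch is below; the transmission and recovery rates are invented, and the mosquito dynamics (the MDP component), geography, and mobility are all abstracted into a single effective contact rate.

```python
import random

random.seed(1)

# Each human agent is S (susceptible), I (infectious), or R (recovered).
humans = ["S"] * 995 + ["I"] * 5
BETA, RECOVERY = 0.4, 0.14        # illustrative daily rates, not fitted values

for day in range(120):
    force = BETA * humans.count("I") / len(humans)   # daily infection pressure
    for i, state in enumerate(humans):
        if state == "S" and random.random() < force:
            humans[i] = "I"        # infected via the (abstracted) mosquito pool
        elif state == "I" and random.random() < RECOVERY:
            humans[i] = "R"        # recovers with immunity

print({s: humans.count(s) for s in "SIR"})
```
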
Date: August 2018
Creator: Meesumrarn, Thiraphat

Reading with Robots: A Platform to Promote Cognitive Exercise through Identification and Discussion of Creative Metaphor in Books

Description: Maintaining cognitive health is often a pressing concern for aging adults, and given the world's shifting age demographics, it is impractical to assume that older adults will be able to rely on individualized human support for doing so. Recently, interest has turned toward technology as an alternative. Companion robots offer an attractive vehicle for facilitating cognitive exercise, but the language technologies guiding their interactions are still nascent; in the elder-focused human-robot systems proposed to date, interactions have been limited to motion or buttons and canned speech. The inability of these systems to autonomously participate in conversational discourse limits their ability to engage users at a cognitively meaningful level. I addressed this limitation by developing a platform for human-robot book discussions, designed to promote cognitive exercise by encouraging users to consider the authors' underlying intentions in employing creative metaphors. The choice of book discussions as the backdrop for these conversations has an empirical basis in neuroscience and social science research, which has found that reading often, even in late adulthood, is correlated with a decreased likelihood of exhibiting symptoms of cognitive decline. The more targeted focus on novel metaphors within those conversations stems from prior work showing that processing novel metaphors is a cognitively challenging task for young adults, and even more so for older adults with and without dementia. A central contribution of this work was the creation of the first computational method for modeling metaphor novelty in word pairs. I show that the method outperforms baseline strategies as well as a standard metaphor detection approach, and additionally discover that incorporating a sentence-based classifier as a preliminary filtering step when applying the model to new books results in a better final set of scored word pairs. I trained and evaluated my methods using new, large corpora from two sources, ...
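
The dissertation's scoring model outperforms simple baselines like the following, but a crude proxy conveys the task: a word pair used metaphorically tends to read as more novel when its words rarely share contexts, i.e., when their distributional embeddings are dissimilar. The three-dimensional vectors below are toy stand-ins for real pretrained embeddings.

```python
import numpy as np

# Toy embeddings; real work would load e.g. word2vec vectors.
vec = {
    "time":  np.array([0.9, 0.1, 0.0]),
    "money": np.array([0.8, 0.3, 0.1]),
    "thief": np.array([0.1, 0.2, 0.9]),
}

def novelty(w1, w2):
    # Crude proxy: low cosine similarity between the pair's words -> more novel.
    a, b = vec[w1], vec[w2]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

print(f"time-money novelty ~ {novelty('time', 'money'):.2f}")  # conventional pair
print(f"time-thief novelty ~ {novelty('time', 'thief'):.2f}")  # more novel pair
```
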
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2018
Creator: Parde, Natalie

Secure and Trusted Execution for Virtualization Workloads

Description: In this dissertation, we have analyzed various security and trust solutions for modern computing systems and propose a framework that provides holistic security and trust for the entire lifecycle of a virtualized workload. The framework consists of three novel techniques and a set of guidelines. The three techniques provide the necessary elements of a secure and trusted execution environment, while the guidelines ensure that the virtualized workload remains in a secure and trusted state throughout its lifecycle. We have successfully implemented the framework and demonstrated that it provides security and trust guarantees at launch, at any time during execution, and during an update of the virtualized workload. Given the proliferation of virtualization from cloud servers to embedded systems, the techniques presented in this dissertation can be implemented on most computing systems.
Date: August 2018
Creator: Kotikela, Srujan D

Radio Resource Control Approaches for LTE-Advanced Femtocell Networks

Description: The architecture of mobile networks has dramatically evolved to fulfill the growing demand for wireless services and data. The radio resources used by current mobile networks are limited, while user demands are increasing substantially, and a tremendous number of Internet applications are expected to be served by mobile networks in the future. Therefore, increasing the capacity of mobile networks has become a vital issue. Heterogeneous networks (HetNets) have been considered a promising paradigm for future mobile networks, and accordingly the concept of the small cell has been introduced to increase network capacity. A femtocell network is one kind of small cell network. Femtocells are deployed within macrocell coverage; they cover small areas and operate with low transmission power while providing high capacity. User equipment (UEs) can also be offloaded from macrocells to femtocells, further increasing capacity. However, this introduces different technical challenges. Interference has become one of the key challenges of deploying femtocells within a macrocell's coverage, and its undesirable impact can degrade the performance of the mobile network. Therefore, radio resource management mechanisms are needed to address the key challenges of deploying femtocells. The objective of this work is to introduce radio resource control approaches that increase mobile networks' capacity and alleviate the undesirable impact of interference. In addition, the proposed radio resource control approaches ensure the coexistence of macrocells and femtocells in an LTE-Advanced environment. First, a novel mechanism is proposed to address the interference challenge; it mitigates the impact of interference by controlling the assignment of radio sub-channels and dynamically adjusting the transmission power. Second, a dynamic strategy is proposed for the FFR (fractional frequency reuse) mechanism, in which the whole spectrum is divided into four fixed sub-channels and each ...
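
As a toy version of the dynamic transmission power adjustment mentioned above, a femtocell could back its power off whenever measured interference toward macrocell users exceeds a target, and ramp back up otherwise; the thresholds, step size, and power limits below are illustrative assumptions, not values from the dissertation.

```python
def adjust_power(p_tx, interference_db, target_db, step=1.0,
                 p_min=-10.0, p_max=20.0):
    # Back off when the femtocell's measured interference exceeds the target;
    # otherwise recover power for capacity. Units are dBm / dB for the sketch.
    p_tx += -step if interference_db > target_db else step
    return max(p_min, min(p_max, p_tx))

p = 15.0
for measured in [-2.0, -1.5, -4.0, -6.0]:   # successive interference reports
    p = adjust_power(p, measured, target_db=-5.0)
    print(f"femtocell tx power -> {p:.1f} dBm")
```
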
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: August 2018
Creator: Alotaibi, Sultan Radhi

A Control Theoretic Approach for Resilient Network Services

Description: Resilient networks have the ability to provide the desired level of service despite challenges such as malicious attacks and misconfigurations. The primary goal of this dissertation is to provide uninterrupted network services in the face of an attack or other failures. This dissertation applies control system theory techniques, with a focus on system identification and closed-loop feedback control. It explores the benefits of system identification techniques in designing and validating models for complex and dynamic networks. Further, this dissertation focuses on designing robust feedback control mechanisms that are both scalable and effective in real time, employing dynamic and predictive control approaches to reduce the impact of an attack on network services. The closed-loop feedback control mechanisms tackle this issue by degrading network services gracefully to an acceptable level and then stabilizing the network in real time (in less than 50 seconds). These feedback mechanisms also provide the ability to automatically configure settings so that the network's QoS metrics are consistent with those specified in service level agreements.
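
For a flavor of closed-loop control applied to a network metric, the sketch below runs a simple proportional-integral loop against a toy first-order "plant": the controller drives the measured throughput toward an SLA setpoint. The gains, the plant model, and the 50-iteration budget (echoing the sub-50-second stabilization figure) are all assumptions for illustration.

```python
def pi_controller(setpoint, measure, apply, kp=0.6, ki=0.2, steps=50):
    # Simple PI loop: measure a QoS metric, compare with the SLA setpoint,
    # and adjust a control knob (e.g., a rate limit) until the error settles.
    integral = 0.0
    for _ in range(steps):
        error = setpoint - measure()
        integral += error
        apply(kp * error + ki * integral)
    return measure()

# Toy plant: a "network" whose throughput responds sluggishly to the knob.
state = {"throughput": 20.0, "knob": 0.0}
def measure():
    state["throughput"] += 0.5 * (state["knob"] - state["throughput"])
    return state["throughput"]
def apply(u):
    state["knob"] = u

print(f"settled throughput: {pi_controller(100.0, measure, apply):.1f}")
```
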
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2018
Creator: Vempati, Jagannadh Ambareesh

Ontology Based Security Threat Assessment and Mitigation for Cloud Systems

Description: A malicious actor often relies on security vulnerabilities of IT systems to launch a cyber attack. Most cloud services are supported by an orchestration of large and complex systems which are prone to vulnerabilities, making threat assessment very challenging. In this research, I developed formal and practical ontology-based techniques that enable automated evaluation of a cloud system's security threats. I use an architecture for threat assessment of cloud systems that leverages a dynamically generated ontology knowledge base. I created an ontology model that represents the components of a cloud system. These ontologies are designed for a set of domains covering several aspects of the cloud and cyber threat data for information technology products. The inputs to the architecture are the configurations of cloud assets and component specifications (which encompass the desired assessment procedures), and the outputs are actionable threat assessment results. The focus of this work is on ways of enumerating, assessing, and mitigating emerging cyber security threats. A research toolkit system has been developed to evaluate the architecture. We expect these techniques to be leveraged by any cloud provider or consumer in closing the gap in identifying and remediating known or impending security threats facing their cloud's assets.
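
Ontology-backed assessment of this sort is often prototyped with RDF triples plus SPARQL queries. The vocabulary, asset names, and the placeholder vulnerability identifier below are hypothetical; the sketch only shows how such a knowledge base could answer "which assets face a high-severity threat?".

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/cloud#")
g = Graph()

# Hypothetical facts: a cloud asset runs a product with a known weakness.
g.add((EX.WebVM, EX.runs, EX.SomeWebServer))
g.add((EX.SomeWebServer, EX.hasVulnerability, EX.ExampleCVE))
g.add((EX.ExampleCVE, EX.severity, Literal("HIGH")))

query = """
SELECT ?asset ?vuln WHERE {
    ?asset ex:runs ?product .
    ?product ex:hasVulnerability ?vuln .
    ?vuln ex:severity "HIGH" .
}"""
for asset, vuln in g.query(query, initNs={"ex": EX}):
    print(f"{asset} is threatened by {vuln}")
```
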
Date: December 2018
Creator: Kamongi, Patrick

On-Loom Fabric Defect Inspection Using Contact Image Sensors and Activation Layer Embedded Convolutional Neural Network

Description: Malfunctions of loom machines are the main cause of faulty fabric production. An on-loom fabric inspection system is a real-time monitoring device that enables immediate defect detection for human intervention. This dissertation presents a solution for on-loom fabric defect inspection, including new hardware, a configurable contact image sensor (CIS) module, for on-loom fabric scanning, together with the defect detection algorithms. The main contributions of this work include (1) creating a configurable CIS module adaptable to the loom width, which brings CIS's unique features, such as sub-millimeter resolution, compact size, short working distance, and low cost, to fabric defect inspection, (2) designing a two-level hardware architecture that can be efficiently deployed in a weaving factory with hundreds of looms, (3) developing a two-level inspection scheme in which initial defect screening is performed on a Raspberry Pi and intensive defect verification is processed on a cloud server, (4) introducing a novel pairwise-potential activation layer into a convolutional neural network, leading to high defect segmentation accuracy on fabrics with fine and imbalanced structures, (5) achieving real-time defect detection that allows a possible defect to be examined multiple times, and (6) implementing a new color segmentation technique suitable for processing multi-color fabric defects. The novel CIS-based on-loom scanning system offers real-time, high-resolution fabric images capable of resolving a single thread of a fabric. Evaluation on the fabric defect datasets showed no mis-detections on defect-free fabrics, and the average precision on images containing defects exceeded 90% at the pixel level. The integrity of the detected defect pixels, measured by recall, scored around 70%. Possible defect regions being overestimated relative to the ground truth images, and the morphology of fine defects resembling the regular fabric pattern, were the two major causes of the imperfection in defect pixel ...
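
The abstract does not define the pairwise-potential activation layer, so the PyTorch sketch below is only a guess at its general shape: augment a standard unary activation with messages from neighboring pixels (a pairwise smoothness term), one established way to sharpen segmentation of fine structures. The layer sizes and gating formulation are illustrative, not the dissertation's design.

```python
import torch
import torch.nn as nn

class PairwisePotentialActivation(nn.Module):
    # Speculative sketch: couple each unary activation with its local
    # neighborhood via a depthwise convolution acting as pairwise messages.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.pair = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels, bias=False)

    def forward(self, x):
        unary = torch.relu(x)          # standard unary activation
        pairwise = self.pair(unary)    # messages from neighboring pixels
        return unary + torch.sigmoid(pairwise) * unary

seg_head = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1),
                         PairwisePotentialActivation(16),
                         nn.Conv2d(16, 2, 1))         # defect / background maps
print(seg_head(torch.randn(1, 3, 64, 64)).shape)      # torch.Size([1, 2, 64, 64])
```
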
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2018
Creator: Ouyang, Wenbin

Improving Software Quality through Syntax and Semantics Verification of Requirements Models

Description: Software defects can frequently be traced to poorly specified requirements. Many software teams manage their requirements using tools such as checklists and databases, which lack a formal semantic mapping to system behavior. Such a mapping can be especially helpful for safety-critical systems. Another limitation of many requirements analysis methods is that much of the analysis must still be done manually. We propose techniques that automate portions of the requirements analysis process and clarify the syntax and semantics of requirements models, using a variety of methods that include machine learning tools and our own tool, VeriCCM. The machine learning tools help us identify potential model elements and verify their correctness. VeriCCM, a formalized extension of the causal component model (CCM), uses formal methods to ensure that requirements are well-formed, and it provides the beginnings of a full formal semantics. We also explore the use of statecharts to identify potential abnormal behaviors from a given set of requirements. At each stage, we perform empirical studies to evaluate the effectiveness of our proposed approaches.
Date: December 2018
Creator: Gaither, Danielle

Detection of Generalizable Clone Security Coding Bugs Using Graphs and Learning Algorithms

Description: This research methodology isolates coding properties and identifies the probability of security vulnerabilities using machine learning and historical data. Several approaches characterize the effectiveness of detecting security-related bugs that manifest as vulnerabilities, but none utilize vulnerability patch information. The main contribution of this research is a framework that analyzes LLVM intermediate representation code and merges core source code representations using source code properties. This research is beneficial because it allows source programs to be transformed into a graphical form from which users can extract specific code properties related to vulnerable functions. The result is an improved approach to detect, identify, and track software system vulnerabilities, supported by a performance evaluation. The methodology uses historical function-level vulnerability information, unique feature extraction techniques, a novel code property graph, and learning algorithms to minimize the amount of end-user domain knowledge necessary to detect vulnerabilities in applications. The analysis shows approximately 99% precision and recall in detecting known vulnerabilities in the National Institute of Standards and Technology (NIST) Software Assurance Metrics and Tool Evaluation (SAMATE) project. Furthermore, 72% of the historical vulnerabilities in the OpenSSL testing environment were detected using a linear support vector classifier (SVC) model.
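
Hypothetically, once per-function features have been extracted from the code property graph, the final step described above reduces to fitting a linear SVC on historically labeled data; the feature names and values below are invented for illustration and are not the dissertation's feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

# Invented per-function features, as might be read off a code property graph
# (node/edge counts by type, dangerous-call indicators, dataflow fan-in/out).
functions = [
    {"calls_strcpy": 1, "unchecked_len": 1, "dataflow_edges": 14},
    {"calls_strcpy": 0, "unchecked_len": 0, "dataflow_edges": 3},
    {"calls_strcpy": 1, "unchecked_len": 0, "dataflow_edges": 9},
    {"calls_strcpy": 0, "unchecked_len": 1, "dataflow_edges": 11},
]
vulnerable = [1, 0, 0, 1]   # labels supplied by historical patch data

X = DictVectorizer().fit_transform(functions)
clf = LinearSVC().fit(X, vulnerable)
print(clf.predict(X[:1]))   # predicted label for the first function
```
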
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2018
Creator: Mayo, Quentin R

Toward Supporting Fine-Grained, Structured, Meaningful and Engaging Feedback in Educational Applications

Description: Recent advancements in machine learning have started to leave their mark on educational technology. Technology is evolving fast, and as people adopt it, schools and universities must keep up (nearly 70% of primary and secondary schools in the UK now use tablets for various purposes). As these numbers are likely to follow the same increasing trend, it is imperative for schools to adapt and benefit from the advantages offered by technology: real-time processing of data, availability of different resources through connectivity, efficiency, and many others. To this end, this work contributes to the growth of educational technology by developing several algorithms and models that are meant to ease several tasks for instructors, engage students in deep discussions, and ultimately increase their learning gains. First, a novel, fine-grained knowledge representation is introduced that splits phrases into constituent propositions that are both meaningful and minimal, together with an automated algorithm for extracting those propositions. Compared with other fine-grained representations, the extraction model requires no human labor after it is trained, while the results show considerable improvement over two meaningful baselines. Second, a proposition alignment model is created that relies on even finer-grained units of text while also outperforming several alternative systems. Third, a detailed machine-learning-based analysis of students' unrestricted natural language responses to questions asked in classrooms is made by leveraging the proposition extraction algorithm to make computational predictions of textual assessment. Two computational approaches are introduced that use and compare manually engineered machine learning features with word embeddings fed into a neural network with two hidden layers. Both methods achieve notable improvements over two alternative approaches: a recent short answer grading system, and DiSAN, a recent, pre-trained, lightweight neural network that obtained state-of-the-art performance on multiple NLP tasks and corpora. Fourth, a ...
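
As a sketch of the second, neural approach mentioned above (word embeddings fed into a network with two hidden layers), the PyTorch snippet below pools token embeddings for each answer and regresses a score. The embedding dimension, hidden sizes, and mean pooling are illustrative assumptions rather than the dissertation's configuration.

```python
import torch
import torch.nn as nn

EMB_DIM, HIDDEN = 300, 128   # assumed sizes

grader = nn.Sequential(
    nn.Linear(EMB_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, 1),            # predicted assessment score
)

response_embeddings = torch.randn(4, 20, EMB_DIM)   # 4 answers x 20 tokens each
pooled = response_embeddings.mean(dim=1)            # simple averaging pool
print(grader(pooled).squeeze(-1))                   # one score per answer
```
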
Date: December 2018
Creator: Bulgarov, Florin Adrian