Search Results

FruitPAL: An IoT-Enabled Framework for Automatic Monitoring of Fruit Consumption in Smart Healthcare
This research proposes FruitPAL and FruitPAL 2.0, fully automatic devices that detect fruit consumption to reduce the risk of disease. Allergies to fruits can seriously impair the immune system. A novel device (FruitPAL) that detects fruits capable of causing allergic reactions is proposed in this thesis. The device can detect fifteen types of fruit and alert the caregiver when an allergic reaction may have occurred. The YOLOv8 model is employed to improve accuracy and response time in detecting hazards. The notification is transmitted to the caregiver's mobile device through the cloud, as it is a widely used medium. The proposed device can detect fruit with an overall precision of 86%. FruitPAL 2.0 is envisioned as a device that encourages people to consume fruit. Fruits contain a variety of essential nutrients that contribute to the general health of the human body. FruitPAL 2.0 is capable of analyzing the consumed fruit and then determining its nutritional value. FruitPAL 2.0 has been trained on YOLOv5 v6.0 and has an overall precision of 90% in detecting fruit. The purpose of this study is to encourage fruit consumption except when it causes illness. Even though fruit plays an important role in people's health, it can also pose risks. The proposed work not only alerts people to fruits that can cause allergies but also encourages them to consume fruits that benefit their health.
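As a rough illustration of the detect-and-alert pipeline described above, the sketch below runs a YOLO model over one captured frame and raises a notification when a fruit from an allergen list is detected with sufficient confidence. The weights file, the allergen list, and the notify_caregiver stub are assumptions for illustration, not details from the thesis.

```python
# Minimal detect-and-alert sketch, assuming the ultralytics package.
# The allergen list and notify_caregiver() are hypothetical placeholders.
from ultralytics import YOLO

ALLERGENS = {"strawberry", "kiwi", "peach"}  # hypothetical allergen classes
CONF_THRESHOLD = 0.5

def notify_caregiver(fruit: str, conf: float) -> None:
    # Stand-in for the cloud notification described in the abstract.
    print(f"ALERT: detected {fruit} (confidence {conf:.2f})")

model = YOLO("yolov8n.pt")  # placeholder weights; the thesis trains its own

for result in model("frame.jpg"):            # run inference on one frame
    for box in result.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        if cls_name in ALLERGENS and conf >= CONF_THRESHOLD:
            notify_caregiver(cls_name, conf)
```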
Using Blockchain to Ensure Reputation Credibility in Decentralized Review Management
In recent years, there have been incidents that decreased people's trust in some organizations and authorities responsible for ratings and accreditation. Among prominent examples, there was a security breach at Equifax (2017), misconduct was found at Standard & Poor's Ratings Services (2015), and the Accrediting Council for Independent Colleges and Schools (2022) validated some low-performing schools as delivering higher standards than they actually did. A natural solution to these types of issues is to decentralize the relevant trust-management processes using blockchain technologies. The research problems tackled in this thesis consider the issue of trust in reputation for assessment and review credibility from different angles, in the context of blockchain applications. We first explored the following question: how can we trust courses in one college to provide students with the type and level of knowledge needed in a specific workplace? Micro-accreditation on a blockchain was our solution, including a peer-review system that determines the rigor of a course through consensus. Rigor is the level of difficulty relative to a student's expected level of knowledge. Currently, we make assumptions about the quality and rigor of what is learned, but this is prone to human bias and misunderstanding. We present a decentralized approach that tracks student records throughout their academic progress at a school and helps match employers' requirements to students' knowledge. We do this by applying micro-accredited topics and Knowledge Units (KU), defined by the NSA's Centers of Academic Excellence, to courses and assignments. Using simulated datasets, we demonstrate that the system increases hiring accuracy and that it is efficient and scalable. Another problem is how we can trust that peer reviews are honest and reflect an accurate rigor score. Assigning reputation …
Blockchain for AI: Smarter Contracts to Secure Artificial Intelligence Algorithms
In this dissertation, I investigate the existing smart contract problems that limit cognitive abilities. I use Taylor series expansion, polynomial equations, and fraction-based computations to overcome the limitations of calculations in smart contracts. To prove the hypothesis, I use these mathematical models to compute complex operations of naive Bayes, linear regression, decision tree, and neural network algorithms on Ethereum public test networks. The smart contracts achieve 95% prediction accuracy compared to traditional programming-language models, proving the soundness of the numerical derivations. Many non-real-time applications can use our solution for trusted and secure prediction services.
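Since the EVM has no native floating point, approximating a function such as e^x inside a contract typically means a truncated Taylor series over scaled integers. The sketch below is a minimal Python illustration of that idea under an assumed fixed-point scale; it is not the dissertation's Solidity code.

```python
# Fixed-point Taylor approximation of e^x, mimicking integer-only smart
# contract arithmetic. SCALE and the term count are assumptions.
SCALE = 10**8  # fixed-point scale: the value 1.0 is stored as 10**8

def exp_fixed(x_fixed: int, terms: int = 12) -> int:
    """Approximate e^x with sum_{k=0}^{terms-1} x^k / k! in fixed point."""
    result = SCALE      # k = 0 term: 1.0
    term = SCALE        # running x^k / k!, starts at 1.0
    for k in range(1, terms):
        # multiply by x (both fixed point), then divide by k
        term = term * x_fixed // SCALE // k
        result += term
    return result

# e^1 is about 2.71828; exp_fixed(SCALE) / SCALE should be close to that.
print(exp_fixed(SCALE) / SCALE)
```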
Deep Learning Approaches to Radio Map Estimation
Radio map estimation (RME) is the task of predicting radio power at all locations in a two-dimensional area and at all frequencies in a given band. This thesis explores four deep learning approaches to RME: dual path autoencoders, skip connection autoencoders, diffusion, and joint learning with transmitter localization.
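Of the four approaches, the skip-connection autoencoder is the most standard; below is a minimal PyTorch sketch of that architecture for grid-shaped radio maps. The channel counts and layer sizes are illustrative assumptions, not the thesis's configuration.

```python
# Minimal skip-connection autoencoder sketch for 2-D radio maps (PyTorch).
# Input: sparse/partial power measurements on a grid; output: dense map.
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        # the decoder sees its own features concatenated with the skip from enc1
        self.dec2 = nn.ConvTranspose2d(16 + 16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                # (B, 16, H/2, W/2)
        e2 = self.enc2(e1)               # (B, 32, H/4, W/4)
        d1 = self.dec1(e2)               # (B, 16, H/2, W/2)
        d1 = torch.cat([d1, e1], dim=1)  # skip connection
        return self.dec2(d1)             # (B, 1, H, W)

maps = torch.randn(4, 1, 64, 64)          # toy batch of 64x64 maps
print(SkipAutoencoder()(maps).shape)      # torch.Size([4, 1, 64, 64])
```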
Improving Communication and Collaboration Using Artificial Intelligence: An NLP-Enabled Pair Programming Collaborative-ITS Case Study
This dissertation investigates computational models and methods to improve collaboration skills among students. The study targets pair programming, a popular collaborative learning practice in computer science education. This research led to the first machine learning models capable of detecting micromanagement, exclusive language, and other types of collaborative talk during pair programming. The investigation of computational models led to a novel method for adapting pretrained language models by first training them with a multi-task learning objective. I performed computational linguistic analysis of the types of interactions commonly seen in pair programming and obtained computationally tractable features to classify collaborative talk. In addition, I evaluated a novel metric used to assess the models in this dissertation; this metric is applicable in the areas of affective systems, formative feedback systems, and the broader field of computer science. Lastly, I present a computational method, CollabAssist, for providing real-time feedback to improve collaboration. The empirical evaluation of CollabAssist demonstrated a statistically significant reduction in micromanagement during pair programming. Overall, this dissertation contributes to the development of better collaborative learning practices and facilitates greater student learning gains, thereby improving students' computer science skills.
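The adaptation step described above, training a pretrained language model with a multi-task objective before the target task, can be sketched roughly as follows. The shared encoder, head sizes, and task choices here are generic assumptions, not the dissertation's exact setup.

```python
# Rough multi-task fine-tuning sketch (PyTorch + Hugging Face transformers).
# Model name, head sizes, and the two tasks are illustrative assumptions.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskModel(nn.Module):
    def __init__(self, name="bert-base-uncased", n_dialogue_acts=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)   # shared encoder
        hidden = self.encoder.config.hidden_size
        self.micromanage_head = nn.Linear(hidden, 2)     # binary task
        self.dialogue_head = nn.Linear(hidden, n_dialogue_acts)

    def forward(self, **inputs):
        cls = self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
        return self.micromanage_head(cls), self.dialogue_head(cls)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskModel()
batch = tok(["You should do it exactly my way."], return_tensors="pt")
logits_mm, logits_da = model(**batch)
# Training would sum per-task cross-entropy losses over labeled batches.
```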
Paradigm Shift from Vague Legal Contracts to Blockchain-Based Smart Contracts
In this dissertation, we address the problem of vagueness in traditional legal contracts by presenting novel methodologies that aid in the paradigm shift from traditional legal contracts to smart contracts. We discuss key enabling technologies that assist in converting a traditional natural-language legal contract, which is full of vague words, phrases, and sentences, into a precise blockchain-based smart contract, including metrics evaluation during our conversion experiments. To address the challenges of this contract-transformation process, we propose four novel proof-of-concept approaches that take vagueness and different possible interpretations into consideration, experimenting with popular vendors' existing vague legal contracts. We show through experiments that our proposed methodologies are able to measure the degree of vagueness in every interpretation and to demonstrate which vendor's translated smart contract is more accurate, more optimized, and less vague. We also incorporated fuzzy logic inside the blockchain-based smart contract to model the semantics of linguistic expressions. Our experiments and results show that a smart contract with higher degrees of truth can be technically complex but more accurate at the same time. By using fuzzy logic inside a smart contract, it becomes easier to resolve contractual ambiguities and to expedite the process of claiming compensation when implemented in a blockchain-based smart contract.
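To make the fuzzy-logic idea concrete: a vague contractual term such as "prompt delivery" can be given a membership function that maps a measured quantity to a degree of truth in [0, 1]. The triangular membership below is a generic textbook construction, assumed here only for illustration.

```python
# Generic fuzzy-logic sketch: degree of truth for a vague contract term.
# The breakpoints (2 and 7 days) are made-up illustration values.
def prompt_delivery_truth(days: float) -> float:
    """Degree to which a delivery counts as 'prompt'.

    Fully true up to 2 days, fully false after 7, linear in between.
    """
    if days <= 2:
        return 1.0
    if days >= 7:
        return 0.0
    return (7 - days) / (7 - 2)

# A compensation clause can then scale with the degree of violation.
for d in (1, 4, 8):
    truth = prompt_delivery_truth(d)
    print(f"{d} days -> prompt={truth:.2f}, penalty={(1 - truth) * 100:.0f}%")
```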
A Platform for Aligning Academic Assessments to Industry and Federal Job Postings
The proposed tool provides users with a platform for side-by-side comparison of classroom assessments and job-posting requirements. Using techniques from NLP, machine learning, data analysis, and data mining, the employed algorithm analyzes job postings and classroom assessments, extracts and classifies the skill units within them, and then compares the sets of skills from the different input sources. This effectively provides a predicted alignment between academic and career sources, both federal and industrial. The compiled tool results indicate an overall accuracy score of 82% and an alignment score of 75.5% between the input assessments and the overall job postings. In other words, the 50 UNT assessments and 5,000 industry and federal job postings examined demonstrate a compatibility (alignment) of 75.5%, a measure calculated using a tool operating at an 82% precision rate.
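One simple way to approximate the skill-alignment computation described above is TF-IDF vectorization plus cosine similarity between assessment text and job-posting text. The sketch below uses scikit-learn and toy strings; it is an assumption about the general approach, not the platform's actual pipeline.

```python
# Toy alignment sketch: cosine similarity between an assessment and postings.
# scikit-learn based; not the platform's actual extraction/classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

assessment = "Implement a binary search tree with insert, delete, and traversal."
postings = [
    "Seeking engineer with data structures, algorithms, and Java experience.",
    "Marketing associate for social media campaign management.",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([assessment] + postings)

# Row 0 is the assessment; compare it against every posting.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for posting, score in zip(postings, scores):
    print(f"alignment={score:.2f}  {posting[:50]}")
```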
Reinforcement Learning-Based Test Case Generation with Test Suite Prioritization for Android Application Testing
This dissertation introduces a hybrid strategy for automated testing of Android applications that combines reinforcement learning and test suite prioritization. These approaches aim to improve the effectiveness of the testing process by employing reinforcement learning algorithms, namely Q-learning and SARSA (State-Action-Reward-State-Action), for automated test case generation. The studies provide compelling evidence that reinforcement learning techniques hold great potential in generating test cases that consistently achieve high code coverage; however, the generated test cases may not always be in the optimal order. In this study, novel test case prioritization methods are developed, leveraging pairwise event interactions coverage, application state coverage, and application activity coverage, so as to optimize the rates of code coverage specifically for SARSA-generated test cases. Additionally, test suite prioritization techniques are introduced based on UI element coverage, test case cost, and test case complexity to further enhance the ordering of SARSA-generated test cases. Empirical investigations demonstrate that applying the proposed test suite prioritization techniques to the test suites generated by the reinforcement learning algorithm SARSA improved the rates of code coverage over original orderings and random orderings of test cases.
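For readers unfamiliar with the tabular form of these algorithms, the following sketch shows a Q-learning update over abstract GUI events. The environment stub, the reward (e.g., newly covered code), and all hyperparameters are illustrative assumptions, not the dissertation's instrumentation.

```python
# Tabular Q-learning sketch for GUI test generation. The transition stub,
# reward signal, and hyperparameters are illustrative only.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
Q = defaultdict(float)  # (state, event) -> value

def choose_event(state, events):
    if random.random() < EPSILON:                       # explore
        return random.choice(events)
    return max(events, key=lambda e: Q[(state, e)])     # exploit

def q_update(state, event, reward, next_state, events):
    best_next = max(Q[(next_state, e)] for e in events)
    Q[(state, event)] += ALPHA * (reward + GAMMA * best_next - Q[(state, event)])

# Toy episode: events on one screen, reward = newly covered code (stubbed).
events = ["tap_login", "tap_settings", "scroll", "back"]
state = "main_screen"
for _ in range(100):
    e = choose_event(state, events)
    next_state, reward = "main_screen", random.random()  # stub transition
    q_update(state, e, reward, next_state, events)
    state = next_state
```

SARSA differs only in the update: it uses the value of the event actually chosen in the next state rather than the max over all next events.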
Scalable Next Generation Blockchains for Large Scale Complex Cyber-Physical Systems and Their Embedded Systems in Smart Cities
The original FlexiChain and its descendants are a distributed ledger technology (DLT) for cyber-physical systems (CPS) and their embedded systems (ES). FlexiChain, a DLT implementation, uses cryptography, distributed ledgers, peer-to-peer communications, scalable networks, and consensus, and it facilitates agreement on the shared data structure. This thesis offers a Block Directed Acyclic Graph (BDAG) architecture that links blocks to their forerunners to speed up validation; these data blocks are securely linked. This dissertation introduces Proof of Rapid Authentication, a novel consensus algorithm that uses a distributed file to safely store a unique identifier (UID) based on node attributes, enabling faster verification between two blocks. This study also addresses CPS hardware security. A system of interconnected, user-unique identifiers allows each block's history to be monitored, preserving each transaction and the validators who checked the block to ensure trustworthiness and honesty. We constructed a digital version that stays in sync with the distributed ledger, as all nodes are linked by a NodeChain; the ledger is distributed without compromising node autonomy. Moreover, a FlexiChain Layer 0 distributed ledger is introduced that can connect and validate Layer 1 blockchains. This project produced a DAG-based blockchain integration platform with hardware security. The results illustrate a practical technique for creating a system according to diverse applications' needs. This research's design and execution showed faster authentication, lower cost, less complexity, greater scalability, higher interoperability, and reduced power consumption.
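As a structural illustration of the BDAG idea, each block can reference multiple forerunner blocks by hash, so validation can proceed along several branches at once. The minimal Python sketch below shows only the linking; the UID-based Proof of Rapid Authentication logic is not reproduced here, and the field layout is an assumption.

```python
# Minimal block-DAG linking sketch: blocks reference multiple parents by
# hash. Field layout is illustrative, not FlexiChain's actual format.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, parents: list[str]) -> dict:
    block = {"data": data, "parents": sorted(parents)}
    block["hash"] = block_hash(block)
    return block

genesis = make_block("genesis", [])
a = make_block("sensor reading A", [genesis["hash"]])
b = make_block("sensor reading B", [genesis["hash"]])
# A later block can confirm both branches at once -- the DAG property.
merge = make_block("aggregate", [a["hash"], b["hash"]])
print(merge["parents"])
```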
Deep Learning Methods to Investigate Online Hate Speech and Counterhate Replies to Mitigate Hateful Content
Hateful content and offensive language are commonplace on social media platforms. Many surveys show that high percentages of social media users experience online harassment. Previous efforts have been made to detect and remove online hate content automatically. However, removing users' content restricts free speech. A complementary strategy that does not interfere with free speech is to counter the hate with new content that diverts the discourse away from it. In this dissertation, we address the lack of previous work on counterhate arguments by analyzing and detecting them. First, we study the relationships between hateful tweets and their replies. Specifically, we analyze their fine-grained relationships by indicating whether the reply counters the hate, provides a justification, attacks the author of the tweet, or adds additional hate. The most striking finding is that most replies agree with the hateful tweets; only 20% of them counter the hate. Second, we focus on hate directed toward individuals and detect authentic counterhate arguments from online articles. We propose a methodology that ensures the authenticity of the argument and its specificity to the individual of interest. We show that finding arguments in online articles is an efficient alternative to counterhate generation approaches, which may hallucinate unsupported arguments. Third, we investigate the replies to counterhate tweets beyond whether the reply agrees or disagrees with the counterhate tweet. We analyze the language of counterhate tweets that leads to certain types of replies and predict which counterhate tweets may elicit more hate instead of stopping it. We find that counterhate tweets containing profanity elicit replies that agree with the counterhate tweet. This dissertation presents several corpora, detailed corpus analyses, and deep learning-based approaches for the three tasks above.
Evaluating Stack Overflow Usability Posts in Conjunction with Usability Heuristics
This thesis explores the critical role of usability in software development and uses usability heuristics as a cost-effective and efficient method for evaluating various software functions and interfaces. With the proliferation of software development in the modern digital age, developing user-friendly interfaces that meet users' needs and preferences has become a complex process. Usability heuristics, a set of guidelines based on principles of human-computer interaction, provide a starting point for designers to create intuitive, efficient, and easy-to-use interfaces that deliver a seamless user experience. The study uses Jakob Nielsen's ten usability heuristics to evaluate the usability of Stack Overflow posts, a popular Q&A website for developers. Through the analysis of 894 usability-related posts, the study identifies common usability problems faced by users and developers, providing valuable insights into the effectiveness of usability guidelines in software development practice. The research findings emphasize the need for ongoing evaluation and improvement of software interfaces to ensure a seamless user experience. The thesis concludes by highlighting the potential of usability heuristics in guiding the design of user-friendly software interfaces and improving the overall user experience in software development.
Multiomics Data Integration and Multiplex Graph Neural Network Approaches
As data and technology grow, multiple types of data are generated from the same set of nodes. Since each data modality captures a unique aspect of the underlying mechanisms, multiple data types are integrated. In addition to multiple data types, networks are important for storing information that represents associations between entities, such as genes in a protein-protein interaction network or authors in a citation network. Recently, advanced approaches to graph-structured data, called graph neural networks (GNNs), leverage node associations and features simultaneously, but they have limitations for integrative approaches. The overall aim of this dissertation is to integrate multiple data modalities on graph-structured data to infer context-specific gene regulation and predict outcomes of interest. To this end, we first introduce a computational tool named CRINET to infer genome-wide competing endogenous RNA (ceRNA) networks. By properly integrating multiple data types, we gained a better understanding of gene regulatory circuitry, addressing important drawbacks pertaining to ceRNA regulation. We tested CRINET on breast cancer data and found that ceRNA interactions and groups were significantly enriched in cancer-related genes and processes. CRINET-inferred ceRNA groups supported studies claiming a relation between immunotherapy and cancer. Second, we present SUPREME, a node classification framework that comprehensively analyzes multiple data types and node associations using graph convolutions on multiple networks. Our results on survival analysis suggest that SUPREME can demystify the characteristics of classes through proper utilization of multiple data types and networks. Finally, we introduce an attention-aware fusion approach, called GRAF, which fuses multiple networks and utilizes attention mechanisms on graph-structured data. Learned node- and association-level attention with network fusion allowed us to prioritize edges properly, improving the prediction results. Given the findings of all three tools and their outperformance of state-of-the-art methods, the proposed dissertation shows the importance of integrating …
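The graph-convolution building block used by frameworks like those described above is compact enough to sketch directly: normalize the adjacency matrix and propagate node features through it. The NumPy sketch below follows the common Kipf-Welling formulation; the sizes are toy values, and multiplex models would apply this per network and then fuse the embeddings.

```python
# One graph-convolution layer in NumPy (Kipf-Welling style normalization):
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W). Toy sizes only.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, in_dim, out_dim = 5, 8, 4

A = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
A = np.maximum(A, A.T)                     # make the graph undirected
A_hat = A + np.eye(n_nodes)                # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = rng.standard_normal((n_nodes, in_dim))   # node features
W = rng.standard_normal((in_dim, out_dim))   # layer weights

H = np.maximum(A_norm @ X @ W, 0.0)          # ReLU(A_norm X W)
print(H.shape)                               # (5, 4)
```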
Toward Leveraging Artificial Intelligence to Support the Identification of Accessibility Challenges
The goal of this thesis is to support the automated identification of accessibility issues in user reviews and bug reports, to help technology professionals prioritize their handling and thus create more inclusive apps. In particular, we propose a model that takes accessibility user reviews or bug reports as input and learns keyword-based features to decide, for a given review, whether it concerns accessibility. Our empirically driven study follows a mixture of qualitative and quantitative methods. We introduce models that accurately identify accessibility reviews and bug reports and automate their detection. Our models can automatically classify app reviews and bug reports as accessibility-related or not, so developers can easily detect accessibility issues in their products and use the users' input to make their apps more accessible and inclusive. Our goal is to create sustainable change by including the model in the developer's software-maintenance pipeline and by raising awareness of existing errors that hinder the accessibility of mobile apps, which is a pressing need. Our findings from the Blackboard case study indicate that Blackboard and its course material are not easily accessible to deaf and hard-of-hearing students; consequently, these students found learning extremely stressful during the pandemic.
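A keyword-feature classifier of the kind described can be prototyped in a few lines with scikit-learn. The pipeline and the tiny toy dataset below are assumptions for illustration, not the thesis's trained model or labeled corpora.

```python
# Toy accessibility-review classifier: TF-IDF keyword features + logistic
# regression. Training strings are made up; the real model uses labeled
# corpora of reviews and bug reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Screen reader skips the checkout button entirely.",
    "Font size option does not help low vision users.",
    "Great app, love the new dark theme!",
    "Crashes when I upload a photo.",
]
labels = [1, 1, 0, 0]  # 1 = accessibility-related

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, labels)

print(clf.predict(["VoiceOver cannot read the menu labels."]))  # likely [1]
```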
Integrating Multiple Deep Learning Models for Disaster Description in Low-Altitude Videos
Computer vision technologies are rapidly improving and becoming more important in disaster response. Most disaster description techniques focus either on identifying objects or on categorizing disasters. In this study, we trained multiple deep neural networks on low-altitude imagery with highly imbalanced and noisy labels. We utilize labeled images from the LADI dataset to formulate a solution to the general problem of disaster classification and object detection. Our research integrated and developed multiple deep learning models that perform both the object detection task and the disaster scene classification task. Our solution is competitive in the TRECVID Disaster Scene Description and Indexing (DSDI) task, demonstrating that it is comparable to other proposed approaches for retrieving disaster-related video clips.
Machine Learning Methods for Data Quality Aspects in Edge Computing Platforms
In this research, three aspects of data quality with regard to artificial intelligence (AI) have been investigated: detection of misleading fake data (especially deepfakes), data scarcity, and data insufficiency (in particular, how much training data is required for an AI application). Application domains where the selected aspects pose issues have been chosen. To address the issues of data privacy, security, and regulation, these solutions are targeted at edge devices. In Chapter 3, two solutions are proposed that aim to preempt misleading deepfake videos and images on social media; both are deployable at edge devices. In Chapter 4, a deepfake-resilient digital ID system is described. Another data quality aspect, data scarcity, is addressed in Chapter 5 in the agriculture domain; one such problem is estimating crop damage due to natural disasters. Data insufficiency is another aspect of data quality: the amount of data required to achieve acceptable accuracy in a machine learning (ML) model is studied in Chapter 6. As the data scarcity problem is studied in the agriculture domain, a similar scenario (plant disease detection and damage estimation) has been chosen for this verification. This research aims to provide ML and deep learning (DL)-based methods that solve several data quality-related issues in different application domains while achieving high accuracy. We hope that this work will contribute to research on the application of machine learning techniques in domains where data quality is a barrier to success.
Registration of Point Sets with Large and Uneven Non-Rigid Deformation
Non-rigid point set registration under significantly uneven deformations is a challenging problem for many applications, such as pose estimation, three-dimensional object reconstruction, and human movement tracking. In this dissertation, we present a novel probabilistic non-rigid registration method that aligns point sets with significantly uneven deformations by enforcing constraints from corresponding key points and preserving local neighborhood structures. Registration is treated as a density estimation problem. Incorporating correspondence among key points regularizes the optimization process for large, uneven deformations. In addition, by leveraging neighborhood embedding using Stochastic Neighbor Embedding (SNE), as well as an alternative based on Locally Linear Embedding (LLE), our method penalizes incoherent transformations and hence preserves the local structure of the point sets. Our method also detects key points in the point sets based on geodesic distance, and correspondences are established using a new cluster-based, region-aware feature descriptor that encodes the association of a cluster with the left-right (symmetry) or upper-lower regions of the point sets. We conducted comparison studies using public point sets and our human point sets. The experimental results demonstrate that our proposed method reduces the registration error by at least 42.2% compared with the state-of-the-art method, and it performs far better in cases with a large degree of deformation. Experimental results also show more than 30% improvement when key point correspondence is used. Our study shows that the influence of the global constraint (key point correspondences) is greater than that of the local constraint. Our analysis of incorrect key point correspondences reveals the sensitivity of the proposed method: given erroneous but symmetric correspondences, our method still produces fairly good results. In addition, our timing study reports competitive computational efficiency for the proposed method.
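Treating registration as density estimation typically means modeling one point set as a Gaussian mixture centered on the (transformed) other set and computing soft correspondences in an E-step, as in coherent point drift. The NumPy sketch below shows just that soft-correspondence computation under an assumed isotropic variance; the thesis's key-point and neighborhood-embedding constraints are not included.

```python
# Soft correspondences for density-estimation registration (E-step of a
# CPD-style Gaussian mixture). The uniform outlier term is omitted.
import numpy as np

def soft_correspondences(X: np.ndarray, Y: np.ndarray, sigma2: float):
    """P[m, n] = probability that target point X[n] was drawn from the
    Gaussian centered on source point Y[m]."""
    # squared distances between every source center and target point
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2 / (2.0 * sigma2))
    return G / G.sum(axis=0, keepdims=True)   # normalize over centers

rng = np.random.default_rng(1)
Y = rng.standard_normal((4, 2))               # source point set
X = Y + 0.05 * rng.standard_normal((4, 2))    # slightly deformed target
P = soft_correspondences(X, Y, sigma2=0.1)
print(P.argmax(axis=0))                       # most likely source per target
```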
Reliability and Throughput Improvement in Vehicular Communication by Using 5G Technologies
The vehicular community is moving toward a whole new paradigm with the advancement of new technology. Vehicular communication not only supports safety services but also provides non-safety services like navigation support, toll collection, web browsing, and media streaming. Existing communication frameworks like Dedicated Short Range Communication (DSRC) and Cellular V2X (C-V2X) might not meet the required capacity in the coming days, so the vehicular community needs to adopt new technologies and upgrade the existing communication frameworks to fulfill the expected demand. Improvements in reliability and data rate are therefore required. Multiple Input Multiple Output (MIMO), 5G New Radio, Low Density Parity Check (LDPC) codes, and massive MIMO signal detection and equalization algorithms are the latest additions to the 5G wireless communication domain. These technologies have the potential to make the existing V2X communication framework more robust, yielding higher reliability and throughput. This work demonstrates these technologies' compatibility with, and positive impact on, the existing V2X communication standard.
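As a small taste of the MIMO detection mentioned above, the NumPy sketch below recovers transmitted symbols with a zero-forcing equalizer (the pseudo-inverse of the channel matrix). The 2x2 channel, QPSK constellation, and noise level are toy assumptions.

```python
# Zero-forcing MIMO equalization sketch: x_hat = pinv(H) @ y.
# Toy 2x2 channel with QPSK symbols; real systems estimate H from pilots.
import numpy as np

rng = np.random.default_rng(7)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, size=2)                    # transmitted symbols
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + noise                               # received signal

x_hat = np.linalg.pinv(H) @ y                   # zero-forcing estimate
# Slice each estimate back to the nearest constellation point.
detected = qpsk[np.argmin(np.abs(x_hat[:, None] - qpsk[None, :]), axis=1)]
print(np.allclose(detected, x))                 # expect True at this SNR
```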
Secure and Decentralized Data Cooperatives via Reputation Systems and Blockchain
This dissertation focuses on a novel area of secure data management referred to as data cooperatives. A data cooperative promises its users better protection and control of their personal data than the traditional handling of that data by collectors (such as governments, big data companies, and others). However, despite the many benefits the data cooperative approach promises, it faces a few challenges hindering its development, adoption, and widespread use among data providers and consumers. To address these issues, we have divided this dissertation into two parts. In the first part, we identify the existing challenges and propose and implement a decentralized architecture built atop a blockchain system. Our solution leverages the inherent decentralization, tamper resistance, and security properties of the blockchain. We implemented our system on an existing blockchain test network, Ropsten, and our results show that blockchain is an efficient and scalable platform for developing a decentralized data cooperative. In the second part of this work, we further address the existing challenges and the limitations of the first part's implementation. In particular, we address inclusivity, a core component of establishing trust in our system, by implementing a multi-tier, reputation-based selection lottery that gives a 'fair' chance to newly joined members of the data cooperative, in contrast to existing approaches in which newly joined members are ignored until they build their reputation or stake to an acceptable level in the system. We also developed a mathematical model, using an infinitely repeated game-theoretic approach, that ensures malicious members in the system are swiftly eliminated while honest members are incentivized, resulting in a system where the majority of the members are more interested in their long-term payoff and play according to the system's …
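A multi-tier, reputation-weighted lottery of the kind described can be sketched with weighted sampling plus a guaranteed newcomer share. The tier split, the weights, and the 25% newcomer share below are invented for illustration only.

```python
# Reputation-based selection lottery with a reserved newcomer slot.
# Tier boundaries and the 25% newcomer share are illustrative assumptions.
import random

members = {                      # member -> reputation score
    "alice": 90, "bob": 55, "carol": 20,
    "dave": 0, "erin": 0,        # newly joined, no reputation yet
}
NEWCOMER_SHARE = 0.25            # fraction of draws reserved for newcomers

def select_validator() -> str:
    newcomers = [m for m, r in members.items() if r == 0]
    if newcomers and random.random() < NEWCOMER_SHARE:
        return random.choice(newcomers)          # fair chance for new members
    veterans = {m: r for m, r in members.items() if r > 0}
    names, weights = zip(*veterans.items())
    return random.choices(names, weights=weights, k=1)[0]

picks = [select_validator() for _ in range(1000)]
print({m: picks.count(m) for m in members})
```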
Understanding and Addressing Accessibility Barriers Faced by People with Visual Impairments on Block-Based Programming Environments
There is an increased use of block-based programming environments in K-12 education and computing outreach activities to introduce novices to programming and computational thinking skills. However, despite their appealing design that allows students to focus on concepts rather than syntax, block-based programming by design is inaccessible to people with visual impairments and people who cannot use the mouse. In addition to this inaccessibility, little is known about the instructional experiences of students with visual impairments on current block-based programming environments. This dissertation addresses this gap by (1) investigating the challenges that students with visual impairments face on current block-based programming environments and (2) exploring ways in which we can use the keyboard and the screen reader to create block-based code. Through formal survey and interview studies with teachers of students with visual impairments and students with visual impairments, we identify several challenges faced by students with visual impairments on block-based programming environments. Using the knowledge of these challenges and building on prior work, we explore how to leverage the keyboard and the screen reader to improve the accessibility of block-based programming environments through a prototype of an accessible block-based programming library. In this dissertation, our empirical evaluations demonstrate that people with visual impairments can effectively and efficiently create, edit, and navigate block-based code using the keyboard and screen reader alone. Addressing the inaccessibility of block-based programming environments would allow students with visual impairments to participate in programming and computing activities where these systems are used, thus fostering inclusion and promoting diversity.
Understanding and Reasoning with Negation
In this dissertation, I start with an analysis of negation in eleven benchmark corpora covering six Natural Language Understanding (NLU) tasks. Through a thorough investigation, I first show that (a) these benchmarks contain fewer negations than general-purpose English and (b) the few negations they contain are often unimportant. Further, my empirical studies demonstrate that state-of-the-art transformers trained on these corpora obtain substantially worse results on instances that contain negation, especially if the negations are important. Second, I investigate whether translating negation is also an issue for modern machine translation (MT) systems. My studies find that the presence of negation can indeed significantly impact translation quality, in some cases resulting in quality reductions of over 60%. In light of these findings, I investigate strategies to better understand the semantics of negation. I start by identifying the focus of negation, developing a neural model that takes into account the scope of negation, context from neighboring sentences, or both. My best proposed system improves accuracy by 7.4% over prior work. I also analyze the systems' main error categories through a detailed error analysis. Next, I explore more practical ways to understand the semantics of negation, revealing the meaning of negations through their affirmative interpretations. First, I propose a question-answer driven approach to create AFIN, a collection of 3,001 sentences with verbal negations and their affirmative interpretations. Then, I present an automated procedure to collect pairs of sentences with negation and their affirmative interpretations, resulting in over 150,000 pairs. Experimental results demonstrate that leveraging these pairs helps (a) a T5 system generate affirmative interpretations from negations in AFIN and (b) state-of-the-art transformers solve natural language understanding tasks, including natural language inference and sentiment analysis. Furthermore, I develop a plug-and-play affirmative interpretation generator that is potentially …
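Generating an affirmative interpretation with a sequence-to-sequence model like T5 follows the standard conditional-generation pattern sketched below. The base checkpoint, the prompt prefix, and the example sentence are placeholders; the dissertation fine-tunes its own model on AFIN-style pairs.

```python
# Conditional generation sketch with T5 (Hugging Face transformers).
# "t5-small" and the "affirm:" prompt are placeholders; a model fine-tuned
# on negation/affirmative pairs would be loaded here instead.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

negated = "The committee did not reject the proposal."
inputs = tok("affirm: " + negated, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(outputs[0], skip_special_tokens=True))
```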
Advanced Stochastic Signal Processing and Computational Methods: Theories and Applications
Compressed sensing has been proposed as a computationally efficient method to estimate finite-dimensional signals. The idea is to develop an undersampling operator that can sample large but finite-dimensional sparse signals at a rate well below the required Nyquist rate; in other words, given the sparsity level of the signal, compressed sensing samples the signal at a rate proportional to the amount of information hidden in it. In this dissertation, we first employ compressed sensing for physical-layer signal processing of directional millimeter-wave communication. Second, we examine the theoretical side of compressed sensing through a comprehensive analysis that addresses two main unsolved problems: (1) continuous-extension compressed sensing in locally convex space and (2) computing the optimum subspace and its dimension using the idea of equivalent topologies and the Köthe sequence. In the first part of this thesis, we employ compressed sensing to address various problems in directional millimeter-wave communication. In particular, we focus on the stochastic characteristics of the underlying channel to characterize, detect, estimate, and track the angular parameters of doubly directional millimeter-wave communication. For this purpose, we employ compressed sensing in combination with other stochastic methods, such as correlation matrix distance (CMD), spectral overlap, autoregressive processes, and fuzzy entropy, to (1) study the (non-)stationary behavior of the channel and (2) estimate and track channel parameters. The signals in this class of applications are finite-dimensional, and compressed sensing demonstrates great capability in sampling them; nevertheless, it does not perform as well when sampling semi-infinite and infinite-dimensional signals. The second part of the thesis presents more theoretical work on compressed sensing toward applications. In chapter 4, we leverage group Fourier theory and the stochastic nature of directional communication to introduce linear and quadratic families of displacement operators that …
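The core compressed-sensing recovery step, estimating a sparse x from undersampled measurements y = Ax, can be illustrated with iterative soft thresholding (ISTA). The dimensions, sparsity level, regularization weight, and step size below are toy assumptions.

```python
# ISTA sketch for sparse recovery from undersampled measurements y = A x.
# Toy sizes; the step size 1 / ||A||^2 keeps the iteration stable.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 100, 30, 4                       # signal dim, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)       # gradient step on ||Ax - y||^2 / 2
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # small error
```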
Deep Learning Optimization and Acceleration
The novelty of this dissertation is the optimization and acceleration of deep neural networks aimed at real-time predictions with minimal energy consumption. It consists of cross-layer optimization, output-directed dynamic quantization, and opportunistic near-data computation for deep neural network acceleration. On two datasets (CIFAR-10 and CIFAR-100), the proposed optimization and acceleration frameworks are tested using a variety of convolutional neural networks (e.g., LeNet-5, VGG-16, GoogLeNet, DenseNet, ResNet). Experimental results are promising when compared with other state-of-the-art deep neural network acceleration efforts in the literature.
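For context, quantization shrinks and speeds up a network by replacing float weights with low-precision integers. The sketch below applies PyTorch's stock dynamic quantization to a small model; it shows the general mechanism only, not the dissertation's output-directed scheme.

```python
# Stock dynamic quantization in PyTorch: weights stored as int8, activations
# quantized on the fly. Illustrates the mechanism, not the thesis's method.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(model(x).shape, quantized(x).shape)   # same interface, smaller weights
```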
Helping Students with Upper Limb Motor Impairments Program in a Block-Based Programming Environment Using Voice
Students with upper-body motor impairments, caused by conditions such as cerebral palsy, multiple sclerosis, and ALS, face challenges when learning to program in block-based programming environments, because these environments depend heavily on physically manipulating a mouse or keyboard to drag and drop elements on the screen. In my dissertation, I make the block-based programming environment Blockly accessible to students with upper-body motor impairments by adding speech as an alternative form of input. This voice-enabled version of Blockly reduces the need for a mouse or keyboard, making it more accessible to these students. The voice-enabled Blockly system consists of the original Blockly application, a speech recognition API, predefined voice commands, and a custom function. Three user studies were conducted: a preliminary study, a usability study, and an A/B test. These studies revealed, among other things, the need for simpler, shorter, and more intuitive commands, the need to change the target audience, and the shortcomings of speech recognition systems. The feedback received from each study influenced design decisions at different phases, and the findings also suggested directions for future work. This work was started and finished in two years.
Autonomic Zero Trust Framework for Network Protection
With technological improvements, the number of Internet-connected devices is increasing tremendously. We also observe an increase in cyberattacks, since attackers want to use all these interconnected devices for malicious purposes. Even though many proactive security solutions exist, it is not practical to run all of them on such devices, as the devices have limited computational resources and may even be battery operated. As an alternative, Zero Trust Architecture (ZTA) has become popular because it defines boundaries; requires monitoring and evaluating all events, configurations, and connections; rejects by default and accepts only what is known and approved; and applies continuous trust evaluation. In addition, we need to respond as quickly as possible, which cannot be managed through human interaction but only through the autonomic computing paradigm. Therefore, in this work, we propose a framework that implements ZTA using the autonomic computing paradigm. The proposed solution, the Autonomic ZTA Management Engine (AZME) framework, focuses on enforcing ZTA on the network; it uses a set of sensors to monitor the network and a set of user-defined policies to define which actions to take (through a controller). We have implemented a Python prototype as a proof of concept that inspects network packets and enforces ZTA by checking each source and destination against the given policies and continuously evaluating the trust of connections. If an unaccepted connection is made, the prototype can block it by creating a firewall rule at runtime.
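A minimal version of the policy-checking loop described above might look like the following. The policy format, the trust-scoring rule, and the block_connection stub are assumptions rather than AZME's actual implementation.

```python
# Minimal zero-trust policy check sketch: deny by default, allow only
# explicitly whitelisted flows, and decay trust on violations.
# Policy format and trust model are illustrative assumptions.

POLICIES = [  # (src prefix, dst prefix, dst port) tuples that are allowed
    ("10.0.1.", "10.0.2.", 443),
    ("10.0.1.", "10.0.3.", 22),
]
trust = {}  # (src, dst) -> score in [0, 1]

def block_connection(src: str, dst: str) -> None:
    print(f"blocking {src} -> {dst} (would insert a firewall rule here)")

def evaluate(src: str, dst: str, port: int) -> bool:
    allowed = any(
        src.startswith(s) and dst.startswith(d) and port == p
        for s, d, p in POLICIES
    )
    key = (src, dst)
    score = trust.get(key, 0.5)
    trust[key] = min(1.0, score + 0.1) if allowed else max(0.0, score - 0.25)
    if not allowed or trust[key] < 0.2:     # reject by default
        block_connection(src, dst)
        return False
    return True

evaluate("10.0.1.5", "10.0.2.9", 443)   # allowed flow, trust grows
evaluate("10.0.9.9", "10.0.2.9", 443)   # unknown source, blocked
```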
Design of a Low-Cost Spirometer to Detect COPD and Asthma for Remote Health Monitoring
This work develops a simple, low-cost, microphone-based spirometer with a scalable infrastructure that can be used to monitor COPD and asthma symptoms. The data acquired by the system are archived in the cloud for further processing and reporting. To keep costs low, we build the system from an off-the-shelf ESP32 development board, a MEMS microphone, an oxygen mask, and a 3D-printable mounting tube. The system uses the MEMS microphone to measure the audio signal of a user's exhalation, calculates diagnostic estimates, and uploads them to the cloud for remote monitoring. Our results show a practical system that can identify COPD and asthma symptoms and report the data to both the patient and the physician. The system can thus gather respiratory data to better assist doctors in assessing patients and providing remote care.
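The signal chain implied above (exhalation audio in, flow-related estimates out) can be caricatured as computing a smoothed amplitude envelope and summing it over the exhalation. The NumPy sketch below uses a synthetic waveform and an uncalibrated scale factor; the actual device's calibration and clinical metrics are not reproduced.

```python
# Toy spirometry estimate from an exhalation recording: smooth the signal's
# amplitude envelope and integrate it as an (uncalibrated) volume proxy.
import numpy as np

FS = 8_000                                   # sample rate (Hz), assumed
t = np.arange(0, 3.0, 1 / FS)                # 3-second exhalation
# synthetic breath noise that decays as the lungs empty
audio = np.random.default_rng(0).standard_normal(t.size) * np.exp(-t)

frame = FS // 50                             # 20 ms frames
n_frames = audio.size // frame
envelope = np.abs(audio[: n_frames * frame]).reshape(n_frames, frame).mean(axis=1)

K = 1.0                                      # device calibration constant (unknown)
flow = K * envelope                          # flow proxy per frame
volume = flow.sum() * (frame / FS)           # integrate flow over time
print(f"peak flow proxy: {flow.max():.3f}, volume proxy: {volume:.3f}")
```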
An Investigation of Scale Factor in Deep Networks for Scene Recognition
Is there a significant difference in the design of deep networks for classifying object-centric images versus scenery images? How can we design networks that extract the most representative features for scene recognition? To answer these questions, we design studies that examine the scales and richness of image features for scenery image recognition. Three methods are proposed that integrate the scale factor into deep networks and reveal fundamental network design strategies. In our first attempt to integrate scale factors into a deep network, we propose a method that aggregates both the context and the multi-scale object information of scene images by constructing a multi-scale pyramid. In our design, integrating object-centric multi-scale networks achieved a performance boost of 9.8%, and integrating object- and scene-centric models obtained an accuracy improvement of 5.9% compared with single scene-centric models. We also bring an attention scheme to the deep network and propose a Scale Attentive Network (SANet). The SANet streamlines the multi-scale scene recognition pipeline, learns comprehensive scene features at various scales and locations, addresses the inter-dependency among scales, and further assists feature re-calibration and aggregation. The proposed network achieved a Top-1 accuracy increase of 1.83% on the Places365-Standard dataset with only 0.12% additional parameters and 0.24% additional GFLOPs, using ResNet-50 as the backbone. We further bring the scale factor implicitly into backbone design by proposing a Deep-Narrow Network and a Dilated Pooling module. The Deep-Narrow architecture increases the depth of the network while decreasing its width, exploiting a variety of receptive fields by stacking more layers. The Dilated Pooling module expands the pooling scope and makes use of multi-scale features in the pooling operation. By embedding the Dilated Pooling into the Deep-Narrow Network, we obtained a Top-1 …
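Dilated pooling can be demonstrated in isolation: introducing dilation into a pooling window lets it cover a wider area, and hence larger-scale features, without extra parameters. Below is a minimal PyTorch comparison of the generic operator; it is not the exact module proposed in the thesis.

```python
# Generic dilated max pooling in PyTorch: same 3x3 window, wider scope.
# Illustrates the operator only, not the thesis's Dilated Pooling module.
import torch
import torch.nn as nn

x = torch.randn(1, 8, 32, 32)                       # (B, C, H, W)

standard = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
dilated = nn.MaxPool2d(kernel_size=3, stride=1, padding=2, dilation=2)

# Both preserve spatial size, but the dilated window spans 5x5 pixels,
# pooling over a larger (multi-scale) neighborhood.
print(standard(x).shape, dilated(x).shape)          # both (1, 8, 32, 32)
```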
New Computational Methods for Literature-Based Discovery
In this work, we leverage recent developments in computer science to address several challenges in current literature-based discovery (LBD) solutions. First, existing LBD solutions either cannot use semantics or are too computationally complex. To solve these problems, we propose a generative model, OverlapLDA, based on topic modeling, which has been shown to be both effective and efficient at extracting semantics from a corpus; we also introduce an inference method for OverlapLDA. We conduct extensive experiments showing the effectiveness and efficiency of OverlapLDA in LBD. Second, we expand LBD to a more complex and realistic setting, in which more than one concept can connect the input concepts, and the connectivity pattern between concepts can be more complex than a chain. Current LBD solutions can hardly complete the LBD task in this new setting. We simplify hypotheses to concept sets and propose LBDSetNet, based on graph neural networks, to solve this problem. We also introduce different training schemes based on self-supervised learning, so that LBDSetNet can be trained without comprehensive labeled hypotheses, which are extremely costly to obtain. Our comprehensive experiments show that LBDSetNet outperforms strong baselines on simple hypotheses and addresses complex hypotheses.
Improving Memory Performance for Both High Performance Computing and Embedded/Edge Computing Systems
The CPU-memory bottleneck is a widely recognized problem. The majority of high-performance computing (HPC) database systems are configured with large memories and dedicated to processing specific workloads, such as weather prediction and molecular dynamics simulations. My research on optimal address mapping improves memory performance by increasing channel- and bank-level parallelism. In another research direction, I proposed and evaluated adaptive page migration techniques that obviate the need for offline analysis of an application to determine page migration strategies. Furthermore, I explored migration strategies such as reverse migration and sub-page migration, which I found to be beneficial depending on the application's behavior; ideally, page migration strategies redirect demand memory traffic to faster memory to improve memory performance. In my third contribution, I designed and evaluated a memory-side accelerator that assists the main computational core in locating the non-zero elements of the sparse matrices typically used in scientific and machine learning workloads, on a low-power embedded system configuration. Together, these contributions narrow the speed gap by improving the latency and/or bandwidth between CPU and memory.
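Address mapping of the kind mentioned above decides which channel and bank each physical address lands in; XOR-folding higher address bits into the bank index is one classic way to spread strided accesses across banks. The sketch below is a generic illustration with made-up bit positions, not the dissertation's mapping.

```python
# Generic DRAM address-mapping sketch: extract channel/bank from address
# bits, with XOR folding to spread strided accesses. Bit layout is made up.
CHANNEL_BITS, BANK_BITS, LINE_BITS = 1, 3, 6

def map_address(addr: int) -> tuple[int, int]:
    channel = (addr >> LINE_BITS) & ((1 << CHANNEL_BITS) - 1)
    bank = (addr >> (LINE_BITS + CHANNEL_BITS)) & ((1 << BANK_BITS) - 1)
    # XOR in higher "row" bits so same-bank strides spread across banks.
    row_bits = addr >> (LINE_BITS + CHANNEL_BITS + BANK_BITS)
    bank ^= row_bits & ((1 << BANK_BITS) - 1)
    return channel, bank

# A power-of-two stride that would hammer one bank now rotates across banks.
for addr in range(0, 8 * 1024, 1024):
    print(hex(addr), map_address(addr))
```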
Integrating Multiple Deep Learning Models to Classify Disaster Scene Videos
Recently, disaster scene description and indexing challenges have attracted the attention of researchers. In this dissertation, we solve a disaster-related multi-labeling task using the newly developed Low Altitude Disaster Imagery (LADI) dataset. In the first task, we summarize video content by selecting a set of key frames to represent the video sequence; the key frames are generated through inter-frame differences. Key frame extraction for disaster-related video clips is a powerful tool that efficiently converts video data into image-level data, relaxing the requirements on the extraction environment and broadening the settings in which it is applicable. In the second task, we propose a novel application of deep learning methods to low-altitude disaster video feature recognition. Supervised deep learning approaches are effective at recognizing disaster-related features via foreground object detection and background classification. In dataset validation, our model generalized well, and we improved performance by optimizing the YOLOv3 model and combining it with ResNet50; the combined models proved more efficient and effective than those in prior published works. In the third task, we optimize whole-scene label classification by pruning the lightweight model MobileNetV3, which shows superior generalizability and enables disaster feature recognition from a disaster-related dataset to be accomplished efficiently to assist disaster recovery.
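Key frame selection by inter-frame difference can be as simple as thresholding the mean absolute pixel change between consecutive frames. The OpenCV sketch below shows that baseline with an arbitrary threshold; it is a rough stand-in for the dissertation's extraction step, not its implementation.

```python
# Inter-frame-difference key frame extraction sketch (OpenCV). The
# threshold value and grayscale conversion are illustrative choices.
import cv2

def extract_key_frames(path: str, threshold: float = 20.0):
    cap = cv2.VideoCapture(path)
    key_frames, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or cv2.absdiff(gray, prev).mean() > threshold:
            key_frames.append(frame)       # big change -> new key frame
        prev = gray
    cap.release()
    return key_frames

frames = extract_key_frames("disaster_clip.mp4")
print(f"selected {len(frames)} key frames")
```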
Machine-Learning-Enabled Cooperative Perception on Connected Autonomous Vehicles
The main research objective of this dissertation is to understand the sensing and communication challenges in achieving cooperative perception among autonomous vehicles and then, using the insights gained, to guide the design of a suitable format for the data to be exchanged and of reliable, efficient data fusion algorithms on vehicles. By understanding, from a machine learning perspective, what data are exchanged among autonomous vehicles and how, it is possible to realize precise cooperative perception on autonomous vehicles, enabling massive amounts of sensor information to be shared among vehicles. I first discuss trustworthy sharing of perception information on connected and autonomous vehicles, and then discuss how to achieve effective cooperative perception by exchanging feature maps among vehicles. In the final methodology part, I propose a set of mechanisms that improve on the earlier solution by reducing the amount of data transmitted in the network, achieving efficient cooperative perception. The effectiveness and efficiency of our mechanisms are analyzed and discussed.
Online Testing of Context-Aware Android Applications
This dissertation presents novel approaches to testing context-aware applications, which suffer from a cost-prohibitive number of context and GUI events and event combinations. The contributions of this work include: (1) a real-world context-events dataset from 82 Android users over a 30-day period; (2) applications of Markov models, Closed Sequential Pattern Mining (CloSpan), deep neural networks (Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU)), and Conditional Random Fields (CRF) to predict context patterns; (3) data-driven test case generation techniques that insert events at the beginning of each test case in a round-robin manner, iterate through multiple context events at the beginning of each test case in a round-robin manner, and interleave real-world context event sequences and GUI events; and (4) systematically interleaving context with a combinatorial-based approach. The results of our empirical studies indicate that (1) CRF outperforms the other models, predicting context events with an F1 score of about 60% for our dataset; (2) ISFreqOne, which iterates over context events at the beginning of each test case in a round-robin manner and interleaves real-world context event sequences and GUI events at interval one, achieves up to four times better code coverage than not including context, 0.06 times better coverage than RSContext, which inserts random context events at the beginning of each test case, 0.05 times better coverage than ISContext, which iterates over context events to insert at the beginning of each test case in a round-robin manner, and 0.04 times better coverage than ISFreqTwo, which iterates over context events at the beginning of each test case in a round-robin manner and interleaves real-world context event sequences and GUI events at interval two, on average across four subject applications; and (3) …
Reliability Characterization and Performance Analysis of Solid State Drives in Data Centers
NAND flash-based solid state drives (SSDs) have been widely adopted in data centers and high performance computing (HPC) systems due to their better performance compared with hard disk drives. However, little is known about the reliability characteristics of SSDs in production systems. Existing works that study the statistical distributions of SSD failures in the field lack insights into distinct characteristics of SSDs. In this dissertation, I explore the SSD-specific SMART (Self-Monitoring, Analysis, and Reporting Technology) attributes and conduct in-depth analysis of SSD reliability in a production environment with a focus on the unique error types and health dynamics. QLC SSD delivers better performance in a cost-effective way. I study QLC SSDs in terms of their architecture and performance. In addition, I apply thermal stress tests to QLC SSDs and quantify their performance degradation processes. Various types of big data and machine learning workloads have been executed on SSDs under varying temperatures. The SSD throughput and application performance are analyzed and characterized.
SIMON: A Domain-Agnostic Framework for Secure Design and Validation of Cyber Physical Systems
Cyber-physical systems (CPS) are an integration of computational and physical processes, where the cyber components monitor and control physical processes. Cyber-attacks largely target the cyber components with the intention of disrupting the functionality of the components in the physical domain. This dissertation explores the role of semantic inference in understanding such attacks and in building resilient CPS. To that end, we present SIMON, an ontological design and verification framework that captures the intricate relationships between cyber and physical components in CPS by leveraging several standard ontologies and extending the NIST CPS framework for the purposes of eliciting trustworthy requirements, assigning responsibilities and roles to CPS functionalities, and validating that the trustworthy requirements are met by the designed system. We demonstrate the capabilities of SIMON using two case studies: a vehicle-to-infrastructure (V2I) safety application and an additive manufacturing (AM) printer. In addition, we present a taxonomy to capture threat feeds specific to the AM domain.
COVID-19 Diagnosis and Segmentation Using Machine Learning Analyses of Lung Computerized Tomography
COVID-19 is a highly contagious and virulent disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 induces lung changes observable in lung computerized tomography (CT), and the percentage of diseased area on the CT correlates with the severity of the disease. Therefore, segmenting CT images to delineate the diseased or lesioned areas is a logical first step in quantifying disease severity, which helps physicians predict disease prognosis and guide early treatment to deliver more positive patient outcomes. Developing automated CT image analysis is crucial to saving physicians' time and effort. This dissertation proposes CoviNet, a deep three-dimensional convolutional neural network (3D-CNN) for diagnosing COVID-19 in CT images, and CoviNet Enhanced, a hybrid approach combining the 3D-CNN with support vector machines. It also proposes CoviSegNet and CoviSegNet Enhanced, enhanced U-Net models for segmenting the ground-glass opacities and consolidations observed in CT images of COVID-19 patients. We trained and tested the proposed approaches on several public CT image datasets. The experimental results show that the proposed methods are highly effective for COVID-19 detection and segmentation and exhibit better accuracy, precision, sensitivity, specificity, F-1 score, Matthews correlation coefficient (MCC), Dice score, and Jaccard index than recently published studies.
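The two overlap metrics named at the end are simple set statistics on binary masks: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|. A NumPy sketch with a toy lesion mask:

```python
# Dice score and Jaccard index for binary segmentation masks (NumPy).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True                  # ground-truth lesion
pred = np.zeros_like(truth)
pred[3:7, 3:7] = True                   # prediction, shifted by one pixel

print(f"Dice={dice(pred, truth):.3f}, Jaccard={jaccard(pred, truth):.3f}")
# Dice = 2*9/(16+16) = 0.5625; Jaccard = 9/23, about 0.391
```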
An Artificial Intelligence-Driven Model-Based Analysis of System Requirements for Exposing Off-Nominal Behaviors
With the advent of autonomous systems and deep learning systems, safety pertaining to these systems has become a major concern. Existing failure analysis techniques are not enough to thoroughly analyze safety in these systems. Moreover, because these systems are created to operate in various conditions, they are susceptible to unknown safety issues; hence, we need mechanisms that can take into account the complexity of operational design domains, identify safety issues other than failures, and expose unknown safety issues. In addition, existing safety analysis approaches require much effort and time and do not consider machine learning (ML) safety. To address these limitations, this dissertation discusses an artificial-intelligence-driven, model-based methodology that aids in identifying unknown safety issues and analyzing ML safety. Our methodology consists of four major tasks: (1) automated model generation, (2) automated analysis of component state-transition model specifications, (3) undesired-states analysis, and (4) causal factor analysis. In our methodology, we identify unknown safety issues by finding undesired combinations of components' states and environmental entities' states, as well as the causes of these undesired combinations; we refer to the behaviors that occur because of undesired combinations as off-nominal behaviors (ONBs). To identify undesired combinations and ONBs that expose unknown safety issues with less effort and time, we proposed various approaches for each task and performed corresponding empirical studies. We also discuss machine learning safety analysis from the perspectives of machine learning engineers as well as system and software safety engineers. The results of the studies conducted as part of our research show that our proposed methodology helps identify unknown safety issues effectively. Our results also show that combinatorial methods are effective in reducing the effort and time needed to analyze off-nominal behaviors without overlooking any …
An Extensible Computing Architecture Design for Connected Autonomous Vehicle System
Autonomous vehicles have made milestone strides within the past decade. Advances up the autonomy ladder have come in lock-step with advances in machine learning, namely deep-learning algorithms and huge, open training sets. And while advances in CPUs have slowed, GPUs have edged into the previous decade's TOP500 supercomputer territory. This new class of GPUs includes novel deep-learning hardware that has essentially side-stepped Moore's law, outpacing the doubling observation by a factor of ten. While GPUs have made record progress, networks do not follow Moore's law and are restricted by several bottlenecks, from protocol-based latency lower bounds to the very laws of physics. In a way, the bottlenecks that plague modern networks gave rise to edge computing, a key component of the connected autonomous vehicle system, as the need for low latency in some domains eclipsed the need for massive processing farms. The connected autonomous vehicle ecosystem is one of the most complicated environments in all of computing: the hardware scales all the way from 16- and 32-bit microcontrollers to multi-CPU edge nodes and multi-GPU cloud servers, and the networking encompasses the gamut of modern communication transports. I propose a framework for negotiating, encapsulating, and transferring data between vehicles, ensuring efficient bandwidth utilization and respecting real-time privacy levels.
Hybrid Optimization Models for Depot Location-Allocation and Real-Time Routing of Emergency Deliveries
Prompt and efficient intervention is vital in reducing casualties during epidemic outbreaks, disasters, sudden civil strife, or terrorist attacks. This can only be achieved with a fit-for-purpose, location-specific emergency response plan that incorporates geographical, time, and vehicular-capacity constraints. In this research, a comprehensive emergency response model is designed for situations of uncertainty (in locations' demand and available resources) typical of low-resource countries. It involves the development of algorithms for optimizing pre- and post-disaster activities, resulting in four models: (1) an adaptation of a machine learning clustering algorithm for pre-positioning depots and emergency operation centers, which optimizes depot placement so that the largest geographical area is covered and the maximum number of individuals reached, with minimal facility cost; (2) an optimization algorithm for routing relief distribution using heterogeneous fleets of vehicles, with considerations for uncertainties in humanitarian supplies; (3) a genetic algorithm-based route improvement model; and (4) a model for integrating possible new locations into the routing network in real time, using emergency severity ranking, with high priority for the most vulnerable populations. The clustering approach to the depot location-allocation problem produces better time complexity, and benchmarking the routing algorithm against existing approaches yields competitive outcomes.
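The core step of model (1), clustering demand points to pre-position depots, can be approximated with off-the-shelf k-means. The coordinates and depot count below are toy assumptions, and the research's capacity- and cost-aware adaptation is not reproduced.

```python
# Depot pre-positioning sketch: cluster demand points and place one depot
# at each cluster centroid. Toy data; no capacity or cost constraints.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# demand points (e.g., settlements) around three population centers
demand = np.vstack([
    rng.normal(loc, 0.5, size=(30, 2))
    for loc in ([0, 0], [5, 1], [2, 6])
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(demand)
print("depot sites:\n", kmeans.cluster_centers_)
# cluster labels then define which depot serves each demand point
print("first five assignments:", kmeans.labels_[:5])
```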
IoMT-Based Accurate Stress Monitoring for Smart Healthcare
This research proposes Stress-Lysis, iLog, and SaYoPillow to automatically detect and monitor a person's stress levels. To self-manage psychological stress in the framework of smart healthcare, a novel deep-learning-based system (Stress-Lysis) is proposed in this dissertation. The learning system is trained to monitor a person's stress levels through body temperature, rate of motion, and sweat during physical activity. The proposed deep learning system has been trained with a total of 26,000 samples per dataset and demonstrates accuracy as high as 99.7%. The collected data are transmitted to and stored in the cloud, which can help in real-time monitoring of a person's stress levels, thereby reducing the risk of death and expensive treatments. The proposed system produces results with an overall accuracy of 98.3% to 99.7%, is simple to implement, and is moderate in cost. Chronic stress, uncontrolled or unmonitored food consumption, and obesity are intricately connected, even involving certain neurological adaptations. With iLog, we propose a system which can not only monitor but also make the user aware of how much food is too much. iLog provides information on a person's emotional state along with a classification of eating behaviors as Normal-Eating or Stress-Eating. This research proposes a deep learning model for edge computing platforms which can automatically detect, classify, and quantify the objects on the user's plate. Three different paradigms in which iLog can be realized are explored in this research. Two different edge platforms have been implemented for iLog: mobile, as it is widely used, and a single-board computer which can easily be part of a network for executing experiments, with iLog Glasses being the main wearable. The iLog model has produced an overall …
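A minimal sketch of the sensing-to-classification idea in Stress-Lysis: map (body temperature, motion rate, sweat) readings to a stress level. The tiny softmax classifier and the synthetic data below are stand-ins for the dissertation's deep learning model and its 26,000-sample datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (temperature F, motion rate, sweat) -> {0: low, 1: medium, 2: high}.
def synth(n):
    X = rng.uniform([97.0, 0.0, 50.0], [103.0, 40.0, 300.0], size=(n, 3))
    score = (X[:, 0] - 97) / 6 + X[:, 1] / 40 + (X[:, 2] - 50) / 250
    y = np.digitize(score, [1.0, 2.0])  # made-up labeling rule for the sketch
    return X, y

X, y = synth(2000)
X = (X - X.mean(0)) / X.std(0)          # normalize the three sensor features
W = np.zeros((3, 3)); b = np.zeros(3)   # softmax-regression parameters

for _ in range(500):                    # plain gradient descent on cross-entropy
    logits = X @ W + b
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    p[np.arange(len(y)), y] -= 1        # dL/dlogits = softmax - one-hot
    W -= 0.1 * X.T @ p / len(y)
    b -= 0.1 * p.mean(0)

pred = (X @ W + b).argmax(1)
print("training accuracy:", (pred == y).mean())
```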
Combinatorial-Based Testing Strategies for Mobile Application Testing
This work introduces three new coverage criteria based on combinatorial event and element sequences that occur in the mobile environment. The novel combinatorial-based criteria are used to reduce, prioritize, and generate test suites for mobile applications. The criteria cover unique combinations of events and elements with differing treatments of ordering. For instance, consider the coverage of a pair of events, e1 and e2. The least strict criterion, Combinatorial Coverage (CCov), counts the combination of these two events in a test case without respect to the order in which the events occur; that is, the combination (e1, e2) is the same as (e2, e1). The second criterion, Sequence-Based Combinatorial Coverage (SCov), considers the order of occurrence within a test case: (e1, ..., e2) and (e2, ..., e1) are different sequences. The third and strictest criterion, Consecutive-Sequence Combinatorial Coverage (CSCov), counts only adjacent sequences: the pair (e1, e2) is counted only if e1 occurs immediately before e2. The first contribution uses the novel combinatorial-based criteria for test suite reduction. Empirical studies reveal that the criteria, when used with event sequences of size t=2, reduce the test suites by 22.8%-61.3% while the reduced test suites provide 98.8% to 100% fault-finding effectiveness. Empirical studies in Android also reveal that the event sequence criteria of size t=2 reduce the test suites by 24.67%-66% while losing at most 0.39% code coverage. When the criteria are used with element sequences of size t=2, the test suites are reduced by 40% to 72.67%, losing less than 0.87% code coverage. The second contribution of this work applies the combinatorial-based criteria to test suite prioritization of mobile application test suites. The results of an empirical study show that the prioritization criteria that use element and event sequences …
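A minimal sketch of the three pairwise (t=2) criteria described above, computed over a single test case given as an event sequence. The event names are hypothetical.

```python
from itertools import combinations

def ccov_pairs(test):
    """CCov: unordered pairs of distinct events co-occurring in the test case."""
    return {frozenset(p) for p in combinations(set(test), 2)}

def scov_pairs(test):
    """SCov: ordered pairs (e1, ..., e2) where e1 occurs anywhere before e2."""
    return {(test[i], test[j])
            for i in range(len(test)) for j in range(i + 1, len(test))}

def cscov_pairs(test):
    """CSCov: ordered pairs of immediately adjacent events."""
    return {(test[i], test[i + 1]) for i in range(len(test) - 1)}

test_case = ["tap_menu", "scroll", "tap_item", "rotate", "tap_item"]
print("CCov: ", ccov_pairs(test_case))
print("SCov: ", scov_pairs(test_case))
print("CSCov:", cscov_pairs(test_case))
```

Reduction then keeps the smallest subset of test cases whose union of covered pairs equals that of the full suite; prioritization orders test cases by how many not-yet-covered pairs each adds.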
A Method of Combining GANs to Improve the Accuracy of Object Detection on Autonomous Vehicles
As technology in the field of computer vision matures, autonomous vehicles have developed rapidly in recent years. However, the camera-based object detection and classification tasks of autonomous vehicles may face problems when the vehicle is driving at relatively high speed. One is that the camera collects blurred images at high speed, which may degrade the accuracy of deep neural networks. The other is that small objects far away from the vehicle are difficult for the networks to recognize. In this paper, we present a method that combines two kinds of GANs to solve these problems. We choose DeblurGAN as the base model to remove blur from images, and SRGAN as the second GAN to address small-object detection. Because the total inference time of these two models is too long, we also apply model compression to make the pipeline lighter. We then use YOLOv4 for object detection. Finally, we evaluate the whole model architecture and propose a version 2 based on DeblurGAN and ESPCN, which is faster than the previous one, though its accuracy may be lower.
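A minimal sketch of the two-GAN preprocessing pipeline described above. The deblur/super-resolve/detect callables are hypothetical stand-ins for DeblurGAN, SRGAN (or ESPCN in version 2), and YOLOv4; real inference would load those trained models.

```python
import numpy as np

def deblur(frame):         # stand-in for DeblurGAN inference
    return frame

def super_resolve(frame):  # stand-in for SRGAN/ESPCN; here a naive 2x upscale
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

def detect(frame):         # stand-in for YOLOv4 inference
    return [{"class": "car", "bbox": (0, 0, 32, 32), "score": 0.9}]

def pipeline(frame):
    """Deblur first, upscale to recover small/distant detail, then detect."""
    return detect(super_resolve(deblur(frame)))

frame = np.zeros((128, 128, 3), dtype=np.uint8)  # dummy camera frame
print(pipeline(frame))
```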
Red Door: Firewall Based Access Control in ROS
ROS is an operating-system-like framework designed for robot software development, and Red Door, a lightweight software firewall serving ROS, is intended to strengthen its security. ROS has many security flaws, such as cleartext data transmission and the lack of an authentication mechanism. Red Door achieves identity verification and access control policy enforcement with a small performance loss, all without modifying the ROS source code, to ensure the availability and authenticity of ROS applications to the greatest extent possible.
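A minimal sketch of firewall-style access control over ROS traffic, in the spirit of Red Door. The rule format and the node/topic names are hypothetical; the actual system enforces policy on live ROS traffic without source changes.

```python
# Each rule maps (node, topic, operation) -> "allow"; anything unmatched is denied.
RULES = {
    ("camera_node", "/image_raw", "publish"):   "allow",
    ("planner",     "/image_raw", "subscribe"): "allow",
    ("planner",     "/cmd_vel",   "publish"):   "allow",
}

def check(node: str, topic: str, operation: str) -> bool:
    """Return True only if an explicit allow rule matches (default deny)."""
    return RULES.get((node, topic, operation)) == "allow"

print(check("planner", "/cmd_vel", "publish"))     # True: explicitly allowed
print(check("rogue_node", "/cmd_vel", "publish"))  # False: no rule, denied
```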
Building Reliable and Cost-Effective Storage Systems for High-Performance Computing Datacenters
In this dissertation, I first incorporate declustered redundant array of independent disks (RAID) technology into the existing system, maximizing aggregated recovery I/O and accelerating post-failure remediation. Our analytical model affirms that the accelerated data recovery stage significantly improves storage reliability. Then I present a proactive data protection framework that augments storage availability and reliability. It utilizes failure prediction methods to efficiently rescue data on drives before failures occur, which significantly reduces storage downtime and lowers the risk of nested failures. Finally, I investigate how an active storage system enables energy-efficient computing. I explore an emerging storage device, the Ethernet drive, to offload data-intensive workloads from the host to the drives and process the data on the drives. This not only minimizes data movement and power usage, but also enhances data availability and storage scalability. In summary, my dissertation research provides intelligence at the drive, storage node, and system levels to tackle the rising reliability challenge in modern HPC datacenters. The results indicate that this novel storage paradigm cost-effectively improves storage scalability, availability, and reliability.
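A minimal sketch of the proactive protection idea: when a drive's failure predictor crosses a risk threshold, its data is rescued before the failure occurs. The predictor, threshold, and SMART-style counters are hypothetical stand-ins for the framework's actual prediction methods.

```python
RISK_THRESHOLD = 0.7

def predict_failure_risk(smart):
    """Stand-in predictor: score a drive from hypothetical SMART counters."""
    return min(1.0, 0.02 * smart["reallocated_sectors"]
                    + 0.1 * smart["pending_sectors"])

drives = {
    "disk0": {"reallocated_sectors": 2,  "pending_sectors": 0},
    "disk1": {"reallocated_sectors": 30, "pending_sectors": 4},
}

for name, smart in drives.items():
    risk = predict_failure_risk(smart)
    if risk >= RISK_THRESHOLD:
        print(f"{name}: risk {risk:.2f} -> rescue data to healthy drives now")
    else:
        print(f"{name}: risk {risk:.2f} -> healthy")
```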
Cooperative Perception for Connected Autonomous Vehicle Edge Computing System
This dissertation first studies raw-data level cooperative perception for enhancing the detection ability of self-driving systems in connected autonomous vehicles (CAVs). A LiDAR (Light Detection and Ranging) point-cloud-based 3D object detection method is deployed to enhance detection performance by expanding the effective sensing area, capturing critical information in multiple scenarios, and improving detection accuracy. In addition, a point-cloud-feature-based cooperative perception framework is proposed on an edge computing system for CAVs; this dissertation also exploits the features' intrinsically small size to achieve real-time edge computing without the risk of congesting the network. To distinguish small objects such as pedestrians and cyclists in 3D data, an end-to-end multi-sensor fusion model is developed to perform 3D object detection from multi-sensor data. Experiments show that by solving camera and LiDAR perception jointly, the detection model can leverage the advantages of high-resolution images and physical-world LiDAR mapping data, leading the KITTI benchmark on 3D object detection. Finally, an application of cooperative perception is deployed on the edge to heal the live map for autonomous vehicles. Through 3D reconstruction and multi-sensor fusion detection, experiments on a real-world dataset demonstrate that a high-definition (HD) map on the edge can provide well-sensed local data for CAV navigation.
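A minimal sketch of the raw-data level step: transform each vehicle's LiDAR point cloud into a shared world frame and merge them, expanding the effective sensing area. The poses and points are made up for illustration.

```python
import numpy as np

def to_world(points, yaw, position):
    """Rigidly transform Nx3 LiDAR points from a vehicle frame to the world frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # rotation about the z-axis
    return points @ R.T + position

cloud_a = np.random.rand(100, 3) * 20  # ego vehicle's scan (hypothetical)
cloud_b = np.random.rand(100, 3) * 20  # nearby CAV's scan (hypothetical)

merged = np.vstack([
    to_world(cloud_a, yaw=0.0,       position=np.array([0.0,  0.0, 0.0])),
    to_world(cloud_b, yaw=np.pi / 2, position=np.array([40.0, 5.0, 0.0])),
])
print("merged cooperative cloud:", merged.shape)  # (200, 3)
```

The feature-level variant shares learned per-region features instead of raw points, which is what keeps the transmitted payload small enough for real-time edge processing.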
Extracting Dimensions of Interpersonal Interactions and Relationships
People interact with each other through natural language to express feelings, thoughts, intentions, instructions, etc. These interactions, in turn, form relationships. Besides names of relationships like siblings, spouse, or friends, a number of dimensions (e.g., cooperative vs. competitive, temporary vs. enduring, equal vs. hierarchical) can also be used to capture the underlying properties of interpersonal interactions and relationships. More fine-grained descriptors (e.g., angry, rude, nice, supportive) can further indicate the reasons or social acts behind a dimension such as cooperative vs. competitive. The way people interact with others may also tell us about their personal traits, which in turn may be indicative of their probable future success. The work presented in this dissertation involves creating corpora with fine-grained descriptors of interactions and relationships. We also describe experiments whose results indicate that the process of identifying these dimensions can be automated.
Frameworks for Attribute-Based Access Control (ABAC) Policy Engineering
In this dissertation we propose semi-automated top-down policy engineering approaches for attribute-based access control (ABAC) development. Further, we propose a hybrid ABAC policy engineering approach to combine the benefits and address the shortcomings of both top-down and bottom-up approaches. In particular, we propose three frameworks: (i) ABAC attribute extraction, (ii) ABAC constraint extraction, and (iii) hybrid ABAC policy engineering. The attribute extraction framework comprises five modules that operate together to extract attribute values from natural language access control policies (NLACPs), map the extracted values to attribute keys, and assign each key-value pair to an appropriate entity. For the ABAC constraint extraction framework, we design a two-phase process to extract ABAC constraints from NLACPs. The process begins with the identification phase, which focuses on identifying the right boundary of constraint expressions; next is the normalization phase, which aims at extracting the actual elements that pose a constraint. Our hybrid ABAC policy engineering framework, in turn, consists of five modules; it combines top-down and bottom-up policy engineering techniques to overcome the shortcomings of both approaches and to generate policies that are more intuitive and relevant to actual organizational policies. With this, we believe our work takes essential steps toward a semi-automated ABAC policy development experience.
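A minimal sketch of evaluating an ABAC policy assembled from extracted key-value attribute pairs, the kind of artifact the frameworks above produce. The attribute names and the example rule are hypothetical, not the frameworks' actual output format.

```python
def satisfies(attrs: dict, condition: dict) -> bool:
    """True if every required attribute key carries the required value."""
    return all(attrs.get(k) == v for k, v in condition.items())

def authorize(subject, resource, action, policy):
    """Grant access if any policy rule matches subject, resource, and action."""
    return any(satisfies(subject, rule["subject"])
               and satisfies(resource, rule["resource"])
               and action in rule["actions"]
               for rule in policy)

policy = [{  # e.g. extracted from "nurses may read records in their ward"
    "subject":  {"role": "nurse", "ward": "W3"},
    "resource": {"type": "record", "ward": "W3"},
    "actions":  {"read"},
}]
nurse = {"role": "nurse", "ward": "W3"}
record = {"type": "record", "ward": "W3"}
print(authorize(nurse, record, "read", policy))   # True
print(authorize(nurse, record, "write", policy))  # False
```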
Kriging Methods to Exploit Spatial Correlations of EEG Signals for Fast and Accurate Seizure Detection in the IoMT
Epileptic seizures present a formidable threat to the lives of sufferers, who can be left unconscious within seconds of onset. With a mortality rate at least twice that of the general population, epilepsy is a true cause for concern and has gained ample attention from various research communities. About 800 million people in the world will have at least one seizure in their lifetime. Injuries sustained during a seizure are among the leading causes of death in epilepsy; they can be prevented by early detection of a seizure accompanied by a timely intervention mechanism. The research presented in this dissertation explores Kriging methods to exploit spatial correlations of electroencephalogram (EEG) signals from the brain for fast and accurate seizure detection in the Internet of Medical Things (IoMT) using edge computing paradigms, modeling the brain as a three-dimensional spatial object, similar to a geographical panorama. This dissertation proposes basic, hierarchical, and distributed Kriging models, with a deep neural network (DNN) wrapper in some instances. Experimental results from the models are highly promising for real-time seizure detection, with excellent performance in seizure detection latency and training time, as well as accuracy, sensitivity, and specificity that compare well with other notable seizure detection research projects.
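A minimal sketch of the core spatial step: ordinary Kriging over electrode positions to estimate the signal at an unsampled scalp location from its spatially correlated neighbors. The exponential variogram and its parameters are illustrative; the dissertation's hierarchical and distributed variants build on this basic step.

```python
import numpy as np

def variogram(h, sill=1.0, rng_=10.0):
    """Exponential semivariogram model gamma(h)."""
    return sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(xy, z, target):
    """Estimate z at `target` as a weighted sum of samples, with weights from
    the ordinary Kriging system (a Lagrange multiplier enforces sum(w) = 1)."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return w @ z

electrodes = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
readings = np.array([12.0, 15.0, 11.0, 14.0])  # microvolts (made up)
print(ordinary_kriging(electrodes, readings, np.array([2.0, 2.0])))
```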
Optimization of Massive MIMO Systems for 5G Networks
In the first part of the dissertation, we provide an extensive overview of the sub-6 GHz wireless access technology known as massive multiple-input multiple-output (MIMO) systems, highlighting its benefits, deployment challenges, and the key enabling technologies envisaged for 5G networks. We investigate the fundamental issues that degrade the performance of massive MIMO systems, such as pilot contamination, precoding, user scheduling, and signal detection. In the second part, we optimize the performance of the massive MIMO system by proposing several algorithms, system designs, and hardware architectures. To mitigate the effect of pilot contamination, we propose a pilot reuse factor scheme based on the user environment and the number of active users. Simulation results show that the proposed scheme ensures the system always operates at maximal spectral efficiency and achieves higher throughput. To address the user scheduling problem, we propose two user scheduling algorithms based upon the measured channel gain. The simulation results show that our proposed user scheduling algorithms achieve better error performance, improve sum capacity and throughput, and guarantee fairness among the users. To address the uplink signal detection challenge in massive MIMO systems, we propose four algorithms and their system designs. We show through simulations that the proposed algorithms are computationally efficient and can achieve near-optimal bit error rate performance. Additionally, we propose hardware architectures for all the proposed algorithms to identify the required physical components and their interrelationships.
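A minimal sketch of the kind of uplink detection problem the proposed algorithms target, using the standard linear MMSE detector as a baseline. The dimensions, noise level, and QPSK mapping are illustrative; the dissertation's detectors aim at near-optimal bit error rate with lower complexity than this matrix inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, sigma2 = 64, 8, 0.1  # BS antennas, single-antenna users, noise variance

# Rayleigh-fading uplink channel and QPSK transmit symbols.
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
bits = rng.integers(0, 2, size=(K, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))

# MMSE filter: x_hat = (H^H H + sigma^2 I)^(-1) H^H y
x_hat = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K), H.conj().T @ y)
detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print("symbol errors:", int(np.sum(detected != x)))
```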
BC Framework for CAV Edge Computing
The edge computing and CAV (Connected Autonomous Vehicle) fields complement each other. With its short latency and high responsiveness, edge computing is a better fit than cloud computing in the CAV field. Moreover, containerized applications eliminate the tedious procedures for setting up the required environment, so deploying applications on new machines is far more user-friendly than before. Therefore, this paper proposes a framework developed for the CAV edge computing scenario. The framework consists of various programs written in different languages and uses Docker technology to containerize these applications so that deployment is simple and easy. The framework has two parts. One is for the vehicle on-board unit, which exposes data to the closest edge device and receives the output generated by that device. The other is for the edge device, which is responsible for collecting and processing large volumes of data and broadcasting output to vehicles, so the vehicle need not perform heavyweight tasks that could drain its limited power.
Determining Event Outcomes from Social Media
An event is something that happens at a particular time and location. Events include major life events such as graduating from college or getting married, as well as simple day-to-day activities such as commuting to work or eating lunch. Most work on event extraction detects events and the entities involved in them; for example, cooking events will usually involve a cook, some utensils and appliances, and a final product. In this work, we target the task of determining whether events result in their expected outcomes. Specifically, we target cooking and baking events and characterize event outcomes in two stages. First, we distinguish whether anything edible resulted from the event. Second, if something edible resulted, we distinguish between perfect, partial, and alternative outcomes. The main contributions of this thesis are a corpus of 4,000 tweets annotated with event outcome information and experimental results showing that the task can be automated. The corpus includes tweets that have only text as well as tweets that have text and an image.
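A minimal sketch of the two-stage outcome decision described above, using hypothetical keyword cues in place of the trained models: first decide whether anything edible resulted, then grade the outcome.

```python
EDIBLE_FAIL = {"burnt to a crisp", "threw it away", "inedible"}
PERFECT = {"perfect", "delicious", "came out great"}
PARTIAL = {"a bit dry", "slightly burnt", "too salty"}

def classify_outcome(tweet: str) -> str:
    t = tweet.lower()
    if any(cue in t for cue in EDIBLE_FAIL):
        return "nothing edible"
    if any(cue in t for cue in PERFECT):
        return "edible: perfect"
    if any(cue in t for cue in PARTIAL):
        return "edible: partial"
    return "edible: alternative"  # crude fallback; the real task is learned

print(classify_outcome("Tried grandma's cookies, came out great!"))
print(classify_outcome("Baked bread today... burnt to a crisp :("))
```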
Encrypted Collaborative Editing Software
Cloud-based collaborative editors enable real-time document processing via remote connections. Their common application is to allow Internet users to work collaboratively on documents stored in the cloud, even if these users are physically a world apart. However, this convenience comes at a cost in user privacy; the growing popularity of cloud computing applications drives the growing importance of cloud security. A major concern with the cloud is who has access to user data. To address this issue, various third-party services offer encryption mechanisms to protect user data against insider attacks or data leakage. However, these services often only encrypt data-at-rest, leaving the data being processed potentially vulnerable. The purpose of this study is to propose a prototype software system that encrypts collaboratively edited data in real time while preserving a user experience similar to that of, e.g., Google Docs.
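A minimal sketch of the core idea: encrypt each edit operation on the client before it reaches the cloud, so the server only ever synchronizes ciphertext. This uses the `cryptography` package's Fernet; the operation format is hypothetical, and a real system also needs key sharing among collaborators and conflict resolution (e.g., OT or CRDTs) over the encrypted stream.

```python
import json
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # distributed out-of-band to collaborators
f = Fernet(shared_key)

def encrypt_op(op: dict) -> bytes:
    """Client side: serialize and encrypt an edit before upload."""
    return f.encrypt(json.dumps(op).encode())

def decrypt_op(token: bytes) -> dict:
    """Collaborator side: decrypt a synced edit for local application."""
    return json.loads(f.decrypt(token).decode())

op = {"type": "insert", "pos": 42, "text": "hello"}
token = encrypt_op(op)          # only this ciphertext is stored in the cloud
print(decrypt_op(token) == op)  # True
```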