
Search Results

A Survey of Computer Systems: IBM System/360, 3031, The Decsystem-20, The Univac 1100, The Cray-1, and The AS/5000
This is a brief survey of some of the popular computer systems. As many features as possible have been covered in order to get an overview of the systems under consideration.
Notes on the SWTPC MP-N Calculator Interface and the Calc-1 Program
This interface was bought to perform floating-point arithmetic and for its function capabilities such as SIN, COS, and e^x. My application required an integer truncation function that is not performed by this calculator, so I wrote a small assembly language subroutine to do it. A potentially irritating problem is that the calculator chip does not automatically convert to scientific notation if the numbers become too big to display in floating point. The control program must keep track of the display mode.
VISOR (Variable Interval Schedule Of Reinforcement) System Documentation
This program will be used in operant behavior research to monitor and record responses and to trigger and record reinforcements on a variable-interval (VI) schedule of reinforcement. The original application of this program is the servicing of several rat cages simultaneously. The response is the pressing of a metal bar in the cage; the reinforcement is the triggering of a feeding mechanism that dispenses a food pellet into the cage. Subsequent applications of this program are not limited to this setting, since the actual response and reinforcement devices and the subject type are all treated generically by the program.
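The variable-interval logic described above reduces to a compact event loop. Below is a minimal Python sketch (not the VISOR program itself) of how a VI schedule might be serviced: a reinforcement is armed after a randomly drawn interval and delivered on the next response. The callables `get_response` and `trigger_feeder` are hypothetical stand-ins for the bar-press input and feeder output devices.

```python
import random
import time

def run_vi_session(get_response, trigger_feeder, mean_interval=30.0, duration=600.0):
    """Minimal variable-interval (VI) schedule: once a randomly drawn interval
    has elapsed, the next response is reinforced and a new interval is drawn.
    get_response() polls the response device (e.g., a bar press) and
    trigger_feeder() fires the reinforcement device; both are supplied by the
    caller, so the loop is indifferent to the actual hardware and subject."""
    log = []                                            # (seconds, event) records
    start = time.time()
    armed_at = start + random.expovariate(1.0 / mean_interval)
    while time.time() - start < duration:
        if get_response():                              # a response was detected
            now = time.time()
            log.append((now - start, "response"))
            if now >= armed_at:                         # interval elapsed: reinforce
                trigger_feeder()
                log.append((now - start, "reinforcement"))
                armed_at = now + random.expovariate(1.0 / mean_interval)
        time.sleep(0.01)                                # polling granularity
    return log
```

Because the loop only sees two opaque callables, the same scheduler could service any response and reinforcement devices, which mirrors the device-agnostic design noted above.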
The NTSU School of Music Practice Room Scheduling System
This is a report concerning the project I completed for my 590 (special problem) credit. The subject of this project was a system for interactive practice room scheduling by music students at NTSU. The system was created in the fall semester of 1982 as a class project for Software Development (CSCI 553) with Dr. Irby. The system was not completely finished, and I received permission from Dr. Irby to finish it and help implement its use at the Music Department. I was able to observe three usages of the system: the Spring, Summer I, and Summer II semesters of 1983. This report details the problems encountered during each of these usages and the changes made to the system because of them. Results of a first-use survey, user documentation, and complete final code listings are also included.
FORTRAN Graphics Library
The objective of this work is to help the faculty, staff, and students of NTSU use the CalComp plotting facility easily. Therefore, this work is written in a step-by-step, self-explanatory way to help the reader understand and grasp the essential techniques of computer plotting. Each subroutine illustrated in this work has been run and checked on the NTSU computer's CalComp plotting facility; the results of the sample programs and the illustrated graphs should be very useful for understanding each individual subroutine. The software packages are stored on the magnetic disk of the IBM 360 computer as the standard graphic subroutines. These subroutines were written in FORTRAN IV. The user writes a driving program to call these subroutines and also inputs the desired data to the computer for computation. The results of the computation are output and stored on magnetic tape.
PILOT for the Apple II Microcomputer
PILOT (Programmed Inquiry, Learning or Teaching) is a simple, conversational language developed in 1969 by John A. Starkweather at the University of California Medical Center in San Francisco. Originally designed for computer assisted instructional needs, PILOT also has been effectively used as an introductory computer language. The PILOT system developed for the Apple II microcomputer consists of two programs, PILOT EDITOR and PILOT DRIVER, which are written in Applesoft and which use the Apple II disk operating system. The PILOT system was designed to facilitate easy authoring and execution of programs written in an extended version of the PILOT language. Due to the memory requirements of the programs and the Apple II disk operating system, the PILOT system described here should be executed on a machine with at least 32k bytes of random access memory.
Design and Implementation of a Parser for the DBase II Query Language
In this paper the DBase II query language of an RDBMS for personal computers is discussed. Other query languages provided by large and sophisticated DBMSs will not be discussed here. The reasons for selecting the DBase II query language for discussion are as follows: 1. It is a simple language that can be learned easily [TOWN 84, DINE 84]. Within a short period, users can learn all of the facilities and manage the system very well. 2. It is a language suitable for interactive programming and execution, like BASIC. 3. It provides adequate facilities for a small data base system and serves as an introductory guide to more sophisticated systems.
Development of a Text Formatter Under the VAX/VMS Operating System
No matter how extended the use of the computer becomes, the printed document is still the primary medium for the presentation of information, and will continue to be for some time. The use of computing facilities for the preparation and production of documents is becoming as prevalent as their use for numeric computation. Commercially, document preparation systems are now a standard facility at research institutions, and they have become quite common on most computer systems. A conventional document preparation system usually contains two parts: a text editor used to create, enter, update, and maintain the text and control words that comprise the document in its "input" form, and a text formatter used to process that input and produce the final document.
Triangle: A Teaching Program of High School Geometry
Among the early applications of computers, one can find frequent mention of intelligent instructional systems. Such intelligent instructional systems represent a new generation of learner-based computer aided instruction, preceded in time by the original frame-based systems and an intervening generation of expert-based CAI. The history of CAI is characterized by three generations: Frame-based CAI, Expert-based CAI and Learner-based CAI.
A Method for Applying Scientific Subroutine Package in Microprocessor
The scientific subroutine package is one of the most important parts of the software for the scientific industry. By now, most large computers have scientific packages, but applying such a software package in a microprocessor requires consideration of the microprocessor's facilities, such as limited main memory, slow execution speed, and only a few small registers. In any scientific package, the trigonometric functions are among the most widely used. This paper discusses a method for implementing several trigonometric function programs of a scientific package in microprocessors. These programs contain routines for computing the sin, cos, tan, and cot of any angle within the range (-360°, +360°).
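As an illustration of the kind of routine such a package needs, the Python sketch below (an assumption about the general approach, not code from the paper) reduces a degree argument in (-360°, +360°) to the first quadrant and evaluates a short Taylor polynomial. A real microprocessor implementation would use fixed-point arithmetic and minimax coefficients, but the structure is the same.

```python
def sin_deg(angle):
    """Sine of an angle in degrees; valid for any angle in (-360, +360)."""
    a = angle % 360.0                 # fold into [0, 360)
    sign = 1.0
    if a > 180.0:                     # sin(x) = -sin(x - 180)
        a -= 180.0
        sign = -1.0
    if a > 90.0:                      # sin(x) = sin(180 - x)
        a = 180.0 - a
    x = a * 3.14159265358979 / 180.0  # x is now in [0, pi/2] radians
    x2 = x * x
    # 4-term Taylor polynomial in Horner form: x - x^3/3! + x^5/5! - x^7/7!
    s = x * (1.0 - x2 / 6.0 * (1.0 - x2 / 20.0 * (1.0 - x2 / 42.0)))
    return sign * s

def cos_deg(angle):
    return sin_deg(90.0 - angle)      # cofunction identity

def tan_deg(angle):                   # undefined at odd multiples of 90 degrees
    return sin_deg(angle) / cos_deg(angle)

def cot_deg(angle):                   # undefined at multiples of 180 degrees
    return cos_deg(angle) / sin_deg(angle)
```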
Macro - Preprocessor for 6809 Cross Assembler
Two stages are commonly distinguished during assembly. The first is the preprocessor stage, in which a single instruction called the macro instruction is replaced with a sequence of instructions called the macro definition. The second is the processor stage, in which the output from the first stage is assembled into machine language instructions for a particular computer. This paper describes the first, the macro-preprocessor stage.
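The preprocessor stage amounts to collecting macro definitions and substituting arguments into their bodies wherever a macro instruction appears. The toy Python sketch below illustrates the idea; the MACRO/ENDM directive spellings and the 6809-style mnemonics in the example are illustrative assumptions, not the actual cross-assembler syntax.

```python
import re

def expand_macros(source_lines):
    """Tiny one-pass macro preprocessor sketch: lines between
    'name MACRO p1,p2,...' and 'ENDM' define a macro body; a later line
    'name a1,a2,...' is replaced by that body with parameters substituted."""
    macros = {}                                 # name -> (params, body lines)
    output = []
    i = 0
    while i < len(source_lines):
        line = source_lines[i].rstrip()
        m = re.match(r"^(\w+)\s+MACRO\s*(.*)$", line)
        if m:                                   # collect a macro definition
            name = m.group(1)
            params = [p.strip() for p in m.group(2).split(",") if p.strip()]
            body = []
            i += 1
            while not source_lines[i].strip().startswith("ENDM"):
                body.append(source_lines[i])
                i += 1
            macros[name] = (params, body)
        else:
            parts = line.split(None, 1)
            if parts and parts[0] in macros:    # expand a macro call
                params, body = macros[parts[0]]
                args = [a.strip() for a in parts[1].split(",")] if len(parts) > 1 else []
                for b in body:
                    for p, a in zip(params, args):
                        b = b.replace(p, a)     # naive textual substitution
                    output.append(b)
            else:
                output.append(line)             # pass ordinary lines through
        i += 1
    return output

# Example: define and call a two-parameter macro.
src = [
    "SWAP MACRO X,Y",
    "    LDA X",
    "    LDB Y",
    "    STA Y",
    "    STB X",
    "ENDM",
    "    SWAP COUNT,TEMP",
]
print("\n".join(expand_macros(src)))
```

The expanded output is then what the second, processor stage would assemble into machine code.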
Initial Research for the Development or Purchase of a Computerized Synthesizer For Use as a Composer's Aid
The author's primary goal is to begin research leading to the attainment of a low-cost computer/music system which will allow the composer to write polyphonic music of up to eight voices into a computer through a terminal, and have the music played back by means of computer-synthesized sound or by means of a conventional synthesizer controlled by a computer via digital-to-analog converters. The goal system will allow the composer to step back and hear his product objectively, as the painter steps back to review his canvas.
Field Programmable Devices and Reconfigurable Computing
The motivation behind this research has been the idea of a computing device capable of dynamically reconfiguring itself. The goal of this work is to measure the computational power of reconfigurable machines in an abstract manner by proposing a model of FPGAs as abstract computing machines. Modeling FPGAs in terms of automata theory would give a basis for answering more fundamental questions about their capabilities. If a finite state machine (FSM) or a Turing machine (TM) has the capability of reconfiguring its finite control, does this ability give the abstract computing device new computational power? In other words, is a reconfigurable FSM, TM, or cellular automaton more powerful than its corresponding non-reconfigurable version?
Extracting Temporally-Anchored Knowledge from Tweets
Twitter has quickly become one of the most popular social media sites. It has 313 million monthly active users, and 500 million tweets are published daily. Within this massive number of tweets, Twitter users share information about locations along with temporal cues. In this work, I focus on tweets where the author of the tweet exclusively mentions a location in the tweet. Natural language processing systems can leverage a wide range of information from such tweets to build applications like recommender systems that predict the location of the author. This kind of system can be used to increase visibility to a targeted audience and can also provide recommendations for interesting places to visit, hotels to stay in, restaurants to eat at, targeted online advertising, and co-traveler matching based on the temporal information extracted from a tweet. In this work I determine whether the author of the tweet is present in the location mentioned in the tweet. I also determine whether the author is present in the location before tweeting, while tweeting, or after tweeting. I introduce 5 temporal tags (before the tweet but > 24 hours; before the tweet but < 24 hours; while the tweet is posted; after the tweet is posted but < 24 hours; and after the tweet is posted but > 24 hours). The major contributions of this paper are: (1) creation of a corpus of 1062 tweets containing 1200 location named entities, with annotations indicating whether the author of a tweet is or is not located in the location he tweets about with respect to the 5 temporal tags; (2) detailed corpus analysis including real annotation examples and label distributions per temporal tag; (3) detailed inter-annotator agreements, including Cohen's kappa, Krippendorff's alpha and confusion matrices per temporal tag; (4) label distributions and analysis; and (5) supervised learning experiments, along with …
Detecting Component Failures and Critical Components in Safety Critical Embedded Systems using Fault Tree Analysis
Component failures can result in catastrophic behaviors in safety critical embedded systems, sometimes resulting in loss of life. Component failures can be treated as off-nominal behaviors (ONBs) with respect to the components and subsystems involved in an embedded system. A lot of research is being carried out to tackle the problem of ONBs. These approaches mainly focus on the states (i.e., desired and undesired states of a system at a given point in time) to detect ONBs. In this paper, an approach is discussed to detect component failures and critical components of an embedded system. The approach is based on fault tree analysis (FTA), applied to the requirements specification of embedded systems at design time to find out the relationship between individual component failures and overall system failure. FTA helps in determining both the qualitative and the quantitative relationship between component failures and system failure. Analyzing the system at design time helps in detecting component failures and critical components, and helps in devising strategies to mitigate component failures at design time and improve the overall safety and reliability of a system.
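To make the quantitative side of FTA concrete, the sketch below (a generic illustration, not the paper's tool) computes a top-event probability from a small AND/OR fault tree under an independence assumption, and estimates each component's criticality with a Birnbaum-style importance measure. The example system and failure probabilities are hypothetical.

```python
# A fault tree as nested AND/OR gates over basic component failures.
# Assuming independent failures: P(AND) = prod(p_i), P(OR) = 1 - prod(1 - p_i).
def prob(node, p):
    """node is either a component name or a tuple ('AND'|'OR', [children])."""
    if isinstance(node, str):
        return p[node]
    gate, children = node
    probs = [prob(c, p) for c in children]
    out = 1.0
    if gate == "AND":
        for q in probs:
            out *= q
        return out
    for q in probs:                     # OR gate: 1 - prod(1 - q)
        out *= (1.0 - q)
    return 1.0 - out

def birnbaum_importance(tree, p, component):
    """Criticality of one component: change in P(top) when its failure
    probability is forced to 1 versus 0."""
    hi = dict(p, **{component: 1.0})
    lo = dict(p, **{component: 0.0})
    return prob(tree, hi) - prob(tree, lo)

# Hypothetical system: the top event occurs if the sensor fails OR both
# redundant controllers fail.
tree = ("OR", ["sensor", ("AND", ["ctrl_a", "ctrl_b"])])
p = {"sensor": 0.01, "ctrl_a": 0.05, "ctrl_b": 0.05}
print(prob(tree, p))                             # ~0.0125
for c in p:
    print(c, birnbaum_importance(tree, p, c))    # the sensor is most critical
```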
Mobile-Based Smart Auscultation
In developing countries, acute respiratory infections (ARIs) are responsible for two million deaths per year. Most victims are children who are less than 5 years old. Pneumonia kills 5000 children per day. The statistics for cardiovascular diseases (CVDs) are even more alarming. According to a 2009 report from the World Health Organization (WHO), CVDs kill 17 million people per year. In many resource-poor parts of the world such as India and China, many people are unable to access cardiologists, pulmonologists, and other specialists. Hence, less-skilled health professionals are responsible for screening people for ARIs and CVDs in these areas. For example, in the rural areas of the Philippines, there is only one doctor for every 10,000 people. By contrast, the United States has one doctor for every 500 Americans. Due to advances in technology, it is now possible to use a smartphone for audio recording, signal processing, and machine learning. In my thesis, I have developed an Android application named Smart Auscultation. Auscultation is a process in which physicians listen to heart and lung sounds to diagnose disorders. Cardiologists spend years mastering this skill. The Smart Auscultation application is capable of recording and classifying heart sounds, and can be used by public health or clinical health workers. This application can detect abnormal heart sounds with up to 92-98% accuracy. In addition, the application can record, but not yet classify, lung sounds. This application will be able to help save thousands of lives by allowing anyone to identify abnormal heart and lung sounds.
Object Recognition Using Scale-Invariant Chordiogram
This thesis describes an approach for object recognition using the chordiogram shape-based descriptor. Global shape representations are highly susceptible to clutter generated by the background or other irrelevant objects in real-world images. To overcome this problem, we aim to extract a precise object shape using superpixel segmentation, perceptual grouping, and connected components. The employed shape descriptor, the chordiogram, is based on geometric relationships of chords generated from pairs of boundary points of an object. The chordiogram descriptor captures holistic properties of the shape and has also proven suitable for object detection and digit recognition. Additionally, it is translation invariant and robust to shape deformations. In spite of such excellent properties, the chordiogram is not scale-invariant. To this end, we propose scale-invariant chordiogram descriptors and intend to achieve similar performance before and after applying scale invariance. Our experiments show that we achieve similar performance with and without scale invariance for silhouettes and real-world object images. We also show experiments at different scales to confirm that we obtain scale invariance for the chordiogram.
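A minimal Python sketch of the underlying idea follows, assuming a simplified chordiogram that histograms only chord length and orientation (the thesis's descriptor and its scale-normalization scheme may differ): dividing all chord lengths by the mean chord length is one straightforward way to factor scale out.

```python
import numpy as np

def chordiogram(boundary, n_len_bins=8, n_ang_bins=8, scale_invariant=True):
    """Histogram the length and orientation of chords between all pairs of
    object-boundary points; normalizing lengths by the mean chord length
    makes the descriptor insensitive to uniform scaling."""
    pts = np.asarray(boundary, dtype=float)             # (N, 2) boundary points
    diffs = pts[None, :, :] - pts[:, None, :]            # all pairwise chords
    iu = np.triu_indices(len(pts), k=1)
    chords = diffs[iu]                                    # one vector per chord
    lengths = np.linalg.norm(chords, axis=1)
    angles = np.arctan2(chords[:, 1], chords[:, 0])       # chord orientation
    if scale_invariant:
        lengths = lengths / (lengths.mean() + 1e-12)      # normalize scale out
    hist, _, _ = np.histogram2d(
        lengths, angles,
        bins=[n_len_bins, n_ang_bins],
        range=[[0, lengths.max() + 1e-12], [-np.pi, np.pi]],
    )
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-12)                    # L1-normalized descriptor

# Toy check: a square and the same square scaled 3x give matching descriptors.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
big = [(x * 3, y * 3) for x, y in square]
print(np.allclose(chordiogram(square), chordiogram(big)))  # True
```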
Extracting Useful Information from Social Media during Disaster Events
In recent years, social media platforms such as Twitter and Facebook have emerged as effective tools for broadcasting messages worldwide during disaster events. With millions of messages posted through these services during such events, it has become imperative to identify the valuable information that can help emergency responders develop effective relief efforts and aid victims. Many studies have suggested that the role of social media during disasters is invaluable and can be incorporated into the emergency decision-making process. However, due to the "big data" nature of social media, it is very labor-intensive to employ human resources to sift through social media posts and categorize/classify them as useful information. Hence, there is a growing need for machine intelligence to automate the process of extracting useful information from social media data during disaster events. This dissertation addresses the following questions: In a social media stream of messages, what is the useful information to be extracted that can help emergency response organizations to become more situationally aware during and following a disaster? What are the features (or patterns) that can contribute to automatically identifying messages that are useful during disasters? We explored a wide variety of features in conjunction with supervised learning algorithms to automatically identify messages that are useful during disaster events. The feature design includes sentiment features to extract the geo-mapped sentiment expressed in tweets, as well as tweet-content and user detail features to predict the likelihood that the information contained in a tweet will be quickly spread in the network. Further experimentation is carried out to see how these features help in identifying informative tweets and in filtering out those tweets that are conversational in nature.
Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures
In recent years, brain computer interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer various inexpensive consumer-grade BCIs, and some of these companies have recently introduced the concept of BCI "App stores" in order to facilitate the expansion of BCI applications and provide software development kits (SDKs) for other developers to create new applications for their devices. BCI applications have access to users' unique brainwave signals, which consequently allows them to make inferences about users' thoughts and mental processes. Since there are no specific standards that govern the development of BCI applications, their users are at risk of privacy breaches. In this work, we perform the first comprehensive analysis of BCI App stores, including software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy issues. The goal is to understand the way brainwave signals are handled by BCI applications and what threats to the privacy of users exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without them even noticing. We discuss potential privacy threats posed by current practices used in BCI App stores and then describe some countermeasures that could be used to mitigate the privacy threats. We also develop a prototype that gives BCI app users a choice to restrict access to their brain signals dynamically.
Automated GUI Tests Generation for Android Apps Using Q-learning
Mobile applications are growing in popularity and pose new problems in the area of software testing. In particular, mobile applications heavily depend upon user interactions and a dynamically changing environment of system events. In this thesis, we focus on user-driven events and use Q-learning, a reinforcement machine learning algorithm, to generate tests for Android applications under test (AUT). We implement a framework that automates the generation of GUI test cases by using our Q-learning approach and compare it to a uniform random (UR) implementation. A novel feature of our approach is that we generate user-driven event sequences through the GUI, without the source code or a model of the AUT. Hence, a considerable amount of cost and time is saved by avoiding the need for model generation when generating the tests. Our results show that the systematic path exploration used by Q-learning results in higher average code coverage in comparison to the uniform random approach.
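The core of such an approach is the standard Q-learning update applied to GUI states and events, with a reward that favors reaching unseen screens. The Python sketch below is a generic illustration under that assumption, not the thesis framework itself; the `app` object and its `restart()`, `current_state()`, `available_events()`, and `fire()` methods are hypothetical stand-ins for the instrumentation that drives the application under test.

```python
import random
from collections import defaultdict

def generate_tests(app, episodes=50, steps=30, alpha=0.5, gamma=0.9, eps=0.2):
    """Q-learning-driven GUI exploration: states abstract the current screen,
    actions are the GUI events available on it, and novelty is rewarded so
    coverage grows systematically rather than purely at random."""
    Q = defaultdict(float)                    # Q[(state, event)] value table
    seen = set()
    suites = []
    for _ in range(episodes):
        app.restart()
        trace = []
        s = app.current_state()
        for _ in range(steps):
            events = app.available_events(s)
            if not events:
                break
            if random.random() < eps:         # epsilon-greedy exploration
                e = random.choice(events)
            else:
                e = max(events, key=lambda ev: Q[(s, ev)])
            s_next = app.fire(e)
            r = 1.0 if s_next not in seen else 0.0        # reward novelty
            seen.add(s_next)
            nxt = app.available_events(s_next)
            best_next = max((Q[(s_next, ev)] for ev in nxt), default=0.0)
            Q[(s, e)] += alpha * (r + gamma * best_next - Q[(s, e)])
            trace.append((s, e))
            s = s_next
        suites.append(trace)                  # each trace is one GUI test case
    return suites
```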
Exploring Simscape™ Modeling for Piezoelectric Sensor Based Energy Harvester
This work presents an investigation of a piezoelectric sensor based energy harvesting system, which collects energy from the surrounding environment. The increasing cost and scarcity of fossil fuels is a great concern today for supplying power to electronic devices. Furthermore, generating electricity by ordinary methods is a complicated process. Disposal of chemical batteries and cables pollutes nature every day. For these reasons, research on energy harvesting from renewable resources has become essential in order to achieve improved methods and strategies for generating and storing electricity. Many low power devices used in everyday life can be powered by harvesting energy from natural energy resources. Power overhead and power efficiency are of prime concern in electronic circuits. In this work, an energy harvester is modeled and simulated in Simscape™ for functional analysis and comparison of the achieved outcomes with previous work. Results demonstrate that the harvester produces power in the 0 μW to 100 μW range, which is an adequate amount to supply low power devices. Power efficiency calculations also demonstrate that the implemented harvester is capable of generating and storing power for low power pervasive applications.
Determining Whether and When People Participate in the Events They Tweet About
This work describes an approach to determine whether people participate in the events they tweet about. Specifically, we determine whether people are participants in events with respect to the tweet timestamp. We target all events expressed by verbs in tweets, including past, present, and future events. We define event participants as people directly involved in an event, regardless of whether they are the agent, the recipient, or play another role. We present an annotation effort, guidelines, and quality analysis with 1,096 event mentions. We discuss the label distributions and event behavior in the annotated corpus. We also explain several features used and a standard supervised machine learning approach to automatically determine if and when the author is a participant of the event in the tweet. We discuss trends in the results obtained and draw important conclusions.
Influence of Underlying Random Walk Types in Population Models on Resulting Social Network Types and Epidemiological Dynamics
Epidemiologists rely on human interaction networks for determining the states and dynamics of disease propagation in populations. However, such networks are empirical snapshots of the past. It would be greatly beneficial if human interaction networks could be statistically predicted and dynamically created while an epidemic is in progress. We develop an application framework for the generation of human interaction networks and the running of epidemiological processes, utilizing research on human mobility patterns and agent-based modeling. The interaction networks are dynamically constructed by incorporating different types of random walks and human rules of engagement. We explore the characteristics of the created networks and compare them with known theoretical and empirical graphs. The dependencies of epidemic dynamics and their outcomes on the patterns and parameters of human motion and motives are examined and presented through this research. This work specifically describes how the types and parameters of random walks define the properties of the generated graphs. We show that some configurations of the system of agents in random walk can produce network topologies with properties similar to small-world networks. Our goal is to find sets of mobility patterns that lead to empirical-like networks. The possibility of phase transitions in the graphs due to changes in the parameterization of agent walks is a focus of this research, as this knowledge can lead to the possibility of disrupting disease diffusion in populations. This research should facilitate the work of public health researchers in predicting the magnitude of an epidemic and estimating the resources required for mitigation.
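A bare-bones version of the network-generation step might look like the following Python sketch (an illustrative assumption, not the dissertation's framework): agents take uniform random-walk steps in a bounded region, and an interaction edge is recorded whenever two agents come within a contact radius. Swapping the step-length distribution, e.g., for a heavy-tailed one, is how the walk type would reshape the resulting graph.

```python
import math
import random
from itertools import combinations

def walk_contact_network(n_agents=100, steps=500, size=50.0, radius=1.0, step_len=1.0):
    """Agents perform uniform random walks in a bounded 2-D region; an
    interaction edge (i, j) is recorded whenever two agents are within the
    contact radius at the end of a step."""
    pos = [[random.uniform(0, size), random.uniform(0, size)] for _ in range(n_agents)]
    edges = set()
    for _ in range(steps):
        for p in pos:                                    # one step per agent
            ang = random.uniform(0.0, 2.0 * math.pi)
            p[0] = min(max(p[0] + step_len * math.cos(ang), 0.0), size)
            p[1] = min(max(p[1] + step_len * math.sin(ang), 0.0), size)
        for i, j in combinations(range(n_agents), 2):    # record contacts
            if math.dist(pos[i], pos[j]) <= radius:
                edges.add((i, j))
    return edges

edges = walk_contact_network()
degree = [0] * 100
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
print("mean degree:", sum(degree) / len(degree))
```

An epidemic process (e.g., SIR) would then be run over the recorded edges, and the graph statistics compared against small-world and other reference topologies.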
Privacy Preserving EEG-based Authentication Using Perceptual Hashing
The use of the electroencephalogram (EEG), an electrophysiological monitoring method for recording brain activity, for authentication has attracted the interest of researchers for over a decade. In addition to exhibiting the qualities of biometric-based authentication, EEG signals are revocable, impossible to mimic, and resistant to coercion attacks. However, EEG signals carry a wealth of information about an individual and can reveal private information about the user. This brings significant privacy issues to EEG-based authentication systems, as they have access to raw EEG signals. This thesis proposes a privacy-preserving EEG-based authentication system that preserves the privacy of the user by not revealing the raw EEG signals while still allowing the system to authenticate the user accurately. To that end, perceptual hashing is utilized: instead of raw EEG signals, their perceptually hashed values are used in the authentication process. In addition to describing the authentication process, algorithms to compute the perceptual hash are developed based on two feature extraction techniques. Experimental results show that an authentication system using perceptual hashing can achieve performance comparable to a system that has access to raw EEG signals if enough EEG channels are used in the process. This thesis also presents a security analysis to show that perceptual hashing can prevent information leakage.
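The following Python sketch illustrates the general idea of perceptually hashing EEG for authentication; the band-power features, the bit derivation, and the Hamming-distance threshold are assumptions chosen for illustration and differ from the two feature extraction techniques developed in the thesis.

```python
import numpy as np

def perceptual_hash(eeg, fs=128):
    """eeg: array of shape (channels, samples). Each channel contributes one
    bit per frequency band: 1 if that band's power exceeds the channel's
    median band power. The raw signal never needs to leave the capture side."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]        # theta..gamma (Hz)
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    powers = np.array([[spec[ch, (freqs >= lo) & (freqs < hi)].sum()
                        for (lo, hi) in bands]
                       for ch in range(eeg.shape[0])])
    bits = (powers > np.median(powers, axis=1, keepdims=True)).astype(np.uint8)
    return bits.flatten()

def authenticate(enrolled_hash, probe_hash, max_hamming=4):
    """Accept if the probe hash is within a Hamming-distance threshold."""
    return int(np.sum(enrolled_hash != probe_hash)) <= max_hamming

rng = np.random.default_rng(0)
enroll = rng.standard_normal((14, 256))                  # 14-channel, 2 s epoch
probe = enroll + 0.05 * rng.standard_normal((14, 256))   # same user, slight noise
print(authenticate(perceptual_hash(enroll), perceptual_hash(probe)))  # True (likely)
```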
Modeling and Simulation of the Vector-Borne Dengue Disease and the Effects of Regional Variation of Temperature in the Disease Prevalence in Homogenous and Heterogeneous Human Populations
The history of mitigation programs to contain vector-borne diseases is a story of successes and failures. Due to the complex interplay among the multiple factors that determine disease dynamics, the general principles for timely and specific intervention for incidence reduction or eradication of life-threatening diseases have yet to be determined. This research discusses computational methods developed to assist in the understanding of complex relationships affecting vector-borne disease dynamics. A computational framework to assist public health practitioners with exploring the dynamics of vector-borne diseases, such as malaria and dengue, in homogenous and heterogeneous populations has been conceived, designed, and implemented. The framework integrates a stochastic computational model of interactions to simulate horizontal disease transmission. The intent of the computational modeling has been the integration of stochasticity during simulation of the disease progression while reducing the number of interactions necessary to simulate a disease outbreak. While reducing the number of interactions needed for simulating disease dynamics improves the computational time, the realization of interactions can remain computationally expensive. Using multi-threading technology to improve performance upon the original computational model, multi-threading experimental results have been tested and reported. In addition to the contact model, the modeling of biological processes specific to the corresponding pathogen-carrier vector has been integrated to increase the specificity of the vector-borne disease. Last, automation for requesting, retrieving, parsing, and storing specific weather data and geospatial information from federal agencies to study the differences between homogenous and heterogeneous populations has been implemented.
Network Security Tool for a Novice
Network security is a complex field that is handled by security professionals who need certain expertise and experience to configure security systems. With the ever-increasing size of networks, managing them is going to be a daunting task. What kind of solution can be used to generate effective security configurations by both security professionals and nonprofessionals alike? In this thesis, a web tool is developed to simplify the process of configuring security systems by translating direct human language input into meaningful, working security rules. These human language inputs yield the security rules that the individual wants to implement in their network. The human language input can be as simple as, "Block Facebook to my son's PC". This tool translates these inputs into specific security rules and installs the translated rules into security equipment such as a virtualized Cisco FWSM network firewall, a Netfilter host-based firewall, and the Snort network intrusion detection system. This tool is implemented and tested in both a traditional network and a cloud environment. One thousand input policies were collected from various users, such as staff from UNT departments and health science, including individuals with a network security background as well as students with a non-computer-science background, to analyze the tool's performance. The tool is tested for its accuracy (91%) in generating a security rule. It is also tested for the accuracy of the translated rule (86%) compared to a standard rule written by security professionals. Overall, the network security tool has shown promise to both experienced and inexperienced people in the network security field by simplifying the provisioning process to produce accurate and effective network security rules.
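To give a feel for the translation step, here is a toy Python sketch of mapping a "Block <service> to <device>" sentence to a Netfilter/iptables-style rule; the device table, the service-to-domain mapping, and the single regex pattern are hypothetical stand-ins for the thesis's far more general parser and inventory lookup.

```python
import re

# Hypothetical inventory: device nicknames and service-to-domain mappings.
DEVICES = {"my son's pc": "192.168.1.23"}
SERVICES = {"facebook": ["facebook.com", "fbcdn.net"]}

def translate(policy):
    """Turn 'Block <service> to <device>' into iptables-style DROP rules."""
    m = re.match(r"block\s+(.+?)\s+to\s+(.+)", policy.strip(), re.IGNORECASE)
    if not m:
        return []
    service, device = m.group(1).lower(), m.group(2).lower().rstrip(".")
    ip = DEVICES.get(device)
    domains = SERVICES.get(service, [service])
    if ip is None:
        return []
    # One DROP rule per destination domain; iptables resolves a host name to
    # addresses at the time the rule is installed.
    return [f"iptables -A FORWARD -s {ip} -d {d} -j DROP" for d in domains]

for rule in translate("Block Facebook to my son's PC"):
    print(rule)
```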
Data-Driven Decision-Making Framework for Large-Scale Dynamical Systems under Uncertainty
Managing large-scale dynamical systems (e.g., transportation systems, complex information systems, and power networks) in real time is very challenging considering their complicated system dynamics, intricate network interactions, large scale, and especially the existence of various uncertainties. To address this issue, intelligent techniques which can quickly design decision-making strategies that are robust to uncertainties are needed. This dissertation aims to conquer these challenges by exploring a data-driven decision-making framework, which leverages big-data techniques and scalable uncertainty evaluation approaches to quickly solve optimal control problems. In particular, the following techniques have been developed along this direction: 1) system modeling approaches to simplify the system analysis and design procedures for multiple applications; 2) effective simulation-based and analytical approaches to efficiently evaluate system performance and design control strategies under uncertainty; and 3) big-data techniques that allow some computations of control strategies to be completed offline. These techniques and tools for analysis, design, and control contribute to a wide range of applications, including air traffic flow management, complex information systems, and airborne networks.
An Empirical Study of How Novice Programmers Use the Web
Students often use the web as a source of help for problems that they encounter on programming assignments. In this work, we seek to understand how students use the web to search for help on their assignments. We used a mixed-methods approach with 344 students who completed a survey and 41 students who participated in focus group meetings and helped in recording data about their search habits. The survey reveals data about students' reported search habits, while the focus group uses a web browser plug-in to record actual search patterns. We examine the results collectively and as broken down by class year. Survey results show that at least 2/3 of the students from each class year rely on search engines to locate resources for help with their programming bugs in at least half of their assignments; search habits vary by class year; and the value of different types of resources such as tutorials and forums varies by class year. Focus group results expose the high-frequency web sites used by the students in solving their programming assignments.
Learning from Small Data Set for Object Recognition in Mobile Platforms
Have you ever stood at a door with a bunch of keys and tried to find the right one to unlock the door? Have you ever held a flower and wondered what its name was? A need for object recognition can arise at any time and anywhere in our daily lives. With the development of mobile devices, object recognition applications can now provide immediate assistance. However, performing complex tasks on even the most advanced mobile platforms still faces great challenges due to limited computing resources and computing power. In this thesis, we present an object recognition system that resides and executes within a mobile device, and which can efficiently extract image features and perform learning and classification. To account for the computing constraints, a novel feature extraction method that minimizes the data size and maintains data consistency is proposed. The system leverages the principal component analysis method and is able to update the trained classifier when new examples become available. Our system relieves users from creating a large number of examples and makes the system user friendly. The experimental results demonstrate that a learning method trained with a very small number of examples can achieve recognition accuracy above 90% in various acquisition conditions. In addition, the system is able to perform learning efficiently.
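The two ingredients named above, PCA-based feature reduction and incremental classifier updates, can be sketched compactly in Python. The class below is an illustrative assumption of how they could fit together (a PCA projection plus a nearest-centroid classifier updated in place), not the thesis's actual pipeline.

```python
import numpy as np

class TinyRecognizer:
    """Images are reduced with PCA to keep the data small, and a nearest-
    centroid classifier is updated in place when a new labeled example
    arrives, so no full retraining is needed on the device."""

    def __init__(self, n_components=16):
        self.n_components = n_components
        self.mean = None
        self.basis = None                 # principal components (rows)
        self.centroids = {}               # label -> (sum vector, count)

    def fit_pca(self, images):
        X = np.asarray(images, dtype=float).reshape(len(images), -1)
        self.mean = X.mean(axis=0)
        # SVD of the centered data gives the principal directions.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = vt[: self.n_components]

    def project(self, image):
        x = np.asarray(image, dtype=float).ravel() - self.mean
        return self.basis @ x

    def update(self, image, label):       # incremental learning step
        z = self.project(image)
        s, c = self.centroids.get(label, (np.zeros_like(z), 0))
        self.centroids[label] = (s + z, c + 1)

    def predict(self, image):
        z = self.project(image)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(
                       z - self.centroids[lbl][0] / self.centroids[lbl][1]))
```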
Detection of Ulcerative Colitis Severity and Enhancement of Informative Frame Filtering Using Texture Analysis in Colonoscopy Videos
There are several types of disorders that affect our colon's ability to function properly, such as colorectal cancer, ulcerative colitis, diverticulitis, irritable bowel syndrome, and colonic polyps. Automatic detection of these diseases would inform the endoscopist of possible sub-optimal inspection during the colonoscopy procedure as well as save time during post-procedure evaluation. However, existing systems detect only a few of these disorders, such as colonic polyps. In this dissertation, we address the automatic detection of another important disorder, ulcerative colitis. We propose a novel texture feature extraction technique to detect the severity of ulcerative colitis at the block, image, and video levels. We also enhance current informative frame filtering methods by detecting water and bubble frames using our proposed technique. Our feature extraction algorithm, based on the accumulation of pixel value differences, provides better accuracy at faster speed than the existing methods, making it highly suitable for real-time systems. We also propose a hybrid approach in which our feature method is combined with existing feature method(s) to provide even better accuracy. We extend the block- and image-level detection method to video-level severity score calculation and shot segmentation. Also, the proposed novel feature extraction method can detect water and bubble frames in colonoscopy videos with very high accuracy in significantly less processing time, even when clustering is used to reduce the training size by 10 times.
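One plausible reading of an "accumulation of pixel value difference" feature is sketched below in Python (the dissertation's exact algorithm is not reproduced here): each block's score is the normalized sum of absolute differences between neighboring pixels, so smooth regions score low while textured regions such as bubbles or specular water reflections score high.

```python
import numpy as np

def block_texture_scores(gray, block=32):
    """Per block, accumulate absolute differences between each pixel and its
    right/bottom neighbors; the normalized sum acts as a texture measure."""
    g = np.asarray(gray, dtype=float)
    dx = np.abs(np.diff(g, axis=1))            # horizontal neighbor differences
    dy = np.abs(np.diff(g, axis=0))            # vertical neighbor differences
    h, w = g.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            acc = dx[r:r + block, c:c + block - 1].sum() \
                + dy[r:r + block - 1, c:c + block].sum()
            scores.append(acc / (block * block))    # normalized accumulation
    return np.array(scores)

# A frame could then be labeled by thresholding the block scores or by feeding
# them to a classifier for image- and video-level severity estimation.
rng = np.random.default_rng(1)
smooth = np.full((64, 64), 128.0)
noisy = rng.integers(0, 255, (64, 64)).astype(float)
print(block_texture_scores(smooth).mean(), block_texture_scores(noisy).mean())
```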
Computational Methods for Vulnerability Analysis and Resource Allocation in Public Health Emergencies
POD (Point of Dispensing)-based emergency response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable sub-populations, resulting in access disparities during emergency response. Federal authorities emphasize the need to identify sub-populations that cannot avail themselves of regular services during an emergency due to their special needs, in order to ensure effective response. Vulnerable individuals require the targeted allocation of appropriate resources to serve their special needs. Devising schemes to address the needs of vulnerable sub-populations is essential for the effectiveness of response plans. This research focuses on data-driven computational methods to quantify and address vulnerabilities in response plans that require the allocation of targeted resources. Data-driven methods to identify and quantify vulnerabilities in response plans are developed as part of this research. Addressing vulnerabilities requires the targeted allocation of appropriate resources to PODs. The problem of resource allocation to PODs during public health emergencies is introduced, and variants of the resource allocation problem such as the spatial allocation, spatio-temporal allocation, and optimal resource subset variants are formulated. Generating optimal resource allocation and scheduling solutions can be a computationally hard problem. The application of metaheuristic techniques to find near-optimal solutions to the resource allocation problem in response plans is investigated. A vulnerability analysis and resource allocation framework that facilitates the demographic analysis of population data in the context of response plans, and the optimal allocation of resources with respect to that analysis, is described.
Maintaining Web Applications Integrity Running on RADIUM
Computer security attacks take place due to the presence of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Despite the fact that software companies are investing millions of dollars in the research and development of software designs, security risks are still at large. In some cases software applications are found to carry vulnerabilities for many years before being identified; a recent example is the well-known Heartbleed bug in OpenSSL/TLS. In today's world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely to have software applications running without flaws. Attackers exploit these vulnerabilities and bugs and threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of the software applications, because integrity is directly linked to the credibility of the software application and the data it contains. Here I present a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants; these invariants are used to maintain the integrity of the web application and also to check the correct behavior of the web application at run time on the RADIUM architecture in case of an attack or malware. I use data invariants and program flow invariants in my solution to maintain the integrity of the web application against such attacks or malware. I check the behavior of my proposed invariants at run time using the LibVMI/Volatility memory introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM.
Freeform Cursive Handwriting Recognition Using a Clustered Neural Network
Optical character recognition (OCR) software has advanced greatly in recent years. Machine-printed text can be scanned and converted to searchable text with word accuracy rates around 98%. Reasonably neat hand-printed text can be recognized with about 85% word accuracy. However, cursive handwriting still remains a challenge, with state-of-the-art performance still around 75%. Algorithms based on hidden Markov models have been only moderately successful, while recurrent neural networks have delivered the best results to date. This thesis explored the feasibility of using a special type of feedforward neural network to convert freeform cursive handwriting to searchable text. The hidden nodes in this network were grouped into clusters, with each cluster being trained to recognize a unique character bigram. The network was trained on writing samples that were pre-segmented and annotated. Post-processing was facilitated in part by using the network to identify overlapping bigrams that were then linked together to form words and sentences. With dictionary assisted post-processing, the network achieved word accuracy of 66.5% on a small, proprietary corpus. The contributions in this thesis are threefold: 1) the novel clustered architecture of the feed-forward neural network, 2) the development of an expanded set of observers combining image masks, modifiers, and feature characterizations, and 3) the use of overlapping bigrams as the textual working unit to assist in context analysis and reconstruction.
Automatic Removal of Complex Shadows From Indoor Videos
Shadows in indoor scenarios are usually characterized by multiple light sources that produce complex shadow patterns of a single object. Without shadow removal, the foreground object tends to be erroneously segmented. The inconsistent hue and intensity of shadows make automatic removal a challenging task. In this thesis, a dynamic thresholding and transfer learning-based method for removing shadows is proposed. The method suppresses light shadows with a dynamically computed threshold and removes dark shadows using an online learning strategy that is built upon a base classifier trained with manually annotated examples and refined with automatically identified examples in the new videos. Experimental results demonstrate that, despite variation in lighting conditions across videos, our proposed method is able to adapt to the videos and remove shadows effectively. The sensitivity of shadow detection changes slightly with the different confidence levels used in example selection for classifier retraining, and a high confidence level usually yields better performance with fewer retraining iterations.
Algorithm Optimizations in Genomic Analysis Using Entropic Dissection
In recent years, the collection of genomic data has skyrocketed and databases of genomic data are growing at a faster rate than ever before. Although many computational methods have been developed to interpret these data, they tend to struggle to process the ever-increasing file sizes being produced and fail to take advantage of the advances in multi-core processors by using parallel processing. In some instances, loss of accuracy has been a necessary trade-off to allow faster computation of the data. This thesis discusses one such algorithm and how changes were made to allow larger input file sizes and reduce the time required to achieve a result without sacrificing accuracy. An information entropy based algorithm was used as a basis to demonstrate these techniques. The algorithm dissects the distinctive patterns underlying genomic data efficiently, requiring no a priori knowledge, and thus is applicable in a variety of biological research applications. This research describes how parallel processing and object-oriented programming techniques were used to process larger files in less time and achieve a more accurate result from the algorithm. Through object-oriented techniques, the maximum allowable input file size was significantly increased from 200 MB to 2000 MB. Using parallel processing techniques allowed the program to finish processing data in less than half the time of the sequential version. The accuracy of the algorithm was improved by reducing data loss throughout the algorithm. Finally, adding user-friendly options enabled the program to handle requests more effectively and further customize the logic used within the algorithm.
Integrity Verification of Applications on RADIUM Architecture
Trusted Computing capability has become ubiquitous these days, and it is being widely deployed into consumer devices as well as enterprise platforms. As the number of threats is increasing at an exponential rate, it is becoming a daunting task to secure the systems against them. In this context, the software integrity measurement at runtime with the support of trusted platforms can be a better security strategy. Trusted Computing devices like TPM secure the evidence of a breach or an attack. These devices remain tamper proof if the hardware platform is physically secured. This type of trusted security is crucial for forensic analysis in the aftermath of a breach. The advantages of trusted platforms can be further leveraged if they can be used wisely. RADIUM (Race-free on-demand Integrity Measurement Architecture) is one such architecture, which is built on the strength of TPM. RADIUM provides an asynchronous root of trust to overcome the TOC condition of DRTM. Even though the underlying architecture is trusted, attacks can still compromise applications during runtime by exploiting their vulnerabilities. I propose an application-level integrity measurement solution that fits into RADIUM, to expand the trusted computing capability to the application layer. This is based on the concept of program invariants that can be used to learn the correct behavior of an application. I used Daikon, a tool to obtain dynamic likely invariants, and developed a method of observing these properties at runtime to verify the integrity. The integrity measurement component was implemented as a Python module on top of Volatility, a virtual machine introspection tool. My approach is a first step towards integrity attestation, using hypervisor-based introspection on RADIUM and a proof of concept of application-level measurement capability.
Radium: Secure Policy Engine in Hypervisor
The basis of today's security systems is the trust and confidence that the system will behave as expected and is in a known good trusted state. The trust is built from hardware and software elements that generate a chain of trust originating from a trusted known entity. Leveraging hardware, software, and mandatory access control policy technology is needed to create a trusted measurement environment. Employing a control layer (hypervisor or microkernel) with the ability to enforce a fine-grained access control policy with hypercall granularity across multiple guest virtual domains can ensure that any malicious environment is contained. In my research, I propose the use of RADIUM's Asynchronous Root of Trust Measurement (ARTM) capability incorporated with a secure mandatory access control policy engine that would mitigate the limitations of current hardware TPM solutions. By employing ARTM we can leverage asynchronous use of boot, launch, and use, with the hypervisor proving its state and the integrity of the secure policy. My solution uses the RADIUM (Race-free on-demand Integrity Measurement Architecture) architecture, which allows a more detailed measurement of applications at run time with greater semantic knowledge of the measured environments. RADIUM's incorporation of a secure access control policy engine gives it the ability to limit or empower a virtual domain system. It can also enable the creation of a service-oriented model of guest virtual domains that have the ability to perform certain operations, such as introspecting other virtual domain systems to determine their integrity or system state and report it to a remote entity.
Unique Channel Email System
Email connects 85% of the world. This paper explores the pattern of information overload encountered by the majority of email users and examines what steps key email providers are taking to combat the problem. Besides fighting spam, popular email providers offer very limited tools to reduce the amount of unwanted incoming email. Rather, there has been a trend to expand storage space and aid the organization of email. Storing email is very costly and harmful to the environment. Additionally, information overload can be detrimental to productivity. We propose a simple solution that results in a drastic reduction of unwanted mail, also known as graymail.
Towards Resistance Detection in Health Behavior Change Dialogue Systems
One of the challenges fairly common in motivational interviewing is patient resistance to health behavior change. Hence, automated dialog systems aimed at counseling patients need to be capable of detecting resistance and appropriately altering the dialog. This thesis focuses primarily on the development of such a system for the automatic identification of patient resistance to behavioral change. This enables the dialogue system to direct the discourse towards more agreeable ground and help the patient overcome the obstacles in his or her way to change. This thesis also proposes a dialogue system framework for health behavior change via natural language analysis and generation. The proposed framework facilitates automated motivational interviewing from clinical psychology and involves three broad stages: rapport building and health topic identification, assessment of the patient's opinion about making a change, and developing a plan. Using this framework, patients can be encouraged to reflect on the options available and choose the best for a healthier life.
Computational Methods for Discovering and Analyzing Causal Relationships in Health Data
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets where a small number of data points are collected in controlled experiments. The causal relationships among variables in an observational dataset are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships will greatly help medical professionals prevent diseases or mitigate the negative effects of disease. Recent advances in computer science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered by learning network structures from data. However, the number of candidate graphs grows at a more than exponential rate as the number of variables increases. Exact learning of the optimal structure is thus computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the difficulty of the computation. This research provides effective and efficient learning tools for local causal discovery and novel methods for learning causal structures with a combination of background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed using first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data. Following the research methods and experiments, this dissertation gives thoughtful discussions on the reliability of causal discovery in computational health science research, its complexity, and its implications for health science research.
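To illustrate what first-order conditional independence buys, the Python sketch below prunes a complete graph using only zero- and first-order partial-correlation tests (Fisher's z under a Gaussian assumption). It is a generic constraint-based sketch in the spirit of the description above, not the dissertation's algorithms.

```python
import math
import numpy as np

def partial_corr(data, i, j, k=None):
    """Correlation of columns i and j, optionally partialled on column k."""
    c = np.corrcoef(data, rowvar=False)
    if k is None:
        return c[i, j]
    num = c[i, j] - c[i, k] * c[j, k]
    den = math.sqrt((1 - c[i, k] ** 2) * (1 - c[j, k] ** 2))
    return num / den

def independent(data, i, j, k=None, alpha=0.01):
    """Fisher z-test: fail to reject independence when the p-value exceeds alpha."""
    r = max(min(partial_corr(data, i, j, k), 0.999999), -0.999999)
    n = data.shape[0]
    dof = n - (3 if k is None else 4)
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(dof)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p > alpha

def skeleton_first_order(data):
    """Start from a complete graph; drop edge (i, j) if the pair is independent
    marginally or given any single other variable (conditioning sets of size <= 1)."""
    d = data.shape[1]
    edges = {(i, j) for i in range(d) for j in range(i + 1, d)}
    for i, j in sorted(edges):
        seps = [None] + [k for k in range(d) if k not in (i, j)]
        if any(independent(data, i, j, k) for k in seps):
            edges.discard((i, j))
    return edges

# Toy chain X -> Y -> Z: the X-Z edge disappears once Y is conditioned on.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = 0.8 * x + rng.standard_normal(2000)
z = 0.8 * y + rng.standard_normal(2000)
print(skeleton_first_order(np.column_stack([x, y, z])))   # {(0, 1), (1, 2)} expected
```

Restricting the conditioning sets to at most one variable is what keeps the number of tests, and hence the running time, polynomial in the number of variables.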
Investigation on Segmentation, Recognition and 3D Reconstruction of Objects Based on LiDAR Data Or MRI
Segmentation, recognition, and 3D reconstruction of objects have been cutting-edge research topics, with many applications ranging from environmental and medical to geographical applications as well as intelligent transportation. In this dissertation, I focus on the study of segmentation, recognition, and 3D reconstruction of objects using LiDAR data/MRI. The three main works are: (I) A feature extraction algorithm based on sparse LiDAR data. A novel method has been proposed for feature extraction from sparse LiDAR data. The algorithm and the related principles have been described. Also, I have tested and discussed the choices and roles of the parameters. By using the correlation of neighboring points directly, the statistical distribution of normal vectors at each point has been effectively used to determine the category of the selected point. (II) Segmentation and 3D reconstruction of objects based on LiDAR/MRI. The proposed method layers the 3D LiDAR data, segments the different categories, and reconstructs 3D canopy surfaces of individual tree crowns and clusters of trees from LiDAR point data based on a region-based active contour model. The proposed method allows for delineation of 3D forest canopy naturally from the contours of raw LiDAR point clouds. The proposed model is suitable not only for a series of ideal cone shapes, but also for other kinds of 3D shapes, as well as other kinds of datasets such as MRI. (III) Novel algorithms for recognition of objects based on LiDAR/MRI. Aimed at sparse LiDAR data, the feature extraction algorithm has been proposed and applied to classify buildings and trees. More importantly, novel algorithms based on level set methods have been provided and employed to recognize not only buildings and trees and different tree species (e.g., oak trees and Douglas firs), but also the subthalamic nuclei (STNs). By using the novel algorithms based …
Classifying Pairwise Object Interactions: A Trajectory Analytics Approach
We have a huge amount of video data from the extensively available surveillance cameras and increasingly capable technology to record the motion of a moving object in the form of trajectory data. With the proliferation of location-enabled devices and ongoing growth in smartphone penetration, as well as advancements in exploiting image processing techniques, tracking moving objects is more readily achievable. In this work, we explore some domain-independent qualitative and quantitative features in raw trajectory (spatio-temporal) data in videos captured by a fixed, single, wide-angle-view camera sensor in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms and show how each of the features affects the learning algorithms' overall accuracy, as a single factor or confounded with others.
Distributed Frameworks Towards Building an Open Data Architecture
Data is everywhere. Current technological advancements in digital and social media, and the ease with which different application services can interact with a variety of systems, are generating tremendous volumes of data. Due to such varied services, the data format is no longer restricted to structured types like text; unstructured content such as social media data, videos, and images is also generated. The generated data is of no use unless it is stored and analyzed to derive some value. Traditional database systems come with limitations on the types of data formats and schemas, access rates, and storage sizes. Hadoop is an Apache open source distributed framework that supports storing huge datasets of differently formatted data reliably on its file system, the Hadoop Distributed File System (HDFS), and processing the data stored on HDFS using the MapReduce programming model. This thesis study is about building a data architecture using Hadoop and its related open source distributed frameworks to support a data flow pipeline on low-cost commodity hardware. The data flow components are: sourcing data, storage management on HDFS, and a data access layer. This study also discusses a use case that utilizes the architecture components. Sqoop, a framework to ingest structured data from a database onto Hadoop, and Flume, used to ingest semi-structured streaming Twitter JSON data onto HDFS, provide the data for analysis. The data sourced using Sqoop and Flume has been analyzed using Hive for SQL-like analytics, and at a higher level of the data access layer, Hadoop has been compared with an in-memory computing system using Spark. Significant differences in query execution performance have been analyzed when working with the Hadoop and Spark frameworks. This integration helps in ingesting huge volumes of streaming JSON variety data to derive better value-based analytics using Hive and …
SEM Predicting Success of Student Global Software Development Teams
The extensive use of global teams to develop software has prompted researchers to investigate various factors that can enhance a team's performance. While a significant body of research exists on global software teams, previous research has not fully explored the interrelationships and collective impact of various factors on team performance. This study explored a model that added the characteristics of a team's culture, ability, communication frequencies, response rates, and linguistic categories to a central framework of team performance. Data was collected from two student software development projects that occurred between teams located in the United States, Panama, and Turkey. The data was obtained through online surveys and recorded postings of team activities that occurred throughout the global software development projects. Partial least squares path modeling (PLS-PM) was chosen as the analytic technique to test the model and identify the most influential factors. Individual factors associated with response rates and linguistic characteristics proved to significantly affect a team's activity related to grade on the project, group cohesion, and the number of messages received and sent. Moreover, an examination of possible latent homogeneous segments in the model supported the existence of differences among groups based on leadership style. Teams with assigned leaders tended to have stronger relationships between linguistic characteristics and team performance factors, while teams with emergent leaders had stronger relationships between response rates and team performance factors. The contributions in this dissertation are threefold: 1) novel analysis techniques using PLS-PM and clustering, 2) use of new, quantifiable variables in analyzing team activity, and 3) identification of plausible causal indicators for team performance and analysis of the same.
Video Analytics with Spatio-Temporal Characteristics of Activities
As video capturing devices become more ubiquitous, from surveillance cameras to smartphones, the demand for automated video analysis is increasing as never before. One obstacle in this process is efficiently locating where a human operator’s attention should be, and another is determining the specific types of activities or actions without ambiguity. It is the special interest of this dissertation to locate spatial and temporal regions of interest in videos and to develop a better action representation for video-based activity analysis. This dissertation follows the scheme of “locating then recognizing” activities of interest in videos, i.e., the locations of potentially interesting activities are estimated before performing in-depth analysis. Theoretical properties of regions of interest in videos are first exploited, based on which a unifying framework is proposed to locate both spatial and temporal regions of interest with the same parameter settings. The approach estimates the distribution of motion based on 3D structure tensors, and locates regions of interest according to persistent occurrences of low probability. Two further contributions are made to better represent actions. The first is a unifying model of spatio-temporal relationships between reusable mid-level actions, which bridge low-level pixels and high-level activities. Dense trajectories are clustered to construct mid-level actionlets, and the temporal relationships between actionlets are modeled as Action Graphs based on Allen interval predicates. The second is a novel and efficient representation of action graphs based on a sparse coding framework. Action graphs are first represented using Laplacian matrices and then decomposed as a linear combination of primitive dictionary items following a sparse coding scheme. The optimization is ultimately formulated and solved as a determinant maximization problem, and 1-nearest neighbor is used for action classification. The experiments have shown better results than existing approaches for regions-of-interest detection and action …
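The toy Python sketch below is only a hedged illustration of the final representation step: an action graph's Laplacian is expressed as a combination of dictionary atoms and classified with 1-nearest neighbor. For brevity, ordinary least squares stands in for the determinant-maximization sparse-coding step described above, and all graphs and labels are randomly generated rather than taken from the dissertation.

# Illustrative sketch only: least squares stands in for the determinant-
# maximization sparse coding described above; all data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def random_adj(n):
    """A random symmetric 0/1 adjacency matrix with no self-loops."""
    upper = np.triu((rng.random((n, n)) > 0.5).astype(float), k=1)
    return upper + upper.T

def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

# Hypothetical dictionary of primitive action-graph Laplacians (as columns).
dictionary = [laplacian(random_adj(5)) for _ in range(8)]
D = np.stack([L.ravel() for L in dictionary], axis=1)

def encode(adj):
    """Coefficients expressing a graph's Laplacian over the dictionary."""
    coeffs, *_ = np.linalg.lstsq(D, laplacian(adj).ravel(), rcond=None)
    return coeffs

# 1-nearest-neighbor classification in coefficient space.
train_codes = np.array([encode(random_adj(5)) for _ in range(20)])
train_labels = rng.integers(0, 3, size=20)
query = encode(random_adj(5))
predicted = train_labels[np.argmin(np.linalg.norm(train_codes - query, axis=1))]
print("predicted action class:", predicted)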
General Purpose Computing in Gpu - a Watermarking Case Study
The purpose of this project is to explore the GPU for general-purpose computing. The GPU is a massively parallel computing device with high throughput, high arithmetic intensity, and a large market presence, and with the computational power added to it each year through innovations, it is a strong candidate to complement the CPU in performing computations. The GPU follows the single instruction, multiple data (SIMD) model for applying operations to its data. This model makes the GPU very useful for assisting the CPU with computations on data that is highly parallel in nature. The compute unified device architecture (CUDA) is a parallel computing and programming platform for NVIDIA GPUs. The main focus of this project is to show the power, speed, and performance of a CUDA-enabled GPU for digital video watermark insertion in the H.264 video compression domain. Digital video watermarking in general is a highly computationally intensive process that depends strongly on the video compression format in use. The H.264/MPEG-4 AVC video compression format achieves high compression efficiency at the expense of high computational complexity, leaving little room for an imperceptible watermark to be inserted. Employing a human visual model to limit the distortion and degradation of visual quality introduced by the watermark is a good choice for designing a video watermarking algorithm, though it adds further computational complexity. Research is being conducted into how CPU-GPU execution of the digital watermarking application, optimized with the NVIDIA Visual Profiler, can run several times faster than the same application on a standalone CPU.
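As a hedged, CPU-only illustration of the embedding idea (not the thesis's H.264 pipeline or its CUDA kernels), the NumPy/SciPy sketch below adds a key-seeded pseudo-random watermark to a mid-frequency band of a frame's DCT coefficients; the constant strength factor is a stand-in for the per-block weights a human visual model would supply.

# Illustrative sketch only: frame size, band, and strength are assumptions.
import numpy as np
from scipy.fft import dctn, idctn

ALPHA = 2.0  # hypothetical global watermark strength (HVM would vary this per block)

def embed_watermark(frame, key):
    """Embed a key-seeded pseudo-random watermark in mid-frequency DCT coefficients."""
    coeffs = dctn(frame.astype(float), norm="ortho")
    mark = np.random.default_rng(key).standard_normal(coeffs.shape)
    # Touch only a mid-frequency band, leaving low frequencies (overall
    # brightness) and very high frequencies (easily quantized away) alone.
    band = np.zeros_like(coeffs, dtype=bool)
    band[8:32, 8:32] = True
    coeffs[band] += ALPHA * mark[band]
    return np.clip(idctn(coeffs, norm="ortho"), 0, 255).astype(np.uint8)

# Toy usage on a synthetic 64x64 luma block; a CUDA port would launch this
# per-block embedding as a kernel over many blocks in parallel.
frame = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(frame, key=42)
print("mean absolute change:", np.mean(np.abs(marked.astype(int) - frame.astype(int))))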
Smartphone-based Household Travel Survey - a Literature Review, an App, and a Pilot Survey
High-precision data from household travel surveys (HTS) are extremely important for transportation research, traffic models, and policy formulation. Traditional methods of data collection were imprecise because they relied on people’s memories of trip information, such as dates and locations, and the remaining data had to be obtained with supplemental tools. The traditional methods suffered from intensive labor, long collection times, and unsatisfactory data precision. Recent research has trended toward employing smartphone apps to collect HTS data. This study addresses two goals. First, a smartphone app is developed to realize a smartphone-based method for data collection. Second, the researcher evaluates whether this method can supplement or replace the traditional tools of HTS. To this end, the smartphone app TravelSurvey is specially developed and used for this study. TravelSurvey is currently compatible with the iPhone 4 or higher and iPhone Operating System (iOS) 6 or higher, except the iPhone 6 and iPhone 6 Plus and iOS 8. To evaluate feasibility, eight individuals are recruited to participate in a pilot HTS, and seven of them afterwards take part in a semi-structured interview. The interview is designed to collect the interviewees’ feedback directly, so it mainly concerns their experience of using TravelSurvey. Generally, the feedback is positive: the pilot HTS data is successfully uploaded to the server by the participants, and the interviewees prefer the smartphone-based method. Therefore, as a new tool, the smartphone-based method can feasibly support a typical HTS for data collection.
A Comparison of File Organization Techniques
This thesis compares the file organization techniques that are implemented on two different types of computer systems, the large-scale and the small-scale. File organizations from representative computers in each class are examined in detail: the IBM System/370 (OS/370) and the Harris 1600 Distributed Processing System with the Extended Communications Operating System (ECOS). In order to establish the basic framework for comparison, an introduction to file organizations is presented. Additionally, the functional requirements for file organizations are described by their characteristics and user demands. Concluding remarks compare file organization techniques and discuss likely future developments of file systems.
Information Storage and Retrieval Systems
This thesis describes the implementation of a general purpose personal information storage and retrieval system. Chapter one contains an introduction to information storage and retrieval. Chapter two contains a description of the features a useful personal information retrieval system should contain. This description forms the basis for the implementation of the personal information storage and retrieval system described in chapter three. The system is implemented in UCSD Pascal on an Apple II microcomputer.
An Interpreter for the Basic Programming Language
In this thesis, the first chapter provides a general description of the interpreter. The second chapter contains a formal definition of the syntax of BASIC along with an introduction to its semantics. The third chapter contains the design of the data structures. The fourth chapter contains the description of the algorithms, along with the stages for testing the interpreter and the design of the debug output. The stages and actions are represented internally to the computer in tabular form. For statement parsing, working syntax equations are established; they serve as standards for the conversion of source statements into object pseudocodes. As a statement is parsed for legal form, pseudocodes for that statement are created. For pseudocode execution, the pseudocodes are represented internally to the computer in tabular form.
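As a hedged illustration of the table-driven flow described above (and not the thesis's actual design), the small Python sketch below parses two toy statement types into pseudocode tuples and then executes them through a dispatch table.

# Illustrative sketch only: covers LET and PRINT, nothing more.
variables = {}

def parse(statement):
    """Convert one BASIC-like statement into a pseudocode tuple."""
    keyword, rest = statement.split(maxsplit=1)
    if keyword == "LET":                      # e.g.  LET A = 5
        name, expr = (part.strip() for part in rest.split("=", 1))
        return ("STORE", name, expr)
    if keyword == "PRINT":                    # e.g.  PRINT A
        return ("PRINT", rest.strip())
    raise SyntaxError(f"unrecognized statement: {statement}")

def execute(pseudocode):
    """Dispatch a pseudocode tuple through the execution table."""
    table = {
        "STORE": lambda name, expr: variables.__setitem__(name, eval(expr, {}, variables)),
        "PRINT": lambda name: print(variables[name]),
    }
    op, *operands = pseudocode
    table[op](*operands)

for line in ["LET A = 5", "LET B = A * 3", "PRINT B"]:
    execute(parse(line))    # prints 15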