Search Results

Kalman Filtering Approach to Optimize OFDM Data Rate

Description: This study applies a non-linear mapping method, the unscented Kalman filter, to estimate and optimize the data rate resulting from a Poisson-distributed arrival rate in an orthogonal frequency division multiplexing (OFDM) transmission system. OFDM is an emerging multi-carrier modulation scheme. With the growing need for quality of service in wireless communications, resources must be optimized so that overall system performance improves while still pursuing high data rates and efficient use of spectrum. In this study, results from an OFDM-TDMA transmission system are used to apply cross-layer optimization, so that resources in different layers are treated simultaneously. A main controller manages the transmission of data between layers using multicarrier modulation techniques. The unscented Kalman filter performs the nonlinear mapping by estimating and optimizing the data rate that results from the Poisson-distributed arrival rate. (A brief unscented-Kalman-filter sketch follows this record.)
Date: August 2011
Creator: Wunnava, Sashi Prabha
Partner: UNT Libraries
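
The record above names the unscented Kalman filter and Poisson-distributed arrivals but gives no implementation detail, so the following Python sketch is only a generic illustration, not the thesis model: a one-dimensional UKF (via the filterpy library) tracking a slowly drifting Poisson arrival rate from noisy per-slot packet counts. The random-walk state model and all noise settings are assumptions.

# A minimal sketch (not the thesis model): tracking a slowly varying Poisson
# arrival rate with an unscented Kalman filter, using the filterpy library.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    # Assumed state transition: the mean arrival rate follows a random walk.
    return x

def hx(x):
    # Observation: per-slot packet counts are centred on the arrival rate.
    return x

points = MerweScaledSigmaPoints(n=1, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=1, dim_z=1, dt=1.0, fx=fx, hx=hx, points=points)
ukf.x = np.array([10.0])      # initial rate guess (packets per slot)
ukf.P *= 5.0
ukf.Q = np.array([[0.05]])    # process noise: how fast the rate may drift
ukf.R = np.array([[10.0]])    # measurement noise ~ Poisson variance near the mean

rng = np.random.default_rng(0)
true_rate = 12.0
for _ in range(100):
    z = rng.poisson(true_rate)          # observed packet count in one slot
    ukf.predict()
    ukf.update(np.array([float(z)]))

print("estimated arrival rate:", ukf.x[0])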

Study of the Effects of Background and Camera Motion on the Efficacy of Kalman and Particle Filter Algorithms

Description: This study compares the independent use of two well-known algorithms commonly deployed in object tracking applications: the Kalman filter with background subtraction, and the particle filter. Object tracking in general is very challenging; it presents numerous problems that must be addressed before an application can be deployed successfully. These problems range from abrupt object motion during tracking to changes in the appearance of the scene and the object, object-to-scene occlusions, and camera motion, among others. Other issues must also be considered, such as accounting for noise in the image and the ability to predict, to an acceptable statistical accuracy, the position of the object at a particular time given its current position. This study addresses some of these issues before examining how the use of either algorithm minimizes, or in some cases eliminates, their negative effects. (A brief tracking sketch follows this record.)
Date: August 2009
Creator: Morita, Yasuhiro
Partner: UNT Libraries
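
As a generic illustration of the first of the two compared approaches, the OpenCV sketch below couples background subtraction (MOG2) with a constant-velocity Kalman filter; it is not the study's implementation, and the video file name, noise covariances, and single-object assumption are all hypothetical.

# A minimal sketch (not the study's code, assumes OpenCV 4.x): background
# subtraction feeds centroid measurements to a constant-velocity Kalman filter.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0

cap = cv2.VideoCapture("input.mp4")               # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    prediction = kf.predict()                     # predicted (x, y, vx, vy)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            kf.correct(np.array([[cx], [cy]], np.float32))
cap.release()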

The Design Of A Benchmark For Geo-stream Management Systems

Description: The recent growth in sensor technology allows easier information gathering in real time, as sensors have grown smaller, more accurate, and less expensive. The resulting data is often in a geo-stream format: continuously changing input with a spatial extent. Researchers developing geo-stream management systems (GSMS) require a benchmark for evaluation, which is currently lacking. This thesis presents GSMark, a benchmark for evaluating GSMSs. GSMark provides a data generator that creates a combination of synthetic and real geo-streaming data, a workload simulator that presents the data to the GSMS as a data stream, and a set of benchmark queries that evaluate typical GSMS functionality and query performance. In particular, GSMark generates both moving points and evolving spatial regions, two fundamental data types for a broad range of geo-stream applications, along with geo-streaming queries over this data. (A brief data-generator sketch follows this record.)
Date: December 2011
Creator: Shen, Chao
Partner: UNT Libraries
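
The sketch below is not GSMark; it only illustrates what a minimal moving-point geo-stream generator might look like, emitting timestamped (object_id, x, y) tuples from an assumed random-walk motion model. All names and parameters are hypothetical.

# A minimal sketch (not GSMark itself): generating a synthetic stream of
# moving points as timestamped (object_id, x, y) tuples with a random walk.
import random
from dataclasses import dataclass

@dataclass
class GeoStreamTuple:
    timestamp: float
    object_id: int
    x: float
    y: float

def moving_point_stream(n_objects=5, n_ticks=100, step=0.5, seed=42):
    """Yield position updates for n_objects over n_ticks time steps."""
    rng = random.Random(seed)
    positions = {i: (rng.uniform(0, 100), rng.uniform(0, 100)) for i in range(n_objects)}
    for t in range(n_ticks):
        for obj, (x, y) in positions.items():
            x += rng.uniform(-step, step)      # random-walk motion model
            y += rng.uniform(-step, step)
            positions[obj] = (x, y)
            yield GeoStreamTuple(timestamp=float(t), object_id=obj, x=x, y=y)

# Example: feed the stream to a (hypothetical) system under test one tuple at a time.
for record in moving_point_stream(n_objects=2, n_ticks=3):
    print(record)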

Maintaining Web Applications Integrity Running on RADIUM

Description: Computer security attacks take place due to the presence of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Despite the fact that software companies invest millions of dollars in the research and development of software designs, security risks remain at large. In some cases software applications are found to carry vulnerabilities for many years before being identified; a recent example is the well-known Heartbleed bug in OpenSSL/TLS. In today’s world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely to have software applications running without flaws. Attackers exploit these vulnerabilities and bugs to threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of software applications, because integrity is directly linked to the credibility of an application and the data it contains. This thesis presents a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants, and these invariants are used to maintain the integrity of the web application and to check its correct behavior at run time on the RADIUM architecture in case of any attack or malware. The solution uses data invariants and program-flow invariants to maintain the integrity of the web application against such attacks or malware. The behavior of the proposed invariants is checked at run time using the LibVMI/Volatility memory introspection tools. This is a novel approach and a proof of concept toward maintaining web application integrity on RADIUM. (A brief invariant-checking sketch follows this record.)
Date: August 2015
Creator: Ur-Rehman, Wasi
Partner: UNT Libraries
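
Daikon itself infers invariants from program traces; the toy Python sketch below is not Daikon output or RADIUM code, and only illustrates the general idea of checking recorded data invariants against run-time state. The invariants and state fields are hypothetical.

# A toy illustration (not Daikon output or RADIUM code): representing data
# invariants as predicates and checking run-time values against them.
INVARIANTS = {
    # hypothetical invariants over a web application's session state
    "session.user_id >= 1":        lambda s: s["user_id"] >= 1,
    "session.role in known roles": lambda s: s["role"] in {"guest", "user", "admin"},
    "cart.total >= 0":             lambda s: s["cart_total"] >= 0.0,
}

def check_integrity(session_state):
    """Return the list of invariants violated by the observed run-time state."""
    violations = []
    for description, predicate in INVARIANTS.items():
        try:
            ok = predicate(session_state)
        except (KeyError, TypeError):
            ok = False                      # missing or garbled state counts as a violation
        if not ok:
            violations.append(description)
    return violations

# Example: a tampered state (negative cart total) trips an invariant.
observed = {"user_id": 7, "role": "user", "cart_total": -20.0}
print(check_integrity(observed))            # ['cart.total >= 0']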

Baseband Noise Suppression in OFDM Using Kalman Filter

Description: As technology advances, the reduced size of hardware gives rise to additive 1/f baseband noise. This additive 1/f noise is a system noise generated by the miniaturization of hardware, and it affects the lower frequencies. Although 1/f noise has little effect in wideband channels, because by nature it affects only certain frequencies, it becomes prominent in OFDM communication systems where narrowband channels are used. In this thesis, I study the effects of 1/f noise on OFDM systems and implement algorithms for estimating and suppressing the noise using a Kalman filter. Suppression is achieved by subtracting the estimated noise from the received signal. I show that the performance of the system is considerably improved by applying the 1/f noise suppression. (A brief noise-estimation sketch follows this record.)
Date: May 2012
Creator: Rodda, Lasya
Partner: UNT Libraries
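
The sketch below is not the thesis's algorithm; it approximates slowly varying low-frequency noise as an AR(1) process and uses a scalar Kalman filter to estimate it sample by sample, then subtracts the estimate from the received samples. The AR(1) stand-in for 1/f noise and all variances are assumptions.

# A minimal sketch (not the thesis's algorithm): approximate slowly varying
# baseband noise as an AR(1) process, estimate it per sample with a scalar
# Kalman filter, and subtract the estimate from the received samples.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
a, q, r = 0.999, 1e-4, 1.0          # AR(1) coefficient, process variance, symbol variance

signal = rng.choice([-1.0, 1.0], size=N)             # toy baseband symbols
noise = np.zeros(N)
for k in range(1, N):                                # AR(1) stand-in for 1/f noise
    noise[k] = a * noise[k - 1] + np.sqrt(q) * rng.standard_normal()
received = signal + noise

x_hat, P = 0.0, 1.0                                  # Kalman state: noise estimate
cleaned = np.empty(N)
for k in range(N):
    # predict
    x_hat, P = a * x_hat, a * a * P + q
    # update: from the filter's point of view the received sample is "noise plus
    # zero-mean symbol", so the symbol acts as measurement noise with variance r
    K = P / (P + r)
    x_hat = x_hat + K * (received[k] - x_hat)
    P = (1.0 - K) * P
    cleaned[k] = received[k] - x_hat                 # subtract estimated noise

print("residual noise power before:", np.mean((received - signal) ** 2))
print("residual noise power after: ", np.mean((cleaned - signal) ** 2))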

Occlusion Tolerant Object Recognition Methods for Video Surveillance and Tracking of Moving Civilian Vehicles

Description: Recently there has been great interest in moving-object tracking in the fields of security and surveillance. Object recognition under partial occlusion is at the core of any object tracking system. This thesis presents an automatic, real-time color object-recognition system that is not only robust but also occlusion tolerant. The intended use of the system is to recognize and track external vehicles that enter a secured area such as a school campus or an army base. A statistical morphological skeleton is used to represent the visible shape of the vehicle. Simple curve matching and different feature-based matching techniques are used to recognize the segmented vehicle. Features of the vehicle are extracted upon entering the secured area, and the vehicle can later be recognized from either a digital video frame or a static digital image as needed. The recognition engine is intended to support the design of a high-performance tracking system for remote video surveillance. (A brief skeletonization sketch follows this record.)
Date: December 2007
Creator: Pati, Nishikanta
Partner: UNT Libraries
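
The sketch below is not the thesis's statistical morphological skeleton; it computes a plain morphological skeleton of a hypothetical vehicle mask with scikit-image, only to illustrate the kind of shape representation the abstract describes.

# A minimal sketch (not the thesis's statistical skeleton): computing a plain
# morphological skeleton of a segmented vehicle mask with scikit-image.
import numpy as np
from skimage.morphology import skeletonize

# Hypothetical binary mask of a segmented vehicle (True = vehicle pixels).
mask = np.zeros((60, 120), dtype=bool)
mask[20:40, 10:110] = True          # crude rectangular "vehicle body"
mask[15:20, 30:50] = True           # a "cab" bump on top

skeleton = skeletonize(mask)        # one-pixel-wide medial representation

# The skeleton pixels (row, col) could feed a curve-matching step against
# skeletons stored when the vehicle first entered the secured area.
coords = np.argwhere(skeleton)
print("skeleton pixels:", len(coords))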

Trajectory Analytics

Description: The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both the spatial and temporal information of the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations that can be used to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to obtain an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to a string called a pairwise trajectory-segment relationship sequence. From a group of such sequences, we use an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities. (A brief relation-tokenization sketch follows this record.)
Date: May 2015
Creator: Santiteerakul, Wasana
Partner: UNT Libraries
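
The relation labels below (approaching, receding, stable) are hypothetical and are not the thesis's atomic relation set; the sketch only illustrates how a pair of concurrent trajectories might be turned into a pairwise relationship string, which could then be clustered (e.g., with k-medians over a string distance).

# A minimal sketch (the relation labels are hypothetical, not the thesis's
# atomic relations): turn a pair of concurrent trajectories into a token
# string by comparing the inter-object distance over fixed-length segments.
import numpy as np

def pairwise_relation_sequence(traj_a, traj_b, seg_len=5, eps=0.5):
    """traj_a, traj_b: arrays of shape (T, 2) sampled at the same timestamps."""
    tokens = []
    T = min(len(traj_a), len(traj_b))
    dist = np.linalg.norm(traj_a[:T] - traj_b[:T], axis=1)
    for start in range(0, T - seg_len + 1, seg_len):
        delta = dist[start + seg_len - 1] - dist[start]
        if delta < -eps:
            tokens.append("A")        # approaching
        elif delta > eps:
            tokens.append("R")        # receding
        else:
            tokens.append("S")        # roughly stable distance
    return "".join(tokens)

# Example: one object moves toward, then away from, a stationary object.
t = np.linspace(0, 1, 30)
a = np.column_stack([10 * np.abs(t - 0.5), np.zeros_like(t)])
b = np.zeros((30, 2))
print(pairwise_relation_sequence(a, b))   # prints "AAARRR": approach, then recede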

Trajectories As a Unifying Cross Domain Feature for Surveillance Systems

Description: Manual video analysis is a tedious task, and an efficient solution to automate the process and assist operators is highly important. A major goal of video analysis is understanding and recognizing human activities captured by surveillance cameras, a very challenging problem; the activities can be either individual or interactions among multiple objects. It involves extracting relevant spatial and temporal information from visual images. Most video analytics systems are constrained to specific environmental situations, and different domains may require specific knowledge to express the characteristics of interesting events. Spatio-temporal trajectories have been utilized to capture the motion characteristics of activities. The focus of this dissertation is on how trajectories can be utilized in developing video analytic systems in the context of surveillance. The research begins with real-time highway traffic monitoring and dynamic traffic pattern analysis and, in the end, generalizes the knowledge to event and activity analysis in a broader context. The main contributions are: the use of the graph-theoretic dominant-set approach for the classification of traffic trajectories; the ability to first partition trajectory clusters using entry- and exit-point awareness, which significantly improves clustering effectiveness and reduces the computational time and complexity of on-line processing of new trajectories; a novel tracking method that uses an extended 3-D Hungarian algorithm with a Kalman filter to preserve the smoothness of motion; a novel camera calibration method that determines the second vanishing point with no operator assistance; and a logic reasoning framework together with a new set of context-free LLEs that can be utilized across different domains. Additional efforts have been made toward three comprehensive surveillance systems built on the main contributions mentioned above. (A brief data-association sketch follows this record.)
Date: December 2014
Creator: Wan, Yiwen
Partner: UNT Libraries
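
The sketch below is a standard 2-D data-association step, not the dissertation's extended 3-D Hungarian algorithm: Kalman-predicted track positions are matched to new detections with SciPy's Hungarian solver, and a distance gate discards implausible matches. The gate value and coordinates are made up.

# A minimal sketch (not the dissertation's extended 3-D variant): associate
# new detections with tracks by running the Hungarian algorithm on distances
# between Kalman-predicted track positions and detected positions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted_positions, detections, gate=30.0):
    """predicted_positions: (N, 2) Kalman predictions; detections: (M, 2) points.
    Returns a list of (track_index, detection_index) matches within the gate."""
    cost = np.linalg.norm(predicted_positions[:, None, :] - detections[None, :, :],
                          axis=2)                      # (N, M) distance matrix
    rows, cols = linear_sum_assignment(cost)           # optimal one-to-one assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Example with three predicted tracks and three detections (one is a clutter point).
predicted = np.array([[100.0, 50.0], [200.0, 80.0], [300.0, 20.0]])
detected = np.array([[203.0, 84.0], [98.0, 52.0], [500.0, 500.0]])
print(associate(predicted, detected))   # [(0, 1), (1, 0)]; track 2 goes unmatched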

Video Analytics with Spatio-Temporal Characteristics of Activities

Description: As video capturing devices become more ubiquitous, from surveillance cameras to smart phones, the demand for automated video analysis is increasing as never before. One obstacle in this process is to efficiently locate where a human operator’s attention should be, and another is to determine the specific types of activities or actions without ambiguity. The special interest of this dissertation is to locate spatial and temporal regions of interest in videos and to develop a better action representation for video-based activity analysis. This dissertation follows the scheme of “locating then recognizing” activities of interest in videos, i.e., the locations of potentially interesting activities are estimated before performing in-depth analysis. Theoretical properties of regions of interest in videos are first examined, based on which a unifying framework is proposed to locate both spatial and temporal regions of interest with the same parameter settings. The approach estimates the distribution of motion based on 3D structure tensors and locates regions of interest according to persistent occurrences of low probability. Two further contributions are made to better represent the actions. The first is a unifying model of spatio-temporal relationships between reusable mid-level actions which bridge low-level pixels and high-level activities. Dense trajectories are clustered to construct mid-level actionlets, and the temporal relationships between actionlets are modeled as Action Graphs based on Allen interval predicates. The second is a novel and efficient representation of action graphs based on a sparse coding framework. Action graphs are first represented using Laplacian matrices and then decomposed as a linear combination of primitive dictionary items following a sparse coding scheme. The optimization is eventually formulated and solved as a determinant maximization problem, and 1-nearest-neighbor is used for action classification. The experiments have shown better results than existing approaches for regions-of-interest detection and action ... (A brief structure-tensor sketch follows this record.)
Date: May 2015
Creator: Cheng, Guangchun
Partner: UNT Libraries
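
The sketch below illustrates only the 3D structure tensor ingredient: it builds the smoothed space-time gradient tensor over a small video volume and uses its smallest eigenvalue as a crude motion-saliency score. The smoothing scale and the saliency measure are assumptions, not the dissertation's settings.

# A minimal sketch (parameters are illustrative, not the dissertation's):
# compute a 3D structure tensor over a video volume V[t, y, x] and use the
# smallest eigenvalue at each pixel as a crude motion-saliency score.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_saliency(volume, sigma=1.5):
    gt, gy, gx = np.gradient(volume.astype(float))      # temporal and spatial gradients
    # Six distinct entries of the symmetric 3x3 tensor, smoothed over a local window.
    J = {}
    for name, u, v in [("tt", gt, gt), ("ty", gt, gy), ("tx", gt, gx),
                       ("yy", gy, gy), ("yx", gy, gx), ("xx", gx, gx)]:
        J[name] = gaussian_filter(u * v, sigma)
    T, Y, X = volume.shape
    saliency = np.zeros_like(volume, dtype=float)
    for t in range(T):
        for y in range(Y):
            for x in range(X):
                M = np.array([[J["tt"][t, y, x], J["ty"][t, y, x], J["tx"][t, y, x]],
                              [J["ty"][t, y, x], J["yy"][t, y, x], J["yx"][t, y, x]],
                              [J["tx"][t, y, x], J["yx"][t, y, x], J["xx"][t, y, x]]])
                saliency[t, y, x] = np.linalg.eigvalsh(M)[0]   # smallest eigenvalue
    return saliency

# Example on a tiny synthetic volume containing a moving bright blob.
vol = np.zeros((8, 16, 16))
for t in range(8):
    vol[t, 6:9, t:t + 3] = 1.0
print(structure_tensor_saliency(vol).max())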

An Empirical Study of How Novice Programmers Use the Web

Description: Students often use the web as a source of help for problems that they encounter on programming assignments. In this work, we seek to understand how students use the web to search for help on their assignments. We used a mixed-methods approach with 344 students who completed a survey and 41 students who participated in focus group meetings and helped in recording data about their search habits. The survey reveals data about students' reported search habits, while the focus group uses a web browser plug-in to record actual search patterns. We examine the results collectively and broken down by class year. Survey results show that at least two-thirds of the students from each class year rely on search engines to locate resources for help with their programming bugs in at least half of their assignments; that search habits vary by class year; and that the value of different types of resources, such as tutorials and forums, varies by class year. Focus group results expose the web sites that students use most frequently in solving their programming assignments.
Date: May 2016
Creator: Tula, Naveen
Partner: UNT Libraries

Simulink(R) Based Design and Implementation of a Solar Power Based Mobile Charger

Description: Electrical energy is used world-wide at a rate of approximately 15 terawatts. Generating this much energy has become a primary concern for all nations. There are many ways of generating energy, among which the most commonly used are non-renewable and will be exhausted much sooner than expected. Very active research is going on both to increase the use of renewable energy sources and to use the available energy more efficiently. Among these sources, solar energy is the most abundant and has received great attention. The mobile phone has become one of the basic needs of modern life, with almost every person having one. Individually a mobile phone consumes little power, but collectively this becomes very large. This consideration motivated the research undertaken in this master's thesis. The objective of this thesis is to design a model of a solar-power-based charging circuit for mobile phones using Simulink(R). The thesis explains the design procedure, which includes models for the photovoltaic array, maximum power point tracker, pulse width modulator, DC-DC converter, and battery. The first part of the thesis concentrates on the electron-level behavior of a solar cell, its structure, and its electrical model. The second part designs an array of solar cells to generate the desired output. Finally, the third part designs a DC-DC converter that can stabilize and provide the required input to the battery with the help of the maximum power point tracker and pulse width modulation. The resulting DC-DC converter is adjustable to meet the requirements of the battery. The design is aimed at charging a lithium-ion battery with a nominal voltage of 3.7 V, which can be taken as a baseline for charging different types of batteries with different nominal voltages. (A brief MPPT sketch follows this record.)
Date: May 2016
Creator: Mukka, Manoj Kumar
Partner: UNT Libraries
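
The thesis builds these blocks in Simulink(R); the Python sketch below only illustrates what a perturb-and-observe maximum power point tracker does, on a made-up photovoltaic P-V curve with an assumed voltage step.

# A generic perturb-and-observe MPPT loop, shown only to illustrate what the
# maximum power point tracker block does; the thesis models this in
# Simulink(R), and the PV curve and step size here are hypothetical.
def pv_power(v):
    """Toy photovoltaic P-V curve with a single maximum power point."""
    i = max(0.0, 3.0 * (1.0 - (v / 21.0) ** 8))   # crude current roll-off near open circuit
    return v * i

def perturb_and_observe(v=12.0, step=0.2, iters=200):
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iters):
        v += direction * step                     # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                            # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, ~{p_mpp:.1f} W")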

Classifying Pairwise Object Interactions: A Trajectory Analytics Approach

Description: We have a huge amount of video data from widely available surveillance cameras and a growing capability to record the motion of a moving object in the form of trajectory data. With the proliferation of location-enabled devices, ongoing growth in smartphone penetration, and advancements in image processing techniques, tracking moving objects has become increasingly reliable. In this work, we explore domain-independent qualitative and quantitative features in raw trajectory (spatio-temporal) data from videos captured by a single fixed wide-angle-view camera in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms, and we show how each feature affects the learning algorithms’ overall accuracy as a single factor or confounded with others. (A brief classification-workflow sketch follows this record.)
Date: May 2015
Creator: Janmohammadi, Siamak
Partner: UNT Libraries
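
The abstract does not name the features, action classes, or learners, so the sketch below is only a workflow illustration: hypothetical pairwise-trajectory features (distance and speed statistics) feed an off-the-shelf classifier with cross-validation.

# A workflow sketch with hypothetical features and labels (the thesis's actual
# features, action classes, and learners are not specified here): extract
# pairwise-trajectory features and train an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pair_features(traj_a, traj_b):
    """Simple quantitative features of a trajectory pair sampled at equal times."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)          # inter-object distance
    va = np.linalg.norm(np.diff(traj_a, axis=0), axis=1) # speeds of object A
    vb = np.linalg.norm(np.diff(traj_b, axis=0), axis=1) # speeds of object B
    return [d.mean(), d.min(), d[-1] - d[0], va.mean(), vb.mean()]

rng = np.random.default_rng(0)
X, y = [], []
for label in range(4):                                   # four toy action classes
    for _ in range(30):
        a = np.cumsum(rng.normal(size=(50, 2)), axis=0)
        b = np.cumsum(rng.normal(size=(50, 2)), axis=0) + label * 5.0
        X.append(pair_features(a, b))
        y.append(label)

scores = cross_val_score(RandomForestClassifier(n_estimators=100), np.array(X), y, cv=5)
print("cross-validated accuracy:", scores.mean())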

Exploration of Visual, Acoustic, and Physiological Modalities to Complement Linguistic Representations for Sentiment Analysis

Description: This research is concerned with the identification of sentiment in multimodal content. This is of particular interest given the increasing presence of subjective multimodal content on the web and other sources, which contains a rich and vast source of people's opinions, feelings, and experiences. Despite the need for tools that can identify opinions in the presence of diverse modalities, most current methods for sentiment analysis are designed for textual data only, and few attempts have been made to address this problem. The dissertation investigates techniques for augmenting linguistic representations with acoustic, visual, and physiological features. The potential benefits of using these modalities include linguistic disambiguation, visual grounding, and the integration of information about people's internal states. The main goal of this work is to build computational resources and tools that allow sentiment analysis to be applied to multimodal data. This thesis makes three important contributions. First, it shows that modalities such as audio, video, and physiological data can be successfully used to improve existing linguistic representations for sentiment analysis. We present a method that integrates linguistic features with features extracted from these modalities. Features are derived from verbal statements, audiovisual recordings, thermal recordings, and physiological sensor signals. The resulting multimodal sentiment analysis system is shown to significantly outperform the use of language alone. Using this system, we were able to predict the sentiment expressed in video reviews and also the sentiment experienced by viewers while exposed to emotionally loaded content. Second, the thesis provides evidence of the portability of the developed strategies to other affect recognition problems; we provide support for this by studying the deception detection problem. Third, this thesis contributes several multimodal datasets that will enable further research in sentiment and deception detection. (A brief feature-fusion sketch follows this record.)
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2014
Creator: Pérez-Rosas, Verónica
Partner: UNT Libraries
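
The dissertation's actual features and fusion strategy are not detailed in the record above; the sketch below shows only simple feature-level (early) fusion, concatenating made-up textual, acoustic, and physiological feature vectors before training a classifier.

# A simple feature-level (early) fusion sketch with made-up feature arrays;
# the dissertation's actual features and fusion strategy are not shown here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 50))      # e.g., bag-of-words or embedding features
audio_feats = rng.normal(size=(n, 12))     # e.g., pitch/energy statistics
physio_feats = rng.normal(size=(n, 6))     # e.g., heart rate / skin conductance stats
labels = rng.integers(0, 2, size=n)        # sentiment: 0 = negative, 1 = positive

# Early fusion: concatenate per-utterance feature vectors from all modalities.
X = np.hstack([text_feats, audio_feats, physio_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = LinearSVC().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))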

Smartphone-based Household Travel Survey - a Literature Review, an App, and a Pilot Survey

Description: High-precision data from household travel surveys (HTS) is extremely important for transportation research, traffic models, and policy formulation. Traditional methods of data collection were imprecise because they relied on people’s memories of trip information, such as date and location, and the remaining data had to be obtained with supplemental tools. The traditional methods suffered from intensive labor, large time consumption, and unsatisfactory data precision. Recent research has moved toward employing smartphone apps to collect HTS data. In this study, two goals are addressed. First, a smartphone app is developed to realize a smartphone-based method for data collection only. Second, the researcher evaluates whether this method can supplement or replace the traditional tools of HTS. Based on this premise, the smartphone app, TravelSurvey, is specially developed and used for this study. TravelSurvey is currently compatible with iPhone 4 or higher and iPhone Operating System (iOS) 6 or higher, except iPhone 6 or iPhone 6 Plus and iOS 8. To evaluate feasibility, eight individuals were recruited to participate in a pilot HTS. Afterwards, seven of them were involved in a semi-structured interview. The interview was designed to collect interviewees’ feedback directly, so it mainly concerns the users’ experience of TravelSurvey. Generally, the feedback is positive: the pilot HTS data was successfully uploaded to the server by the participants, and the interviewees preferred the smartphone-based method. Therefore, as a new tool, the smartphone-based method feasibly supports a typical HTS for data collection.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2014
Creator: Wang, Qian
Partner: UNT Libraries

Monitoring Dengue Outbreaks Using Online Data

Description: Internet technology has affected humans' lives in many disciplines. The search engine is one of the most important Internet tools in that it allows people to search for what they want. Search queries entered in a web search engine can be used to predict dengue incidence. This vector-borne disease causes severe illness and kills a large number of people every year. This dissertation utilizes the capabilities of search queries related to dengue and climate to forecast the number of dengue cases. Several machine learning techniques are applied for data analysis, including Multiple Linear Regression, Artificial Neural Networks, and the Seasonal Autoregressive Integrated Moving Average (SARIMA) model. The predictive models produced by these machine learning methods are measured for their performance to find which technique generates the best model for dengue prediction. The results of the experiments presented in this dissertation indicate that search query data related to dengue and climate can be used to forecast the number of dengue cases. The performance measurements of the predictive models show that Artificial Neural Networks outperform the others. These results will help public health officials plan for dealing with outbreaks. (A brief regression sketch follows this record.)
Date: May 2014
Creator: Chartree, Jedsada
Partner: UNT Libraries
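
The actual query terms, climate variables, and lags are not given in the record above, so the sketch below is only a workflow illustration: a multiple linear regression of weekly dengue cases on hypothetical search-volume and climate features with a one-week lag.

# A workflow sketch with hypothetical columns (the dissertation's actual query
# terms, climate variables, and lags are not shown here): fit a multiple
# linear regression of weekly dengue cases on search-volume and climate data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical weekly data: search volumes, rainfall, temperature, case counts.
rng = np.random.default_rng(3)
weeks = 156
df = pd.DataFrame({
    "query_dengue_fever": rng.poisson(40, weeks),
    "query_mosquito_bite": rng.poisson(25, weeks),
    "rainfall_mm": rng.gamma(2.0, 30.0, weeks),
    "mean_temp_c": 27 + 3 * np.sin(np.arange(weeks) * 2 * np.pi / 52),
})
df["cases"] = (0.8 * df["query_dengue_fever"] + 0.2 * df["rainfall_mm"]
               + rng.normal(0, 5, weeks)).clip(lower=0)

# Forecast next week's cases from this week's predictors (a one-week lag).
X = df.drop(columns="cases").iloc[:-1]
y = df["cases"].shift(-1).dropna()

split = int(0.8 * len(X))
model = LinearRegression().fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
print("test MAE:", mean_absolute_error(y.iloc[split:], pred))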

Investigation on Segmentation, Recognition and 3D Reconstruction of Objects Based on LiDAR Data or MRI

Description: Segmentation, recognition, and 3D reconstruction of objects have been cutting-edge research topics with many applications, ranging from environmental, medical, and geographical applications to intelligent transportation. In this dissertation, I focus on the segmentation, recognition, and 3D reconstruction of objects using LiDAR data or MRI. The three main works are: (I) A feature extraction algorithm for sparse LiDAR data. A novel method has been proposed for feature extraction from sparse LiDAR data; the algorithm and the related principles are described, and the choices and roles of parameters are tested and discussed. By directly using the correlation of neighboring points, the statistical distribution of normal vectors at each point is effectively used to determine the category of the selected point. (II) Segmentation and 3D reconstruction of objects based on LiDAR/MRI. In the proposed method, the 3D LiDAR data are layered, different categories are segmented, and 3D canopy surfaces of individual tree crowns and clusters of trees are reconstructed from LiDAR point data based on a region active contour model. The proposed method allows 3D forest canopy to be delineated naturally from the contours of raw LiDAR point clouds, and the model is suitable not only for a series of ideal cone shapes but also for other kinds of 3D shapes and other kinds of datasets such as MRI. (III) Novel algorithms for the recognition of objects based on LiDAR/MRI. For sparse LiDAR data, the feature extraction algorithm has been proposed and applied to classify buildings and trees. More importantly, novel algorithms based on level set methods have been provided and employed to recognize not only buildings and trees, and different tree species (e.g., oak trees and Douglas firs), but also the subthalamic nuclei (STNs). By using the novel algorithms based ... (A brief normal-estimation sketch follows this record.)
Date: May 2015
Creator: Tang, Shijun
Partner: UNT Libraries
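
The sketch below is not the dissertation's algorithm; it shows a standard way to estimate per-point normal vectors in a LiDAR point cloud via PCA over k nearest neighbors, the kind of normals whose statistical distribution the abstract describes using.

# A standard sketch (not the dissertation's algorithm): estimate per-point
# normal vectors in a point cloud via PCA over k nearest neighbors; such
# normals can feed the normal-vector statistics the abstract describes.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """points: (N, 3) array of LiDAR returns. Returns (N, 3) unit normals."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)                 # indices of the k nearest neighbors
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / k
        eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        normals[i] = eigvecs[:, 0]                  # normal = smallest-variance direction
    return normals

# Example: a noisy horizontal ground patch should yield near-vertical normals.
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, 500),
                          rng.uniform(0, 10, 500),
                          rng.normal(0, 0.02, 500)])
n = estimate_normals(ground)
print("mean |n_z|:", np.abs(n[:, 2]).mean())        # close to 1 for flat ground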

Computational Methods for Discovering and Analyzing Causal Relationships in Health Data

Description: Publicly available datasets in health science are often large and observational, in contrast to experimental datasets, where a small number of data points are collected in controlled experiments. The causal relationships among variables in observational datasets are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships will greatly help medical professionals prevent diseases or mitigate their negative effects. Recent advances in computer science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered by learning network structures from data. However, the number of candidate graphs grows at a more than exponential rate with the number of variables, so exact learning of the optimal structure is computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the difficulty of computation. This research provides effective and efficient learning tools for local causal discovery and novel methods for learning causal structures in combination with background knowledge. Specifically, in the direction of constraint-based structure learning, polynomial-time algorithms for constructing causal structures are designed using first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data. Following the research methods and experiments, this dissertation discusses the reliability of causal discovery in computational health science research, its complexity, and its implications for health science research. (A brief conditional-independence-test sketch follows this record.)
Date: August 2015
Creator: Liang, Yiheng
Partner: UNT Libraries
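
The sketch below is not the dissertation's algorithm; it shows one common first-order conditional independence test, a partial-correlation test that assumes roughly linear-Gaussian data, of the kind constraint-based structure learning builds on.

# A minimal sketch (not the dissertation's algorithms): a first-order
# conditional independence test via partial correlation, assuming roughly
# linear-Gaussian relationships among the variables.
import numpy as np
from scipy import stats

def cond_indep(x, y, z, alpha=0.05):
    """Test X independent of Y given Z using the partial correlation of x and y given z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    pr = (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))
    # Fisher z-transform; with one conditioning variable the statistic uses sqrt(n - 4).
    n = len(x)
    z_stat = 0.5 * np.log((1 + pr) / (1 - pr)) * np.sqrt(n - 4)
    p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p_value > alpha                 # True -> fail to reject independence

# Example: a common cause Z of X and Y makes X and Y dependent, but independent given Z.
rng = np.random.default_rng(0)
z = rng.normal(size=5000)
x = 0.8 * z + rng.normal(scale=0.5, size=5000)
y = 0.8 * z + rng.normal(scale=0.5, size=5000)
print(cond_indep(x, y, z))                 # expected: True (independent given Z)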

A Computational Methodology for Addressing Differentiated Access of Vulnerable Populations During Biological Emergencies

Description: Mitigation response plans must be created to protect affected populations during biological emergencies resulting from the release of harmful biochemical substances. Medical countermeasures have been stockpiled by the federal government for such emergencies. However, it is the responsibility of local governments to maintain solid, functional plans to apply these countermeasures to the entire target population within short, mandated time frames. Further, vulnerabilities in the population may serve as barriers preventing certain individuals from participating in mitigation activities. Therefore, functional response plans must be capable of reaching vulnerable populations. Transportation vulnerability results from a lack of access to transportation. Transportation-vulnerable populations located too far from mitigation resources are at risk of not being able to participate in mitigation activities. Quantifying these populations requires the development of computational methods to integrate spatial demographic data and transportation resource data from disparate sources into the context of planned mitigation efforts. The research described in this dissertation focuses on quantifying transportation-vulnerable populations and maximizing participation in response efforts. Algorithms developed as part of this research are integrated into a computational framework to promote a transition from research and development to deployment and use by biological emergency planners.
Date: August 2014
Creator: O’Neill II, Martin Joseph
Partner: UNT Libraries

Uncertainty Evaluation in Large-scale Dynamical Systems: Theory and Applications

Description: Significant research efforts have been devoted to large-scale dynamical systems, with the aim of understanding their complicated behaviors and managing their responses in real time. One pivotal technological obstacle in this process is the existence of uncertainty. Although many of these large-scale dynamical systems function well in the design stage, they may easily fail when operating in realistic environments, where environmental uncertainties modulate system dynamics and complicate real-time prediction and management tasks. This dissertation aims to develop systematic methodologies to evaluate the performance of large-scale dynamical systems under uncertainty, as a step toward real-time decision support. Two uncertainty evaluation approaches are pursued: the analytical approach and the effective simulation approach. The analytical approach abstracts the dynamics of the original stochastic systems and develops tractable analysis (e.g., jump-linear analysis) for the approximated systems. Despite the potential bias introduced in the approximation process, the analytical approach provides rich insights valuable for evaluating and managing the performance of large-scale dynamical systems under uncertainty. When a system’s complexity and scale are beyond tractable analysis, the effective simulation approach becomes very useful. This approach aims to use a few smartly selected simulations to quickly evaluate a complex system’s statistical performance. The approach was originally developed to evaluate a single uncertain variable; this dissertation extends it to be scalable and effective for evaluating large-scale systems under a large number of uncertain variables. While a large portion of this dissertation focuses on the development of generic methods and theoretical analysis applicable to broad classes of large-scale dynamical systems, many results are illustrated through a representative large-scale application in strategic air traffic management, which is concerned with designing robust management plans subject to a wide range of weather possibilities at 2-15 hour look-ahead times. (A brief scenario-selection sketch follows this record.)
Date: December 2014
Creator: Zhou, Yi
Partner: UNT Libraries
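
The dissertation's scenario-selection rule is not given in the record above, so the sketch below is only a loose illustration of the idea: a toy system's expected performance is estimated both by brute-force Monte Carlo and from a handful of quantile-selected scenarios of a single uncertain variable.

# A purely illustrative sketch (the dissertation's selection rule is not given
# here): estimate a toy system's expected performance either by brute-force
# Monte Carlo or from a few quantile-selected scenarios of one uncertain
# variable (here, a "weather severity" parameter).
import numpy as np
from scipy.stats import norm

def simulate(weather_severity):
    """Toy 'large-scale system' response: delay grows nonlinearly with severity."""
    return 10.0 + 4.0 * weather_severity + 2.0 * weather_severity**2

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.4                        # assumed weather-severity distribution

# Brute-force Monte Carlo: many simulations.
mc_estimate = simulate(rng.normal(mu, sigma, 100_000)).mean()

# A few quantile-selected scenarios, equally weighted here for simplicity
# (a real method would pick evaluation points and weights more carefully).
qs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
scenarios = norm.ppf(qs, loc=mu, scale=sigma)
few_sim_estimate = simulate(scenarios).mean()

print("Monte Carlo (100k runs):", round(mc_estimate, 3))
print("5 selected scenarios:   ", round(few_sim_estimate, 3))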