30 Matching Results

Search Results


Boosting for Learning From Imbalanced, Multiclass Data Sets

Description: In many real-world applications, it is common to have an uneven number of examples among multiple classes. This data imbalance usually complicates the learning process, especially for the minority classes, and results in deteriorated performance. Boosting methods have been proposed to handle the imbalance problem, but they need prolonged training time and require diversity among the classifiers of the ensemble to achieve improved performance. Additionally, extending boosting to handle multi-class data sets is not straightforward. Examples of applications that suffer from imbalanced multi-class data can be found in face recognition, where tens of classes exist, and in capsule endoscopy, which suffers massive imbalance between the classes. This dissertation introduces RegBoost, a new boosting framework to address imbalanced, multi-class problems. The method applies a weighted stratified sampling technique and incorporates a regularization term that accommodates multi-class data sets and automatically determines the error bound of each base classifier. The regularization parameter penalizes the classifier when it misclassifies instances that were correctly classified in the previous iteration; it additionally reduces the bias towards majority classes. Experiments are conducted using 12 diverse data sets with moderate to high imbalance ratios. The results demonstrate superior performance of the proposed method compared to several state-of-the-art algorithms for imbalanced, multi-class classification problems. More importantly, the sensitivity improvement of the minority classes using RegBoost is accompanied by an improvement of the overall accuracy for all classes. With unpredictability regularization, a diverse group of classifiers is created and the maximum accuracy improvement reaches above 24%. Using stratified undersampling, RegBoost exhibits the best efficiency, with a reduction in computational cost above 50%. As the volume of training data increases, the efficiency gain of the proposed method becomes more significant.
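The weight-update idea described above can be illustrated with a minimal boosting loop in Python. This is only a generic sketch inspired by the description, not the published RegBoost algorithm; the decision-stump base learner and the `penalty` factor applied to instances that regress from correct to incorrect are illustrative assumptions.

```python
# Minimal sketch of a boosting-style weight update with an extra penalty for
# instances that were correct in the previous round but are misclassified now
# (illustrative only; not the published RegBoost algorithm).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosted_ensemble(X, y, rounds=10, penalty=2.0):
    n = len(y)
    w = np.full(n, 1.0 / n)               # per-instance weights
    prev_correct = np.zeros(n, dtype=bool)
    ensemble = []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        wrong = pred != y
        regressed = prev_correct & wrong   # correct last round, wrong now
        w[wrong] *= 2.0                    # usual boosting emphasis
        w[regressed] *= penalty            # regularization-style extra penalty
        w /= w.sum()                       # renormalize
        prev_correct = ~wrong
        ensemble.append(stump)
    return ensemble
```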
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2013
Creator: Abouelenien, Mohamed
Partner: UNT Libraries

Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures

Description: Hand and arm gestures are an effective way to communicate when you don't want to be heard: quieter and often more reliable than whispering into a radio microphone. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of this work is to develop an integrated sensor system that enables tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work is the standardized set of hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions, measured by sensors attached directly to the skin. Bend sensors use a piezoresistive material to detect bending; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. EMG sensors are placed on the upper and lower forearm and assist in the classification of finger and wrist movements. Bend sensors are mounted on a glove worn on the hand; they are located over the first knuckle of each finger and can determine whether the finger is bent. An accelerometer is attached to the glove at the base of the wrist and determines the speed and direction of the arm movement. A support vector machine (SVM) classification algorithm is used to classify the gestures.
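As a rough illustration of the classification step, the sketch below concatenates window-level features from the EMG, flex, and accelerometer channels and trains an SVM with scikit-learn. The specific features (mean absolute EMG, raw bend values, mean acceleration) and the array shapes are assumptions, not the thesis's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(emg, flex, accel):
    """emg: (n_samples, n_emg), flex: (n_flex,), accel: (n_samples, 3)."""
    return np.concatenate([
        np.mean(np.abs(emg), axis=0),   # mean absolute EMG per channel
        flex,                           # current bend value per finger
        accel.mean(axis=0),             # mean acceleration per axis
    ])

# Synthetic training windows and gesture labels (illustrative shapes only)
X = np.vstack([window_features(np.random.randn(200, 4),
                               np.random.rand(5),
                               np.random.randn(200, 3)) for _ in range(40)])
y = np.repeat(np.arange(4), 10)         # four gesture classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```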
Date: May 2013
Creator: Akumalla, Sarath Chandra
Partner: UNT Libraries

Space and Spectrum Engineered High Frequency Components and Circuits

Description: With the increasing demand for wireless and portable devices, radio frequency front-end blocks are required to feature properties such as wide bandwidth, high operating frequency, multiple operating frequencies, low cost, and compact size. However, current radio frequency system blocks are designed by combining several individual frequency-band blocks into one functional block, which increases the cost and size of devices. To address these issues, it is important to develop novel approaches that further advance current design methodologies in both the space and spectrum domains. In recent years, the concept of artificial materials has been proposed and studied intensively in the RF/microwave, terahertz, and optical frequency ranges. An artificial material is a combination of conventional materials such as air, wood, metal, and plastic, and it can achieve material properties not found in nature. Artificial materials (i.e., metamaterials) therefore provide design freedom to control both the spectral performance and the geometrical structure of radio frequency front-end blocks and other high frequency systems. In this dissertation, several artificial materials are proposed and designed by different methods, and their applications to different high frequency components and circuits are studied. First, the quasi-conformal mapping (QCM) method is applied to design plasmonic wave-adapters and couplers working in the optical frequency range. Second, an inverse QCM method is proposed to implement flattened Luneburg lens antennas and parabolic antennas in the microwave range. Third, a dual-band compact directional coupler is realized by applying artificial transmission lines. In addition, a fully symmetrical coupler with an artificial lumped element structure is also implemented. Finally, a tunable on-chip inductor, compact CMOS transmission lines, and metamaterial-based interconnects are proposed using artificial metal structures. All the proposed designs are simulated in full-wave 3D electromagnetic solvers, and the measurement results agree well with the simulation results. These artificial material-based novel design methodologies pave the way ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2015
Creator: Arigong, Bayaner
Partner: UNT Libraries

Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures

Description: In recent years, brain computer interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer inexpensive consumer-grade BCIs, and some of these companies have recently introduced the concept of BCI "App stores" to facilitate the expansion of BCI applications, providing software development kits (SDKs) for other developers to create new applications for their devices. BCI applications have access to users' unique brainwave signals, which consequently allows them to make inferences about users' thoughts and mental processes. Since there are no specific standards that govern the development of BCI applications, their users are at risk of privacy breaches. In this work, we perform the first comprehensive analysis of BCI App stores, including software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy issues. The goal is to understand how brainwave signals are handled by BCI applications and what threats to user privacy exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without them even noticing. We discuss potential privacy threats posed by current practices in BCI App stores and then describe countermeasures that could mitigate those threats. We also develop a prototype that gives BCI app users the choice to dynamically restrict access to their brain signals.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2017
Creator: Bhalotiya, Anuj Arun
Partner: UNT Libraries

Freeform Cursive Handwriting Recognition Using a Clustered Neural Network

Description: Optical character recognition (OCR) software has advanced greatly in recent years. Machine-printed text can be scanned and converted to searchable text with word accuracy rates around 98%. Reasonably neat hand-printed text can be recognized with about 85% word accuracy. However, cursive handwriting still remains a challenge, with state-of-the-art performance still around 75%. Algorithms based on hidden Markov models have been only moderately successful, while recurrent neural networks have delivered the best results to date. This thesis explored the feasibility of using a special type of feedforward neural network to convert freeform cursive handwriting to searchable text. The hidden nodes in this network were grouped into clusters, with each cluster being trained to recognize a unique character bigram. The network was trained on writing samples that were pre-segmented and annotated. Post-processing was facilitated in part by using the network to identify overlapping bigrams that were then linked together to form words and sentences. With dictionary assisted post-processing, the network achieved word accuracy of 66.5% on a small, proprietary corpus. The contributions in this thesis are threefold: 1) the novel clustered architecture of the feed-forward neural network, 2) the development of an expanded set of observers combining image masks, modifiers, and feature characterizations, and 3) the use of overlapping bigrams as the textual working unit to assist in context analysis and reconstruction.
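The post-processing step described above links overlapping character bigrams into words. A minimal sketch of that linking idea, assuming the network has already produced an ordered list of recognized bigrams:

```python
def link_bigrams(bigrams):
    """Chain bigrams whose trailing character overlaps the next bigram's
    leading character, e.g. ["th", "he", "er", "re"] -> "there"."""
    if not bigrams:
        return ""
    word = bigrams[0]
    for bg in bigrams[1:]:
        if word[-1] == bg[0]:     # overlapping character: extend by one letter
            word += bg[1]
        else:                     # no overlap: simply append
            word += bg
    return word

print(link_bigrams(["th", "he", "er", "re"]))  # -> "there"
```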
Date: August 2015
Creator: Bristow, Kelly H.
Partner: UNT Libraries

Incremental Learning with Large Datasets

Description: This dissertation focuses on a novel learning strategy based on geometric support vector machines to address the difficulties of processing immense data sets. Support vector machines find the hyperplane that maximizes the margin between two classes; because the decision boundary is represented with only a few training samples, they are a favorable choice for incremental learning. The dissertation presents a novel method, Geometric Incremental Support Vector Machines (GISVM), to address both efficiency and accuracy issues in handling massive data sets. In GISVM, the skin of the convex hulls is defined, and an efficient method is designed to find the best skin approximation given the available examples. The set of extreme points is found by recursively searching along the direction defined by a pair of known extreme points. By identifying the skin of the convex hulls, incremental learning employs a much smaller number of samples with comparable or even better accuracy. When additional samples are provided, they are used together with the skin of the convex hull constructed from the previous data set, resulting in a small number of instances used in the incremental steps of the training process. Based on experimental results with synthetic data sets, public benchmark data sets from UCI, and endoscopy videos, it is evident that GISVM produces satisfactory classifiers that closely model the underlying data distribution. GISVM improves sensitivity in the incremental steps, significantly reduces the demand for memory, and demonstrates the ability to recover from temporary performance degradation.
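The central idea above is to retain only the "skin" of each class's convex hull between increments. The sketch below stands in for that step by keeping the exact convex-hull vertices (via SciPy) before retraining a linear SVM on the retained points plus the new chunk; the published method uses a recursive skin approximation, so this is only an illustrative simplification.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

def hull_skin(points):
    """Return only the convex-hull vertices of a point set.
    Assumes each class has at least dim+1 points in general position."""
    return points[ConvexHull(points).vertices]

def incremental_fit(old_skin, old_labels, new_X, new_y):
    """Train on the previous skins plus the new chunk, then re-extract skins."""
    X = np.vstack([old_skin, new_X])
    y = np.concatenate([old_labels, new_y])
    clf = SVC(kernel="linear").fit(X, y)
    skins, labels = [], []
    for c in np.unique(y):
        s = hull_skin(X[y == c])          # compact summary kept for next round
        skins.append(s)
        labels.append(np.full(len(s), c))
    return clf, np.vstack(skins), np.concatenate(labels)
```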
Date: May 2012
Creator: Giritharan, Balathasan
Partner: UNT Libraries

Autonomic Failure Identification and Diagnosis for Building Dependable Cloud Computing Systems

Description: The increasingly popular cloud-computing paradigm provides on-demand access to computing and storage with the appearance of unlimited resources. Users are given access to a variety of data and software utilities to manage their work, and they rent virtual resources and pay for only what they use. In spite of the many benefits that cloud computing promises, the lack of dependability in shared virtualized infrastructures is a major obstacle to its wider adoption, especially for mission-critical applications. Virtualization and multi-tenancy increase system complexity and dynamicity, and they introduce new sources of failure that degrade the dependability of cloud computing systems. To assure cloud dependability, in my dissertation research, I develop autonomic failure identification and diagnosis techniques that are crucial for understanding emergent, cloud-wide phenomena and for self-managing resource burdens to enhance cloud availability and productivity. We study runtime cloud performance data collected from a cloud test-bed and from traces of production cloud systems. We define cloud signatures that include the metrics most relevant to failure instances. We exploit profiled cloud performance data in both the time and frequency domains to identify anomalous cloud behaviors, and we leverage cloud metric subspace analysis to automate the diagnosis of observed failures. We implement a prototype of the anomaly identification system and conduct experiments in an on-campus cloud computing test-bed and with Google datacenter traces. Our experimental results show that the proposed anomaly detection mechanism can achieve 93% detection sensitivity while keeping the false positive rate as low as 6.1%, outperforming the other tested anomaly detection schemes. In addition, the anomaly detector adapts itself by recursively learning from newly verified detection results to refine future detection.
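For reference, the two reported figures (detection sensitivity and false positive rate) are computed from labeled detection outcomes as in the small helper below (illustrative only):

```python
def sensitivity_and_fpr(true_anomaly, predicted_anomaly):
    """true_anomaly, predicted_anomaly: parallel lists of booleans."""
    tp = sum(t and p for t, p in zip(true_anomaly, predicted_anomaly))
    fn = sum(t and not p for t, p in zip(true_anomaly, predicted_anomaly))
    fp = sum(p and not t for t, p in zip(true_anomaly, predicted_anomaly))
    tn = sum(not t and not p for t, p in zip(true_anomaly, predicted_anomaly))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall on failures
    fpr = fp / (fp + tn) if fp + tn else 0.0           # false alarms on normal data
    return sensitivity, fpr
```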
Date: May 2014
Creator: Guan, Qiang
Partner: UNT Libraries

Automatic Tagging of Communication Data

Description: Globally distributed software teams are widespread throughout industry. But finding reliable methods that can properly assess a team's activities is a real challenge. Methods such as surveys and manual coding of activities are too time consuming and are often unreliable. Recent advances in information retrieval and linguistics, however, suggest that automated and/or semi-automated text classification algorithms could be an effective way of finding differences in the communication patterns among individuals and groups. Communication among group members is frequent and generates a significant amount of data. Thus having a web-based tool that can automatically analyze the communication patterns among global software teams could lead to a better understanding of group performance. The goal of this thesis, therefore, is to compare automatic and semi-automatic measures of communication and evaluate their effectiveness in classifying different types of group activities that occur within a global software development project. In order to achieve this goal, we developed a web-based component that can be used to help clean and classify communication activities. The component was then used to compare different automated text classification techniques on various group activities to determine their effectiveness in correctly classifying data from a global software development team project.
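A minimal sketch of the kind of automated text classification pipeline compared in this work, built with scikit-learn; the example messages, activity labels, and choice of classifier are assumptions for illustration:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Example communication messages and activity labels (illustrative only)
messages = ["Fixed the build script for the EU team",
            "Can we move the standup to 9am CET?",
            "Merged the patch after review"]
labels = ["development", "coordination", "development"]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(messages, labels)
print(pipeline.predict(["Please review my pull request"]))
```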
Date: August 2012
Creator: Hoyt, Matthew Ray
Partner: UNT Libraries

Computational Methods for Vulnerability Analysis and Resource Allocation in Public Health Emergencies

Description: POD (point of dispensing)-based emergency response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, a plan may fail to serve particular vulnerable sub-populations, resulting in access disparities during emergency response. Federal authorities emphasize the need to identify sub-populations that cannot access regular services during an emergency because of their special needs, in order to ensure an effective response. Vulnerable individuals require the targeted allocation of appropriate resources to serve their special needs, and devising schemes to address those needs is essential to the effectiveness of response plans. This research focuses on data-driven computational methods to quantify and address vulnerabilities in response plans that require the allocation of targeted resources. Data-driven methods to identify and quantify vulnerabilities in response plans are developed as part of this research. Addressing vulnerabilities requires the targeted allocation of appropriate resources to PODs. The problem of resource allocation to PODs during public health emergencies is introduced, and variants of the resource allocation problem, such as spatial allocation, spatio-temporal allocation, and optimal resource subset selection, are formulated. Generating optimal resource allocation and scheduling solutions can be computationally hard, so the application of metaheuristic techniques to find near-optimal solutions to the resource allocation problem in response plans is investigated. A vulnerability analysis and resource allocation framework that facilitates the demographic analysis of population data in the context of response plans, and the optimal allocation of resources with respect to that analysis, is described.
Date: August 2015
Creator: Indrakanti, Saratchandra
Partner: UNT Libraries

A Global Stochastic Modeling Framework to Simulate and Visualize Epidemics

Description: Epidemics have caused major human and monetary losses through the course of human civilization, and it is very important that epidemiologists and public health personnel are prepared to handle an impending infectious disease outbreak. Ever-changing demographics, evolving infrastructural resources of geographic regions, and emerging and re-emerging diseases compel the use of simulation to predict disease dynamics. By means of simulation, public health personnel and epidemiologists can predict the disease dynamics, the population groups at risk, and their geographic locations beforehand, so that they are prepared to respond in case of an epidemic outbreak. Because of the large numbers of individuals and inter-personal interactions involved in simulating infectious disease spread in a region such as a county, sizeable amounts of data may be produced that have to be analyzed, and methods to visualize these data help people from diverse disciplines understand and analyze the simulation. This thesis proposes a framework to simulate and visualize the spread of an infectious disease in the population of a region such as a county. As real-world populations have a non-homogeneous demographic and spatial distribution, the framework models the spread of an infectious disease based on the populations of, and geographic distances between, census blocks, together with social behavioral parameters for demographic groups. The population is stratified into demographic groups in individual census blocks using census data. Infection spread is modeled by means of local and global contacts generated between groups of population in census blocks; the strength and likelihood of the contacts are based on the population, geographic distance, and social behavioral parameters of the groups involved. The disease dynamics are represented on a geographic map of the region using a heat map representation, where the intensity of infection is mapped to a color scale. This framework provides a tool for public health personnel and ...
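The contact model described above weights contacts between census blocks by population and geographic distance. A minimal sketch of one plausible gravity-style weighting and contact sampling follows; the exact functional form and the `decay` parameter are assumptions, not the thesis's calibrated model.

```python
import math, random

def contact_weight(pop_a, pop_b, distance_km, decay=1.0):
    """Gravity-style weight: larger populations and shorter distances make
    contact between two census blocks more likely (illustrative form)."""
    return (pop_a * pop_b) / math.exp(decay * distance_km)

def sample_contacts(blocks, n_contacts, decay=1.0):
    """blocks: list of (block_id, population, (x_km, y_km)) tuples."""
    pairs, weights = [], []
    for i, (ida, pa, (xa, ya)) in enumerate(blocks):
        for idb, pb, (xb, yb) in blocks[i + 1:]:
            d = math.hypot(xa - xb, ya - yb)
            pairs.append((ida, idb))
            weights.append(contact_weight(pa, pb, d, decay))
    # Draw inter-block contacts proportionally to their weights
    return random.choices(pairs, weights=weights, k=n_contacts)
```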
Date: May 2012
Creator: Indrakanti, Saratchandra
Partner: UNT Libraries

The Influence of Social Network Graph Structure on Disease Dynamics in a Simulated Environment

Description: The fight against epidemics/pandemics is one of man versus nature. Technological advances have not only improved existing methods for monitoring and controlling disease outbreaks, but have also provided new means of investigation, such as modeling and simulation. This dissertation explores the relationship between social structure and disease dynamics. Social structures are modeled as graphs, and outbreaks are simulated based on a well-recognized standard, the susceptible-infectious-removed (SIR) paradigm. Two independent but related studies are presented. The first involves measuring the severity of outbreaks as social network parameters are altered. The second investigates the efficacy of various vaccination policies based on social structure. Three disease-related centrality measures are introduced: contact, transmission, and spread centrality, which are related to the previously established centrality measures of degree, betweenness, and closeness, respectively. The results of the experiments presented in this dissertation indicate that reducing the neighborhood size along with outside-of-neighborhood contacts diminishes the severity of disease outbreaks. Vaccination strategies can effectively reduce these parameters. Additionally, vaccination policies that target individuals with high centrality are generally shown to be slightly more effective than a random vaccination policy. These results, combined with past and future studies, will assist public health officials in their effort to minimize the effects of inevitable disease epidemics/pandemics.
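A minimal SIR-on-a-graph simulation in the spirit of the experiments described above, with vaccination of the highest-degree nodes standing in for the centrality-targeted policies; the infection and recovery probabilities are illustrative assumptions.

```python
import random

def simulate_sir(adj, p_infect=0.1, p_recover=0.05, vaccinate_top=0):
    """adj: dict node -> set of neighbours. Returns total ever infected."""
    # Vaccinate the highest-degree nodes by removing them from the dynamics.
    ranked = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    removed = set(ranked[:vaccinate_top])
    susceptible = set(adj) - removed
    infected = {random.choice(list(susceptible))}   # single random seed case
    susceptible -= infected
    recovered = set()
    while infected:
        new_inf = {v for u in infected for v in adj[u]
                   if v in susceptible and random.random() < p_infect}
        new_rec = {u for u in infected if random.random() < p_recover}
        susceptible -= new_inf
        infected = (infected | new_inf) - new_rec
        recovered |= new_rec
    return len(recovered)   # everyone who was ever infected has recovered
```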
Date: December 2010
Creator: Johnson, Tina V.
Partner: UNT Libraries

Analysis and Optimization of Graphene FET based Nanoelectronic Integrated Circuits

Description: Like cells to the human body, transistors are the basic building blocks of any electronic circuit. Silicon has been the industry's obvious choice for making transistors. Large transistors occupy more chip area, consume more power, and limit the number of functionalities due to area constraints; thus, to make devices smaller, smarter, and faster, transistors are aggressively scaled down in each generation. Moore's law states that the transistor count in an electronic circuit doubles approximately every 18 months, and following it, transistors have already been scaled down to 14 nm. However, there are limits to how much further these transistors can be scaled. Particularly below 10 nm, silicon-based transistors hit fundamental limits such as loss of gate control, high leakage, and various other short channel effects, so silicon transistors cannot be relied on for future electronics applications. As a result, research has shifted to new device concepts and device materials alternative to silicon. Carbon is an abundant element on Earth, and one such carbon-based nanomaterial is graphene. Graphene, which is extracted from graphite, the same material used as pencil lead, has tremendous potential to take future electronic devices to new heights in terms of size, cost, and efficiency. Since its first experimental isolation in 2004, graphene has been a leading research area for both academia and industry. This dissertation is focused on the analysis and optimization of graphene-based circuits for future electronics. The first part of this dissertation considers graphene-based transistors for analog/radio frequency (RF) circuits. In this section, a dual-gate graphene field effect transistor (GFET) is considered to build case study circuits such as a voltage controlled oscillator (VCO) and low ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Joshi, Shital
Partner: UNT Libraries

A New Algorithm for Finding the Minimum Distance between Two Convex Hulls

Description: The problem of computing the minimum distance between two convex hulls has applications in many areas, including robotics, computer graphics, and path planning. Moreover, determining the minimum distance between two convex hulls plays a significant role in support vector machines (SVM). In this study, a new algorithm for finding the minimum distance between two convex hulls is proposed and investigated. Convergence of the algorithm is proved, and its applicability to support vector machines is demonstrated. The performance of the new algorithm is compared with that of one of the most popular algorithms, the sequential minimal optimization (SMO) method. The new algorithm is simple to understand, easy to implement, and can be more efficient than the SMO method for many SVM problems.
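For context, the baseline below computes the minimum distance between two convex hulls with a classical Gilbert-style (Frank-Wolfe) iteration over the Minkowski difference of the point sets. This is a well-known textbook approach, not the new algorithm proposed in the dissertation.

```python
import numpy as np

def hull_distance(X, Y, iters=200):
    """Min distance between conv(X) and conv(Y); X, Y are (n, d) arrays."""
    z = X[0] - Y[0]                          # a point of conv(X) - conv(Y)
    for _ in range(iters):
        # Support point of the difference set in the direction that most
        # decreases ||z||^2: minimize <z, x> over X, maximize <z, y> over Y.
        v = X[np.argmin(X @ z)] - Y[np.argmax(Y @ z)]
        d = z - v
        denom = d @ d
        if denom < 1e-12:                    # no further improvement possible
            break
        gamma = np.clip((z @ d) / denom, 0.0, 1.0)   # exact line search
        z = z - gamma * d                    # move toward the support point
    return np.linalg.norm(z)

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 0.0], [4.0, 0.0], [3.0, 1.0]])
print(hull_distance(A, B))                   # ~2.0
```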
Date: May 2009
Creator: Kaown, Dougsoo
Partner: UNT Libraries

Accurate Joint Detection from Depth Videos towards Pose Analysis

Description: Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints of the human body for pose analysis. The first method detects joints by combining a body model with automatic feature point detection. The human body model maps the detected extreme points to the corresponding body parts of the model and detects the positions of implicit joints; the dominant joints are then detected after the implicit joints and extreme points are located by a shortest-path-based method. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution is the idea of using geodesic features of the human body to build a model for guiding human pose detection and estimation. The second method first segments the human body into parts and then detects joints by focusing the detection algorithm on each limb. The advantage of applying body part segmentation first is that it narrows down the search area for each joint, so the joint detection method can provide more stable and accurate results.
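The first method above relies on geodesic features of the body to locate extreme points (hands, feet, head). A minimal sketch of computing geodesic distances over a point cloud via a k-nearest-neighbour graph and Dijkstra's algorithm in SciPy; the graph construction and the choice of k are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_extreme(points, source=0, k=8):
    """Return the index of the point geodesically farthest from `source`
    on a k-NN graph over the cloud (a rough 'extreme point').
    Assumes the cloud has well over k points."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)     # first neighbour is the point itself
    rows = np.repeat(np.arange(len(points)), k)
    graph = csr_matrix((dists[:, 1:].ravel(),
                        (rows, idx[:, 1:].ravel())),
                       shape=(len(points), len(points)))
    geo = dijkstra(graph, directed=False, indices=source)
    geo[~np.isfinite(geo)] = -1                  # ignore disconnected parts
    return int(np.argmax(geo))
```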
Date: May 2018
Creator: Kong, Longbo
Partner: UNT Libraries

Assessment of Post-earthquake Building Damage Using High-resolution Satellite Images and LiDAR Data - a Case Study From Port-au-Prince, Haiti

Description: When an earthquake happens, one of the most important tasks of disaster managers is to conduct damage assessment, and this is mostly done from remotely sensed data. This study presents a new method for building detection and damage assessment using high-resolution satellite images and LiDAR data from Port-au-Prince, Haiti. A graph-cut method is used for building detection because of its advantages compared to traditional methods such as the Hough transform, and the results of the two methods are compared to gauge how effective the proposed technique is. Afterwards, a sensitivity analysis is performed to show the effect of image resolution on the efficiency of the method. The results fall into four groups. First, based on the two sensitivity-analysis criteria of completeness and correctness, the more efficient method is the graph-cut, and its final building mask layer is used for damage assessment. Second, building damage assessment is performed using a change detection technique on images from before and after the earthquake. Third, to integrate LiDAR data and damage assessment, we show that there is a strong relationship between terrain roughness variables calculated from digital surface models. Finally, OpenStreetMap data and a normalized digital surface model are used to detect possible road blockages; the results show that positive values of the normalized digital surface model on a road centerline can represent blockages once other objects such as cars are excluded.
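The road-blockage step above flags positive normalized digital surface model values along road centerlines. A minimal sketch of that check, using the usual convention that the nDSM is the DSM minus the bare-earth DTM; the height threshold is an illustrative assumption.

```python
import numpy as np

def road_blockages(dsm, dtm, road_mask, height_threshold=0.5):
    """Flag road cells where objects rise above the bare-earth surface.
    dsm, dtm: 2-D elevation grids; road_mask: boolean grid of centerline cells."""
    ndsm = dsm - dtm                        # normalized digital surface model
    return road_mask & (ndsm > height_threshold)

# Tiny illustrative grids: a 1 m-high obstacle sits on the road at column 2.
dsm = np.array([[10.0, 10.0, 11.0, 10.0]])
dtm = np.full((1, 4), 10.0)
road = np.array([[True, True, True, True]])
print(road_blockages(dsm, dtm, road))       # [[False False  True False]]
```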
Date: August 2014
Creator: Koohikamali, Mehrdad
Partner: UNT Libraries

Privacy Preserving EEG-based Authentication Using Perceptual Hashing

Description: The use of the electroencephalogram (EEG), an electrophysiological monitoring method for recording brain activity, for authentication has attracted the interest of researchers for over a decade. In addition to exhibiting the qualities of biometric-based authentication, EEG signals are revocable, impossible to mimic, and resistant to coercion attacks. However, EEG signals carry a wealth of information about an individual and can reveal private information about the user. This raises significant privacy issues for EEG-based authentication systems, since they have access to raw EEG signals. This thesis proposes a privacy-preserving EEG-based authentication system that preserves the privacy of the user by not revealing the raw EEG signals while still allowing the system to authenticate the user accurately. To this end, perceptual hashing is utilized: instead of raw EEG signals, their perceptually hashed values are used in the authentication process. In addition to describing the authentication process, algorithms to compute the perceptual hash are developed based on two feature extraction techniques. Experimental results show that an authentication system using perceptual hashing can achieve performance comparable to a system that has access to raw EEG signals if enough EEG channels are used in the process. The thesis also presents a security analysis to show that perceptual hashing can prevent information leakage.
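A minimal sketch of the perceptual-hashing idea described above: extract simple per-channel band-power features, binarize them against their median, and authenticate by Hamming distance between hashes. The FFT-based features and the matching threshold are assumptions, not the thesis's two feature extraction techniques.

```python
import numpy as np

def perceptual_hash(eeg, fs=128):
    """eeg: (n_channels, n_samples). Returns a binary hash for the recording."""
    spectra = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    bands = [(4, 8), (8, 13), (13, 30)]          # theta, alpha, beta
    feats = np.array([spectra[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                      for lo, hi in bands]).ravel()
    return (feats > np.median(feats)).astype(np.uint8)   # coarse binarization

def matches(hash_a, hash_b, max_hamming=2):
    """Authenticate if the two hashes differ in at most `max_hamming` bits."""
    return int(np.sum(hash_a != hash_b)) <= max_hamming
```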
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2016
Creator: Koppikar, Samir Dilip
Partner: UNT Libraries

Learning from small data set for object recognition in mobile platforms.

Description: Have you ever stood at a door with a bunch of keys, trying to find the right one to unlock it? Have you ever held a flower and wondered what it was called? The need for object recognition can arise anytime and anywhere in our daily lives, and with the development of mobile devices, object recognition applications can provide immediate assistance. However, performing complex tasks on even the most advanced mobile platforms still faces great challenges due to limited computing resources and computing power. In this thesis, we present an object recognition system that resides and executes within a mobile device and can efficiently extract image features and perform learning and classification. To account for the computing constraints, a novel feature extraction method that minimizes the data size and maintains data consistency is proposed. The system leverages principal component analysis and is able to update the trained classifier when new examples become available, relieving users from creating a large number of examples and making the system user friendly. The experimental results demonstrate that a learning method trained with a very small number of examples can achieve recognition accuracy above 90% in various acquisition conditions. In addition, the system is able to perform learning efficiently.
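A minimal sketch of the kind of compact pipeline described above: project image feature vectors into a low-dimensional PCA space, classify with a nearest-neighbour rule, and refit cheaply when new labelled examples arrive. The component count and the 1-NN classifier are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

class SmallDataRecognizer:
    """Compact recognizer that retrains cheaply when new examples arrive.
    Assumes at least two training examples before the first fit."""
    def __init__(self, n_components=20):
        self.n_components = n_components
        self.X, self.y = None, None

    def fit(self, X, y):
        # Accumulate the (small) training set and refit PCA + 1-NN on it.
        self.X = X if self.X is None else np.vstack([self.X, X])
        self.y = y if self.y is None else np.concatenate([self.y, y])
        n_comp = min(self.n_components, len(self.X) - 1, self.X.shape[1])
        self.pca = PCA(n_components=n_comp).fit(self.X)
        self.knn = KNeighborsClassifier(n_neighbors=1).fit(
            self.pca.transform(self.X), self.y)
        return self

    def predict(self, X):
        return self.knn.predict(self.pca.transform(X))
```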
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Liu, Siyuan
Partner: UNT Libraries

End of Insertion Detection in Colonoscopy Videos

Description: Colorectal cancer is the second leading cause of cancer-related deaths behind lung cancer in the United States, and colonoscopy is the preferred screening method for detecting diseases like colorectal cancer. In 2006, the American Society for Gastrointestinal Endoscopy (ASGE) and the American College of Gastroenterology (ACG) issued guidelines for quality colonoscopy, suggesting that on average the withdrawal phase during a screening colonoscopy should last a minimum of 6 minutes. My aim is to classify a colonoscopy video into insertion and withdrawal phases. The problem is that existing shot detection techniques cannot be applied, because a colonoscopy is a single camera shot from start to end. An algorithm to detect the phase boundary has already been developed by the MIGLAB team; the existing method has acceptable accuracy, but its main issue is its dependency on MPEG (Moving Picture Experts Group) 1/2. I implemented an exhaustive search for motion estimation to reduce the execution time and improve the accuracy, and I took advantage of the C/C++ programming languages with multithreading, which yielded even better execution times. I propose a method for improving the current approach to colonoscopy video analysis, along with an extension that makes it usable for real-time videos. The real-time version we implemented is capable of handling streams coming directly from the camera in the form of uncompressed bitmap frames; the existing implementation could not be applied to the real-time scenario because of its dependency on MPEG 1/2. Future directions of this research include improved motion search and GPU parallel computing techniques.
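The exhaustive-search motion estimation mentioned above scores every candidate displacement of a block within a search window by a dissimilarity measure. A minimal sketch using the sum of absolute differences; the block size and search range are illustrative assumptions.

```python
import numpy as np

def exhaustive_block_match(prev, curr, top, left, block=16, search=8):
    """Return the (dy, dx) displacement minimizing SAD for one block of `curr`
    against the previous frame `prev` (both 2-D grayscale arrays)."""
    ref = curr[top:top + block, left:left + block].astype(np.int32)
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue                       # candidate falls outside the frame
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()     # sum of absolute differences
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec
```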
Date: August 2009
Creator: Malik, Avnish Rajbal
Partner: UNT Libraries

Automatic Removal of Complex Shadows From Indoor Videos

Description: Shadows in indoor scenarios are usually characterized by multiple light sources that produce complex shadow patterns of a single object. Without removing shadows, the foreground object tends to be erroneously segmented, and the inconsistent hue and intensity of shadows make automatic removal a challenging task. In this thesis, a dynamic thresholding and transfer learning-based method for removing shadows is proposed. The method suppresses light shadows with a dynamically computed threshold and removes dark shadows using an online learning strategy that is built upon a base classifier trained with manually annotated examples and refined with automatically identified examples from the new videos. Experimental results demonstrate that, despite variations of lighting conditions in videos, the proposed method is able to adapt to the videos and remove shadows effectively. The sensitivity of shadow detection changes slightly with the confidence level used in example selection for classifier retraining, and a high confidence level usually yields better performance with fewer retraining iterations.
Date: August 2015
Creator: Mohapatra, Deepankar
Partner: UNT Libraries

Occlusion Tolerant Object Recognition Methods for Video Surveillance and Tracking of Moving Civilian Vehicles

Description: Recently, there has been great interest in moving object tracking in the fields of security and surveillance, and object recognition under partial occlusion is the core of any object tracking system. This thesis presents an automatic, real-time color object-recognition system that is not only robust but also occlusion tolerant. The intended use of the system is to recognize and track external vehicles that enter a secured area such as a school campus or an army base. A statistical morphological skeleton is used to represent the visible shape of the vehicle, and simple curve matching together with different feature-based matching techniques is used to recognize the segmented vehicle. Features of the vehicle are extracted upon entering the secured area, and the vehicle is recognized from either a digital video frame or a static digital image when needed. The recognition engine will help the design of a high performance tracking system meant for remote video surveillance.
Date: December 2007
Creator: Pati, Nishikanta
Partner: UNT Libraries

Predictive Modeling for Persuasive Ambient Technology

Description: Computer scientists are increasingly aware of the power of ubiquitous computing systems that can display information in and about the user's environment. One sub-category of ubiquitous computing is persuasive ambient information systems, which involve an informative display transitioning between the periphery and center of attention. The goal of this ambient technology is to produce a behavior change, implying that a display must be informative, unobtrusive, and persuasive. While a significant body of research exists on ambient technology, previous research has not fully explored the different measures for identifying behavior change, evaluation techniques for linking design characteristics to visual effectiveness, or the use of short-term goals to affect long-term behavior change. This study uses the unique context of noise-induced hearing loss (NIHL) among collegiate musicians to explore these issues by developing the MIHL Reduction Feedback System, which collects real-time data, translates it into visuals for music classrooms, provides predictive outcomes for goal-setting persuasion, and provides statistical measures of behavior change.
Date: August 2015
Creator: Powell, Jason W.
Partner: UNT Libraries

High Resolution Satellite Images and LiDAR Data for Small-Area Building Extraction and Population Estimation

Description: Population estimation in inter-censual years has many important applications. In this research, a high-resolution pan-sharpened IKONOS image, LiDAR data, and parcel data are used to estimate small-area population in the eastern part of the city of Denton, Texas. Residential buildings are extracted through object-based classification techniques supported by shape indices and spectral signatures. Three population indicators (building count, building volume, and building area at the block level) are derived using spatial joining and zonal statistics in GIS. Linear regression and geographically weighted regression (GWR) models generated from the three variables and the census data are used to estimate population at the census block level. The maximum total estimation accuracy attained by the models is 94.21%. Accuracy assessments suggest that the GWR models outperformed the linear regression models due to their better handling of spatial heterogeneity, and models generated from building volume and area gave better results. The models have lower accuracy in both densely populated and sparsely populated census blocks, which could be partly attributed to the lower accuracy of the LiDAR data used.
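A minimal sketch of the ordinary least-squares step described above, regressing census-block population on the three building indicators (GWR additionally fits locally weighted coefficients per block, which is not shown here). The numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per census block: [building_count, building_volume_m3, building_area_m2]
indicators = np.array([[12, 30000.0, 2500.0],
                       [40, 95000.0, 8000.0],
                       [ 5, 11000.0, 1000.0]])
population = np.array([310, 1050, 120])             # census counts (illustrative)

model = LinearRegression().fit(indicators, population)
estimate = model.predict([[20, 52000.0, 4300.0]])   # inter-censual block estimate
print(estimate)
```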
Date: December 2009
Creator: Ramesh, Sathya
Partner: UNT Libraries

Automated Tree Crown Discrimination Using Three-Dimensional Shape Signatures Derived from LiDAR Point Clouds

Description: Discrimination of different tree crowns based on their 3D shapes is essential for a wide range of forestry applications and, due to its complexity, is a significant challenge. This study presents a modified 3D shape descriptor for the perception of different tree crown shapes in discrete-return LiDAR point clouds. The proposed methodology comprises five main components: definition of a local coordinate system, learning salient points, generation of simulated LiDAR point clouds with geometrical shapes, shape signature generation (from simulated LiDAR points as the reference shape signature and actual LiDAR point clouds as the evaluated shape signature), and finally, similarity assessment of shape signatures in order to extract the shape of a real tree. The first component is a proposed strategy to define a local coordinate system for each tree in order to normalize the 3D point clouds. In the second component, a learning approach is used to categorize all 3D point clouds into two ranks to identify interesting or salient points on each tree. The third component concerns the generation of simulated LiDAR point clouds for two geometrical shapes, a hemisphere and a half-ellipsoid; the operator then extracts 3D LiDAR point clouds of actual trees, either deciduous or evergreen. In the fourth component, a longitude-latitude transformation is applied to the simulated and actual LiDAR point clouds to generate 3D shape signatures of tree crowns. A critical step is the transformation of LiDAR points from their exact positions to longitude and latitude positions (distinct from geographic longitude and latitude coordinates), labeled by their pre-assigned ranks; natural neighbor interpolation then converts the point maps to raster datasets. The shape signatures generated from simulated and actual LiDAR points are called the reference and evaluated shape signatures, respectively. Lastly, the fifth component determines the similarity between evaluated and reference shape ...
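The fourth component maps each LiDAR return to a longitude-latitude position about a crown-local origin rather than geographic coordinates. A minimal sketch of that angular transformation; the choice of the crown centre as origin is an assumption for illustration.

```python
import numpy as np

def longitude_latitude(points, origin):
    """Map 3-D crown points to (longitude, latitude) angles in degrees,
    measured about a crown-local origin (e.g. the crown centre)."""
    v = points - origin
    r = np.linalg.norm(v, axis=1)                         # radial distance
    lon = np.degrees(np.arctan2(v[:, 1], v[:, 0]))        # azimuth angle
    lat = np.degrees(np.arcsin(np.clip(v[:, 2] / r, -1, 1)))  # elevation angle
    return np.column_stack([lon, lat])
```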
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2018
Creator: Sadeghinaeenifard, Fariba
Partner: UNT Libraries