Search Results

Learning from a Small Data Set for Object Recognition in Mobile Platforms

Description: Have you ever stood at a door with a bunch of keys, trying to find the right one to unlock it? Or held a flower and wondered what it was called? The need for object recognition can arise anytime and anywhere in our daily lives. With the development of mobile devices, object recognition applications can now provide immediate assistance. However, performing complex tasks on even the most advanced mobile platforms remains challenging due to limited computing resources and power. In this thesis, we present an object recognition system that resides and executes entirely within a mobile device and can efficiently extract image features and perform learning and classification. To account for the computing constraints, a novel feature extraction method that minimizes data size while maintaining data consistency is proposed. The system leverages principal component analysis and is able to update the trained classifier when new examples become available, relieving users of the need to create large numbers of training examples and making the system user friendly. Experimental results demonstrate that a learning method trained with a very small number of examples can achieve recognition accuracy above 90% under various acquisition conditions. In addition, the system is able to perform learning efficiently.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Liu, Siyuan
Partner: UNT Libraries
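
The incremental-update idea in the abstract above can be sketched with off-the-shelf tools. The component count, feature pipeline, and nearest-neighbor classifier here are illustrative assumptions, not the thesis's actual design:

```python
# A minimal sketch of PCA features with incremental classifier updates.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.neighbors import KNeighborsClassifier

ipca = IncrementalPCA(n_components=16)      # compact features for a mobile CPU
clf = KNeighborsClassifier(n_neighbors=3)   # cheap to "retrain" (just storage)

def train(images, labels):
    """images: (n_samples, n_pixels) flattened crops; a small training set."""
    clf.fit(ipca.fit_transform(images), labels)

def update(images, labels, new_images, new_labels):
    """Fold newly captured examples in without retraining from scratch.
    Note: partial_fit needs each batch to contain >= n_components samples."""
    ipca.partial_fit(new_images)            # refine the subspace incrementally
    images = np.vstack([images, new_images])
    labels = np.concatenate([labels, new_labels])
    clf.fit(ipca.transform(images), labels) # re-project all stored examples
    return images, labels
```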

Privacy Preserving EEG-based Authentication Using Perceptual Hashing

Description: The use of the electroencephalogram (EEG), an electrophysiological method for recording brain activity, for authentication has attracted the interest of researchers for over a decade. In addition to exhibiting the qualities of biometric-based authentication, EEG signals are revocable, impossible to mimic, and resistant to coercion attacks. However, they carry a wealth of information about an individual and can reveal private information about the user. This raises significant privacy issues for EEG-based authentication systems, which have access to raw EEG signals. This thesis proposes a privacy-preserving EEG-based authentication system that protects the user's privacy by never revealing the raw EEG signals while still allowing the system to authenticate the user accurately. To this end, perceptual hashing is utilized: instead of raw EEG signals, their perceptually hashed values are used in the authentication process. In addition to describing the authentication process, algorithms to compute the perceptual hash are developed based on two feature extraction techniques. Experimental results show that an authentication system using perceptual hashing can achieve performance comparable to a system with access to raw EEG signals if enough EEG channels are used in the process. The thesis also presents a security analysis showing that perceptual hashing can prevent information leakage.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2016
Creator: Koppikar, Samir Dilip
Partner: UNT Libraries
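
The thesis's actual feature extraction and hash construction are not specified in the abstract; the following sketch assumes band-power features quantized to bits, compared by Hamming distance, purely to illustrate the perceptual-hashing idea:

```python
# Illustrative perceptual hash over multichannel EEG windows.
import numpy as np
from scipy.signal import welch

def eeg_perceptual_hash(signals, fs=256):
    """signals: (n_channels, n_samples). Returns a binary hash vector."""
    bits = []
    for ch in signals:
        f, pxx = welch(ch, fs=fs, nperseg=fs)
        band = pxx[(f >= 8) & (f <= 30)]        # alpha/beta band powers (assumed)
        bits.append(band > np.median(band))     # one bit per frequency bin
    return np.concatenate(bits)

def matches(hash_a, hash_b, max_hamming=10):
    """Authenticate if hashes are perceptually close (threshold is assumed)."""
    return np.count_nonzero(hash_a != hash_b) <= max_hamming
```

The server stores and compares only hash bits, never the raw signal, which is the privacy property the abstract describes.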

Freeform Cursive Handwriting Recognition Using a Clustered Neural Network

Description: Optical character recognition (OCR) software has advanced greatly in recent years. Machine-printed text can be scanned and converted to searchable text with word accuracy rates around 98%. Reasonably neat hand-printed text can be recognized with about 85% word accuracy. However, cursive handwriting still remains a challenge, with state-of-the-art performance still around 75%. Algorithms based on hidden Markov models have been only moderately successful, while recurrent neural networks have delivered the best results to date. This thesis explored the feasibility of using a special type of feedforward neural network to convert freeform cursive handwriting to searchable text. The hidden nodes in this network were grouped into clusters, with each cluster being trained to recognize a unique character bigram. The network was trained on writing samples that were pre-segmented and annotated. Post-processing was facilitated in part by using the network to identify overlapping bigrams that were then linked together to form words and sentences. With dictionary-assisted post-processing, the network achieved word accuracy of 66.5% on a small, proprietary corpus. The contributions in this thesis are threefold: 1) the novel clustered architecture of the feedforward neural network, 2) the development of an expanded set of observers combining image masks, modifiers, and feature characterizations, and 3) the use of overlapping bigrams as the textual working unit to assist in context analysis and reconstruction.
Date: August 2015
Creator: Bristow, Kelly H.
Partner: UNT Libraries
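
One way to read the "clustered" architecture is that the hidden layer is partitioned so each cluster feeds only its own bigram output. The sketch below is an assumed interpretation with toy sizes, showing only a forward pass, not the thesis's trained network:

```python
# Minimal sketch: hidden units partitioned into per-bigram clusters.
import numpy as np

n_in, units_per_cluster = 400, 8            # 20x20 glyph window (assumed)
bigrams = ["th", "he", "in", "er"]          # tiny example bigram set
n_hidden = units_per_cluster * len(bigrams)

W1 = np.random.randn(n_in, n_hidden) * 0.01  # input -> hidden (shared input)
W2 = np.zeros((n_hidden, len(bigrams)))      # hidden -> output, block-masked
for k in range(len(bigrams)):                # cluster k feeds only output k
    rows = slice(k * units_per_cluster, (k + 1) * units_per_cluster)
    W2[rows, k] = np.random.randn(units_per_cluster) * 0.01

def forward(x):
    h = np.tanh(x @ W1)
    scores = h @ W2                          # each score comes from one cluster
    return bigrams[int(np.argmax(scores))]
```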

Automatic Removal of Complex Shadows From Indoor Videos

Description: Indoor scenes are usually lit by multiple light sources that produce complex shadow patterns for a single object. Without shadow removal, the foreground object tends to be segmented erroneously. The inconsistent hue and intensity of shadows make automatic removal a challenging task. In this thesis, a dynamic thresholding and transfer learning-based method for removing shadows is proposed. The method suppresses light shadows with a dynamically computed threshold and removes dark shadows using an online learning strategy that is built upon a base classifier trained with manually annotated examples and refined with automatically identified examples from new videos. Experimental results demonstrate that, despite variations in lighting conditions, the proposed method is able to adapt to new videos and remove shadows effectively. The sensitivity of shadow detection changes slightly with the confidence level used in example selection for classifier retraining; a high confidence level usually yields better performance with fewer retraining iterations.
Date: August 2015
Creator: Mohapatra, Deepankar
Partner: UNT Libraries
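
A dynamically computed shadow threshold can be sketched as follows; the particular statistic (a percentile of the frame-to-background intensity ratio) is an assumption for illustration, not the thesis's exact rule:

```python
# Sketch of dynamic thresholding for light-shadow suppression.
import numpy as np

def suppress_light_shadows(frame, background, fg_mask):
    """frame, background: grayscale float arrays; fg_mask: boolean mask.
    Returns the foreground mask with candidate light shadows removed."""
    ratio = frame / np.maximum(background, 1e-6)
    lo = np.percentile(ratio[fg_mask], 5)      # threshold from this frame's stats
    # Shadow pixels darken the background, so their ratio is below 1 but not
    # as low as a true dark foreground object.
    shadow = fg_mask & (ratio > lo) & (ratio < 1.0)
    return fg_mask & ~shadow
```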

Determining Whether and When People Participate in the Events They Tweet About

Description: This work describes an approach to determine whether people participate in the events they tweet about. Specifically, we determine whether people are participants in events with respect to the tweet timestamp. We target all events expressed by verbs in tweets, including past, present, and future events. We define event participants as people directly involved in an event, regardless of whether they are the agent, the recipient, or play another role. We present an annotation effort, guidelines, and a quality analysis covering 1,096 event mentions, and discuss the label distributions and event behavior in the annotated corpus. We also describe several features and a standard supervised machine learning approach to automatically determine whether and when the author is a participant in the event mentioned in the tweet. We discuss trends in the results obtained and draw conclusions.
Date: May 2017
Creator: Sanagavarapu, Krishna Chaitanya
Partner: UNT Libraries
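
The "standard supervised machine learning approach" can be illustrated with a toy pipeline for the "whether" part of the task; the feature choice (word n-grams) and labels here are assumptions, not the thesis's feature set:

```python
# Hedged sketch of a supervised participant classifier for tweets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["Heading to the concert tonight!", "The marathon starts downtown."]
labels = ["participant", "non-participant"]     # toy annotations

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(tweets, labels)
print(model.predict(["Just finished the race, exhausted"]))
```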

Brain Computer Interface (BCI) Applications: Privacy Threats and Countermeasures

Description: In recent years, brain computer interfaces (BCIs) have gained popularity in non-medical domains such as the gaming, entertainment, personal health, and marketing industries. A growing number of companies offer inexpensive consumer-grade BCIs, and some have recently introduced the concept of BCI "app stores" to facilitate the expansion of BCI applications, providing software development kits (SDKs) for other developers to create new applications for their devices. BCI applications have access to users' unique brainwave signals, which consequently allows them to make inferences about users' thoughts and mental processes. Since there are no specific standards governing the development of BCI applications, their users are at risk of privacy breaches. In this work, we perform the first comprehensive analysis of BCI app stores, including their software development kits (SDKs), application programming interfaces (APIs), and BCI applications, with respect to privacy issues. The goal is to understand how brainwave signals are handled by BCI applications and what threats to user privacy exist. Our findings show that most applications have unrestricted access to users' brainwave signals and can easily extract private information about their users without them even noticing. We discuss potential privacy threats posed by current practices in BCI app stores and describe countermeasures that could mitigate those threats. We also develop a prototype that gives BCI app users the choice to restrict access to their brain signals dynamically.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2017
Creator: Bhalotiya, Anuj Arun
Partner: UNT Libraries

End of Insertion Detection in Colonoscopy Videos

Description: Colorectal cancer is the second leading cause of cancer-related deaths in the United States, behind lung cancer. Colonoscopy is the preferred screening method for detecting diseases like colorectal cancer. In 2006, the American Society for Gastrointestinal Endoscopy (ASGE) and the American College of Gastroenterology (ACG) issued guidelines for quality colonoscopy, which suggest that, on average, the withdrawal phase of a screening colonoscopy should last a minimum of 6 minutes. My aim is to classify colonoscopy video into insertion and withdrawal phases. Currently existing shot detection techniques cannot be applied because a colonoscopy is a single camera shot from start to end. An algorithm to detect the phase boundary has already been developed by the MIGLAB team; the existing method has acceptable accuracy, but its main limitation is a dependency on MPEG (Moving Picture Experts Group) 1/2 video. I implemented an exhaustive search for motion estimation to reduce execution time and improve accuracy, and took advantage of C/C++ with multithreading, which further improved execution time. I propose an improvement to the current method of colonoscopy video analysis and an extension that makes it usable for real-time video. The real-time version we implemented is capable of handling streams coming directly from the camera in the form of uncompressed bitmap frames; the existing implementation could not be applied in a real-time scenario because of its dependency on MPEG 1/2. Future directions for this research include improved motion search and GPU parallel computing techniques.
Date: August 2009
Creator: Malik, Avnish Rajbal
Partner: UNT Libraries
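
Exhaustive-search motion estimation is a standard block-matching procedure; the block size, search range, and SAD cost below are conventional assumptions rather than the thesis's exact parameters:

```python
# Full-search block matching with a sum-of-absolute-differences (SAD) cost.
import numpy as np

def best_motion_vector(prev, curr, y, x, block=16, search=8):
    """Find the displacement of the block at (y, x) in curr relative to prev."""
    ref = curr[y:y+block, x:x+block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):          # exhaustive over the window
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + block > prev.shape[0]
                    or xx + block > prev.shape[1]):
                continue
            sad = np.abs(prev[yy:yy+block, xx:xx+block].astype(np.int32) - ref).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best, best_sad
```

The inner loops are independent across blocks, which is what makes the multithreaded C/C++ version mentioned in the abstract effective.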

Urban Surface Characterization Using LiDAR and Aerial Imagery

Description: Many calamities in history, such as hurricanes, tornadoes, and floods, demonstrate the large-scale impact such events have on life and the economy. Computer simulation and GIS help model real-world scenarios, which assists in evacuation planning, damage assessment, assistance, and reconstruction. Achieving such simulation and modeling requires accurate classification of ground objects. One of the most significant aspects of this research is that it achieves improved classification in regions where the light detection and ranging (LiDAR) data has low spatial resolution. This thesis describes a method for accurate classification of bare ground, water bodies, roads, vegetation, and structures using LiDAR data and aerial infrared imagery. The most basic step for any terrain modeling application is filtering, the classification of ground and non-ground points. We present an integrated, systematic method that makes classification of terrain and non-terrain points effective. Our filtering method uses geometric features of the triangle meshes created from LiDAR samples and calculates a confidence value for every point. Geometrically homogeneous blocks and confidence values are derived from a TIN model and gridded LiDAR samples, and the results from the two representations are combined in a classifier to determine whether a block belongs to the ground. Another important step is the detection of water bodies, which is based on the LiDAR sample density of the region. Objects like trees and bare ground are characterized by geometric features in the LiDAR data and color features in the infrared imagery. These features are fed into an SVM classifier that detects bare ground in the given region; trees are extracted using another trained SVM classifier. Once bare ground and trees are obtained, roads are extracted from the bare-ground regions. Structures are identified by the properties of non-ground segments. Experiments were conducted using LiDAR samples and infrared imagery ...
Date: December 2009
Creator: Sarma, Vaibhav
Partner: UNT Libraries
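
The density-based water detection step lends itself to a short sketch: water absorbs laser pulses, so cells with few returns are candidate water bodies. Cell size and the density cutoff here are assumed values:

```python
# Sketch of LiDAR-density-based water detection on a regular grid.
import numpy as np

def water_mask(points_xy, cell=5.0, min_pts=3):
    """points_xy: (n, 2) LiDAR ground coordinates. Returns a boolean grid
    where True marks sparse (candidate water) cells."""
    mins = points_xy.min(axis=0)
    idx = np.floor((points_xy - mins) / cell).astype(int)
    density = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(density, (idx[:, 0], idx[:, 1]), 1)   # count returns per cell
    return density < min_pts
```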

High Resolution Satellite Images and LiDAR Data for Small-Area Building Extraction and Population Estimation

Description: Population estimation in inter-censal years has many important applications. In this research, a high-resolution pan-sharpened IKONOS image, LiDAR data, and parcel data are used to estimate small-area population in the eastern part of the city of Denton, Texas. Residential buildings are extracted through object-based classification techniques supported by shape indices and spectral signatures. Three population indicators - building count, building volume, and building area at the block level - are derived using spatial joins and zonal statistics in GIS. Linear regression and geographically weighted regression (GWR) models generated from the three variables and census data are used to estimate population at the census block level. The maximum total estimation accuracy attained by the models is 94.21%. Accuracy assessments suggest that the GWR models outperformed the linear regression models due to their better handling of spatial heterogeneity, and that models generated from building volume and area gave better results. The models have lower accuracy in both densely and sparsely populated census blocks, which could be partly attributed to the lower accuracy of the LiDAR data used.
Date: December 2009
Creator: Ramesh, Sathya
Partner: UNT Libraries
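
The global (non-spatial) baseline can be written in a few lines; the block-level numbers below are fabricated toy values purely to show the regression setup:

```python
# Sketch of the linear-regression baseline: population ~ building indicators.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy block table: [building count, volume (m^3), area (m^2)] -> population
X = np.array([[12, 30000.0, 2400.0],
              [40, 95000.0, 8100.0],
              [ 7, 15000.0, 1300.0]])
y = np.array([55, 190, 28])

model = LinearRegression().fit(X, y)
print(model.predict([[20, 50000.0, 4000.0]]))   # estimate for an unseen block
```

A GWR model would instead fit the coefficients locally, weighting neighboring blocks by spatial proximity, which is how it captures the spatial heterogeneity the abstract credits for its better performance.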

Automatic Tagging of Communication Data

Description: Globally distributed software teams are widespread throughout industry, but finding reliable methods that can properly assess a team's activities is a real challenge. Methods such as surveys and manual coding of activities are too time-consuming and are often unreliable. Recent advances in information retrieval and linguistics, however, suggest that automated and/or semi-automated text classification algorithms could be an effective way of finding differences in the communication patterns among individuals and groups. Communication among group members is frequent and generates a significant amount of data. Thus having a web-based tool that can automatically analyze the communication patterns among global software teams could lead to a better understanding of group performance. The goal of this thesis, therefore, is to compare automatic and semi-automatic measures of communication and evaluate their effectiveness in classifying different types of group activities that occur within a global software development project. In order to achieve this goal, we developed a web-based component that can be used to help clean and classify communication activities. The component was then used to compare different automated text classification techniques on various group activities to determine their effectiveness in correctly classifying data from a global software development team project.
Date: August 2012
Creator: Hoyt, Matthew Ray
Partner: UNT Libraries
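
One automated technique of the kind the abstract compares can be sketched as a text-classification baseline; the activity labels and messages are invented for illustration:

```python
# Illustrative communication-tagging baseline: TF-IDF plus Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["Pushed a fix for the build break", "Can we move standup to 10?"]
activities = ["development", "coordination"]    # toy activity labels

tagger = make_pipeline(TfidfVectorizer(), MultinomialNB())
tagger.fit(messages, activities)
print(tagger.predict(["Merge conflict in the parser module"]))
```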

Assessment of Post-Earthquake Building Damage Using High-Resolution Satellite Images and LiDAR Data - A Case Study from Port-au-Prince, Haiti

Description: When an earthquake happens, one of the most important tasks of disaster managers is to conduct damage assessment, and this is mostly done from remotely sensed data. This study presents a new method for building detection and damage assessment using high-resolution satellite images and LiDAR data from Port-au-Prince, Haiti. A graph-cut method is used for building detection because of its advantages over traditional methods such as the Hough transform; the results of the two methods are compared to evaluate how effective the proposed technique is. A sensitivity analysis is then performed to show the effect of image resolution on the efficiency of the method. The results fall into four groups. First, based on the two sensitivity-analysis criteria of completeness and correctness, the graph-cut method is the more efficient one, and its final building mask layer is used for damage assessment. Second, building damage assessment is done using a change detection technique applied to images from before and after the earthquake. Third, to integrate the LiDAR data into damage assessment, we show that there is a strong relationship among terrain roughness variables calculated from digital surface models. Finally, OpenStreetMap data and a normalized digital surface model (nDSM) are used to detect possible road blockages; the results show that positive nDSM values on a road centerline can represent blockages once other objects, such as cars, are excluded.
Date: August 2014
Creator: Koohikamali, Mehrdad
Partner: UNT Libraries
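
The road-blockage test reduces to sampling the nDSM along a centerline and flagging raised runs; the height threshold and minimum run length below are assumptions standing in for the study's actual rules (including its exclusion of cars):

```python
# Sketch of nDSM-based road-blockage detection along a centerline.
import numpy as np

def blocked_segments(ndsm, centerline_rc, min_height=0.5, min_run=3):
    """ndsm: 2D array of heights above ground (m); centerline_rc: (row, col) list."""
    heights = np.array([ndsm[r, c] for r, c in centerline_rc])
    raised = heights > min_height            # debris above the road surface
    # Require several consecutive raised samples to filter out small objects.
    run, flags = 0, np.zeros_like(raised)
    for i, v in enumerate(raised):
        run = run + 1 if v else 0
        if run >= min_run:
            flags[i - min_run + 1:i + 1] = True
    return flags
```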

Elicitation of Protein-Protein Interactions from Biomedical Literature Using Association Rule Discovery

Description: Extracting information from a mass of data is a tedious task, and the situation is no different in proteomics. Volumes of research papers are published on the study of various proteins in several species, their interactions with other proteins, and the identification of proteins as possible biomarkers of disease. It is challenging for biologists to keep track of these developments manually by reading through the literature. Several tools have been developed by computational linguists to assist in the identification and extraction of proteins and protein-protein interactions, and in hypothesis generation, from biomedical publications and protein databases. However, these tools are confronted with the challenges of term variation, term ambiguity, access limited to abstracts, and inconsistencies in the time-consuming manual curation of protein and protein-protein interaction repositories. This work attempts to attenuate these challenges by extracting protein-protein interactions in humans and eliciting possible interactions using association rule mining on full text, abstracts, and figure captions available from publicly accessible biomedical literature databases. Two such databases are used in our study: the Directory of Open Access Journals (DOAJ) and PubMed Central (PMC). A corpus is built from articles retrieved with search terms, and a dataset of more than 38,000 protein-protein interactions from the Human Protein Reference Database (HPRD) is cross-referenced to validate discovered interacting pairs. An optimally sized set of possible binary protein-protein interactions is generated for clinical or biological validation. A significant change in the number of new associations was found by altering the thresholds of the support and confidence metrics. This study reduces the burden on biologists of keeping pace with the discovery of protein-protein interactions by manually reading the literature and validating each and every possible interaction.
Date: August 2010
Creator: Samuel, Jarvie John
Partner: UNT Libraries
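
Since the abstract names support and confidence explicitly, a minimal association-rule example over per-article protein mentions may help; the protein sets and thresholds are toy values:

```python
# Minimal support/confidence rule mining over protein co-mentions.
from itertools import combinations
from collections import Counter

docs = [{"TP53", "MDM2"}, {"TP53", "MDM2", "EGFR"}, {"EGFR", "GRB2"}]
pair_n, item_n = Counter(), Counter()
for d in docs:
    item_n.update(d)
    pair_n.update(frozenset(p) for p in combinations(sorted(d), 2))

n, min_support, min_conf = len(docs), 2 / len(docs), 0.6
for pair, c in pair_n.items():
    if c / n < min_support:                 # support: co-mention frequency
        continue
    for a, b in (tuple(pair), tuple(pair)[::-1]):
        if c / item_n[a] >= min_conf:       # confidence: P(b | a)
            print(f"{a} -> {b}  support={c/n:.2f}  confidence={c/item_n[a]:.2f}")
```

Raising either threshold shrinks the rule set, which is the sensitivity the abstract reports when the thresholds are altered.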

Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures

Description: Hand and arm gestures are a great way to communicate when you don't want to be heard - quieter and often more reliable than whispering into a radio mike. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of my work is to develop an integrated sensor system that will enable tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work consists of the standardized hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions, measured by sensors attached directly to the skin. Flex (bend) sensors use a piezoresistive material to detect bending; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. The EMG sensors are placed on the upper and lower forearm and assist in the classification of finger and wrist movements. The bend sensors are mounted on a glove worn on the hand, located over the first knuckle of each finger, and determine whether the finger is bent. An accelerometer attached to the glove at the base of the wrist determines the speed and direction of arm movement. An SVM classification algorithm is used to classify the gestures.
Date: May 2013
Creator: Akumalla, Sarath Chandra
Partner: UNT Libraries
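
The sensor fusion plus SVM pipeline can be sketched compactly; the channel counts, window statistics, and gesture names are assumptions, not the thesis's exact configuration:

```python
# Sketch: fuse EMG, flex, and accelerometer readings into one feature vector.
import numpy as np
from sklearn.svm import SVC

def gesture_features(emg, flex, accel):
    """emg: (4, n) window; flex: (5,) bend values; accel: (3, n) window."""
    rms = np.sqrt((emg.astype(float) ** 2).mean(axis=1))  # muscle activation
    return np.concatenate([rms, flex, accel.mean(axis=1), accel.std(axis=1)])

# X: one row of gesture_features(...) per training window; y: gesture labels.
X = np.random.rand(60, 4 + 5 + 3 + 3)      # placeholder training data
y = np.repeat(["halt", "advance", "cover"], 20)
clf = SVC(kernel="rbf").fit(X, y)
```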

Boosting for Learning From Imbalanced, Multiclass Data Sets

Description: In many real-world applications, it is common to have uneven numbers of examples among multiple classes. Such data imbalance usually complicates the learning process, especially for the minority classes, and results in deteriorated performance. Boosting methods have been proposed to handle the imbalance problem, but they need prolonged training time and require diversity among the classifiers of the ensemble to achieve improved performance. Additionally, extending boosting methods to handle multi-class data sets is not straightforward. Examples of applications that suffer from imbalanced multi-class data can be found in face recognition, where tens of classes exist, and in capsule endoscopy, which suffers massive imbalance between the classes. This dissertation introduces RegBoost, a new boosting framework to address imbalanced, multi-class problems. The method applies a weighted stratified sampling technique and incorporates a regularization term that accommodates multi-class data sets and automatically determines the error bound of each base classifier. The regularization parameter penalizes the classifier when it misclassifies instances that were correctly classified in the previous iteration, and additionally reduces the bias towards majority classes. Experiments are conducted using 12 diverse data sets with moderate to high imbalance ratios. The results demonstrate superior performance of the proposed method compared to several state-of-the-art algorithms for imbalanced, multi-class classification problems. More importantly, the sensitivity improvement for the minority classes using RegBoost is accompanied by an improvement in overall accuracy for all classes. With unpredictability regularization, a diverse group of classifiers is created, and the maximum accuracy improvement reaches above 24%. Using stratified undersampling, RegBoost exhibits the best efficiency, with a reduction in computational cost above 50%. As the volume of training data increases, the efficiency gain of the proposed method becomes more significant.
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: December 2013
Creator: Abouelenien, Mohamed
Partner: UNT Libraries
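
One ingredient named in the abstract, weighted stratified sampling, can be sketched in isolation; drawing an equal per-class quota under the current boosting weights is an assumed reading of the technique, not RegBoost's published procedure:

```python
# Sketch: weighted stratified undersampling for one boosting round.
import numpy as np

def stratified_weighted_sample(y, w, per_class, rng=np.random.default_rng(0)):
    """y: labels; w: positive boosting weights. Returns sample indices that
    give every class the same quota, favoring currently hard examples."""
    picked = []
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        p = w[idx] / w[idx].sum()            # class-local weight distribution
        k = min(per_class, idx.size)
        picked.append(rng.choice(idx, size=k, replace=False, p=p))
    return np.concatenate(picked)
```

Because every class contributes the same number of examples, the base classifier in each round sees a balanced view of the data, which is how undersampling cuts both the majority-class bias and the training cost.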

A New Algorithm for Finding the Minimum Distance between Two Convex Hulls

Description: The problem of computing the minimum distance between two convex hulls has applications in many areas, including robotics, computer graphics, and path planning. Moreover, determining the minimum distance between two convex hulls plays a significant role in support vector machines (SVMs). In this study, a new algorithm for finding the minimum distance between two convex hulls is proposed and investigated. Convergence of the algorithm is proved, and its applicability to support vector machines is demonstrated. The performance of the new algorithm is compared with that of one of the most popular algorithms, the sequential minimal optimization (SMO) method. The new algorithm is simple to understand, easy to implement, and can be more efficient than the SMO method for many SVM problems.
Date: May 2009
Creator: Kaown, Dougsoo
Partner: UNT Libraries
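
For reference, the underlying problem can be stated as a small quadratic program over convex combinations of the two point sets. The solver below is a generic baseline, not the thesis's new algorithm:

```python
# Baseline: min || sum_i a_i x_i - sum_j b_j y_j || over the two simplices.
import numpy as np
from scipy.optimize import minimize

def hull_distance(X, Y):
    """X: (n, d), Y: (m, d) point sets; distance between conv(X) and conv(Y)."""
    n, m = len(X), len(Y)

    def obj(z):
        diff = z[:n] @ X - z[n:] @ Y        # difference of the two combinations
        return diff @ diff

    cons = [{"type": "eq", "fun": lambda z: z[:n].sum() - 1},
            {"type": "eq", "fun": lambda z: z[n:].sum() - 1}]
    z0 = np.concatenate([np.full(n, 1 / n), np.full(m, 1 / m)])
    res = minimize(obj, z0, bounds=[(0, 1)] * (n + m),
                   constraints=cons, method="SLSQP")
    return np.sqrt(res.fun)

# Two segments at distance 2 apart:
print(hull_distance(np.array([[0., 0.], [1., 0.]]),
                    np.array([[3., 0.], [3., 1.]])))
```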

Automated Tree Crown Discrimination Using Three-Dimensional Shape Signatures Derived from LiDAR Point Clouds

Description: Discriminating different tree crowns based on their 3D shapes is essential for a wide range of forestry applications and, due to its complexity, is a significant challenge. This study presents a modified 3D shape descriptor for the perception of different tree crown shapes in discrete-return LiDAR point clouds. The proposed methodology comprises five main components: definition of a local coordinate system, learning of salient points, generation of simulated LiDAR point clouds with geometrical shapes, shape signature generation (from simulated LiDAR points as the reference shape signature and from actual LiDAR point clouds as the evaluated shape signature), and, finally, similarity assessment of shape signatures in order to extract the shape of a real tree. The first component is a proposed strategy for defining a local coordinate system for each tree to normalize the 3D point clouds. In the second component, a learning approach categorizes all 3D points into two ranks to identify interesting, or salient, points on each tree. The third component covers the generation of simulated LiDAR point clouds for two geometrical shapes, a hemisphere and a half-ellipsoid; the operator then extracts 3D LiDAR point clouds of actual trees, either deciduous or evergreen. In the fourth component, a longitude-latitude transformation is applied to the simulated and actual LiDAR point clouds to generate 3D shape signatures of tree crowns. A critical step is the transformation of LiDAR points from their exact positions to longitude and latitude positions - distinct from geographic longitude and latitude coordinates - labeled by their pre-assigned ranks. Natural neighbor interpolation then converts the point maps to raster datasets. The shape signatures generated from simulated and actual LiDAR points are called the reference and evaluated shape signatures, respectively. Lastly, the fifth component determines the similarity between evaluated and reference shape ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2018
Creator: Sadeghinaeenifard, Fariba
Partner: UNT Libraries
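
The longitude-latitude transformation can be sketched as a spherical-angle mapping about the crown's local origin; the placement of that origin (and the use of degrees) is an assumption for illustration:

```python
# Sketch: map crown points to (longitude, latitude) angles about a local origin.
import numpy as np

def lonlat_transform(points, origin):
    """points: (n, 3) crown points; origin: local coordinate-system center.
    Returns (n, 2) angular coordinates, independent of geographic lon/lat."""
    v = points - origin
    r = np.linalg.norm(v, axis=1)
    lon = np.arctan2(v[:, 1], v[:, 0])                       # azimuth
    lat = np.arcsin(np.clip(v[:, 2] / np.maximum(r, 1e-9), -1, 1))  # elevation
    return np.column_stack([np.degrees(lon), np.degrees(lat)])
```

Rasterizing these angular maps (e.g., by natural neighbor interpolation, as the abstract states) yields the 2D shape signatures that are then compared.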

Accurate Joint Detection from Depth Videos towards Pose Analysis

Description: Joint detection is vital for characterizing human pose and serves as a foundation for a wide range of computer vision applications such as physical training, health care, and entertainment. This dissertation proposes two methods to detect joints in the human body for pose analysis. The first method detects joints by combining a body model with automatic feature point detection: the body model maps detected extreme points to the corresponding body parts and locates the implicit joints, after which the dominant joints are detected using a shortest-path-based method. The main contribution of this work is a hybrid framework for detecting joints on the human body that is robust to different body shapes and proportions, pose variations, and occlusions. Another contribution is the idea of using geodesic features of the human body to build a model that guides human pose detection and estimation. The second method first segments the human body into parts and then detects joints by focusing the detection algorithm on each limb. The advantage of applying body part segmentation first is that it narrows down the search area for each joint, so the joint detection method can produce more stable and accurate results.
Date: May 2018
Creator: Kong, Longbo
Partner: UNT Libraries
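
The geodesic idea behind the first method can be sketched as shortest paths on a neighborhood graph of the body point cloud; the graph construction and the choice of the body center are assumptions, not the dissertation's exact procedure:

```python
# Sketch: geodesic extreme points (candidate head/hands/feet) via Dijkstra.
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.neighbors import kneighbors_graph

def geodesic_extremes(points, center_idx, k=8, n_extremes=5):
    """points: (n, 3) depth-derived body points. Returns indices of the points
    farthest from the body center in geodesic (surface) distance.
    Assumes the k-NN graph is connected."""
    g = kneighbors_graph(points, n_neighbors=k, mode="distance")
    d = dijkstra(g, directed=False, indices=center_idx)
    return np.argsort(d)[::-1][:n_extremes]
```

Unlike Euclidean distance, geodesic distance follows the body surface, so a hand folded near the torso still registers as an extremity.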

Analysis and Optimization of Graphene FET-based Nanoelectronic Integrated Circuits

Description: Like cells to the human body, transistors are the basic building blocks of any electronic circuit. Silicon has been the industry's obvious choice for making transistors. Large transistors occupy a large chip area, consume a great deal of power, and limit the number of functionalities due to area constraints. Thus, to make devices smaller, smarter, and faster, transistors are aggressively scaled down in each generation. Moore's law states that the transistor count in an electronic circuit doubles roughly every 18 months, and following it, transistors have already been scaled down to 14 nm. However, there are limits to how much further these transistors can be scaled: particularly below 10 nm, silicon-based transistors hit fundamental limits such as loss of gate control, high leakage, and various other short-channel effects, so silicon transistors cannot be favored for future electronics applications. As a result, research has shifted to new device concepts and device materials alternative to silicon. Carbon is an abundant element, and one carbon-based nanomaterial, graphene - extracted from graphite, the same material used as pencil lead - has tremendous potential to take future electronic devices to new heights in terms of size, cost, and efficiency. Since its first experimental isolation in 2004, graphene has been a leading research area in both academia and industry. This dissertation focuses on the analysis and optimization of graphene-based circuits for future electronics. The first part considers graphene-based transistors for analog/radio frequency (RF) circuits: a dual-gate graphene field effect transistor (GFET) is used to build case-study circuits such as a voltage controlled oscillator (VCO) and low ...
Access: This item is restricted to UNT Community Members. Login required if off-campus.
Date: May 2016
Creator: Joshi, Shital
Partner: UNT Libraries

Computational Methods for Vulnerability Analysis and Resource Allocation in Public Health Emergencies

Description: POD (point of dispensing)-based emergency response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, a plan may fail to serve particular vulnerable sub-populations, resulting in access disparities during emergency response. Federal authorities emphasize the need to identify sub-populations that cannot access regular services during an emergency due to their special needs, in order to ensure effective response. Vulnerable individuals require the targeted allocation of appropriate resources to serve those needs, and devising schemes to address them is essential to the effectiveness of response plans. This research focuses on data-driven computational methods to quantify and address vulnerabilities in response plans that require the allocation of targeted resources. Data-driven methods to identify and quantify vulnerabilities in response plans are developed as part of this research. Addressing the vulnerabilities requires the targeted allocation of appropriate resources to PODs; the problem of resource allocation to PODs during public health emergencies is introduced, and variants of the problem, such as spatial allocation, spatio-temporal allocation, and optimal resource subset selection, are formulated. Generating optimal resource allocation and scheduling solutions can be computationally hard, so the application of metaheuristic techniques to find near-optimal solutions is investigated. A vulnerability analysis and resource allocation framework that facilitates demographic analysis of population data in the context of response plans, and the optimal allocation of resources with respect to that analysis, is described.
Date: August 2015
Creator: Indrakanti, Saratchandra
Partner: UNT Libraries
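
A simple greedy baseline makes the spatial allocation variant concrete (a metaheuristic like the ones the abstract mentions would refine such a starting point). The demand figures and unit capacity are invented for illustration:

```python
# Greedy baseline: send each resource unit to the POD with the most
# unserved vulnerable demand.
import numpy as np

def greedy_allocate(vulnerable_demand, capacity_per_unit, n_units):
    """vulnerable_demand: per-POD counts of special-needs individuals."""
    remaining = np.asarray(vulnerable_demand, dtype=float)
    alloc = np.zeros(len(remaining), dtype=int)
    for _ in range(n_units):
        j = int(np.argmax(remaining))        # POD with most unserved demand
        alloc[j] += 1
        remaining[j] = max(0.0, remaining[j] - capacity_per_unit)
    return alloc

print(greedy_allocate([120, 45, 300, 80], capacity_per_unit=50, n_units=6))
```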

Advanced Power Amplifiers Design for Modern Wireless Communication

Description: Modern wireless communication systems use spectrally efficient modulation schemes to reach high data-rate transmission. These schemes generally produce signals with a high peak-to-average power ratio (PAPR). Moreover, next generation wireless communication systems require power amplifiers that operate over a wide frequency band, or over multiple frequency bands, to support different applications; such wide-band and multi-band solutions lead to reductions in both the size and the cost of the whole system. This dissertation presents several advanced power amplifier solutions that provide wide-band and multi-band operation with improved efficiency at power back-off.
Date: August 2015
Creator: Shao, Jin
Partner: UNT Libraries
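
The PAPR figure the abstract refers to is straightforward to compute; the toy QPSK/OFDM-like signal below simply illustrates why multicarrier schemes stress the amplifier:

```python
# Worked example: PAPR of a toy multicarrier baseband signal.
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=64)  # QPSK
x = np.fft.ifft(symbols) * np.sqrt(64)     # OFDM-like time-domain signal

power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR = {papr_db:.1f} dB")          # typically several dB
```

An amplifier must stay linear up to the peaks while the signal spends most of its time near the much lower average, which is why efficiency at power back-off is the design target here.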

Predictive Modeling for Persuasive Ambient Technology

Description: Computer scientists are increasingly aware of the power of ubiquitous computing systems that can display information in and about the user's environment. One subcategory of ubiquitous computing is persuasive ambient information systems, which involve an informative display transitioning between the periphery and the center of attention. The goal of this ambient technology is to produce a behavior change, implying that a display must be informative, unobtrusive, and persuasive. While a significant body of research exists on ambient technology, previous research has not fully explored the different measures for identifying behavior change, evaluation techniques for linking design characteristics to visual effectiveness, or the use of short-term goals to effect long-term behavior change. This study uses the unique context of noise-induced hearing loss (NIHL) among collegiate musicians to explore these issues by developing the MIHL Reduction Feedback System, which collects real-time data, translates it into visuals for music classrooms, provides predictive outcomes for goal-setting persuasion, and provides statistical measures of behavior change.
Date: August 2015
Creator: Powell, Jason W.
Partner: UNT Libraries

Evaluation of Call Mobility on Network Productivity in Long Term Evolution Advanced (LTE-A) Femtocells

Description: The demand for higher data rates for indoor and cell-edge users led to the evolution of small cells. LTE femtocells, one category of small cell, are low-power, low-cost mobile base stations deployed within the coverage area of a traditional macro base station. Cross-tier and co-tier interference occur only when the macrocell and femtocell share the same frequency channels. Open access (OSG), closed access (CSG), and hybrid access are the three existing access-control methods that decide users' connectivity to the femtocell access point (FAP). We define a network performance function, network productivity, to measure the traffic that is carried successfully. In this dissertation, we first evaluate call mobility in an integrated LTE network and determine the optimized network productivity with a variable call arrival rate, in a given LTE deployment with the femtocell access modes (OSG, CSG, hybrid), for a given call blocking vector; the solution to the optimization is the maximum network productivity and the call arrival rates for all cells. In the second scenario, we evaluate call mobility with an increasing number of femtocells and maximize network productivity with a variable distribution of femtocells per macrocell, at a constant call arrival rate, in a uniform LTE deployment with the same access modes and call blocking vector; the solution identifies the deployment at which peak productivity is reached, together with the corresponding call arrival rates for all cells. We analyze the effects of call mobility on network productivity by simulating low-, high-, and no-mobility scenarios and study the impact in terms of offered load, handover traffic, and blocking probabilities. Finally, we evaluate and optimize the performance of the fractional frequency reuse (FFR) mechanism and study the impact of a proposed metric, weighted user satisfaction, with a sectorized FFR configuration.
Date: December 2017
Creator: Sawant, Uttara
Partner: UNT Libraries
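
As background for the call blocking vector mentioned above, the classic Erlang-B formula relates offered load, channel count, and blocking probability; this is a standard building block, not necessarily the dissertation's exact traffic model:

```python
# Erlang-B blocking probability via the standard stable recursion:
# B(0, a) = 1;  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a))
def erlang_b(channels, offered_load_erlangs):
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load_erlangs * b / (n + offered_load_erlangs * b)
    return b

# e.g. a femtocell with 4 channels carrying 2 Erlangs of offered traffic
print(f"blocking = {erlang_b(4, 2.0):.3f}")
```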