You limited your search to:

Partner: UNT Libraries
Department: Department of Computer Science and Engineering
Collection: UNT Theses and Dissertations
Modeling Alcohol Consumption Using Blog Data
How do the content and writing style of people who drink alcoholic beverages stand out from those of non-drinkers? How much can we learn about a person's alcohol consumption behavior by reading text that they have authored? This thesis attempts to extend the methods deployed in authorship attribution and authorship profiling research into the domain of automatically identifying the human action of drinking alcoholic beverages. I examine how a psycholinguistic dictionary (the Linguistic Inquiry and Word Count lexicon, developed by James Pennebaker), together with Kenneth Burke's concept of words as symbols of human action and James Wertsch's concept of mediated action, provides a framework for analyzing meaningful data patterns in the content of blogs written by consumers of alcoholic beverages. The contributions of this thesis to the research field are twofold. First, I show that it is possible to automatically identify blog posts whose content relates to the consumption of alcoholic beverages. Second, I provide a framework and tools to model human behavior through text analysis of blog data. digital.library.unt.edu/ark:/67531/metadc271843/
Optimizing Non-pharmaceutical Interventions Using Multi-coaffiliation Networks
Computational modeling is of fundamental significance in mapping possible disease spread and designing strategies for its mitigation. Conventional contact networks implement the simulation of interactions as random occurrences, presenting public health bodies with a difficult trade-off between realistic model granularity and robust design of intervention strategies. Recently, researchers have been investigating the use of agent-based models (ABMs) to embrace the complexity of real-world interactions. At the same time, theoretical approaches provide epidemiologists with general optimization models in which demographics are intrinsically simplified. The emerging study of affiliation networks and co-affiliation networks provides an alternative to this trade-off. Co-affiliation networks maintain the realism innate to ABMs while reducing the complexity of contact networks into distinctly smaller k-partite graphs, where each partition represents a dimension of the social model. This dissertation studies the optimization of intervention strategies for infectious diseases, mainly within school systems. First, concepts of synthetic populations and affiliation networks are extended to propose a modified algorithm for the synthetic reconstruction of populations. Second, the definition of multi-coaffiliation networks is presented as the main social model in which risk is quantified and evaluated, thereby obtaining vulnerability indications for each school in the system. Finally, maximization of the mitigation coverage and minimization of the overall cost of intervention strategies are proposed and compared, based on centrality measures. digital.library.unt.edu/ark:/67531/metadc271860/
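As one illustration of the kind of affiliation-network analysis described above, the sketch below builds a toy two-mode person-school network with networkx, projects it onto the school partition, and ranks schools by a centrality measure as a rough vulnerability indicator. The affiliations, the choice of betweenness centrality, and the variable names are illustrative assumptions, not the dissertation's actual model.

```python
# Minimal sketch (not the dissertation's model): represent a two-mode
# affiliation network between individuals and schools, project it onto the
# school partition, and rank schools by a centrality measure as a rough
# proxy for their vulnerability in an outbreak.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical affiliations: (person, school)
affiliations = [
    ("p1", "school_A"), ("p2", "school_A"), ("p2", "school_B"),
    ("p3", "school_B"), ("p3", "school_C"), ("p4", "school_C"),
]

G = nx.Graph()
people = {p for p, _ in affiliations}
schools = {s for _, s in affiliations}
G.add_nodes_from(people, bipartite=0)
G.add_nodes_from(schools, bipartite=1)
G.add_edges_from(affiliations)

# Project onto schools: two schools are linked if they share individuals.
school_net = bipartite.weighted_projected_graph(G, schools)

# Betweenness centrality as one possible vulnerability indicator.
vulnerability = nx.betweenness_centrality(school_net, weight="weight")
for school, score in sorted(vulnerability.items(), key=lambda kv: -kv[1]):
    print(school, round(score, 3))
```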
Exploring Memristor Based Analog Design in Simscape
With conventional CMOS technologies approaching their scaling limits, researchers are actively investigating alternative technologies to meet ever-increasing computing and mobile demand. A number of different technologies are currently being studied by different research groups. In the last decade, one-dimensional (1D) carbon nanotubes (CNTs), two-dimensional (2D) graphene, and zero-dimensional (0D) fullerenes have been the subject of intensive research. In 2008, HP Labs announced a ground-breaking fabrication of memristors, the fourth fundamental circuit element postulated by Leon Chua at the University of California, Berkeley in 1971. In the last few years, the memristor has gained a lot of attention from the research community. In-depth studies of the memristor and its analog behavior have convinced the community that it has potential in future nano-architectures for high-density memory and neuromorphic computing architectures. The objective of this thesis is to explore memristors for analog and mixed-signal system design using Simscape. This thesis presents a memristor model in the Simscape language. Simscape has been used as it has the potential for modeling large systems. A memristor-based programmable oscillator is also presented with simulation results and characterization. In addition, simulation results of different memristor models are presented, which are crucial for a detailed understanding of the memristor and its properties. digital.library.unt.edu/ark:/67531/metadc271817/
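For readers unfamiliar with the device, the sketch below numerically integrates the widely cited HP linear ion-drift memristor model in Python. It is only an illustration of the governing equations; the thesis's Simscape model, window function, and parameter values may differ, and all constants here are assumed textbook-style values.

```python
# Minimal numerical sketch of the HP linear ion-drift memristor model
# (illustrative only; the Simscape model in the thesis may use different
# equations, window functions, and parameter values).
import numpy as np

R_ON, R_OFF = 100.0, 16e3      # low/high resistance states (ohms), assumed
D = 10e-9                      # device thickness (m), assumed
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1), assumed
dt = 1e-5                      # integration time step (s)

t = np.arange(0, 0.2, dt)
v = 1.0 * np.sin(2 * np.pi * 10 * t)    # 10 Hz sinusoidal excitation

x = 0.5                        # normalized doped-region width, 0..1
current = np.zeros_like(t)
for k, vk in enumerate(v):
    M = R_ON * x + R_OFF * (1.0 - x)    # memristance
    i = vk / M
    current[k] = i
    x += MU_V * R_ON / D**2 * i * dt    # linear ion-drift state equation
    x = min(max(x, 0.0), 1.0)           # hard clip in place of a window function

# Plotting current against v traces the memristor's characteristic pinched
# hysteresis loop, whose size depends on excitation frequency and amplitude.
```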
Finding Meaning in Context Using Graph Algorithms in Mono- and Cross-lingual Settings
Making computers automatically find the appropriate meaning of words in context is an interesting problem that has proven to be one of the most challenging tasks in natural language processing (NLP). Widespread potential applications of a possible solution to the problem could be envisaged in several NLP tasks such as text simplification, language learning, machine translation, query expansion, information retrieval and text summarization. Ambiguity of words has always been a challenge in these applications, and the traditional endeavor to solve the problem of this ambiguity, namely doing word sense disambiguation using resources like WordNet, has been fraught with debate about the feasibility of the granularity that exists in WordNet senses. The recent trend has therefore been to move away from enforcing any given lexical resource upon automated systems from which to pick potential candidate senses, and to instead encourage them to pick and choose their own resources. Given a sentence with a target ambiguous word, an alternative solution consists of picking potential candidate substitutes for the target, filtering the list of candidates to a much shorter list using various heuristics, and trying to match these system predictions against a human-generated gold standard, with a view to ensuring that the meaning of the sentence does not change after the substitutions. This solution has manifested itself in the SemEval 2007 task of lexical substitution and the more recent SemEval 2010 task of cross-lingual lexical substitution (which I helped organize), where, given an English context and a target word within that context, systems are required to provide between one and ten appropriate substitutes (in English) or translations (in Spanish) for the target word. In this dissertation, I present a comprehensive overview of state-of-the-art research and describe new experiments to tackle the tasks of lexical substitution and cross-lingual lexical substitution. In particular, I attempt to answer some research questions pertinent to the tasks, mostly focusing on completely unsupervised approaches. I present a new framework for unsupervised lexical substitution using graphs and centrality algorithms. An additional novelty in this approach is the use of directional similarity rather than the traditional, symmetric word similarity. Additionally, the thesis also explores the extension of the monolingual framework into a cross-lingual one, and examines how well this cross-lingual framework can work for the monolingual lexical substitution and cross-lingual lexical substitution tasks. A comprehensive set of comparative investigations is presented amongst supervised and unsupervised methods, several graph-based methods, and the use of monolingual and multilingual information. digital.library.unt.edu/ark:/67531/metadc271899/
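To make the graph-and-centrality idea concrete, the sketch below ranks a handful of substitution candidates with weighted PageRank over a directed similarity graph. The candidate words, the similarity scores, and the use of PageRank specifically are illustrative assumptions; the dissertation's own graph construction and directional similarity measure may differ.

```python
# Minimal sketch of ranking lexical-substitution candidates with a graph
# centrality algorithm (PageRank here), using made-up directional similarity
# scores.
import networkx as nx

target = "bright"
candidates = ["intelligent", "smart", "brilliant", "shiny", "luminous"]

# Hypothetical directed similarity scores sim(u -> v); directional similarity
# means sim(u, v) need not equal sim(v, u).
sim = {
    ("intelligent", "smart"): 0.9, ("smart", "intelligent"): 0.8,
    ("brilliant", "intelligent"): 0.7, ("intelligent", "brilliant"): 0.6,
    ("shiny", "luminous"): 0.8, ("luminous", "shiny"): 0.5,
    ("brilliant", "shiny"): 0.3, ("smart", "brilliant"): 0.6,
}

G = nx.DiGraph()
G.add_nodes_from(candidates)
for (u, v), w in sim.items():
    G.add_edge(u, v, weight=w)

# Weighted PageRank: central candidates are those supported by many
# high-similarity neighbours.
scores = nx.pagerank(G, weight="weight")
ranking = sorted(scores, key=scores.get, reverse=True)
print("substitutes for %r:" % target, ranking)
```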
Layout-accurate Ultra-fast System-level Design Exploration Through Verilog-AMS
This research addresses problems in designing analog and mixed-signal (AMS) systems by bridging the gap between system-level and circuit-level simulation: making simulations as fast as system-level models and as accurate as circuit-level ones. The tools proposed include metamodel-integrated Verilog-AMS based design exploration flows. The research involves design centering, metamodel generation flows for creating efficient behavioral models, and Verilog-AMS integration techniques for model realization. The core of the proposed solution is transistor-level and layout-level metamodeling and their incorporation in Verilog-AMS. Metamodeling is used to construct efficient and layout-accurate surrogate models for AMS system building blocks. Verilog-AMS, an AMS hardware description language, is employed to build surrogate model implementations that can be simulated with industry-standard simulators. The case-study circuits and systems include an operational amplifier (OP-AMP), a voltage-controlled oscillator (VCO), a charge-pump phase-locked loop (PLL), and a continuous-time delta-sigma modulator (DSM). The minimum and maximum error rates of the proposed OP-AMP model are 0.11% and 2.86%, respectively. The error rates for the PLL lock time and power estimation are 0.7% and 3.0%, respectively. The OP-AMP optimization using the proposed approach is ~17000× faster than the transistor-level model based approach. The optimization achieves a ~4× power reduction for the OP-AMP design. The PLL parasitic-aware optimization achieves a 10× speedup and a 147 µW power reduction. Thus, the experimental results validate the effectiveness of the proposed solution. digital.library.unt.edu/ark:/67531/metadc271923/
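The essence of metamodeling is replacing expensive circuit simulations with a cheap surrogate fitted to a small number of simulated samples. The sketch below fits a quadratic polynomial surrogate to made-up sample points with numpy; the actual metamodels in the dissertation (their inputs, outputs, and orders) may differ.

```python
# Minimal sketch of the metamodeling idea: fit a cheap polynomial surrogate to
# a handful of (expensive) transistor/layout-level simulation samples and then
# evaluate the surrogate during design exploration. Data below are made up;
# real flows would sample SPICE or parasitic-extracted netlists.
import numpy as np

# Hypothetical samples: design knob x (e.g., a bias current) vs. simulated gain.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
gain = np.array([40.1, 52.3, 58.8, 61.9, 62.7])

# Second-order polynomial metamodel: gain(x) ~ c2*x^2 + c1*x + c0.
coeffs = np.polyfit(x, gain, deg=2)
surrogate = np.poly1d(coeffs)

# The surrogate can now be swept thousands of times at negligible cost,
# which is where the large optimization speedups come from.
sweep = np.linspace(1.0, 5.0, 1000)
best = sweep[np.argmax(surrogate(sweep))]
print("surrogate-predicted best setting:", round(best, 3))
```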
Toward a Data-Type-Based Real Time Geospatial Data Stream Management System
The advent of sensory and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real-time queries on spatial extents. In this thesis, I investigate several fundamental research issues toward building a data-type-based real-time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and the efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work establishes the fundamental building blocks toward a full-fledged geo-stream database management system and has potential impact in many applications such as hazardous weather alerting and monitoring, traffic analysis, and environmental modeling. digital.library.unt.edu/ark:/67531/metadc68070/
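As a rough illustration of a window-based nearest-neighbor operator over a geo-stream, the sketch below keeps only tuples inside a sliding time window and reports, at each arrival, the object closest to a query point. The tuple layout, window length, and operator behavior are assumptions for illustration and do not reproduce the thesis's WNN semantics.

```python
# Minimal sketch of a window-based nearest-neighbour (WNN-style) operator over
# a geo-stream: keep only tuples inside a sliding time window and report the
# tuple closest to a query point.
from collections import deque
from math import hypot

WINDOW = 10.0  # window length in seconds, assumed

def wnn(stream, query_xy):
    """stream yields (timestamp, object_id, x, y) in timestamp order."""
    window = deque()
    qx, qy = query_xy
    for ts, oid, x, y in stream:
        window.append((ts, oid, x, y))
        # Evict tuples that have fallen out of the time window.
        while window and window[0][0] < ts - WINDOW:
            window.popleft()
        nearest = min(window, key=lambda t: hypot(t[2] - qx, t[3] - qy))
        yield ts, nearest[1]   # nearest object id at each stream arrival

demo = [(0.0, "a", 1, 1), (4.0, "b", 5, 5), (12.0, "c", 2, 2)]
print(list(wnn(demo, (0, 0))))   # [(0.0, 'a'), (4.0, 'a'), (12.0, 'c')]
```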
A Wireless Traffic Surveillance System Using Video Analytics
Video surveillance systems have been commonly used in transportation systems to support traffic monitoring, speed estimation, and incident detection. However, there are several challenges in developing and deploying such systems, including high development and maintenance costs, bandwidth bottlenecks over long-range links, and a lack of advanced analytics. In this thesis, I leverage current wireless, video camera, and analytics technologies, and present a wireless traffic monitoring system. I first present an overview of the system. Then I describe the site investigation and several test links with different hardware/software configurations to demonstrate the effectiveness of the system. The system development process was documented to provide guidelines for future development. Furthermore, I propose a novel speed-estimation analytics algorithm that takes into consideration roads with slope angles. I prove the correctness of the algorithm theoretically, and validate its effectiveness experimentally. The experimental results on both synthetic and real datasets show that the algorithm is more accurate than the baseline algorithm 80% of the time. On average, the accuracy improvement of speed estimation is over 3.7%, even for very small slope angles. digital.library.unt.edu/ark:/67531/metadc68005/
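The geometric intuition behind a slope-aware speed estimate can be shown in a few lines: if a camera-derived displacement is measured in the horizontal plane, the along-road distance on a road inclined at angle theta is longer by a factor of 1/cos(theta). The sketch below illustrates only this underlying geometry, with assumed numbers; it is not the thesis's algorithm.

```python
# Minimal sketch of the slope-correction geometry, not the proposed algorithm.
import math

def slope_corrected_speed(horizontal_distance_m, elapsed_s, slope_deg):
    # True travelled distance along an inclined road exceeds its horizontal
    # projection by a factor of 1/cos(theta).
    along_road = horizontal_distance_m / math.cos(math.radians(slope_deg))
    return along_road / elapsed_s          # metres per second

flat = slope_corrected_speed(25.0, 1.0, 0.0)    # 25.00 m/s
hill = slope_corrected_speed(25.0, 1.0, 8.0)    # ~25.25 m/s on an 8-degree slope
print(round(flat, 2), round(hill, 2))
```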
Physical-Layer Network Coding for MIMO Systems
Future wireless communication systems are required to meet growing demands for reliability, bandwidth capacity, and mobility. However, as impairments such as fading and thermal noise are present in the channel, the occurrence of errors is unavoidable. Motivated by this, the work in this dissertation attempts to improve system performance by exploiting schemes that statistically reduce the error rate and, in turn, boost the system throughput. The network can be studied using a simplified model, the two-way relay channel, where two parties exchange messages via the assistance of a relay in between. In such scenarios, this dissertation performs theoretical analysis of the system and derives closed-form and upper-bound expressions for the error probability. These theoretical measurements are potentially helpful references for practical system design. Additionally, several novel transmission methods, including block relaying and permutation modulation for physical-layer network coding, are proposed and discussed. Numerical simulation results are presented to support the validity of the conclusions. digital.library.unt.edu/ark:/67531/metadc68065/
Exploring Privacy in Location-based Services Using Cryptographic Protocols
Location-based services (LBS) are available on a variety of mobile platforms like cell phones, PDAs, etc., and an increasing number of users subscribe to and use these services. Two of the popular models of information flow in LBS are the client-server model and the peer-to-peer model, in both of which existing approaches do not always provide privacy for all parties concerned. In this work, I study the feasibility of applying cryptographic protocols to design privacy-preserving solutions for LBS from an experimental and theoretical standpoint. In the client-server model, I construct a two-phase framework for processing nearest neighbor queries using combinations of cryptographic protocols such as oblivious transfer and private information retrieval. In the peer-to-peer model, I present privacy-preserving solutions for processing group nearest neighbor queries in the semi-honest and dishonest adversarial models. I apply concepts from secure multi-party computation to realize my constructions and also leverage the capabilities of trusted computing technology, specifically TPM chips. My solution for the dishonest adversarial model is also of independent cryptographic interest. I prove my constructions secure under standard cryptographic assumptions, and design experiments for testing the feasibility and practicality of the constructions and benchmarking key operations. My experiments show that the proposed constructions are practical to implement and have reasonable costs, while providing strong privacy assurances. digital.library.unt.edu/ark:/67531/metadc68060/
Performance Analysis of Wireless Networks with QoS Adaptations
The explosive demand for multimedia and fast transmission of continuous media on wireless networks means the simultaneous existence of traffic requiring different qualities of service (QoS). In this thesis, several efficient algorithms have been developed that offer multiple levels of QoS to the end user. We first look at a request TDMA/CDMA protocol for supporting wireless multimedia traffic, where CDMA is laid over TDMA. Then we look at a hybrid push-pull algorithm for wireless networks, and present a generalized performance analysis of the proposed protocol. Some of the QoS factors considered include customer retrial rates due to user impatience and system timeouts, and different levels of priority and weights for mobile hosts. We have also looked at how customer impatience and system timeouts affect the QoS provided by several queuing and scheduling schemes such as FIFO, priority, weighted fair queuing, and the application of the stretch-optimal algorithm to scheduling. digital.library.unt.edu/ark:/67531/metadc4336/
Source and Channel Coding Strategies for Wireless Sensor Networks
In this dissertation, I focus on source coding as well as channel coding techniques. I address the challenges in wireless sensor networks (WSNs) by developing (1) a new source coding strategy for erasure channels that has better distortion performance than multiple description coding (MDC); (2) a new cooperative channel coding strategy for multiple access channels that has better channel outage performance than MIMO; and (3) a new source-channel cooperation strategy to accomplish source-to-fusion-center communication that reduces system distortion and improves outage performance. First, I draw a parallel between the 2x2 MDC scheme and Alamouti's space-time block coding (STBC) scheme and observe the commonality in their mathematical models. This commonality allows us to observe the duality between the two diversity techniques. Making use of this duality, I develop an MDC scheme with a pairwise complex correlating transform. Theoretically, I show that the MDC scheme results in: 1) complete elimination of the estimation error when only one descriptor is received; 2) greater efficiency in recovering the stronger descriptor (with larger variance) from the weaker descriptor; and 3) improved performance in terms of minimized distortion as the quantization error gets reduced. Experiments are also performed on real images to demonstrate these benefits. Second, I present a two-phase cooperative communication strategy and an optimal power allocation strategy to transmit sensor observations to a fusion center in a large-scale sensor network. Outage probability is used to evaluate the performance of the proposed system. Simulation results demonstrate that: 1) when the signal-to-noise ratio is low, the performance of the proposed system is better than that of the MIMO system over uncorrelated slow-fading Rayleigh channels; 2) given the transmission rate and the total transmission SNR, there exists an optimal power allocation that minimizes the outage probability; 3) on correlated slow-fading Rayleigh channels, channel correlation degrades the system performance in linear proportion to the correlation level. Third, I combine the statistical ranking of sensor observations with a cooperative communication strategy in a cluster-based wireless sensor network. This strategy involves two steps: 1) ranking the sensor observations based on their test statistics; 2) building a two-phase cooperative communication model with an optimal power allocation strategy. The result is an optimal system performance that considers both sources and channels. I optimize the proposed model through analyses of the system distortion, and show that the cooperating nodes achieve maximum channel capacity. I also simulate the system distortion and outage to show the benefits of the proposed strategies. digital.library.unt.edu/ark:/67531/metadc177226/
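For reference, the duality mentioned above is between two standard 2x2 constructions; the forms below are common textbook notation, not necessarily the exact notation of the dissertation. The Alamouti code word transmits symbols $s_1, s_2$ over two antennas and two symbol periods as

\[
\mathbf{X} = \begin{pmatrix} s_1 & s_2 \\ -s_2^{*} & s_1^{*} \end{pmatrix},
\]

while a pairwise correlating transform for multiple description coding maps two uncorrelated source samples $(A, B)$ with unequal variances into two correlated descriptions

\[
\begin{pmatrix} C \\ D \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix},
\]

so that if only one description arrives, the other can be estimated from their known correlation.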
Anchor Nodes Placement for Effective Passive Localization
Access: Use of this item is restricted to the UNT Community.
Wireless sensor networks are composed of sensor nodes, which can monitor an environment and observe events of interest. These networks are applied in various fields including, but not limited to, environmental, industrial, and habitat monitoring. In many applications, the exact location of the sensor nodes is unknown after deployment. Localization is the process used to find a sensor node's positional coordinates, which is vital information. Localization is generally assisted by anchor nodes, which are also sensor nodes but with known locations. Anchor nodes are generally expensive and need to be optimally placed for effective localization. Passive localization is one of the localization techniques where the sensor nodes silently listen to global events like thunder sounds, seismic waves, lightning, etc. According to previous studies, the ideal location to place anchor nodes was on the perimeter of the sensor network. This may not be the case in passive localization, since the function of anchor nodes here is different from that of the anchor nodes used in other localization systems. I perform extensive studies on positioning anchor nodes for effective localization. Several simulations are run in dense and sparse networks for proper positioning of anchor nodes. I show that, for effective passive localization, the optimal placement of the anchor nodes is at the center of the network in such a way that no three anchor nodes share linearity. The greater the non-linearity, the better the localization. The localization for this network design proves better when anchor nodes are placed at right angles. digital.library.unt.edu/ark:/67531/metadc33132/
Measuring Vital Signs Using Smart Phones
Smart phones today have become increasingly popular with the general public for their diverse capabilities like navigation, social networking, and multimedia facilities, to name a few. These phones are equipped with high-end processors, high-resolution cameras, and built-in sensors like accelerometers, orientation sensors, light sensors, and much more. According to a comScore survey, 25.3% of US adults use smart phones in their daily lives. Motivated by the capability of smart phones and their extensive usage, I focused on utilizing them for biomedical applications. In this thesis, I present a new application for a smart phone to quantify vital signs such as heart rate, respiratory rate, and blood pressure with the help of its built-in sensors. Using the camera and a microphone, I have shown how the blood pressure and heart rate can be determined for a subject. People sometimes encounter minor situations like fainting, or fatal accidents like car crashes, at unexpected times and places. It would be useful to have a device that can measure all vital signs in such an event. The second part of this thesis demonstrates a new mode of communication for next-generation 9-1-1 calls. In this new architecture, the call-taker will be able to control the multimedia elements in the phone from a remote location. This helps the call-taker or first responder have better control over the situation. Transmission of the vital signs measured using the smart phone can be a life saver in critical situations. In today's voice-oriented 9-1-1 calls, the dispatcher first collects critical information (e.g., location, call-back number) from the caller and assesses the situation. Meanwhile, dispatchers constantly face a "60-second dilemma"; i.e., within 60 seconds, they need to make a complicated but important decision: whether to dispatch and, if so, what to dispatch. Dispatchers often feel that they lack sufficient information to make a confident dispatch decision. The remote media control described in this thesis can facilitate information acquisition and decision-making within the 60-second response window of 9-1-1 calls using new multimedia technologies. digital.library.unt.edu/ark:/67531/metadc33139/
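One common way to obtain a heart rate from a phone camera is to treat the average brightness of successive frames (with a fingertip over the lens) as a photoplethysmographic signal and pick the dominant frequency in the physiological band. The sketch below shows that idea on synthetic data; it is an assumption-laden illustration, not the thesis's exact measurement pipeline.

```python
# Minimal sketch (not the thesis's exact method) of estimating heart rate from
# a smartphone camera: mean frame brightness forms a photoplethysmographic
# signal whose dominant frequency in the 0.7-3.5 Hz band corresponds to the
# pulse.
import numpy as np

def heart_rate_bpm(brightness, fps):
    """brightness: 1-D array of mean frame intensities sampled at fps Hz."""
    signal = brightness - np.mean(brightness)          # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)             # ~42-210 bpm
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0

# Synthetic 30 fps clip with a 1.2 Hz (72 bpm) pulse plus noise.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
demo = 100 + 2 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.3, t.size)
print(round(heart_rate_bpm(demo, fps)))                # ~72
```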
Modeling Synergistic Relationships Between Words and Images
Texts and images provide alternative, yet orthogonal views of the same underlying cognitive concept. By uncovering synergistic, semantic relationships that exist between words and images, I am working to develop novel techniques that can help improve tasks in natural language processing, as well as effective models for text-to-image synthesis, image retrieval, and automatic image annotation. Specifically, in my dissertation, I will explore the interoperability of features between language and vision tasks. In the first part, I will show how it is possible to apply features generated using evidence gathered from text corpora to solve the image annotation problem in computer vision, without the use of any visual information. In the second part, I will address research in the reverse direction, and show how visual cues can be used to improve tasks in natural language processing. Importantly, I propose a novel metric to estimate the similarity of words by comparing the visual similarity of concepts invoked by these words, and show that it can be used further to advance the state-of-the-art methods that employ corpus-based and knowledge-based semantic similarity measures. Finally, I attempt to construct a joint semantic space connecting words with images, and synthesize an evaluation framework to quantify cross-modal semantic relationships that exist between arbitrary pairs of words and images. I study the effectiveness of unsupervised, corpus-based approaches to automatically derive the semantic relatedness between words and images, and perform empirical evaluations by measuring its correlation with human annotators. digital.library.unt.edu/ark:/67531/metadc177223/
Automated Classification of Emotions Using Song Lyrics
This thesis explores the classification of emotions in song lyrics, using automatic approaches applied to a novel corpus of 100 popular songs. I use crowdsourcing via Amazon Mechanical Turk to collect line-level emotion annotations for this collection of song lyrics. I then build classifiers that rely on textual features to automatically identify the presence of one or more of the following six Ekman emotions: anger, disgust, fear, joy, sadness, and surprise. I compare different classification systems and evaluate the performance of the automatic systems against the manual annotations. I also introduce a system that uses data collected from the social network Twitter. I use the Twitter API to collect a large corpus of tweets manually labeled by their authors for one of the six emotions of interest. I then compare the classification of emotions obtained when training on data automatically collected from Twitter versus data obtained through crowdsourced annotations. digital.library.unt.edu/ark:/67531/metadc177253/
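Below is a minimal sketch of a textual-feature emotion classifier of the kind described above, using a bag-of-words representation and a one-vs-rest learner on a toy, hand-made data set. The example lines, labels, and choice of learner are assumptions for illustration; the thesis's features, corpus, and classifiers are its own.

```python
# Minimal sketch: multi-label emotion classification of lyric lines from
# bag-of-words features (toy data, illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

lines = [
    "you make me so angry i could scream",        # anger
    "the smell of this place makes me sick",      # disgust
    "i am terrified of what comes next",          # fear
    "dancing all night with a smile on my face",  # joy
    "i cry alone in the dark tonight",            # sadness
    "what a wonderful surprise to see you here",  # joy, surprise
]
labels = [["anger"], ["disgust"], ["fear"], ["joy"], ["sadness"],
          ["joy", "surprise"]]

mlb = MultiLabelBinarizer(classes=["anger", "disgust", "fear",
                                   "joy", "sadness", "surprise"])
y = mlb.fit_transform(labels)

model = make_pipeline(CountVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(lines, y)

# Per-emotion probabilities for a new line.
probs = model.predict_proba(["i cry alone every night"])[0]
print(dict(zip(mlb.classes_, probs.round(2))))
```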
Non-Uniform Grid-Based Coordinated Routing in Wireless Sensor Networks
Wireless sensor networks are ad hoc networks of tiny battery-powered sensor nodes that organize themselves into networks and collect information regarding temperature, light, and pressure in an area. Though the applications of sensor networks are very promising, sensor nodes are limited in their capability due to many factors. The main limitation of these battery-powered nodes is energy. Sensor networks are expected to work for long periods of time once deployed, and it becomes important to conserve the battery life of the nodes to extend the network lifetime. This work examines a non-uniform grid-based routing protocol as an effort to minimize energy consumption in the network and extend network lifetime. The entire test area is divided into non-uniformly shaped grids. Fixed source and sink nodes with unlimited energy are placed in the network. Sensor nodes with full battery life are deployed uniformly and randomly in the field. The source node floods the network with only the coordinator node active in each grid and the other nodes sleeping. The sink node traces the same route back to the source node through the same coordinators. This process continues till a coordinator node runs out of energy, at which point new coordinator nodes are elected to participate in routing. Thus the network stays alive till the link between the source and sink nodes is lost, i.e., the network is partitioned. This work explores the efficiency of the non-uniform grid-based routing protocol for different node densities and the non-uniform grid structure that best extends network lifetime. digital.library.unt.edu/ark:/67531/metadc9078/
Effective and Accelerated Informative Frame Filtering in Colonoscopy Videos Using Graphic Processing Units
Colonoscopy is an endoscopic technique that allows a physician to inspect the mucosa of the human colon. Previous methods and software solutions to detect informative frames in a colonoscopy video (a process called informative frame filtering or IFF) have been largely ineffective in (1) covering the proper definition of an informative frame in the broadest sense and (2) striking an optimal balance between accuracy and speed of classification in both real-time and non-real-time medical procedures. In my thesis, I propose a more effective method and faster software solutions for IFF. The method is more effective due to the introduction of a heuristic algorithm (derived from experimental analysis of typical colon features) for classification, which contributed a 5-10% boost in various performance metrics for IFF. The software modules are faster due to the incorporation of sophisticated parallel-processing-oriented coding techniques on modern microprocessors. Two IFF modules were created, one for post-procedure use and the other for real time. Code optimizations through NVIDIA CUDA for GPU processing and/or CPU multi-threading concepts embedded in two significant microprocessor design philosophies (multi-core design and many-core design) resulted in a 5-fold acceleration for the post-procedure module and a 40-fold acceleration for the real-time module. Some innovative software modules, which are still in the testing phase, have recently been created to exploit the power of multiple GPUs together. digital.library.unt.edu/ark:/67531/metadc31536/
A CAM-based, high-performance classifier-scheduler for a video network processor.
Classification and scheduling are key functionalities of a network processor. Network processors are equipped with application-specific integrated circuits (ASICs), so that as IP (Internet Protocol) packets arrive, they can be processed directly without using the central processing unit. A new network processor, called the video network processor (VNP), is proposed for real-time broadcasting of video streams for IP television (IPTV). This thesis explores the challenge in designing a combined classification and scheduling module for a VNP. I propose and design the classifier-scheduler module which will classify and schedule data for the VNP. The proposed module discriminates between IP packets and video packets. The video packets are further processed for digital rights management (DRM). IP packets which carry regular traffic traverse without any modification. A basic architecture of the VNP and an architecture of the classifier-scheduler module based on content addressable memory (CAM) and random access memory (RAM) have been proposed. The module has been designed and simulated in Xilinx ISE 9.1i, with a throughput of 1.79 Mbps and a maximum working frequency of 111.89 MHz at a power dissipation of 33.6 mW. The code has been translated and mapped for the Spartan and Virtex families of devices. digital.library.unt.edu/ark:/67531/metadc6045/
Models to Combat Email Spam Botnets and Unwanted Phone Calls
With the amount of email spam received these days, it is hard to imagine that spammers act individually. Nowadays, most spam emails are sent from collections of compromised machines controlled by spammers. These compromised computers are often called bots, using which spammers can send a massive volume of spam within a short period of time. The motivation of this work is to understand and analyze the behavior of spammers through a large collection of spam mails. My research examined a data set collected over a 2.5-year period and developed an algorithm that extracts botnet features and then classifies the botnets into various groups. Principal component analysis was used to study the association patterns of groups of spammers and the individual behavior of a spammer in a given domain, based on the features that capture the maximum variance of the information being clustered. Presence information is a growing tool for more efficient communication, providing new services and features within a business setting and beyond. The main contribution of my thesis is a willingness estimator that can estimate a callee's willingness to take a call without his or her involvement; the model estimates the willingness level based on call history. Finally, the accuracy of the proposed willingness estimator is validated against actual call logs. digital.library.unt.edu/ark:/67531/metadc6095/
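As a rough illustration of using principal component analysis to group spam senders, the sketch below reduces made-up per-sender feature vectors with PCA and clusters the result. The feature columns, the synthetic data, and the use of k-means are assumptions for illustration, not the algorithm developed in the thesis.

```python
# Minimal sketch: PCA + clustering on hypothetical per-spammer feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy feature matrix: rows = spam senders, columns = e.g. message length,
# URLs per message, messages per hour, distinct subject lines.
features = np.vstack([
    rng.normal(loc=[200, 5, 50, 3], scale=1.0, size=(20, 4)),   # "botnet" A
    rng.normal(loc=[80, 1, 5, 30], scale=1.0, size=(20, 4)),    # "botnet" B
])

# Keep the components that capture most of the variance, then cluster.
reduced = PCA(n_components=2).fit_transform(features)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
print(groups)   # two well-separated groups for this synthetic data
```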
General Nathan Twining and the Fifteenth Air Force in World War II
General Nathan F. Twining distinguished himself in leading the American Fifteenth Air Force during the last full year of World War II in the European Theatre. Drawing on the leadership qualities he had already shown in combat in the Pacific Theatre, he was the only USAAF leader who commanded three separate air forces during World War II. His command of the Fifteenth Air Force gave him his biggest, longest lasting, and most challenging experience of the war, which would be the foundation for the reputation that eventually would win him appointment to the nation's highest military post as Chairman of the Joint Chiefs of Staff during the Cold War. digital.library.unt.edu/ark:/67531/metadc6094/
The Role of Intelligent Mobile Agents in Network Management and Routing
In this research, the application of intelligent mobile agents to the management of distributed network environments is investigated. Intelligent mobile agents are programs which can move about network systems in a deterministic manner while carrying their execution state. These agents can be considered an application of distributed artificial intelligence where the (usually small) agent code is moved to the data and executed locally. The mobile agent paradigm offers potential advantages over many conventional mechanisms which move (often large) data to the code, thereby wasting available network bandwidth. The performance of agents in network routing and knowledge acquisition has been investigated and simulated. A working mobile agent system has also been designed and implemented in JDK 1.2. digital.library.unt.edu/ark:/67531/metadc2736/
Extrapolating Subjectivity Research to Other Languages
Socrates articulated it best, "Speak, so I may see you." Indeed, language represents an invisible probe into the mind. It is the medium through which we express our deepest thoughts, our aspirations, our views, our feelings, our inner reality. From the beginning of artificial intelligence, researchers have sought to impart human-like understanding to machines. As much of our language represents a form of self expression, capturing thoughts, beliefs, evaluations, opinions, and emotions which are not available for scrutiny by an outside observer, research involving these aspects of natural language has crystallized under the name of subjectivity and sentiment analysis. While subjectivity classification labels text as either subjective or objective, sentiment classification further divides subjective text into positive, negative, or neutral. In this thesis, I investigate techniques for generating tools and resources for subjectivity analysis that do not rely on an existing natural language processing infrastructure in a given language. This constraint is motivated by the fact that the vast majority of human languages are scarce from an electronic point of view: they lack basic tools such as part-of-speech taggers and parsers, and basic resources such as electronic text, annotated corpora, or lexica. This severely limits the implementation of techniques on par with those developed for English; by applying methods that are lighter in their usage of text processing infrastructure, I am able to conduct multilingual subjectivity research in these languages as well. Since my aim is also to minimize the amount of manual work required to develop lexica or corpora in these languages, the techniques proposed employ a lever approach, where English often acts as the donor language (the fulcrum in a lever) and, with a relatively minimal amount of effort, allows preliminary subjectivity research to be established in a target language. digital.library.unt.edu/ark:/67531/metadc271777/
Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures
Hand and arm gestures are a great means of communication when you don't want to be heard: quieter and often more reliable than whispering into a radio mike. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of my work is to develop an integrated sensor system which will enable tactical squads and SWAT teams to communicate when there is no line of sight or when obstacles are present. The gesture set involved in this work is the standardized set of hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions and measured by sensors directly attached to the skin. Bend sensors use a piezoresistive material to detect the bend; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. EMG sensors are placed on the upper and lower forearm and assist in the classification of the finger and wrist movements. Bend sensors are mounted on a glove that is worn on the hand; the sensors are located over the first knuckle of each finger and can determine whether the finger is bent or not. An accelerometer is attached to the glove at the base of the wrist and determines the speed and direction of the arm movement. A support vector machine (SVM) classification algorithm is used to classify the gestures. digital.library.unt.edu/ark:/67531/metadc271769/
Optimal Access Point Selection and Channel Assignment in IEEE 802.11 Networks
Designing 802.11 wireless networks includes two major components: selection of access points (APs) in the demand areas and assignment of radio frequencies to each AP. Coverage and capacity are key issues when placing APs in a demand area. APs need to cover all users. A user is considered covered if the power received from its corresponding AP is greater than a given threshold. Moreover, from a capacity standpoint, APs need to provide a certain minimum bandwidth to users located in the coverage area. A major challenge in designing wireless networks is the frequency assignment problem. The 802.11 wireless LANs operate in the unlicensed ISM band, and all APs share the same spectrum. As a result, as 802.11 APs become widely deployed, they start to interfere with each other and degrade network throughput. Consequently, efficient assignment of channels becomes necessary to avoid and minimize interference. In this work, an optimal AP selection was developed by balancing traffic load. An optimization problem was formulated that minimizes heavy congestion. As a result, APs in wireless LANs have well-distributed traffic loads, which maximizes the throughput of the network. The channel assignment algorithm was designed by minimizing channel interference between APs. The optimization algorithm assigns channels in such a way that minimizes co-channel and adjacent channel interference, resulting in higher throughput. digital.library.unt.edu/ark:/67531/metadc4687/
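As a simple illustration of channel assignment that avoids co-channel interference, the sketch below greedily assigns the non-overlapping 2.4 GHz channels 1, 6, and 11 over a toy AP interference graph, giving each AP the channel least used by its interfering neighbors. The thesis formulates and solves an optimization problem; this greedy pass and the example graph are only illustrative assumptions.

```python
# Minimal sketch of greedy channel assignment on an AP interference graph.
CHANNELS = [1, 6, 11]   # non-overlapping 2.4 GHz channels

# Hypothetical interference graph: AP -> APs close enough to interfere.
interferes = {
    "AP1": ["AP2", "AP3"],
    "AP2": ["AP1", "AP3", "AP4"],
    "AP3": ["AP1", "AP2"],
    "AP4": ["AP2"],
}

assignment = {}
for ap in sorted(interferes, key=lambda a: -len(interferes[a])):  # busiest first
    neighbour_channels = [assignment[n] for n in interferes[ap] if n in assignment]
    # Choose the channel with the fewest interfering neighbours already on it.
    assignment[ap] = min(CHANNELS, key=lambda c: neighbour_channels.count(c))
print(assignment)   # {'AP2': 1, 'AP1': 6, 'AP3': 11, 'AP4': 6}
```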
Voting Operating System (VOS)
Access: Use of this item is restricted to the UNT Community.
The electronic voting machine (EVM) plays a very important role in a country where government officials are elected into office. Throughout the world, a specific operating system that tends to the specific requirements of the EVM does not exist. Existing EVM technology depends upon the various operating systems currently available, thus ignoring the basic needs of the system. Basic requirements are compromised in order to develop the systems on the basis of an already available operating system, leaving a lot of scope for error. It is necessary to know the specific details of the particular device for which the operating system is being developed. In this document, I evaluate existing EVMs and identify flaws and shortcomings. I propose a solution for a new operating system that meets the specific requirements of the EVM, calling it the Voting Operating System (VOS, pronounced 'voice'). The identification technique can be simplified by using fingerprint technology that determines the identity of a person based on two fingerprints. I also discuss the various parts of the operating system that have to be implemented to tend to all the basic requirements of an EVM, including the implementation of the memory manager, process manager, and file system of the proposed operating system. digital.library.unt.edu/ark:/67531/metadc4648/
Exploring Trusted Platform Module Capabilities: A Theoretical and Experimental Study
Trusted platform modules (TPMs) are hardware modules, bound to a computer's motherboard, that are being included in many desktops and laptops. Augmenting computers with these hardware modules adds powerful functionality in distributed settings, allowing us to reason about the security of these systems in new ways. In this dissertation, I study the functionality of TPMs from a theoretical as well as an experimental perspective. On the theoretical front, I leverage various features of TPMs to construct applications like random oracles that are impossible to implement in a standard model of computation. Apart from random oracles, I construct a new cryptographic primitive which is essentially a non-interactive form of the standard cryptographic primitive of oblivious transfer. I apply this new primitive to secure mobile agent computations, where interaction between various entities is typically required to ensure security. I prove these constructions are secure using standard cryptographic techniques and assumptions. To test the practicability of these constructions and their applications, I performed an experimental study, both on an actual TPM and on a software TPM simulator which has been enhanced to reflect timings from a real TPM. This allowed me to benchmark the performance of the applications and test the feasibility of the proposed extensions to standard TPMs. My tests also show that these constructions are practical. digital.library.unt.edu/ark:/67531/metadc6101/
General Purpose Programming on Modern Graphics Hardware
I start with a brief introduction to the graphics processing unit (GPU) as well as general-purpose computation on modern graphics hardware (GPGPU). Next, I explore the motivations for GPGPU programming, and the capabilities of modern GPUs (including advantages and disadvantages). Also, I give the background required for further exploring GPU programming, including the terminology used and the resources available. Finally, I include a comprehensive survey of previous and current GPGPU work, and end with a look at the future of GPU programming. digital.library.unt.edu/ark:/67531/metadc6112/
Peptide-based hidden Markov model for peptide fingerprint mapping.
Access: Use of this item is restricted to the UNT Community.
Peptide mass fingerprinting (PMF) was the first automated method for protein identification in proteomics, and it remains in common usage today because of its simplicity and the low equipment costs for generating fingerprints. However, one of the problems with PMF is its limited specificity and sensitivity in protein identification. Here I present a method that shows potential to significantly enhance the accuracy of peptide mass fingerprinting, using a machine learning approach based on a hidden Markov model (HMM). This method is applied to improve the differentiation of real protein matches from those that occur by chance. The system was trained using 300 examples of combined real and false-positive protein identification results, and 10-fold cross-validation was applied to assess model discrimination. The model can achieve 93% accuracy in distinguishing real protein identification results from false-positive matches. The receiver operating characteristic (ROC) curve area for the best model was 0.833. digital.library.unt.edu/ark:/67531/metadc4645/
Keywords in the mist: Automated keyword extraction for very large documents and back of the book indexing.
This research addresses the problem of automatic keyphrase extraction from large documents and back-of-the-book indexing. The potential benefits of automating this process are far-reaching, from improving information retrieval in digital libraries to saving countless man-hours by helping professional indexers create back-of-the-book indexes. The dissertation introduces a new methodology to evaluate automated systems, which allows for a detailed, comparative analysis of several techniques for keyphrase extraction. We introduce and evaluate both supervised and unsupervised techniques, designed to balance the resource requirements of an automated system and the best achievable performance. Additionally, a number of novel features are proposed, including a statistical informativeness measure based on chi statistics; an encyclopedic feature that taps into the vast knowledge base of Wikipedia to establish the likelihood of a phrase referring to an informative concept; and a linguistic feature based on sophisticated semantic analysis of the text using current theories of discourse comprehension. The resulting keyphrase extraction system is shown to outperform the current state of the art in supervised keyphrase extraction by a large margin. Moreover, a fully automated back-of-the-book indexing system based on the keyphrase extraction system was shown to lead to back-of-the-book indexes closely resembling those created by human experts. digital.library.unt.edu/ark:/67531/metadc6118/
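To illustrate the chi-based informativeness idea, the sketch below scores a candidate phrase with a standard 2x2 chi-squared statistic comparing its frequency in a document against a background corpus. The counts are invented and the exact formulation used in the dissertation may differ.

```python
# Minimal sketch of a chi-squared-style informativeness score for a candidate
# phrase: compare how often the phrase occurs in the document against how often
# it would be expected to occur given a background corpus.
def chi_squared(phrase_in_doc, doc_tokens, phrase_in_corpus, corpus_tokens):
    # 2x2 contingency table: phrase vs. other tokens, document vs. corpus.
    table = [
        [phrase_in_doc, doc_tokens - phrase_in_doc],
        [phrase_in_corpus, corpus_tokens - phrase_in_corpus],
    ]
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    total = sum(row)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# A phrase that is unusually frequent in the document scores high; one that
# occurs at exactly its background rate scores near zero.
print(round(chi_squared(12, 5000, 40, 2_000_000), 1))   # informative
print(round(chi_squared(1, 5000, 400, 2_000_000), 2))   # not informative
```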
Indoor Localization Using Magnetic Fields
Indoor localization consists of locating oneself inside new buildings. GPS does not work indoors due to multipath reflection and signal blockage. WiFi-based systems assume ubiquitous availability, and infrastructure-based systems require expensive installations, hence making indoor localization an open problem. This dissertation solves the problem of indoor localization by thoroughly exploiting the indoor ambient magnetic fields, comprising mainly disturbances, termed anomalies, in the Earth's magnetic field caused by pillars, doors, and elevators in hallways, which are ferromagnetic in nature. By observing uniqueness in magnetic signatures collected from different campus buildings, the work presents the identification of landmarks and guideposts from these signatures and further develops magnetic maps of buildings, all of which can be used to locate and navigate people indoors. To understand the reason behind these anomalies, first a comparison between the measured and model-generated Earth's magnetic field is made, verifying the presence of a constant field without any disturbances. Then, by modeling the magnetic field behavior of different pillars such as steel-reinforced concrete, solid steel, and other structures like doors and elevators, the interaction of the Earth's field with the ferromagnetic fields is described, thereby explaining the causes of the uniqueness in the signatures that comprise these disturbances. Next, by employing the dynamic time warping algorithm to account for time differences in signatures obtained from users walking at different speeds, an indoor localization application capable of classifying locations using the magnetic signatures is developed solely on the smart phone. The application requires users to walk short distances of 3-6 m anywhere in a hallway to be located with accuracies of 80-99%. The classification framework was further validated with over 90% accuracy using model-generated magnetic signatures representing hallways with different kinds of pillars, doors, and elevators. All in all, this dissertation contributes the following: 1) it provides a framework for understanding the presence of ambient magnetic fields indoors and utilizing them to solve the indoor localization problem; 2) it develops an application that is independent of the user and the smart phone; and 3) it requires no other infrastructure, since it is deployed on a device that encapsulates the sensing, computing, and inferring functionalities, thereby making it a novel contribution to the mobile and pervasive computing domain. digital.library.unt.edu/ark:/67531/metadc103371/
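Dynamic time warping is the standard algorithm for comparing sequences sampled at different rates, which is why it suits signatures recorded at different walking speeds. The sketch below is a textbook DTW implementation applied to made-up one-dimensional magnetic-magnitude sequences; the real signatures and matching pipeline in the dissertation are more elaborate.

```python
# Minimal sketch of dynamic time warping (DTW) over toy magnetic signatures.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

walk_slow = [48, 48, 49, 52, 57, 57, 56, 51, 48, 48]   # stretched in time
walk_fast = [48, 49, 52, 57, 56, 51, 48]               # same anomaly, faster walk
other_hall = [48, 47, 47, 46, 47, 48, 47]

print(dtw_distance(walk_slow, walk_fast))   # small: same magnetic landmark
print(dtw_distance(walk_slow, other_hall))  # larger: different signature
```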
The Design Of A Benchmark For Geo-stream Management Systems
The recent growth in sensor technology allows easier information gathering in real time, as sensors have grown smaller, more accurate, and less expensive. The resulting data is often in a geo-stream format: continuously changing input with a spatial extent. Researchers developing geo-stream management systems (GSMS) require a benchmark system for evaluation, which is currently lacking. This thesis presents GSMark, a benchmark for evaluating GSMSs. GSMark provides a data generator that creates a combination of synthetic and real geo-streaming data, a workload simulator to present the data to the GSMS as a data stream, and a set of benchmark queries that evaluate typical GSMS functionality and query performance. In particular, GSMark generates both moving points and evolving spatial regions, two fundamental data types for a broad range of geo-stream applications, and the geo-streaming queries on this data. digital.library.unt.edu/ark:/67531/metadc103392/
A Netcentric Scientific Research Repository
Access: Use of this item is restricted to the UNT Community.
The Internet and networks in general have become essential tools for disseminating information. Search engines have become the predominant means of finding information on the Web and all other data repositories, including local resources. Domain scientists regularly acquire and analyze images generated by equipment such as microscopes and cameras, resulting in complex image files that need to be managed in a convenient manner. This type of integrated environment has been recently termed a netcentric scientific research repository. I developed a number of data manipulation tools that allow researchers to manage their information more effectively in a netcentric environment. The specific contributions are: (1) A unique interface for management of data including files and relational databases. A wrapper for relational databases was developed so that the data can be indexed and searched using traditional search engines. This approach allows data in databases to be searched with the same interface as other data. Furthermore, this approach makes it easier for scientists to work with their data if they are not familiar with SQL. (2) A Web services based architecture for integrating analysis operations into a repository. This technique allows the system to leverage the large number of existing tools by wrapping them with a Web service and registering the service with the repository. Metadata associated with Web services was enhanced to allow this feature to be included. In addition, an improved binary to text encoding scheme was developed to reduce the size overhead for sending large scientific data files via XML messages used in Web services. (3) Integrated image analysis operations with SQL. This technique allows for images to be stored and managed conveniently in a relational database. SQL supplemented with map algebra operations is used to select and perform operations on sets of images. digital.library.unt.edu/ark:/67531/metadc5611/
Mobile agent security through multi-agent cryptographic protocols.
An increasingly promising and widespread topic of research in distributed computing is the mobile agent paradigm: code travelling and performing computations on remote hosts in an autonomous manner. One of the biggest challenges faced by this new paradigm is security. The issue of protecting sensitive code and data carried by a mobile agent against tampering from a malicious host is particularly hard but important. Based on secure multi-party computation, a recent research direction shows the feasibility of a software-only solution to this problem, which had been deemed impossible by some researchers previously. The best result prior to this dissertation is a single-agent protocol which requires the participation of a trusted third party. Our research employs multi-agent protocols to eliminate the trusted third party, resulting in a protocol with minimum trust assumptions. This dissertation presents one of the first formal definitions of secure mobile agent computation, in which the privacy and integrity of the agent code and data as well as the data provided by the host are all protected. We present secure protocols for mobile agent computation against static, semi-honest, or malicious adversaries without relying on any third party or trusting any specific participant in the system. The security of our protocols is formally proven through standard proof techniques and according to our formal definition of security. Our second result is a more practical agent protocol with strong security against most real-world host attacks. The security features are carefully analyzed, and the practicality is demonstrated through implementation and experimental study on a real-world mobile agent platform. All these protocols rely heavily on well-established cryptographic primitives, such as encrypted circuits, threshold decryption, and oblivious transfer. Our study of these tools yields new contributions to the general field of cryptography. Particularly, we correct a well-known construction of the encrypted circuit and give one of the first provably secure implementations of the encrypted circuit. digital.library.unt.edu/ark:/67531/metadc4473/
Evaluating the Scalability of SDF Single-chip Multiprocessor Architecture Using Automatically Parallelizing Code
Advances in integrated circuit technology continue to provide more and more transistors on a chip. Computer architects are faced with the challenge of finding the best way to translate these resources into high performance. The challenge in the design of next-generation CPUs (central processing units) lies not in trying to use up the silicon area, but in finding smart ways to make use of the wealth of transistors now available. In addition, the next-generation architecture should offer high throughput performance, scalability, modularity, and low energy consumption, instead of an architecture that is suitable for only one class of applications or users, or that only emphasizes a faster clock rate. A program exhibits different types of parallelism: instruction level parallelism (ILP), thread level parallelism (TLP), or data level parallelism (DLP). Likewise, architectures can be designed to exploit one or more of these types of parallelism. It is generally not possible to design architectures that can take advantage of all three types of parallelism without using very complex hardware structures and complex compiler optimizations. We present the state-of-the-art SDF (scheduled dataflow) architecture, which exploits as much TLP as is supplied by the application. We implement an SDF single-chip multiprocessor constructed from simpler processors and execute the automatically parallelized application on the single-chip multiprocessor. SDF has many desirable features such as high throughput, scalability, and low power consumption, which meet the requirements of the next generation of CPU design. Compared with superscalar, VLIW (very long instruction word), and SMT (simultaneous multithreading) architectures, the experimental results show that for applications with very little parallelism SDF is comparable to other architectures, while for applications with large amounts of parallelism SDF outperforms the other architectures. digital.library.unt.edu/ark:/67531/metadc4673/
Intelligent Memory Manager: Towards improving the locality behavior of allocation-intensive applications.
Dynamic memory management required by allocation-intensive (i.e., object-oriented and linked data structured) applications has led to a large number of research trends. Memory performance due to cache misses in these applications continues to lag in terms of execution cycles as the ever-increasing CPU-memory speed gap continues to grow. Sophisticated prefetching techniques, data relocations, and multithreaded architectures have tried to address memory latency. These techniques are not completely successful, since they require either extra hardware/software in the system or special properties in the applications. Software needed for prefetching and data relocation strategies, aimed to improve cache performance, pollutes the cache so that the technique itself becomes counter-productive. On the other hand, the extra hardware complexity needed in multithreaded architectures decelerates the CPU's clock, since "Simpler is Faster." This dissertation, directed to seek the cause of the poor locality behavior of allocation-intensive applications, studies allocators and their impact on the cache performance of these applications. Our study concludes that service functions, in general, and memory management functions, in particular, entangle with the application's code and become the major cause of cache pollution. In this dissertation, we present a novel technique that transfers the allocation and de-allocation functions entirely to a separate processor residing on-chip with DRAM (the Intelligent Memory Manager). Our empirical results show that, on average, 60% of the cache misses caused by allocation and de-allocation service functions are eliminated using our technique. digital.library.unt.edu/ark:/67531/metadc4491/
Graph-Based Keyphrase Extraction Using Wikipedia
Keyphrases describe a document in a coherent and simple way, giving the prospective reader a way to quickly determine whether the document satisfies their information needs. With the pervasion of huge amounts of information on the Web, and only a small fraction of documents having keyphrases assigned, there is a definite need for automatic keyphrase extraction systems. Typically, a document written by a human develops around one or more general concepts or sub-concepts. These concepts or sub-concepts should be structured and semantically related to each other, so that they can form a meaningful representation of the document. Considering the fact that the phrases or concepts in a document are related to each other, a new approach for keyphrase extraction is introduced that exploits the semantic relations in the document. For measuring the semantic relations between concepts or sub-concepts in the document, I present a comprehensive study aimed at using collaboratively constructed semantic resources like Wikipedia and its link structure. In particular, I introduce a graph-based keyphrase extraction system that exploits the semantic relations in the document and features such as term frequency. I evaluated the proposed system using novel measures, and the results obtained compare favorably with previously published results on established benchmarks. digital.library.unt.edu/ark:/67531/metadc67939/
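As an illustration of the graph-based ranking idea described in this abstract, the sketch below ranks candidate phrases with PageRank over a relatedness-weighted graph. The relatedness() function and the candidate list are assumptions standing in for the thesis's Wikipedia-link-based measure and its candidate extraction; this is not the actual system.

```python
import networkx as nx

def rank_keyphrases(candidates, relatedness, top_k=5):
    """Rank candidate phrases by PageRank over a semantic-relatedness graph.

    candidates: list of candidate phrases from one document
    relatedness: hypothetical function (phrase, phrase) -> similarity in [0, 1],
                 e.g., a Wikipedia link-structure based measure
    """
    g = nx.Graph()
    g.add_nodes_from(candidates)
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            w = relatedness(a, b)
            if w > 0:
                g.add_edge(a, b, weight=w)   # related phrases reinforce each other
    scores = nx.pagerank(g, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Term-frequency or position features, mentioned in the abstract, could be folded in as a personalization vector for the PageRank computation.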
Exploring Process-Variation Tolerant Design of Nanoscale Sense Amplifier Circuits
Sense amplifiers are important circuit components of a dynamic random access memory (DRAM), which forms the main memory of digital computers. The importance of the sense amplifier's ability to detect and amplify voltage signals in order to correctly interpret data in DRAM cells cannot be overstated, and the sense amplifier plays a significant role in the overall speed of the DRAM. Sense amplifiers require matched transistors for optimal performance; hence, the effects of mismatch arising from process variations must be minimized. This thesis presents research that leads to optimal nanoscale CMOS sense amplifiers by incorporating the effects of process variation early in the design process. The effects of process variation on the performance of a standard voltage sense amplifier, which is used in conventional DRAMs, are studied. Parametric analysis is performed through circuit simulations to investigate which parameters have the most impact on the performance of the sense amplifier. The figures-of-merit (FoMs) used to characterize the circuit are the precharge time, power dissipation, sense delay, and sense margin. Statistical analysis is also performed to study the impact of process variations on each FoM. By analyzing the results from the statistical study, a method is presented to select parameter values that minimize the effects of process variation. A design flow algorithm incorporating dual-oxide and dual-threshold-voltage based techniques is used to optimize the FoMs for the sense amplifier. Experimental results show that the proposed approach improves precharge time by 83.9%, sense delay by 80.2%, sense margin by 61.9%, and power dissipation by 13.1%. digital.library.unt.edu/ark:/67531/metadc67942/
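A minimal sketch of the kind of statistical (Monte Carlo) study described above: sample transistor threshold voltages around their nominal values and collect statistics for one figure-of-merit. The simulate_fom wrapper, the nominal values, and the sigma are hypothetical placeholders for an actual circuit-simulation run, not the thesis's methodology.

```python
import random
import statistics

def monte_carlo_fom(simulate_fom, nominal_vth=0.4, sigma=0.03, runs=1000):
    """Estimate the spread of one sense-amplifier figure-of-merit under
    threshold-voltage variation. simulate_fom(vth_n, vth_p) is a hypothetical
    wrapper around a circuit simulation that returns, e.g., sense delay."""
    samples = []
    for _ in range(runs):
        vth_n = random.gauss(nominal_vth, sigma)    # NMOS threshold sample
        vth_p = random.gauss(-nominal_vth, sigma)   # PMOS threshold sample
        samples.append(simulate_fom(vth_n, vth_p))
    return statistics.mean(samples), statistics.stdev(samples)
```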
Process-Voltage-Temperature Aware Nanoscale Circuit Optimization
Embedded systems targeted at portable applications are required to have low power consumption because such portable devices are typically powered by batteries. During the memory accesses of such battery-operated portable systems, including laptops, cell phones and other devices, a significant amount of power or energy is consumed, which significantly affects the battery life. Therefore, efficient, leakage-power-saving cache designs are needed for longer operation of battery-powered devices. Design engineers have limited control over many design parameters of the circuit and hence face many challenges due to inherent process technology variations, particularly in static random access memory (SRAM) circuit design. As CMOS process technologies scale down deeper into the nanometer regime, the push for high-performance and reliable systems becomes even more challenging. As a result, developing low-power designs while maintaining good circuit performance becomes a very difficult task. Furthermore, accurate analysis and optimization of the various forms of total power dissipation and of performance in nanoscale CMOS technologies, particularly in SRAMs, is another critical issue to be considered. This dissertation proposes power-leakage and static noise margin (SNM) analysis and methodologies to achieve optimized static random access memories (SRAMs). Alternate topologies of SRAMs, mainly a 7-transistor SRAM, are taken as a case study throughout this dissertation. The optimized cache designs are process-voltage-temperature (PVT) tolerant and consider individual cells as well as memory arrays. digital.library.unt.edu/ark:/67531/metadc67943/
CMOS Active Pixel Sensors for Digital Cameras: Current State-of-the-Art
Image sensors play a vital role in many image sensing and capture applications. Among the various types of image sensors, complementary metal oxide semiconductor (CMOS) based active pixel sensors (APS), which are characterized by reduced pixel size, give fast readouts and reduced noise. APS are used in many applications such as mobile cameras, digital cameras, webcams, and many consumer, commercial and scientific applications. With these developments and applications, CMOS APS designs are challenging the old and mature technology of charge-coupled device (CCD) sensors. With continuous improvements in APS architecture and pixel design, along with the development of nanometer CMOS fabrication technologies, APS are being optimized for optical sensing. In addition, APS offer very low-power and low-voltage operation and are suitable for monolithic integration, thus allowing manufacturers to integrate more functionality on the array and build a low-cost camera-on-a-chip. In this thesis, I explore the current state-of-the-art of CMOS APS by examining various types of APS. I show design and simulation results for one of the most commonly used APS in consumer applications, i.e., the photodiode-based APS. I also present an approach for technology scaling of the devices in the photodiode APS to current CMOS technologies. Finally, I present the most modern CMOS APS technologies by reviewing different design models. The design of the photodiode APS is implemented using commercial CAD tools. digital.library.unt.edu/ark:/67531/metadc3631/
A nano-CMOS based universal voltage level converter for multi-VDD SoCs.
Power dissipation of integrated circuits is the most demanding issue for very large scale integration (VLSI) design engineers, especially for portable and mobile applications. The use of multiple-supply-voltage systems, which employ level converters between voltage islands, is one of the most effective ways to reduce power consumption. In this thesis work, a unique level converter known as the universal level converter (ULC), capable of four distinct level-converting operations, is proposed. The schematic and layout of the ULC are built and simulated using CADENCE. The ULC is characterized through three analyses, namely parametric, power, and load analysis, which show that the design reduces average power consumption by about 85-97% and is capable of producing a stable output at voltages as low as 0.45 V, even under varying load conditions. digital.library.unt.edu/ark:/67531/metadc3602/
VLSI Architecture and FPGA Prototyping of a Secure Digital Camera for Biometric Application
This thesis presents a secure digital camera (SDC) that inserts biometric data into images found in forms of identification such as the newly proposed electronic passport. Putting biometric data in passports, however, makes the data vulnerable to theft, causing privacy-related issues. An effective solution to combating unauthorized access such as skimming (obtaining data from the passport's owner, who did not willingly submit the data) or eavesdropping (intercepting information as it moves from the chip to the reader) could be the judicious use of watermarking and encryption at the source end of the biometric process, in hardware such as digital cameras or scanners. To address such issues, a novel approach and its architecture in the framework of a digital camera, conceptualized as an SDC, are presented. The SDC inserts biometric data into the passport image with the aid of watermarking and encryption processes. The VLSI (very large scale integration) architecture of the functional units of the SDC, such as the watermarking and encryption units, is presented. The results of the hardware implementation of the Rijndael advanced encryption standard (AES) and a discrete cosine transform (DCT) based visible and invisible watermarking algorithm are presented. The prototype chip can carry out simultaneous encryption and watermarking, which to our knowledge is the first of its kind. The encryption unit has a throughput of 500 Mbit/s, and the visible and invisible watermarking units have maximum frequencies of 96.31 MHz and 256 MHz, respectively. digital.library.unt.edu/ark:/67531/metadc5393/
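The sketch below shows, in software, the kind of operation a DCT-based invisible watermarking unit performs: embedding one bit by nudging a mid-frequency coefficient of an 8x8 image block. The coefficient position and the embedding strength are illustrative choices, not the thesis's hardware algorithm.

```python
import numpy as np
from scipy.fftpack import dct, idct

def embed_bit(block, bit, strength=8.0):
    """Embed one watermark bit into an 8x8 grayscale block via its 2-D DCT.

    block: 8x8 array of pixel values
    bit: 0 or 1; the chosen mid-frequency coefficient is pushed down or up
    """
    coeffs = dct(dct(block.astype(float), axis=0, norm="ortho"), axis=1, norm="ortho")
    coeffs[3, 4] += strength if bit else -strength   # mid-frequency: hard to see, robust to mild compression
    return idct(idct(coeffs, axis=1, norm="ortho"), axis=0, norm="ortho")

# Example usage on a synthetic block
block = np.arange(64, dtype=float).reshape(8, 8)
marked = embed_bit(block, bit=1)
```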
Resource Management in Wireless Networks
A local call admission control (CAC) algorithm for third generation wireless networks was designed and implemented, which allows for the simulation of network throughput for different spreading factors and various mobility scenarios. A global CAC algorithm is also implemented and used as a benchmark, since it is inherently optimized; it yields the best possible performance but has intensive computational complexity. The optimized local CAC algorithm achieves performance similar to that of the global CAC algorithm at a fraction of the computational cost. The design of a dynamic channel assignment algorithm for IEEE 802.11 wireless systems is also presented. Channels are assigned dynamically depending on the minimal interference generated by the neighboring access points on a reference access point. Analysis of the dynamic channel assignment algorithm shows an improvement by a factor of 4 over the default setting of having all access points use the same channel, resulting in significantly higher network throughput. digital.library.unt.edu/ark:/67531/metadc5392/
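A toy sketch of the greedy assignment idea the abstract describes: the reference access point picks the channel on which it observes the least aggregate interference from its neighbors. The received-power values and the set of non-overlapping channels are illustrative assumptions, not the thesis's algorithm or metrics.

```python
def assign_channel(neighbor_channels, neighbor_power, channels=(1, 6, 11)):
    """Pick the channel with the least aggregate interference at the reference AP.

    neighbor_channels: channel used by each neighboring access point
    neighbor_power: received power (mW) from each neighbor, same order
    """
    interference = {c: 0.0 for c in channels}
    for ch, p in zip(neighbor_channels, neighbor_power):
        if ch in interference:
            interference[ch] += p
    return min(interference, key=interference.get)

# Example: two neighbors on channel 1, one on channel 6 -> channel 11 is chosen
print(assign_channel([1, 1, 6], [0.5, 0.2, 0.4]))
```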
Group-EDF: A New Approach and an Efficient Non-Preemptive Algorithm for Soft Real-Time Systems
Hard real-time systems in robotics, space and military missions, and control devices are specified with stringent and critical time constraints. On the other hand, soft real-time applications arising from multimedia, telecommunications, Internet web services, and games are specified with more lenient constraints. Real-time systems can also be distinguished, in terms of their implementation, into preemptive and non-preemptive systems. In preemptive systems, tasks are often preempted by higher-priority tasks. Non-preemptive systems are gaining interest for implementing soft real-time applications on multithreaded platforms. In this dissertation, I propose a new algorithm that uses a two-level scheduling strategy for scheduling non-preemptive soft real-time tasks. Our goal is to improve the success ratio of the well-known earliest deadline first (EDF) approach when the load on the system is very high, and to improve the overall performance in both underloaded and overloaded conditions. Our approach, known as group-EDF (gEDF), is based on dynamic grouping of tasks with deadlines that are very close to each other, and using a shortest job first (SJF) technique to schedule tasks within the group. I believe that grouping tasks with similar deadlines dynamically and utilizing secondary criteria, such as minimizing the total execution time, can lead to new and more efficient real-time scheduling algorithms. I present results comparing gEDF with other real-time algorithms, including EDF, best-effort, and guarantee schemes, using randomly generated tasks with varying execution times, release times, deadlines and tolerances to missing deadlines, under varying workloads. Furthermore, I implemented the gEDF algorithm in the Linux kernel and evaluated gEDF for scheduling real applications. digital.library.unt.edu/ark:/67531/metadc5317/
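A minimal sketch of the two-level gEDF idea: order ready tasks by deadline, form a group whose deadlines fall within a window of the earliest one, and run shortest-job-first inside that group. The window definition below is an assumption; the dissertation's exact grouping rule and admission details are not reproduced here.

```python
def gedf_order(tasks, window=0.1):
    """Produce a gEDF-style execution order for non-preemptive tasks.

    tasks: list of (task_id, execution_time, deadline) tuples
    window: fraction of the earliest deadline used to group "close" deadlines (assumed)
    """
    order = []
    pending = sorted(tasks, key=lambda t: t[2])            # level 1: EDF order
    while pending:
        earliest = pending[0][2]
        group = [t for t in pending if t[2] <= earliest * (1 + window)]
        group.sort(key=lambda t: t[1])                     # level 2: SJF within the group
        order.extend(group)
        pending = [t for t in pending if t not in group]
    return [t[0] for t in order]

# Example: tasks with close deadlines get re-ordered by execution time
print(gedf_order([("a", 5, 10), ("b", 2, 11), ("c", 4, 30)]))
```

The example prints ['b', 'a', 'c']: deadlines 10 and 11 fall into one group, so the shorter task runs first, while the distant-deadline task keeps its EDF position.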
Design and Optimization of Components in a 45nm CMOS Phase Locked Loop
Access: Use of this item is restricted to the UNT Community.
A novel scheme for optimizing the individual components of a phase locked loop (PLL), which is used for stable clock generation and synchronization of signals, is considered in this work. Verilog-A is used for the high-level system design of the main components of the PLL, followed by component-wise optimization. The design of experiments (DOE) approach to optimizing the analog, 45nm voltage controlled oscillator (VCO) is presented. Also, a mixed-signal analysis using the analog and digital Verilog behavior of the components is studied. Overall, a high-level system design of a PLL, a systematic optimization of each of its components, and an analog and mixed-signal behavioral design approach have been implemented using Cadence custom IC design tools. digital.library.unt.edu/ark:/67531/metadc5397/
Modeling Infectious Disease Spread Using Global Stochastic Field Simulation
Susceptible-infective-removed (SIR) and its derivatives are the classic mathematical models for the study of infectious diseases in epidemiology. In order to model and simulate epidemics of an infectious disease, a global stochastic field simulation paradigm (GSFS) is proposed, which incorporates geographic and demographic based interactions. The interaction measure between regions is a function of population density and geographical distance, and has been extended to include demographic and migratory constraints. The progression of diseases using GSFS is analyzed, and GSFS exhibits behavior similar to the SIR model when the geographic information systems (GIS) gravity model is used for interactions. The limitations of the SIR and similar models, which assume a homogeneous population with uniform mixing, are addressed by the GSFS model. The GSFS model is oriented toward heterogeneous populations and can incorporate interactions based on geography, demography, environment and migration patterns. The GSFS model allows the progression of diseases to be modeled at higher levels of fidelity, and facilitates optimal deployment of public health resources for prevention, control and surveillance of infectious diseases. digital.library.unt.edu/ark:/67531/metadc5335/
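For reference, the sketch below implements one Euler step of the classic SIR baseline the abstract compares against, with dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I and S, I, R expressed as population fractions. The parameter values in the example are illustrative only; the GSFS paradigm itself is not reproduced here.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """Advance the SIR compartments by one Euler step of length dt (days)."""
    new_infections = beta * s * i * dt    # susceptibles becoming infective
    recoveries = gamma * i * dt           # infectives being removed
    return s - new_infections, i + new_infections - recoveries, r + recoveries

# Example: 100 simulated days with beta=0.3/day, gamma=0.1/day
s, i, r = 0.99, 0.01, 0.0
for _ in range(1000):
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1)
print(round(s, 3), round(i, 3), round(r, 3))
```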
Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic as early as possible. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis for the newer region, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to uniformly spread vaccination, prioritizing the limited vaccination resources to the higher-risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989 to 2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis leads to observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases. The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention. digital.library.unt.edu/ark:/67531/metadc5302/
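As a small illustration of the risk-prioritization idea above, the sketch below applies Bayes' rule over discrete demographic subgroups to rank them by posterior risk given an observed case. The priors and likelihoods are made-up illustrative numbers, not surveillance data, and the dissertation's dynamic Bayesian networks are far richer than this single-step calculation.

```python
def posterior_risk(prior, likelihood):
    """P(group | case) = P(case | group) * P(group) / sum over groups.

    prior: P(group), e.g. census population shares
    likelihood: P(case | group), e.g. per-group attack rates
    """
    joint = {g: prior[g] * likelihood[g] for g in prior}
    total = sum(joint.values())
    return {g: joint[g] / total for g in joint}

# Example (illustrative numbers only): which age groups to prioritize?
prior = {"0-17": 0.25, "18-64": 0.60, "65+": 0.15}
likelihood = {"0-17": 0.02, "18-64": 0.05, "65+": 0.12}
print(posterior_risk(prior, likelihood))
```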
An Integrated Architecture for Ad Hoc Grids
Extensive research has been conducted by the grid community to enable large-scale collaborations in pre-configured environments. Grid collaborations can vary in scale and motivation, resulting in a coarse classification of grids: national grid, project grid, enterprise grid, and volunteer grid. Despite the differences in scope and scale, all the traditional grids in practice share some common assumptions. They support mutually collaborative communities, adopt a centralized control for membership, and assume a well-defined, non-changing collaboration. To support grid applications that do not conform to these assumptions, we propose the concept of ad hoc grids. In the context of this research, we propose a novel architecture for ad hoc grids that integrates a suite of component frameworks. Specifically, our architecture combines the community management framework, security framework, abstraction framework, quality of service framework, and reputation framework. The overarching objective of our integrated architecture is to support a variety of grid applications in a self-controlled fashion with the help of a self-organizing ad hoc community. We introduce mechanisms in our architecture that successfully isolate malicious elements from the community, inherently improving the quality of grid services and extracting deterministic quality assurances from the underlying infrastructure. We also emphasize the technology independence of our architecture, thereby offering the requisite platform for technology interoperability. The feasibility of the proposed architecture is verified with a high-quality ad hoc grid implementation. Additionally, we have analyzed the performance and behavior of ad hoc grids with respect to several control parameters. digital.library.unt.edu/ark:/67531/metadc5300/
Joint Schemes for Physical Layer Security and Error Correction
The major challenges facing resource-constrained wireless devices are error resilience, security and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based approaches. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented in this research, whose complexity is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and they are found to be secure. Randomization tests were also conducted on these schemes, and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, and these implementations show a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide security to the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical layer security approach, the properties of powerful error correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system. The notion of a highly secure and reliable physical layer has the potential to significantly change how communication system designers and users think of the physical layer, since the error control codes employed in this work have the dual roles of providing both reliability and security. digital.library.unt.edu/ark:/67531/metadc84159/
Measuring Semantic Relatedness Using Salient Encyclopedic Concepts
While pragmatics, through its integration of situational awareness and real-world knowledge, offers a high level of analysis that is suitable for real interpretation of natural dialogue, semantics, on the other hand, represents a lower yet more tractable and affordable linguistic level of analysis using current technologies. Generally, the understanding of semantic meaning in the literature has revolved around the famous quote "You shall know a word by the company it keeps." In this thesis we investigate the role of context constituents in decoding the semantic meaning of the engulfing context; specifically, we probe the role of salient concepts, defined as content-bearing expressions which afford encyclopedic definitions, as a suitable source of semantic clues for an unambiguous interpretation of context. Furthermore, we integrate this world knowledge in building a new and robust unsupervised semantic model and apply it to measure semantic relatedness between textual pairs, whether they are words, sentences or paragraphs. Moreover, we explore the abstraction of semantics across languages and utilize our findings in building a novel multi-lingual semantic relatedness model exploiting information acquired from various languages. We demonstrate the effectiveness and the superiority of our mono-lingual and multi-lingual models through a comprehensive set of evaluations on specialized synthetic datasets for semantic relatedness as well as real-world applications such as paraphrase detection and short answer grading. Our work represents a novel approach to integrating world knowledge into current semantic models and a means to cross the language boundary for a better and more robust semantic relatedness representation, thus opening the door for an improved abstraction of meaning that carries the potential of ultimately imparting understanding of natural language to machines. digital.library.unt.edu/ark:/67531/metadc84212/
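A minimal sketch of the general family this model belongs to: represent each text as a weighted profile of salient encyclopedic concepts (e.g., Wikipedia titles) and score relatedness as the cosine between profiles. How the profiles are built is the thesis's contribution and is not reproduced here; the concepts and weights below are illustrative.

```python
import math

def relatedness(profile_a, profile_b):
    """Cosine similarity between two concept profiles (dict: concept -> weight)."""
    shared = set(profile_a) & set(profile_b)
    dot = sum(profile_a[c] * profile_b[c] for c in shared)
    norm_a = math.sqrt(sum(w * w for w in profile_a.values()))
    norm_b = math.sqrt(sum(w * w for w in profile_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Example: concept profiles for two short texts (weights are illustrative)
text1 = {"Automobile": 0.8, "Engine": 0.5, "Fuel": 0.3}
text2 = {"Automobile": 0.6, "Road": 0.4, "Fuel": 0.2}
print(round(relatedness(text1, text2), 3))
```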
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling
Access: Use of this item is restricted to the UNT Community.
Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristics of the object for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise and illumination. Scale-invariant features have wide applications such as image classification, object recognition and object tracking in the image processing area. In this thesis, color features and SIFT (scale invariant feature transform) are considered as scale-invariant features. The classification, recognition and tracking results were evaluated with novel evaluation criteria and compared with some existing methods. I also studied different types of scale-invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I demonstrate the performance of the developed algorithms on several scene analysis tasks, including object tracking, video stabilization, medical video segmentation and scene classification. digital.library.unt.edu/ark:/67531/metadc84275/
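A brief sketch of SIFT-based object location in a test scene, using OpenCV's SIFT implementation with Lowe's ratio test (assumes opencv-python 4.4 or later, where SIFT is included); this illustrates the feature-matching step only, not the thesis's probabilistic models or adaptive feature combination.

```python
import cv2

def sift_match(train_path, test_path, ratio=0.75):
    """Match SIFT keypoints between a training image of an object and a test scene."""
    train = cv2.imread(train_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_train, des_train = sift.detectAndCompute(train, None)
    kp_test, des_test = sift.detectAndCompute(test, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for m, n in matcher.knnMatch(des_train, des_test, k=2):
        if m.distance < ratio * n.distance:   # keep only distinctive matches
            good.append(m)
    return kp_train, kp_test, good
```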