You limited your search to:

  Partner: UNT Libraries
  Decade: 2010-2019
  Degree Discipline: Computer Science and Engineering
  Collection: UNT Theses and Dissertations
Automated Real-time Objects Detection in Colonoscopy Videos for Quality Measurements
The effectiveness of colonoscopy depends on the quality of the inspection of the colon. There was no automated measurement method to evaluate the quality of the inspection. This thesis addresses this issue by investigating an automated post-procedure quality measurement technique and proposing a novel approach that automatically determines the percentage of stool areas in images of digitized colonoscopy video files. It classifies image pixels based on their color features using a new method of planes in the RGB (red, green, and blue) color space. The limitation of post-procedure quality measurement is that the measurements become available long after the procedure is done and the patient has been released. A better approach is to report any sub-optimal inspection immediately, so that the endoscopist can improve the quality in real time during the procedure. This thesis therefore also proposes an extension of the post-procedure method that detects stool, bite-block, and blood regions in real time using color features in the HSV color space; these three objects play a major role in colonoscopy quality measurements. The proposed method partitions the very large set of positive examples of each object into a number of groups, formed by intersecting the positive examples with a hyperplane called a 'positive plane'. Convex hulls are used to model the positive planes. Comparisons with traditional classifiers such as K-nearest neighbor (K-NN) and support vector machines (SVM) demonstrate the soundness of the proposed method in terms of accuracy and speed, both critical for the targeted real-time quality measurement system. digital.library.unt.edu/ark:/67531/metadc283843/
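To make the positive-plane idea concrete, the following is a minimal sketch, assuming the positive HSV examples are grouped by slices of the V (value) component and each group's (H, S) footprint is modeled with a convex hull; the slicing rule, group count, and function names are illustrative, not the thesis's actual construction.

```python
# Hedged sketch: classify pixel colors against convex hulls of positive
# examples, in the spirit of the "positive plane" idea. Grouping by V
# slices and the slice count are illustrative assumptions.
import numpy as np
from scipy.spatial import Delaunay

def build_hull_models(positive_hsv, num_planes=8):
    """Partition positive examples into V slices and model each group's
    (H, S) footprint with a convex hull (via Delaunay triangulation)."""
    models = []
    edges = np.linspace(positive_hsv[:, 2].min(),
                        positive_hsv[:, 2].max(), num_planes + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        group = positive_hsv[(positive_hsv[:, 2] >= lo) &
                             (positive_hsv[:, 2] <= hi)]
        if len(group) >= 4:  # need enough points for a 2-D hull
            models.append((lo, hi, Delaunay(group[:, :2])))
    return models

def classify_pixel(hsv, models):
    """True if the pixel's (H, S) falls inside the hull of its V slice."""
    for lo, hi, hull in models:
        if lo <= hsv[2] <= hi:
            return hull.find_simplex(hsv[:2]) >= 0
    return False
```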
Detection of Temporal Events and Abnormal Images for Quality Analysis in Endoscopy Videos
Recent reports suggest that objectively measuring quality is essential to the success of colonoscopy. Several quality indicators (i.e., metrics) proposed in recent studies are implemented in software systems that compute real-time quality scores for routine screening colonoscopy. Most quality metrics are derived from various temporal events that occur during the colonoscopy procedure. The location of the phase boundary between the insertion and the withdrawal phases and the amount of circumferential inspection are two such important temporal events, and both can be determined by analyzing the camera motions of the colonoscope. This dissertation puts forward a novel method to estimate the X, Y, and Z directional motions of the colonoscope using motion vector templates. Since abnormalities in a WCE (wireless capsule endoscopy) or colonoscopy video appear in only a small number of frames (around 5% of the total), it is very helpful if a computer system can decide whether a frame contains any mucosal abnormalities; the number of abnormal lesions detected during a procedure is also used as a quality indicator. The majority of existing abnormality detection methods focus on detecting only one type of abnormality, or their overall accuracies are somewhat low when they try to detect multiple abnormalities. Most abnormalities in endoscopy images have unique textures that are clearly distinguishable from normal textures. This dissertation proposes a new method that detects multiple abnormalities with higher accuracy using a multi-texture analysis technique, designed by representing WCE and colonoscopy image textures as textons. digital.library.unt.edu/ark:/67531/metadc283849/
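As a rough illustration of the texton representation, here is a sketch that clusters filter-bank responses into a texton dictionary and turns each frame into a texton histogram; the filter bank, cluster count, and helper names are assumptions for illustration only.

```python
# Hedged sketch of texton-based texture representation: per-pixel filter
# responses are clustered into a texton dictionary, and each frame is
# summarized as a texton histogram for classification.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def filter_responses(gray):
    """Stack a small, assumed filter bank (Gaussians + derivatives)."""
    responses = [ndimage.gaussian_filter(gray, s) for s in (1, 2, 4)]
    responses += [ndimage.sobel(gray, axis=a) for a in (0, 1)]
    return np.stack([r.ravel() for r in responses], axis=1)

def learn_textons(training_frames, k=32):
    feats = np.vstack([filter_responses(f) for f in training_frames])
    return KMeans(n_clusters=k, n_init=10).fit(feats)

def texton_histogram(gray, textons):
    labels = textons.predict(filter_responses(gray))
    hist = np.bincount(labels, minlength=textons.n_clusters)
    return hist / hist.sum()
```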
Evaluating Appropriateness of EMG and Flex Sensors for Classifying Hand Gestures
Hand and arm gestures are a great way to communicate when you don't want to be heard: quieter and often more reliable than whispering into a radio mike. In recent years, hand gesture identification has become a major active area of research due to its use in various applications. The objective of my work is to develop an integrated sensor system that will enable tactical squads and SWAT teams to communicate in the absence of a line of sight or in the presence of obstacles. The gesture set involved in this work is the standardized hand signals for close range engagement operations used by military and SWAT teams, broadly divided into finger movements and arm movements. The core components of the integrated sensor system are surface EMG sensors, flex sensors, and accelerometers. Surface EMG is the electrical activity produced by muscle contractions, measured by sensors attached directly to the skin. Bend sensors use a piezo-resistive material to detect bending; the sensor output is determined by both the angle between the ends of the sensor and the flex radius. Accelerometers sense dynamic acceleration and inclination in three directions simultaneously. The EMG sensors are placed on the upper and lower forearm and assist in classifying finger and wrist movements. The bend sensors are mounted on a glove worn on the hand, located over the first knuckle of each finger, and can determine whether the finger is bent. An accelerometer attached to the glove at the base of the wrist determines the speed and direction of arm movement. A support vector machine (SVM) is used to classify the gestures. digital.library.unt.edu/ark:/67531/metadc271769/
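A minimal sketch of the fusion-and-classify step, assuming RMS features for the EMG channels and mean features for the flex and accelerometer channels; the windowing and feature choices are illustrative, not the exact pipeline of the thesis.

```python
# Hedged sketch: fuse EMG, flex, and accelerometer channels into one
# feature vector per window and train an SVM, as the abstract describes.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def window_features(emg, flex, accel):
    """emg: (n, channels) window; flex: (5,) knuckle bends; accel: (n, 3)."""
    rms = np.sqrt((emg ** 2).mean(axis=0))          # muscle activation level
    return np.concatenate([rms, flex, accel.mean(axis=0)])

def train_gesture_classifier(windows, labels):
    """windows: list of (emg, flex, accel) tuples; labels: gesture names."""
    X = np.array([window_features(*w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```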
Exploring Privacy in Location-based Services Using Cryptographic Protocols
Location-based services (LBS) are available on a variety of mobile platforms such as cell phones and PDAs, and an increasing number of users subscribe to and use these services. Two popular models of information flow in LBS are the client-server model and the peer-to-peer model, and in both, existing approaches do not always provide privacy for all parties concerned. In this work, I study the feasibility of applying cryptographic protocols to design privacy-preserving solutions for LBS from an experimental and theoretical standpoint. In the client-server model, I construct a two-phase framework for processing nearest neighbor queries using combinations of cryptographic protocols such as oblivious transfer and private information retrieval. In the peer-to-peer model, I present privacy-preserving solutions for processing group nearest neighbor queries in the semi-honest and dishonest adversarial models. I apply concepts from secure multi-party computation to realize my constructions and also leverage the capabilities of trusted computing technology, specifically TPM chips; my solution for the dishonest adversarial model is also of independent cryptographic interest. I prove my constructions secure under standard cryptographic assumptions, design experiments to test their practicability, and benchmark the key operations. My experiments show that the proposed constructions are practical to implement and have reasonable costs while providing strong privacy assurances. digital.library.unt.edu/ark:/67531/metadc68060/
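For flavor, here is a toy version of one building block of the kind such frameworks combine: the classic two-server XOR-based private information retrieval query. This is a textbook construction, not the dissertation's actual protocol, and the record format is an assumption.

```python
# Hedged sketch of two-server XOR PIR: the client sends each server a
# random-looking subset mask, so neither server learns which record is
# being fetched; XORing the two answers recovers the record.
import secrets

def pir_query(n, index):
    """Client: random mask for server A; server B's mask differs only
    at the target index."""
    mask_a = [secrets.randbits(1) for _ in range(n)]
    mask_b = list(mask_a)
    mask_b[index] ^= 1
    return mask_a, mask_b

def pir_answer(db, mask):
    """Server: XOR of the selected fixed-size byte-string records."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, mask):
        if bit:
            acc = bytes(x ^ y for x, y in zip(acc, rec))
    return acc

def pir_decode(ans_a, ans_b):
    """Client: XOR the two answers to recover the record."""
    return bytes(x ^ y for x, y in zip(ans_a, ans_b))
```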
Extrapolating Subjectivity Research to Other Languages
Socrates articulated it best: "Speak, so I may see you." Indeed, language represents an invisible probe into the mind. It is the medium through which we express our deepest thoughts, our aspirations, our views, our feelings, our inner reality. From the beginning of artificial intelligence, researchers have sought to impart human-like understanding to machines. Because much of our language is a form of self-expression, capturing thoughts, beliefs, evaluations, opinions, and emotions that are not available for scrutiny by an outside observer, research involving these aspects of natural language has crystallized under the name of subjectivity and sentiment analysis. While subjectivity classification labels text as either subjective or objective, sentiment classification further divides subjective text into positive, negative, or neutral. In this thesis, I investigate techniques for generating tools and resources for subjectivity analysis that do not rely on an existing natural language processing infrastructure in a given language. This constraint is motivated by the fact that the vast majority of human languages are resource-scarce from an electronic point of view: they lack basic tools such as part-of-speech taggers and parsers, and basic resources such as electronic text, annotated corpora, or lexica. This severely limits the implementation of techniques on par with those developed for English; by applying methods that are lighter in their use of text-processing infrastructure, we are able to conduct multilingual subjectivity research in these languages as well. Since my aim is also to minimize the manual work required to develop lexica or corpora in these languages, the proposed techniques employ a lever approach, in which English often acts as the donor language (the fulcrum of the lever) and allows preliminary subjectivity research in a target language to be established with relatively minimal effort. digital.library.unt.edu/ark:/67531/metadc271777/
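As a tiny illustration of the lever approach, the sketch below projects an English subjectivity lexicon into a target language through a bilingual dictionary; the data structures and label scheme are assumptions, and a real projection would also need disambiguation and filtering.

```python
# Hedged sketch of lexicon projection: port an English subjectivity
# lexicon into a target language via a bilingual dictionary, the kind
# of low-resource "lever" step the abstract describes.
def project_lexicon(english_lexicon, bilingual_dict):
    """english_lexicon: {word: 'strongsubj'|'weaksubj'};
    bilingual_dict: {english_word: [target_translations]}."""
    target_lexicon = {}
    for word, strength in english_lexicon.items():
        for translation in bilingual_dict.get(word, []):
            # keep the strongest label seen for a target word
            if target_lexicon.get(translation) != "strongsubj":
                target_lexicon[translation] = strength
    return target_lexicon
```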
Finding Meaning in Context Using Graph Algorithms in Mono- and Cross-lingual Settings
Making computers automatically find the appropriate meaning of words in context is an interesting problem that has proven to be one of the most challenging tasks in natural language processing (NLP). A solution would have widespread potential applications in NLP tasks such as text simplification, language learning, machine translation, query expansion, information retrieval, and text summarization. Word ambiguity has always been a challenge in these applications, and the traditional endeavor to resolve it, word sense disambiguation using resources like WordNet, has been fraught with debate about the feasibility of the granularity of WordNet senses. The recent trend has therefore been to move away from enforcing any given lexical resource upon automated systems from which to pick potential candidate senses, and to instead encourage them to pick and choose their own resources. Given a sentence with a target ambiguous word, an alternative solution consists of picking potential candidate substitutes for the target, filtering the list of candidates to a much shorter list using various heuristics, and matching these system predictions against a human-generated gold standard, with a view to ensuring that the meaning of the sentence does not change after the substitutions. This solution has manifested itself in the SemEval 2007 task of lexical substitution and the more recent SemEval 2010 task of cross-lingual lexical substitution (which I helped organize), where, given an English context and a target word within that context, systems are required to provide between one and ten appropriate substitutes (in English) or translations (in Spanish) for the target word. In this dissertation, I present a comprehensive overview of state-of-the-art research and describe new experiments to tackle the tasks of lexical substitution and cross-lingual lexical substitution. In particular, I attempt to answer research questions pertinent to the tasks, mostly focusing on completely unsupervised approaches. I present a new framework for unsupervised lexical substitution using graphs and centrality algorithms; an additional novelty in this approach is the use of directional similarity rather than the traditional symmetric word similarity. The thesis also explores the extension of the monolingual framework into a cross-lingual one, and examines how well this cross-lingual framework works for both the monolingual and cross-lingual lexical substitution tasks. A comprehensive set of comparative investigations is presented among supervised and unsupervised methods, several graph-based methods, and the use of monolingual and multilingual information. digital.library.unt.edu/ark:/67531/metadc271899/
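A minimal sketch of the graph-centrality idea, using PageRank over a directed candidate graph whose edge weights come from a directional similarity function; the similarity function, candidate list, and the choice of PageRank specifically are illustrative assumptions.

```python
# Hedged sketch: rank substitution candidates with a graph centrality
# algorithm over a directed similarity graph, echoing the unsupervised
# framework the abstract outlines.
import networkx as nx

def rank_candidates(candidates, similarity):
    """candidates: list of words; similarity(a, b): directional score,
    so similarity(a, b) need not equal similarity(b, a)."""
    g = nx.DiGraph()
    for a in candidates:
        for b in candidates:
            if a != b:
                w = similarity(a, b)
                if w > 0:
                    g.add_edge(a, b, weight=w)
    scores = nx.pagerank(g, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)
```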
Incremental Learning with Large Datasets
This dissertation focuses on a novel learning strategy based on geometric support vector machines to address the difficulties of processing immense data sets. Support vector machines find the hyperplane that maximizes the margin between two classes, and because the decision boundary is represented with only a few training samples, they are a favorable choice for incremental learning. The dissertation presents a novel method, Geometric Incremental Support Vector Machines (GISVM), to address both efficiency and accuracy in handling massive data sets. In GISVM, the skin of a convex hull is defined, and an efficient method is designed to find the best skin approximation given the available examples. The set of extreme points is found by recursively searching along the direction defined by a pair of known extreme points. By identifying the skin of the convex hulls, incremental learning employs a much smaller number of samples with comparable or even better accuracy. When additional samples are provided, they are used together with the skin of the convex hull constructed from the previous dataset, resulting in a small number of instances used in the incremental steps of the training process. Based on experimental results with synthetic data sets, public benchmark data sets from UCI, and endoscopy videos, it is evident that GISVM achieves satisfactory classifiers that closely model the underlying data distribution. GISVM improves sensitivity in the incremental steps, significantly reduces the demand for memory space, and demonstrates the ability to recover from temporary performance degradation. digital.library.unt.edu/ark:/67531/metadc149595/
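A simplified sketch of the hull-skin idea: between increments, only the convex-hull vertices of each class are retained and combined with newly arriving samples before retraining. The exact-hull computation below stands in for GISVM's recursive skin approximation, which is what makes the approach practical in higher dimensions.

```python
# Hedged sketch of hull-based incremental SVM training: keep only the
# convex-hull "skin" of each class between increments, retrain on
# skin + new samples. A low-dimensional simplification of GISVM.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

def hull_skin(points):
    """Keep only the extreme points (hull vertices) of one class."""
    if len(points) < 4:
        return points
    return points[ConvexHull(points).vertices]

def incremental_fit(skin_X, skin_y, new_X, new_y):
    X = np.vstack([skin_X, new_X]) if len(skin_X) else new_X
    y = np.concatenate([skin_y, new_y]) if len(skin_y) else new_y
    clf = SVC(kernel="linear").fit(X, y)
    # recompute the per-class skin for the next increment
    skins = [(hull_skin(X[y == c]), c) for c in np.unique(y)]
    skin_X = np.vstack([s for s, _ in skins])
    skin_y = np.concatenate([np.full(len(s), c) for s, c in skins])
    return clf, skin_X, skin_y
```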
Indoor Localization Using Magnetic Fields
Indoor localization consists of locating oneself inside new buildings. GPS does not work indoors due to multipath reflection and signal blockage, WiFi-based systems assume ubiquitous availability, and infrastructure-based systems require expensive installations, making indoor localization an open problem. This dissertation solves the problem of indoor localization by thoroughly exploiting the indoor ambient magnetic fields, which consist mainly of disturbances, termed anomalies, in the Earth's magnetic field caused by pillars, doors, and elevators in hallways, all of which are ferromagnetic in nature. Observing the uniqueness of magnetic signatures collected from different campus buildings, the work identifies landmarks and guideposts from these signatures and further develops magnetic maps of buildings, all of which can be used to locate and navigate people indoors. To understand the reason behind these anomalies, a comparison between the measured and model-generated Earth's magnetic field is first made, verifying the presence of a constant field without any disturbances. Then, by modeling the magnetic field behavior of different pillars, such as steel-reinforced concrete and solid steel, and other structures like doors and elevators, the interaction of the Earth's field with these ferromagnetic structures is described, explaining the causes of the uniqueness in the signatures that comprise these disturbances. Next, by employing the dynamic time warping algorithm to account for time differences between signatures obtained from users walking at different speeds, an indoor localization application capable of classifying locations using the magnetic signatures is developed entirely on the smartphone. The application requires users to walk short distances of 3-6 m anywhere in a hallway to be located with accuracies of 80-99%. The classification framework was further validated, with over 90% accuracy, using model-generated magnetic signatures representing hallways with different kinds of pillars, doors, and elevators. All in all, this dissertation contributes the following: 1) it provides a framework for understanding the presence of ambient magnetic fields indoors and utilizing them to solve the indoor localization problem; 2) it develops an application that is independent of the user and the smartphone; and 3) it requires no other infrastructure, since it is deployed on a device that encapsulates the sensing, computing, and inferring functionalities, making it a novel contribution to the mobile and pervasive computing domain. digital.library.unt.edu/ark:/67531/metadc103371/
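The matching step can be illustrated with a plain dynamic time warping distance between two magnetic-magnitude signatures, followed by nearest-neighbor classification over stored, labeled signatures; the O(nm) dynamic program below is the textbook formulation, without the optimizations a phone implementation would likely need.

```python
# Hedged sketch: DTW distance between two 1-D magnetic signatures, used
# to absorb walking-speed differences, then nearest-neighbor matching.
import numpy as np

def dtw_distance(sig_a, sig_b):
    n, m = len(sig_a), len(sig_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(sig_a[i - 1] - sig_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch sig_a
                                 D[i, j - 1],      # stretch sig_b
                                 D[i - 1, j - 1])  # match
    return D[n, m]

def classify_location(query, labeled_signatures):
    """labeled_signatures: list of (label, signature) pairs, one per
    hallway segment; returns the label of the nearest signature."""
    return min(labeled_signatures,
               key=lambda item: dtw_distance(query, item[1]))[0]
```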
The Influence of Social Network Graph Structure on Disease Dynamics in a Simulated Environment
The fight against epidemics and pandemics is one of man versus nature. Technological advances have not only improved existing methods for monitoring and controlling disease outbreaks, but have also provided new means of investigation, such as modeling and simulation. This dissertation explores the relationship between social structure and disease dynamics. Social structures are modeled as graphs, and outbreaks are simulated based on a well-recognized standard, the susceptible-infectious-removed (SIR) paradigm. Two independent but related studies are presented. The first involves measuring the severity of outbreaks as social network parameters are altered. The second investigates the efficacy of various vaccination policies based on social structure. Three disease-related centrality measures are introduced: contact, transmission, and spread centrality, which are related to the previously established centrality measures degree, betweenness, and closeness, respectively. The results of the experiments presented in this dissertation indicate that reducing the neighborhood size along with outside-of-neighborhood contacts diminishes the severity of disease outbreaks, and vaccination strategies can effectively reduce these parameters. Additionally, vaccination policies that target individuals with high centrality are generally shown to be slightly more effective than a random vaccination policy. These results, combined with past and future studies, will assist public health officials in their effort to minimize the effects of inevitable disease epidemics and pandemics. digital.library.unt.edu/ark:/67531/metadc33173/
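A minimal sketch of an SIR outbreak simulated on a social-network graph, the common setup of both studies; the transmission and recovery rates, seeding, and synchronous update scheme are illustrative assumptions.

```python
# Hedged sketch: SIR dynamics on a graph. Each step, infectious nodes
# transmit along edges with probability beta and recover with gamma;
# the final removed count measures outbreak severity.
import random
import networkx as nx

def simulate_sir(g, beta=0.1, gamma=0.05, seeds=1, steps=200):
    state = {n: "S" for n in g}
    for n in random.sample(list(g), seeds):
        state[n] = "I"
    for _ in range(steps):
        infected = [n for n in g if state[n] == "I"]
        if not infected:
            break
        for n in infected:
            for nb in g.neighbors(n):            # contact along edges
                if state[nb] == "S" and random.random() < beta:
                    state[nb] = "I"
            if random.random() < gamma:          # recovery/removal
                state[n] = "R"
    return sum(1 for s in state.values() if s == "R")

# e.g. outbreak size on a small-world social structure:
# size = simulate_sir(nx.watts_strogatz_graph(1000, 6, 0.1))
```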
Joint Schemes for Physical Layer Security and Error Correction
The major challenges facing resource-constrained wireless devices are error resilience, security, and speed. Three joint schemes are presented in this research, which can be broadly divided into error-correction-based and cipher-based. The error-correction-based ciphers take advantage of the properties of LDPC codes and the Nordstrom-Robinson code. A cipher-based cryptosystem is also presented, whose complexity is reduced compared to conventional schemes. The security of the ciphers is analyzed against known-plaintext and chosen-plaintext attacks, and the ciphers are found to be secure. Randomization tests were also conducted on these schemes and the results are presented. As a proof of concept, the schemes were implemented in software and hardware, showing a reduction in hardware usage compared to conventional schemes. As a result, joint schemes for error correction and security provide security to the physical layer of wireless communication systems, a layer in the protocol stack where currently little or no security is implemented. In this physical-layer security approach, the properties of powerful error-correcting codes are exploited to deliver reliability to the intended parties, high security against eavesdroppers, and efficiency in the communication system. The notion of a highly secure and reliable physical layer has the potential to significantly change how communication system designers and users think of the physical layer, since the error control codes employed in this work have the dual roles of reliability and security. digital.library.unt.edu/ark:/67531/metadc84159/
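For intuition about joint error-correction/security schemes, here is a toy in the McEliece spirit: a small linear code's generator matrix is scrambled with a secret key so that a single codeword carries both redundancy and secrecy. The (7,4) Hamming code and key construction are textbook illustrations, not the LDPC- or Nordstrom-Robinson-based schemes analyzed in this research.

```python
# Hedged toy of a joint error-correction/security idea: keying a linear
# code's generator matrix (G' = S @ G @ P over GF(2)) so codewords look
# random to anyone without the key, yet still correct errors.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # systematic (7,4) Hamming generator
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def keyed_generator(G, rng):
    """Secret row scrambler S (invertible mod 2) and column permutation P."""
    k, n = G.shape
    while True:
        S = rng.integers(0, 2, (k, k))
        if round(abs(np.linalg.det(S))) % 2 == 1:  # invertible over GF(2)
            break
    P = np.eye(n, dtype=int)[rng.permutation(n)]
    return (S @ G % 2) @ P % 2

rng = np.random.default_rng(7)
G_pub = keyed_generator(G, rng)
msg = np.array([1, 0, 1, 1])
codeword = msg @ G_pub % 2   # redundant for the receiver, opaque without the key
```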
Layout-accurate Ultra-fast System-level Design Exploration Through Verilog-AMS
This research addresses problems in designing analog and mixed-signal (AMS) systems by bridging the gap between system-level and circuit-level simulation: making simulations as fast as system-level and as accurate as circuit-level. The tools proposed include metamodel-integrated Verilog-AMS based design exploration flows. The research involves design centering, metamodel generation flows for creating efficient behavioral models, and Verilog-AMS integration techniques for model realization. The core of the proposed solution is transistor-level and layout-level metamodeling and their incorporation in Verilog-AMS. Metamodeling is used to construct efficient and layout-accurate surrogate models for AMS system building blocks. Verilog-AMS, an AMS hardware description language, is employed to build surrogate model implementations that can be simulated with industry-standard simulators. The case-study circuits and systems include an operational amplifier (OP-AMP), a voltage-controlled oscillator (VCO), a charge-pump phase-locked loop (PLL), and a continuous-time delta-sigma modulator (DSM). The minimum and maximum error rates of the proposed OP-AMP model are 0.11% and 2.86%, respectively; the error rates for the PLL lock-time and power estimation are 0.7% and 3.0%, respectively. OP-AMP optimization using the proposed approach is ~17000× faster than the transistor-level model based approach and achieves a ~4× power reduction for the OP-AMP design. The PLL parasitic-aware optimization achieves a 10× speedup and a 147 µW power reduction. The experimental results thus validate the effectiveness of the proposed solution. digital.library.unt.edu/ark:/67531/metadc271923/
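The metamodeling step can be pictured as fitting a cheap surrogate to a handful of (design parameters, simulated metric) samples, which the optimizer then queries instead of a slow transistor-level simulation; the polynomial form, degree, and example parameters below are assumptions, not the metamodel functions used in the research.

```python
# Hedged sketch of surrogate (metamodel) fitting: learn a fast mapping
# from design parameters to a simulated metric from a few accurate,
# parasitic-aware simulation samples.
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_surrogate(X_samples, y_measured, degree=2):
    """X_samples: (n, d) design points; y_measured: simulated metric."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    return model.fit(X_samples, y_measured)

# e.g. two transistor widths -> simulated gain, then microsecond queries:
# surrogate = fit_surrogate(w_samples, gain_samples)
# gain_est = surrogate.predict([[2.0e-6, 4.5e-6]])
```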
Metamodeling-based Fast Optimization of Nanoscale AMS-SoCs
Modern consumer electronic systems are mostly based on analog and digital circuits and are designed as analog/mixed-signal systems on chip (AMS-SoCs). The integration of analog and digital circuits on the same die makes the system cost-effective. In AMS-SoCs, the analog and mixed-signal portions have not traditionally received much attention due to their complexity. As fabrication technology advances, simulations of AMS-SoC circuits become more complex and take significant amounts of time, and the time allocated for circuit design and optimization creates a need to reduce simulation time. These time constraints are imposed on designers by the ever-shortening time to market and the non-recurrent cost of the chip. This dissertation proposes the use of a novel method, called metamodeling, together with intelligent optimization algorithms, to reduce design time. Metamodel-based ultra-fast design flows are proposed and investigated. Metamodel creation is a one-time process and relies on fast sampling through accurate parasitic-aware simulations. One target of this dissertation is to minimize the sample size while retaining the accuracy of the model; to achieve this goal, different statistical sampling techniques are explored and applied to various AMS-SoC circuits. Different metamodel functions are also explored for their accuracy and applicability to AMS-SoCs, and several optimization algorithms are compared for global optimization accuracy and convergence. Three AMS circuits present in many AMS-SoCs, a ring oscillator, an inductor-capacitor voltage-controlled oscillator (LC-VCO), and a phase-locked loop (PLL), are used in this study for design flow application. The metamodels created in this dissertation provide accuracy within 2% of the physical layout simulations. After the optimal sampling investigation, metamodel functions and optimization algorithms are ranked in terms of speed and accuracy. Experimental results show that the proposed design flow provides roughly a 5,000× speedup over conventional design flows. This dissertation thus greatly advances the state of the art in mixed-signal design and will help make consumer electronics cheaper and more affordable. digital.library.unt.edu/ark:/67531/metadc115081/
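As one example of the kind of statistical sampling technique compared, here is a sketch of Latin hypercube sampling over design-parameter ranges, which covers the space with one point per stratum in every dimension; the parameter ranges and sample count are illustrative.

```python
# Hedged sketch of Latin hypercube sampling: pick n design points so
# that each parameter's range is stratified into n bins with exactly
# one sample per bin, minimizing simulations while covering the space.
import numpy as np

def latin_hypercube(n_samples, ranges, rng=None):
    """ranges: list of (low, high) per design parameter."""
    rng = rng or np.random.default_rng()
    d = len(ranges)
    strata = np.tile(np.arange(n_samples), (d, 1))
    u = (rng.permuted(strata, axis=1).T
         + rng.random((n_samples, d))) / n_samples  # one point per stratum
    lows = np.array([r[0] for r in ranges])
    highs = np.array([r[1] for r in ranges])
    return lows + u * (highs - lows)

# e.g. 20 samples over W (1-10 um) and L (0.1-1 um):
# X = latin_hypercube(20, [(1e-6, 10e-6), (0.1e-6, 1e-6)])
```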
Modeling Synergistic Relationships Between Words and Images
Texts and images provide alternative, yet orthogonal, views of the same underlying cognitive concept. By uncovering the synergistic, semantic relationships that exist between words and images, I am working to develop novel techniques that can help improve tasks in natural language processing, as well as effective models for text-to-image synthesis, image retrieval, and automatic image annotation. Specifically, in my dissertation, I will explore the interoperability of features between language and vision tasks. In the first part, I will show how features generated from evidence gathered in text corpora can be applied to solve the image annotation problem in computer vision, without the use of any visual information. In the second part, I will address research in the reverse direction, and show how visual cues can be used to improve tasks in natural language processing. Importantly, I propose a novel metric that estimates the similarity of words by comparing the visual similarity of the concepts these words invoke, and show that it can be used to further advance state-of-the-art methods that employ corpus-based and knowledge-based semantic similarity measures. Finally, I attempt to construct a joint semantic space connecting words with images, and synthesize an evaluation framework to quantify the cross-modal semantic relationships that exist between arbitrary pairs of words and images. I study the effectiveness of unsupervised, corpus-based approaches to automatically derive the semantic relatedness between words and images, and perform empirical evaluations by measuring their correlation with human annotators. digital.library.unt.edu/ark:/67531/metadc177223/
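The visual word-similarity metric can be caricatured as follows: represent each word by a histogram aggregated over the images it invokes (e.g., bag-of-visual-words counts) and compare words by histogram similarity. Feature extraction is abstracted away here, and the cosine measure is an assumption.

```python
# Hedged sketch: a word's "visual profile" is the mean of the visual
# feature histograms of its associated images; word similarity is the
# cosine between two such profiles.
import numpy as np

def word_visual_profile(image_histograms):
    """Average the visual-word histograms of a word's images."""
    profile = np.mean(image_histograms, axis=0)
    return profile / profile.sum()

def visual_similarity(word_a_hists, word_b_hists):
    a = word_visual_profile(word_a_hists)
    b = word_visual_profile(word_b_hists)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```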
Physical-Layer Network Coding for MIMO Systems
Future wireless communication systems are required to meet growing demands for reliability, bandwidth capacity, and mobility. However, because corruptions such as fading effects and thermal noise are present in the channel, the occurrence of errors is unavoidable. Motivated by this, the work in this dissertation attempts to improve system performance by exploiting schemes that statistically reduce the error rate and in turn boost system throughput. The network can be studied using a simplified model, the two-way relay channel, in which two parties exchange messages via the assistance of a relay in between. In such scenarios, this dissertation performs theoretical analyses of the system and derives closed-form and upper-bound expressions for the error probability; these theoretical measurements are potentially helpful references for practical system design. Additionally, several novel transmission methods, including block relaying and permutation modulations for physical-layer network coding, are proposed and discussed. Numerical simulation results are presented to support the validity of the conclusions. digital.library.unt.edu/ark:/67531/metadc68065/
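The two-way relay idea behind physical-layer network coding can be shown at the bit level: the relay broadcasts a combination of both messages, and each end cancels its own contribution to recover the other's. This toy omits the channel, modulation, and the signal-level superposition that makes the physical-layer version interesting.

```python
# Hedged toy of the two-way relay exchange: the relay sends the XOR of
# both messages in one broadcast, halving the slots a plain relay needs.
import numpy as np

def relay_combine(msg_a, msg_b):
    return msg_a ^ msg_b            # network-coded broadcast

def recover(own_msg, relayed):
    return relayed ^ own_msg        # cancels own contribution

a = np.array([1, 0, 1, 1], dtype=np.uint8)
b = np.array([0, 0, 1, 0], dtype=np.uint8)
r = relay_combine(a, b)
assert np.array_equal(recover(a, r), b)
assert np.array_equal(recover(b, r), a)
```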
Process-Voltage-Temperature Aware Nanoscale Circuit Optimization
Embedded systems targeted at portable applications are required to have low power consumption because such portable devices are typically powered by batteries. During the memory accesses of such battery-operated portable systems, including laptops, cell phones, and other devices, a significant amount of power or energy is consumed, which significantly affects battery life. Therefore, efficient and leakage-power-saving cache designs are needed for longer operation of battery-powered applications. Design engineers have limited control over many design parameters of the circuit and hence face many challenges due to inherent process technology variations, particularly in static random access memory (SRAM) circuit design. As CMOS process technologies scale down deeper into the nanometer regime, the push for high-performance and reliable systems becomes even more challenging. As a result, developing low-power designs while maintaining good circuit performance becomes a very difficult task. Furthermore, accurate analysis and optimization of the various forms of total power dissipation and of performance in nanoscale CMOS technologies, particularly in SRAMs, is another critical need. This dissertation proposes power-leakage and static noise margin (SNM) analyses and methodologies to achieve optimized static random access memories (SRAMs). Alternate SRAM topologies, mainly a 7-transistor SRAM, are taken as a case study throughout this dissertation. The optimized cache designs are process-voltage-temperature (PVT) tolerant and consider individual cells as well as memory arrays. digital.library.unt.edu/ark:/67531/metadc67943/
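PVT-tolerant optimization ultimately reduces to evaluating a cell metric, such as SNM or leakage, at every process/voltage/temperature corner and designing against the worst case; the corner values and stand-in metric function below are assumptions.

```python
# Hedged sketch of a PVT-corner sweep: evaluate a metric at each
# process/voltage/temperature combination and keep the worst case. The
# metric function stands in for a circuit simulation of the SRAM cell.
from itertools import product

corners = {
    "process": ["ss", "tt", "ff"],          # slow / typical / fast
    "voltage": [0.9, 1.0, 1.1],             # supply in volts
    "temperature": [-40, 25, 125],          # degrees Celsius
}

def worst_case(metric):
    """metric(p, v, t) -> value; smaller assumed worse (e.g. SNM)."""
    return min(metric(p, v, t)
               for p, v, t in product(*corners.values()))
```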
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling
Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristic of the object for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise, and illumination. Scale-invariant features have wide applications in image processing, such as image classification, object recognition, and object tracking. In this thesis, color features and SIFT (scale-invariant feature transform) are considered as scale-invariant features. The classification, recognition, and tracking results were evaluated with a novel evaluation criterion and compared with some existing methods. I also studied different types of scale-invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I demonstrate the performance of the developed algorithms on several scene analysis tasks, including object tracking, video stabilization, medical video segmentation, and scene classification. digital.library.unt.edu/ark:/67531/metadc84275/
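A minimal sketch of SIFT-based matching between a training image and a test scene, using OpenCV with the usual ratio-test heuristic; the threshold and helper names are assumptions, and the thesis's probabilistic models and adaptive feature combination are not shown.

```python
# Hedged sketch: locate a trained object in a cluttered scene by
# matching SIFT descriptors and keeping only distinctive matches.
import cv2

def match_object(train_gray, scene_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(train_gray, None)
    kp2, des2 = sift.detectAndCompute(scene_gray, None)
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test: keep a match only if it is clearly better
    # than the second-best candidate
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```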
Sentence Similarity Analysis with Applications in Automatic Short Answer Grading
In this dissertation, I explore unsupervised techniques for the task of automatic short answer grading. I compare a number of knowledge-based and corpus-based measures of text similarity, evaluate the effect of domain and size on the corpus-based measures, and introduce a novel technique that improves system performance by integrating automatic feedback from the student answers. I then combine graph alignment features with lexical semantic similarity measures and employ machine learning techniques to show that grade assignment error can be reduced compared to a system that considers only lexical semantic measures of similarity. I also detail a preliminary attempt to align the dependency graphs of student and instructor answers in order to utilize a structural component that is necessary to simulate human-level grading of student answers. Finally, I explore the utility of these techniques for several related natural language processing tasks, including the detection of text similarity, paraphrase, and textual entailment. digital.library.unt.edu/ark:/67531/metadc149640/
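An unsupervised baseline of the kind compared here can be sketched in a few lines: score the student answer by its similarity to the instructor answer and map that onto the grade scale. TF-IDF cosine stands in for the knowledge-based and corpus-based measures actually evaluated, and the linear scale mapping is an assumption.

```python
# Hedged sketch: grade a short answer by its lexical similarity to the
# instructor's reference answer, scaled to the assignment's point range.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade(instructor_answer, student_answer, max_points=5.0):
    vec = TfidfVectorizer().fit([instructor_answer, student_answer])
    tfidf = vec.transform([instructor_answer, student_answer])
    sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return round(sim * max_points, 1)

# grade("a convex hull is the smallest convex set containing the points",
#       "smallest convex set that contains all the given points")
```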
Source and Channel Coding Strategies for Wireless Sensor Networks
In this dissertation, I focus on both source coding and channel coding techniques for wireless sensor networks (WSNs). I address the challenges in WSNs by developing (1) a new source coding strategy for erasure channels with better distortion performance than multiple description coding (MDC); (2) a new cooperative channel coding strategy for multiple access channels with better channel outage performance than MIMO; and (3) a new source-channel cooperation strategy for source-to-fusion-center communication that reduces system distortion and improves outage performance. First, I draw a parallel between the 2x2 MDC scheme and Alamouti's space-time block coding (STBC) scheme and observe the commonality in their mathematical models, which reveals a duality between the two diversity techniques. Making use of this duality, I develop an MDC scheme with a pairwise complex correlating transform. Theoretically, I show that the MDC scheme results in: 1) complete elimination of the estimation error when only one descriptor is received; 2) greater efficiency in recovering the stronger descriptor (with larger variance) from the weaker descriptor; and 3) improved performance in terms of minimized distortion as the quantization error is reduced. Experiments are also performed on real images to demonstrate these benefits. Second, I present a two-phase cooperative communication strategy and an optimal power allocation strategy to transmit sensor observations to a fusion center in a large-scale sensor network. Outage probability is used to evaluate the performance of the proposed system. Simulation results demonstrate that: 1) when the signal-to-noise ratio is low, the performance of the proposed system is better than that of the MIMO system over uncorrelated slow-fading Rayleigh channels; 2) given the transmission rate and the total transmission SNR, there exists an optimal power allocation that minimizes the outage probability; and 3) on correlated slow-fading Rayleigh channels, channel correlation degrades the system performance in linear proportion to the correlation level. Third, I combine the statistical ranking of sensor observations with a cooperative communication strategy in a cluster-based wireless sensor network. This strategy involves two steps: 1) ranking the sensor observations based on their test statistics; and 2) building a two-phase cooperative communication model with an optimal power allocation strategy. The result is an optimal system performance that considers both sources and channels. I optimize the proposed model through analyses of the system distortion and show that the cooperating nodes achieve maximum channel capacity. I also simulate the system distortion and outage to show the benefits of the proposed strategies. digital.library.unt.edu/ark:/67531/metadc177226/
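The duality argument starts from the 2x2 Alamouti code, which is easy to state concretely: two symbols are sent over two antennas in two time slots and combined at the receiver using the channel estimates. The sketch below is the standard textbook scheme with noise handling omitted, not the MDC construction derived from it.

```python
# Hedged sketch of 2x2 Alamouti STBC. With r1 = h1*s1 + h2*s2 and
# r2 = -h1*conj(s2) + h2*conj(s1), the combiner below recovers
# (|h1|^2 + |h2|^2) * s_i, i.e. full transmit diversity.
import numpy as np

def alamouti_encode(s1, s2):
    """Rows = time slots, columns = antennas."""
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Combine two received slots given flat channel gains h1, h2."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    norm = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / norm, s2_hat / norm
```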