A Wireless Traffic Surveillance System Using Video Analytics

Date: May 2011
Creator: Luo, Ning
Description: Video surveillance systems have been commonly used in transportation systems to support traffic monitoring, speed estimation, and incident detection. However, there are several challenges in developing and deploying such systems, including high development and maintenance costs, bandwidth bottlenecks on long-range links, and a lack of advanced analytics. In this thesis, I leverage current wireless, video camera, and analytics technologies and present a wireless traffic monitoring system. I first present an overview of the system. Then I describe the site investigation and several test links with different hardware/software configurations to demonstrate the effectiveness of the system. The system development process was documented to provide guidelines for future development. Furthermore, I propose a novel speed-estimation analytics algorithm that takes into consideration roads with slope angles. I prove the correctness of the algorithm theoretically and validate its effectiveness experimentally. The experimental results on both synthetic and real datasets show that the algorithm is more accurate than the baseline algorithm 80% of the time. On average, the accuracy improvement of speed estimation is over 3.7%, even for very small slope angles.
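The abstract does not give the correction formula, but the geometric idea can be sketched: a camera observing a sloped road sees only the projection of the vehicle's motion, so the apparent speed must be divided by the cosine of the slope angle. The function name and the specific correction below are illustrative assumptions, not the thesis's actual algorithm.

```python
import math

def estimate_speed(displacement_m, dt_s, slope_deg=0.0):
    """Estimate vehicle speed from displacement measured over dt_s seconds.

    Hypothetical slope correction: if the road rises at slope_deg, the
    displacement projected onto the camera's ground plane underestimates the
    true travelled distance by a factor of cos(slope).
    """
    apparent = displacement_m / dt_s
    return apparent / math.cos(math.radians(slope_deg))
```

Even a small slope changes the estimate slightly, which is consistent with the abstract's claim that accuracy improves measurably for very small slope angles.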
Contributing Partner: UNT Libraries
Anchor Nodes Placement for Effective Passive Localization

Access: Use of this item is restricted to the UNT Community.
Date: August 2010
Creator: Pasupathy, Karthikeyan
Description: Wireless sensor networks are composed of sensor nodes, which can monitor an environment and observe events of interest. These networks are applied in various fields including, but not limited to, environmental, industrial, and habitat monitoring. In many applications, the exact location of the sensor nodes is unknown after deployment. Localization is a process used to find a sensor node's positional coordinates, which is vital information. Localization is generally assisted by anchor nodes, which are also sensor nodes but with known locations. Anchor nodes are generally expensive and need to be optimally placed for effective localization. Passive localization is a localization technique in which the sensor nodes silently listen to global events like thunder sounds, seismic waves, lightning, etc. According to previous studies, the ideal location to place anchor nodes was on the perimeter of the sensor network. This may not be the case in passive localization, since the function of anchor nodes here differs from that of anchor nodes used in other localization systems. I conduct extensive studies on positioning anchor nodes for effective localization. Several simulations are run in dense and sparse networks for proper positioning of anchor nodes. I show that, for effective passive localization, the ...
Contributing Partner: UNT Libraries
Effective and Accelerated Informative Frame Filtering in Colonoscopy Videos Using Graphic Processing Units

Date: August 2010
Creator: Karri, Venkata Praveen
Description: Colonoscopy is an endoscopic technique that allows a physician to inspect the mucosa of the human colon. Previous methods and software solutions to detect informative frames in a colonoscopy video (a process called informative frame filtering or IFF) have been largely ineffective in (1) covering the proper definition of an informative frame in the broadest sense and (2) striking an optimal balance between accuracy and speed of classification in both real-time and non-real-time medical procedures. In my thesis, I propose a more effective method and faster software solutions for IFF. The method is more effective due to the introduction of a heuristic classification algorithm derived from experimental analysis of typical colon features, which contributed a 5-10% boost in various performance metrics for IFF. The software modules are faster due to the incorporation of sophisticated parallel-processing-oriented coding techniques on modern microprocessors. Two IFF modules were created, one for post-procedure use and the other for real-time use. Code optimizations through NVIDIA CUDA for GPU processing and/or CPU multi-threading concepts embedded in two significant microprocessor design philosophies (multi-core design and many-core design) resulted in a 5-fold acceleration for the post-procedure module and a 40-fold acceleration for the real-time module. Some innovative software modules, ...
Contributing Partner: UNT Libraries
Arithmetic Computations and Memory Management Using a Binary Tree Encoding of Natural Numbers

Date: December 2011
Creator: Haraburda, David
Description: Two applications of a binary tree data type based on a simple pairing function (a bijection between natural numbers and pairs of natural numbers) are explored. First, the tree is used to encode natural numbers, and algorithms that perform basic arithmetic computations are presented along with formal proofs of their correctness. Second, using this "canonical" representation as a base type, algorithms for encoding and decoding additional isomorphic data types of other mathematical constructs (sets, sequences, etc.) are also developed. An experimental application to a memory management system is constructed and explored using these isomorphic types. A practical analysis of this system's runtime complexity and space savings is provided, along with a proof-of-concept framework for both applications of the binary tree type, in the Java programming language.
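The abstract does not say which pairing function the thesis uses; as a concrete illustration (in Python rather than the thesis's Java), the sketch below uses Cantor's pairing function to build the described bijection between natural numbers and binary trees: 0 encodes the empty tree, and n > 0 encodes a node whose two subtrees encode the components of unpair(n - 1).

```python
import math

def pair(x, y):
    # Cantor pairing: a bijection from pairs of naturals to naturals
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w*(w+1)/2 <= z
    y = z - w * (w + 1) // 2
    return w - y, y

def to_tree(n):
    # 0 encodes the empty tree; n > 0 encodes a node with two subtrees
    if n == 0:
        return None
    left, right = unpair(n - 1)
    return (to_tree(left), to_tree(right))

def from_tree(t):
    # inverse encoding: empty tree -> 0, node -> pair(children) + 1
    return 0 if t is None else pair(from_tree(t[0]), from_tree(t[1])) + 1
```

Because `pair`/`unpair` is a bijection, `to_tree` and `from_tree` are mutually inverse, which is the property the thesis's arithmetic algorithms rely on.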
Contributing Partner: UNT Libraries
The Design Of A Benchmark For Geo-stream Management Systems

Date: December 2011
Creator: Shen, Chao
Description: The recent growth in sensor technology allows easier information gathering in real time, as sensors have grown smaller, more accurate, and less expensive. The resulting data is often in a geo-stream format: continuously changing input with a spatial extent. Researchers developing geo-stream management systems (GSMS) require a benchmark system for evaluation, which is currently lacking. This thesis presents GSMark, a benchmark for evaluating GSMSs. GSMark provides a data generator that creates a combination of synthetic and real geo-streaming data, a workload simulator to present the data to the GSMS as a data stream, and a set of benchmark queries that evaluate typical GSMS functionality and query performance. In particular, GSMark generates both moving points and evolving spatial regions, two fundamental data types for a broad range of geo-stream applications, and the geo-streaming queries on this data.
Contributing Partner: UNT Libraries
Kalman Filtering Approach to Optimize OFDM Data Rate

Date: August 2011
Creator: Wunnava, Sashi Prabha
Description: This study applies a non-linear mapping method, the unscented Kalman filter, to estimate and optimize the data rate resulting from an arrival rate with a Poisson distribution in an orthogonal frequency division multiplexing (OFDM) transmission system. OFDM is an emerging multi-carrier modulation scheme. With the growing need for quality of service in wireless communications, it is necessary to optimize resources so that the overall performance of the system rises, keeping in mind the objective of achieving high data rates and efficient spectral methods in the near future. In this study, the results from the OFDM-TDMA transmission system have been used to apply cross-layer optimization between layers so as to treat different resources between layers simultaneously. The main controller manages the transmission of data between layers using multicarrier modulation techniques. The unscented Kalman filter is used here to perform nonlinear mapping by estimating and optimizing the data rate, which results from the arrival rate having a Poisson distribution.
Contributing Partner: UNT Libraries
End of Insertion Detection in Colonoscopy Videos

Date: August 2009
Creator: Malik, Avnish Rajbal
Description: Colorectal cancer is the second leading cause of cancer-related deaths, behind lung cancer, in the United States. Colonoscopy is the preferred screening method for detection of diseases like colorectal cancer. In 2006, the American Society for Gastrointestinal Endoscopy (ASGE) and the American College of Gastroenterology (ACG) issued guidelines for quality colonoscopy. The guidelines suggest that, on average, the withdrawal phase during a screening colonoscopy should last a minimum of 6 minutes. My aim is to classify a colonoscopy video into insertion and withdrawal phases. The problem is that currently existing shot-detection techniques cannot be applied, because a colonoscopy is a single camera shot from start to end. An algorithm to detect the phase boundary has already been developed by the MIGLAB team. The existing method has acceptable accuracy, but its main issue is a dependency on MPEG (Moving Pictures Expert Group) 1/2. I implemented an exhaustive search for motion estimation to reduce execution time and improve accuracy. I took advantage of the C/C++ programming languages with multithreading, which helped achieve even better performance in terms of execution time. I propose a method for improving the current method of colonoscopy video analysis and also an extension for the same to ...
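The thesis's implementation is in C/C++ with multithreading; the following is only a minimal, single-threaded Python sketch of exhaustive (full-search) block matching, the motion-estimation step described above. The frame layout (lists of pixel rows) and parameter names are illustrative assumptions.

```python
def sad(ref, cur, bx, by, dx, dy, bs):
    # sum of absolute differences between the current block at (bx, by)
    # and a reference block displaced by (dx, dy)
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def full_search(ref, cur, bx, by, bs, sr):
    """Exhaustively test every displacement within +/- sr pixels and return
    the (dx, dy) that minimizes the SAD cost."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            # skip candidates that fall outside the reference frame
            if 0 <= bx + dx and bx + dx + bs <= w and 0 <= by + dy and by + dy + bs <= h:
                cost = sad(ref, cur, bx, by, dx, dy, bs)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best
```

Full search is the most expensive but most accurate block-matching strategy, which is why the thesis pairs it with multithreading to keep execution time acceptable.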
Contributing Partner: UNT Libraries
Study of the effects of background and motion camera on the efficacy of Kalman and particle filter algorithms.

Date: August 2009
Creator: Morita, Yasuhiro
Description: This study compares the independent use of two known algorithms (the Kalman filter with background subtraction, and the particle filter) that are commonly deployed in object-tracking applications. Object tracking in general is very challenging; it presents numerous problems that need to be addressed by the application in order to facilitate its successful deployment. Such problems range from abrupt object motion during tracking to changes in the appearance of the scene and the object, as well as object-to-scene occlusions and camera motion, among others. It is also important to account for noise associated with the image in question and to be able to predict, to an acceptable statistical accuracy, the position of the object at a particular time given its current position. This study tackles some of the issues raised above before addressing how the use of either of the aforementioned algorithms minimizes, or in some cases eliminates, these negative effects.
Contributing Partner: UNT Libraries
Qos Aware Service Oriented Architecture

Date: August 2013
Creator: Adepu, Sagarika
Description: Service-oriented architecture enables web services to operate in a loosely coupled setting and provides an environment for dynamic discovery and use of services over a network using standards such as WSDL, SOAP, and UDDI. A web service has both functional and non-functional characteristics. This thesis work proposes adding QoS descriptions (non-functional properties) to WSDL and composing various services to form a business process. This composition of web services also considers QoS properties along with functional properties, and the composed services can again be published as a new web service and can be part of any other composition using the Composed WSDL.
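The abstract does not define how QoS properties are aggregated during composition. A common convention, assumed here and not taken from the thesis, is that in a sequential composition response times add while availabilities multiply:

```python
def composed_qos(services):
    """Aggregate QoS for a sequential composition of services.

    Each service is a dict with hypothetical keys 'response_ms' and
    'availability' (a probability in [0, 1]).
    """
    total_rt = sum(s["response_ms"] for s in services)
    avail = 1.0
    for s in services:
        avail *= s["availability"]   # the chain is up only if every step is up
    return {"response_ms": total_rt, "availability": avail}
```

Aggregated values like these are what would be published in the QoS description of the new composed service.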
Contributing Partner: UNT Libraries
Baseband Noise Suppression in Ofdm Using Kalman Filter

Date: May 2012
Creator: Rodda, Lasya
Description: As technology advances, the reduced size of hardware gives rise to additive 1/f baseband noise. This additive 1/f noise is a system noise generated due to the miniaturization of hardware, and it affects the lower frequencies. Though 1/f noise does not show much effect in wideband channels, because of its nature to affect only certain frequencies, it becomes prominent in OFDM communication systems where narrowband channels are used. In this thesis, I study the effects of 1/f noise on OFDM systems and implement algorithms for estimation and suppression of the noise using a Kalman filter. Suppression of the noise is achieved by subtracting the estimated noise from the received signal. I show that the performance of the system is considerably improved by applying the 1/f noise suppression.
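The estimation step can be sketched with a scalar Kalman filter that tracks a slowly varying noise level modeled as a random walk; the suppression step then subtracts the estimate from the received samples. The observation model (noise measured on otherwise-quiet samples) and the parameter values are illustrative assumptions, not the thesis's actual filter design.

```python
def kalman_denoise(observations, q=1e-4, r=1.0):
    """Scalar Kalman filter tracking a slowly drifting noise level.

    observations: noisy measurements of the baseband noise (assumed
    observable, e.g. on null subcarriers). q: process noise variance of the
    random-walk model; r: measurement noise variance.
    """
    x, p = 0.0, 1.0            # state estimate and its error covariance
    estimates = []
    for z in observations:
        p += q                 # predict: random walk inflates uncertainty
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update toward the new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# suppression (sketch): cleaned = [z - n for z, n in
#                                  zip(received, kalman_denoise(noise_obs))]
```

The filter behaves like an adaptive low-pass smoother, which suits 1/f noise since its power is concentrated at low frequencies.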
Contributing Partner: UNT Libraries
Integrity Verification of Applications on Radium Architecture

Date: August 2015
Creator: Tarigopula, Mohan Krishna
Description: Trusted Computing capability has become ubiquitous these days, and it is being widely deployed into consumer devices as well as enterprise platforms. As the number of threats increases at an exponential rate, securing systems against them is becoming a daunting task. In this context, software integrity measurement at runtime, with the support of trusted platforms, can be a better security strategy. Trusted Computing devices like the TPM secure the evidence of a breach or an attack. These devices remain tamper-proof if the hardware platform is physically secured. This type of trusted security is crucial for forensic analysis in the aftermath of a breach. The advantages of trusted platforms can be further leveraged if they are used wisely. RADIUM (Race-free on-demand Integrity Measurement Architecture) is one such architecture, built on the strength of the TPM. RADIUM provides an asynchronous root of trust to overcome the TOC condition of DRTM. Even though the underlying architecture is trusted, attacks can still compromise applications during runtime by exploiting their vulnerabilities. I propose an application-level integrity measurement solution that fits into RADIUM, to extend trusted computing capability to the application layer. This is based on the concept ...
Contributing Partner: UNT Libraries
Radium: Secure Policy Engine in Hypervisor

Date: August 2015
Creator: Shah, Tawfiq M
Description: The basis of today’s security systems is the trust and confidence that the system will behave as expected and is in a known good trusted state. The trust is built from hardware and software elements that generate a chain of trust originating from a trusted known entity. Leveraging hardware, software, and mandatory access control policy technology is needed to create a trusted measurement environment. Employing a control layer (hypervisor or microkernel) with the ability to enforce a fine-grained access control policy with hypercall granularity across multiple guest virtual domains can ensure that any malicious environment is contained. In my research, I propose the use of RADIUM's Asynchronous Root of Trust Measurement (ARTM) capability, incorporated with a secure mandatory access control policy engine, to mitigate the limitations of current hardware TPM solutions. By employing ARTM, we can leverage asynchronous use of boot, launch, and use, with the hypervisor proving its state and the integrity of the secure policy. My solution uses the RADIUM (Race-free on-demand Integrity Measurement Architecture) architecture, which will allow a more detailed measurement of applications at run time with greater semantic knowledge of the measured environments. RADIUM's incorporation of a ...
Contributing Partner: UNT Libraries
Maintaining Web Applications Integrity Running on Radium

Date: August 2015
Creator: Ur-Rehman, Wasi
Description: Computer security attacks take place due to the presence of vulnerabilities and bugs in software applications. Bugs and vulnerabilities are the result of weak software architecture and a lack of standard software development practices. Despite the fact that software companies are investing millions of dollars in the research and development of software designs, security risks are still at large. In some cases software applications are found to carry vulnerabilities for many years before being identified; a recent example is the Heartbleed bug in OpenSSL/TLS. In today’s world, where new software applications are continuously being developed for a varied community of users, it is highly unlikely to have software applications running without flaws. Attackers exploit these vulnerabilities and bugs and threaten privacy without leaving any trace. The most critical vulnerabilities are those related to the integrity of software applications, because integrity is directly linked to the credibility of a software application and the data it contains. Here I present a solution for maintaining the integrity of web applications running on RADIUM by using Daikon. Daikon generates invariants; these invariants are used to maintain the integrity of the web application and also check the ...
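Daikon itself infers likely invariants by observing program executions; its real interface is not reproduced here. The sketch below illustrates only the runtime-checking half of the idea: invariants inferred offline are evaluated against a snapshot of variables at a program point, and any violation signals a possible integrity compromise. The invariant forms and variable names are hypothetical examples.

```python
def check_invariants(snapshot, invariants):
    """Return the names of invariants violated by this program-point snapshot.

    snapshot: dict of variable values captured at a program point.
    invariants: list of (name, predicate) pairs, e.g. produced offline by an
    invariant detector such as Daikon.
    """
    return [name for name, pred in invariants if not pred(snapshot)]

# hypothetical invariants for a web application's shopping-cart state
invariants = [
    ("cart_total_nonneg", lambda s: s["cart_total"] >= 0),
    ("items_match_total", lambda s: s["cart_total"] == sum(s["item_prices"])),
]
```

An empty result means the program point still satisfies every inferred invariant; a non-empty result would trigger an integrity alarm.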
Contributing Partner: UNT Libraries
Unique Channel Email System

Date: August 2015
Creator: Balakchiev, Milko
Description: Email connects 85% of the world. This paper explores the pattern of information overload encountered by the majority of email users and examines what steps key email providers are taking to combat the problem. Besides fighting spam, popular email providers offer very limited tools to reduce the amount of unwanted incoming email. Rather, there has been a trend to expand storage space and aid the organization of email. Storing email is very costly and harmful to the environment. Additionally, information overload can be detrimental to productivity. We propose a simple solution that results in a drastic reduction of unwanted mail, also known as graymail.
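One way to read the "unique channel" idea is that each correspondent receives its own inbound address (a channel) that can be revoked, so graymail sent to a revoked or unknown channel is refused outright. The class and address scheme below are illustrative assumptions, not the paper's actual design.

```python
class UniqueChannelInbox:
    """Sketch of per-sender channel addresses for cutting off graymail."""

    def __init__(self, user, domain="example.com"):
        self.user, self.domain = user, domain
        self.channels = {}                  # address -> still accepted?

    def issue(self, label):
        # hand a distinct address to each correspondent or sign-up form
        addr = f"{self.user}+{label}@{self.domain}"
        self.channels[addr] = True
        return addr

    def revoke(self, addr):
        self.channels[addr] = False         # future mail to addr is refused

    def accepts(self, to_addr):
        # unknown channels are rejected by default
        return self.channels.get(to_addr, False)
```

Rejecting at the channel level avoids both storing graymail and forcing the user to triage it, addressing the storage and productivity costs mentioned above.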
Contributing Partner: UNT Libraries
Classifying Pairwise Object Interactions: A Trajectory Analytics Approach

Date: May 2015
Creator: Janmohammadi, Siamak
Description: We have a huge amount of video data from extensively available surveillance cameras and increasingly capable technology to record the motion of a moving object in the form of trajectory data. With the proliferation of location-enabled devices and ongoing growth in smartphone penetration, as well as advancements in exploiting image processing techniques, tracking moving objects is more readily achievable. In this work, we explore some domain-independent qualitative and quantitative features in raw trajectory (spatio-temporal) data in videos captured by a fixed single wide-angle-view camera sensor in outdoor areas. We study the efficacy of those features in classifying four basic high-level actions by employing two supervised learning algorithms and show how each of the features affects the learning algorithms’ overall accuracy, as a single factor or confounded with others.
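The abstract does not enumerate its features; the sketch below illustrates the kind of domain-independent pairwise trajectory features it describes (feature names and choices are illustrative assumptions, not the thesis's feature set).

```python
import math

def pairwise_features(traj_a, traj_b):
    """Simple quantitative features for a pair of trajectories.

    Each trajectory is a time-aligned list of (x, y) positions; the features
    feed a supervised classifier of interaction types.
    """
    dists = [math.hypot(ax - bx, ay - by)
             for (ax, ay), (bx, by) in zip(traj_a, traj_b)]
    return {
        "mean_dist": sum(dists) / len(dists),
        "min_dist": min(dists),
        "closing": dists[0] - dists[-1],   # positive if the objects converge
    }
```

Features like "closing" are qualitative in spirit (approaching vs. separating) while "mean_dist" and "min_dist" are quantitative, matching the mix described above.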
Contributing Partner: UNT Libraries
Toward a Data-Type-Based Real Time Geospatial Data Stream Management System

Date: May 2011
Creator: Zhang, Chengyang
Description: The advent of sensing and communication technologies enables the generation and consumption of large volumes of streaming data. Many of these data streams are geo-referenced. Existing spatio-temporal databases and data stream management systems are not capable of handling real-time queries on spatial extents. In this thesis, I investigated several fundamental research issues toward building a data-type-based real-time geospatial data stream management system. The thesis makes contributions in the following areas: geo-stream data models, aggregation, window-based nearest neighbor operators, and query optimization strategies. The proposed geo-stream data model is based on second-order logic and multi-typed algebra. Both abstract and discrete data models are proposed and exemplified. I further propose two useful geo-stream operators, namely Region By and WNN, which abstract common aggregation and nearest neighbor queries as generalized data model constructs. Finally, I propose three query optimization algorithms based on spatial, temporal, and spatio-temporal constraints of geo-streams. I show the effectiveness of the data model through many query examples. The effectiveness and efficiency of the algorithms are validated through extensive experiments on both synthetic and real data sets. This work established the fundamental building blocks toward a full-fledged geo-stream database management system and has potential impact in many ...
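As an illustration of a window-based nearest neighbor (WNN) operator like the one named above, the sketch below maintains a sliding time window over a point geo-stream and, after each update, reports the object whose freshest in-window position is nearest to a query point. The data layout and window semantics are simplified assumptions, not the thesis's actual operator definition.

```python
from collections import deque

def wnn_stream(updates, query, window_s):
    """updates: time-ordered (t, obj_id, x, y) samples. After each update,
    report the object whose most recent in-window position is nearest to
    `query` = (qx, qy)."""
    qx, qy = query
    win = deque()
    results = []
    for upd in updates:
        win.append(upd)
        now = upd[0]
        while win[0][0] < now - window_s:      # expire samples outside window
            win.popleft()
        latest = {}                            # obj_id -> freshest (x, y)
        for _, oid, x, y in win:
            latest[oid] = (x, y)
        nearest = min(latest,
                      key=lambda o: (latest[o][0] - qx) ** 2
                                  + (latest[o][1] - qy) ** 2)
        results.append((now, nearest))
    return results
```

A production operator would maintain an incremental spatial index instead of rescanning the window, which is exactly where the query optimization strategies mentioned above come in.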
Contributing Partner: UNT Libraries
Source and Channel Coding Strategies for Wireless Sensor Networks

Date: December 2012
Creator: Li, Li
Description: In this dissertation, I focus on source coding techniques as well as channel coding techniques. I address the challenges in WSNs by developing (1) a new source coding strategy for erasure channels that has better distortion performance compared to MDC; (2) a new cooperative channel coding strategy for multiple access channels that has better channel outage performance compared to MIMO; and (3) a new source-channel cooperation strategy to accomplish source-to-fusion-center communication that reduces system distortion and improves outage performance. First, I draw a parallel between the 2x2 MDC scheme and Alamouti's space-time block coding (STBC) scheme and observe the commonality in their mathematical models. This commonality allows us to observe the duality between the two diversity techniques. Making use of this duality, I develop an MDC scheme with a pairwise complex correlating transform. Theoretically, I show that the MDC scheme results in: 1) complete elimination of the estimation error when only one descriptor is received; 2) greater efficiency in recovering the stronger descriptor (with larger variance) from the weaker descriptor; and 3) improved performance in terms of minimized distortion as the quantization error is reduced. Experiments are also performed on real images to demonstrate these benefits. Second, I present a ...
Contributing Partner: UNT Libraries
Scene Analysis Using Scale Invariant Feature Extraction and Probabilistic Modeling

Access: Use of this item is restricted to the UNT Community.
Date: August 2011
Creator: Shen, Yao
Description: Conventional pattern recognition systems have two components: feature analysis and pattern classification. For any object in an image, features can be considered the major characteristic of the object for either object recognition or object tracking purposes. Features extracted from a training image can be used to identify the object when attempting to locate it in a test image containing many other objects. To perform reliable scene analysis, it is important that the features extracted from the training image are detectable even under changes in image scale, noise, and illumination. Scale-invariant features have wide applications such as image classification, object recognition, and object tracking in the image processing area. In this thesis, color features and SIFT (scale-invariant feature transform) are considered as scale-invariant features. The classification, recognition, and tracking results were evaluated with a novel evaluation criterion and compared with some existing methods. I also studied different types of scale-invariant features for the purpose of solving scene analysis problems. I propose probabilistic models as the foundation for analyzing scene scenarios in images. In order to differentiate the content of images, I develop novel algorithms for the adaptive combination of multiple features extracted from images. I ...
Contributing Partner: UNT Libraries
Occlusion Tolerant Object Recognition Methods for Video Surveillance and Tracking of Moving Civilian Vehicles

Date: December 2007
Creator: Pati, Nishikanta
Description: Recently, there has been great interest in moving-object tracking in the fields of security and surveillance. Object recognition under partial occlusion is the core of any object tracking system. This thesis presents an automatic, real-time color object-recognition system that is not only robust but also occlusion tolerant. The intended use of the system is to recognize and track external vehicles entering a secured area like a school campus or an army base. A statistical morphological skeleton is used to represent the visible shape of the vehicle. Simple curve matching and different feature-based matching techniques are used to recognize the segmented vehicle. Features of the vehicle are extracted upon entering the secured area. The vehicle is recognized from either a digital video frame or a static digital image when needed. The recognition engine will help the design of a high-performance tracking system meant for remote video surveillance.
Contributing Partner: UNT Libraries
Detection of Temporal Events and Abnormal Images for Quality Analysis in Endoscopy Videos

Date: August 2013
Creator: Nawarathna, Ruwan D.
Description: Recent reports suggest that measuring objective quality is essential to the success of colonoscopy. Several quality indicators (i.e., metrics) proposed in recent studies are implemented in software systems that compute real-time quality scores for routine screening colonoscopy. Most quality metrics are derived from various temporal events that occur during the colonoscopy procedure. The location of the phase boundary between the insertion and withdrawal phases and the amount of circumferential inspection are two such important temporal events. These two temporal events can be determined by analyzing various camera motions of the colonoscope. This dissertation puts forward a novel method to estimate X, Y, and Z directional motions of the colonoscope using motion vector templates. Since abnormalities in a WCE or colonoscopy video can be found in a small number of frames (around 5% of total frames), it is very helpful if a computer system can decide whether a frame has any mucosal abnormalities. Also, the number of detected abnormal lesions during a procedure is used as a quality indicator. The majority of existing abnormal-detection methods focus on detecting only one type of abnormality, or their overall accuracies are somewhat low if the method tries to ...
Contributing Partner: UNT Libraries
Automated Real-time Objects Detection in Colonoscopy Videos for Quality Measurements

Date: August 2013
Creator: Kumara, Muthukudage Jayantha
Description: The effectiveness of colonoscopy depends on the quality of the inspection of the colon. Previously, there was no automated measurement method to evaluate the quality of the inspection. This thesis addresses this issue by investigating an automated post-procedure quality measurement technique and proposing a novel approach that automatically determines the percentage of stool areas in images of digitized colonoscopy video files. It involves the classification of image pixels based on their color features using a new method of planes in the RGB (red, green, and blue) color space. The limitation of post-procedure quality measurement is that quality measurements are available only long after the procedure is done and the patient has been released. A better approach is to flag any sub-optimal inspection immediately, so that the endoscopist can improve the quality in real time during the procedure. This thesis therefore also proposes an extension of the post-procedure method to detect stool, bite-block, and blood regions in real time using color features in the HSV color space. These three objects play a major role in quality measurements in colonoscopy. The proposed method partitions very large sets of positive examples of each of these objects into a number of groups. These groups are formed by taking the intersection of positive examples with a hyperplane. ...
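The classification idea, testing which side of planes in RGB color space a pixel falls on, can be sketched as follows. The plane coefficients below are made-up placeholders, not the thesis's trained values.

```python
def on_positive_side(pixel, plane):
    # plane = (a, b, c, d) representing the half-space a*R + b*G + c*B + d >= 0
    r, g, b = pixel
    pa, pb, pc, pd = plane
    return pa * r + pb * g + pc * b + pd >= 0

def stool_fraction(pixels, planes):
    """Fraction of pixels on the positive side of every plane, i.e. inside
    the region of RGB space bounded by the planes."""
    hits = sum(1 for p in pixels if all(on_positive_side(p, pl) for pl in planes))
    return hits / len(pixels)
```

Per-pixel plane tests like this are cheap (a few multiply-adds each), which is what makes the approach viable for the real-time extension described above.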
Contributing Partner: UNT Libraries
Multi-perspective, Multi-modal Image Registration and Fusion

Date: August 2012
Creator: Belkhouche, Mohammed Yassine
Description: Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is defined as the strategic combination of information collected by various sensors, of different types or at different locations, in order to obtain a better understanding of an observed scene or situation. Fusion of multi-modal images cannot be completed unless the two modalities are spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task due to the different semantic interpretation of features extracted from each modality. This problem is decoupled into three sub-problems. The first step is the identification and extraction of common features. The second step is the determination of corresponding points. The third step consists of determining the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners. Using these features requires an extensive optimization search in order to determine the corresponding points. Many methods use global positioning systems (GPS) and a calibrated camera in order to obtain an initial estimate of the camera parameters. The advantages of our work over previous works are the following. ...
Contributing Partner: UNT Libraries
GPS CaPPture: a System for GPS Trajectory Collection, Processing, and Destination Prediction

Date: May 2012
Creator: Griffin, Terry W.
Description: In the United States, smartphone ownership surpassed 69.5 million in February 2011, with a large portion of those users (20%) downloading applications (apps) that enhance the usability of a device by adding functionality. A large percentage of apps are written specifically to utilize the geographical position of a mobile device. One of the prime factors in developing location prediction models is the use of historical data to train such a model. With larger sets of training data, prediction algorithms become more accurate; however, the use of historical data can quickly become a downfall if the GPS stream is not collected or processed correctly. Inaccurate, incomplete, or improperly interpreted historical data can lead to the inability to develop accurately performing prediction algorithms. As GPS chipsets become standard in the ever-increasing number of mobile devices, the opportunity for the collection of GPS data increases remarkably. The goal of this study is to build a comprehensive system that addresses the following challenges: (1) collection of GPS data streams in a manner such that the data is highly usable and has a reduction in errors; (2) processing and reduction of the collected data in order to prepare it and ...
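Cleaning a raw GPS stream, the kind of error reduction described above, is often done by dropping fixes whose implied speed from the previously kept fix is physically implausible. The sketch below uses the haversine distance and a hypothetical speed threshold; it is an illustration of the general technique, not the CaPPture system's actual pipeline.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) fixes."""
    R = 6371000.0                        # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def drop_speed_outliers(points, max_speed_mps=70.0):
    """points: time-ordered (t_seconds, lat, lon) fixes. Keep a fix only if
    the speed implied from the last kept fix is plausible."""
    kept = [points[0]]
    for t, lat, lon in points[1:]:
        t0, la0, lo0 = kept[-1]
        dt = t - t0
        if dt > 0 and haversine_m(la0, lo0, lat, lon) / dt <= max_speed_mps:
            kept.append((t, lat, lon))
    return kept
```

Comparing against the last *kept* fix (rather than the last raw fix) prevents a single bad jump from causing subsequent good fixes to be discarded as well.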
Contributing Partner: UNT Libraries
3D Reconstruction Using Lidar and Visual Images

Date: December 2012
Creator: Duraisamy, Prakash
Description: In this research, multi-perspective image registration using LiDAR and visual images was considered. 2D-3D image registration is a difficult task because it requires the extraction of different semantic features from each modality. This problem is solved in three parts. The first step involves detection and extraction of common features from each of the data sets. The second step consists of associating the common features between the two different modalities. Traditional methods use lines or orthogonal corners as common features. The third step consists of building the projection matrix. Many existing methods use a global positioning system (GPS) or inertial navigation system (INS) for an initial estimate of the camera pose. However, the approach discussed herein does not use GPS, INS, or any such devices for the initial estimate; hence the model can be used in places like the lunar surface or Mars, where GPS and INS are not available. A variation of the method is also described, which does not require strong features from both images but rather uses intensity gradients in the image. This can be useful when one image does not have strong features (such as lines) or there are too many extraneous features.
Contributing Partner: UNT Libraries