Super synchronization for fused video and time-series neural network training


Physical Description

12 p.

Creation Information

Elliott, C.J.; Pepin, J. & Gillmann, R., June 1, 1996.

Context

This article is part of the collection entitled: Office of Scientific & Technical Information Technical Reports and was provided by UNT Libraries Government Documents Department to Digital Library, a digital repository hosted by the UNT Libraries. More information about this article can be viewed below.

Who

People and organizations associated with either the creation of this article or its content.

Authors

  • Elliott, C.J.; Pepin, J. & Gillmann, R.

Provided By

UNT Libraries Government Documents Department

Serving as both a federal and a state depository library, the UNT Libraries Government Documents Department maintains millions of items in a variety of formats. The department is a member of the FDLP Content Partnerships Program and an Affiliated Archive of the National Archives.


What

Descriptive information to help identify this article. Follow the links below to find similar items on the Digital Library.

Description

A key element in establishing neural networks for traffic monitoring is the ground truth data set that verifies the sensor data. The sensors we use produce time-series data gathered from loop and piezo sensors embedded in the highway. These signals are analyzed and parsed into vehicle events. Features are extracted from these sensors and combined to form vehicle vectors. The vehicle vectors are then combined with the video data in a data fusion process, thereby providing the neural network with its training set. We examine two studies, one by Georgia Tech Research Institute (GTRI) and another by Los Alamos National Laboratory (LANL), that use video information and have had difficulty establishing the fusion process; that is, the correspondence between the video events recorded as the ground truth data and the sensor events has been uncertain. We show that these uncertainties can be removed by establishing a more precise and accurate time measurement for the video events. We call super synchronization the principle that video time information is inherently precise to better than a frame (1/30 s) and that, by tracing the factors causing imprecision in the timing of events, we can achieve the precision required for unique vehicle identification. In the Georgia data study there was an imprecision on the order of 3 seconds, and in the LANL study an imprecision of nearly a second. In both cases, the imprecision had led to a lack of proper identification of sensor events. In the Georgia 120 study, sensors were placed at various distances downstream, up to 250 meters, from the ground truth camera. The original analysis assumed a fixed time offset corresponding to the downstream location. For this case we show that when we restrict the analysis to passenger cars and take the speed of each car into account, we can achieve a precision of approximately 0.3 s, an order of magnitude better than the previous procedure.
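The speed-corrected alignment the abstract describes can be sketched in a few lines: instead of a fixed offset for a downstream sensor, predict each vehicle's arrival time from its measured speed and match sensor events within a small tolerance. This is a minimal illustrative sketch, not the authors' code; the function names, event layout, and numbers are assumptions.

```python
# Hypothetical sketch of the speed-corrected event matching described in
# the abstract. All names and values here are illustrative assumptions,
# not the authors' actual data or implementation.

FRAME_S = 1.0 / 30.0  # video timestamps are precise to about one frame


def expected_sensor_time(video_time_s, distance_m, speed_m_s):
    """Predict when a vehicle seen on video at `video_time_s` reaches a
    sensor placed `distance_m` downstream, given its measured speed."""
    return video_time_s + distance_m / speed_m_s


def match_events(video_events, sensor_times, distance_m, tol_s=0.3):
    """Pair each (video_time, speed) event with the nearest sensor event
    that falls within `tol_s` seconds of its predicted arrival time."""
    matches = []
    for v_time, speed in video_events:
        predicted = expected_sensor_time(v_time, distance_m, speed)
        nearest = min(sensor_times, key=lambda t: abs(t - predicted))
        if abs(nearest - predicted) <= tol_s:
            matches.append((v_time, nearest))
    return matches


# Example: a passenger car passes the camera at t = 10.0 s doing 25 m/s;
# a sensor sits 250 m downstream, so it should arrive about 10 s later.
video = [(10.0, 25.0)]            # (video event time, measured speed)
sensors = [19.9, 23.1]            # candidate sensor event times
print(match_events(video, sensors, distance_m=250.0))  # → [(10.0, 19.9)]
```

With a fixed 3 s tolerance, the sensor event at 23.1 s could be confused with the true one; the speed-based prediction narrows the search window to roughly the 0.3 s the abstract reports, making the pairing unique.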


Notes

OSTI as DE96009799

Source

  • Neural network applications in highway and vehicle engineering, Ashburn, VA (United States), Jul 1996

Identifier

Unique identifying numbers for this article in the Digital Library or other systems.

  • Other: DE96009799
  • Report No.: LA-UR--96-854
  • Report No.: CONF-9607100--1
  • Grant Number: W-7405-ENG-36
  • Office of Scientific & Technical Information Report Number: 243471
  • Archival Resource Key: ark:/67531/metadc671613

Collections

This article is part of the following collection of related materials.

Office of Scientific & Technical Information Technical Reports

Reports, articles and other documents harvested from the Office of Scientific and Technical Information.

Office of Scientific and Technical Information (OSTI) is the Department of Energy (DOE) office that collects, preserves, and disseminates DOE-sponsored research and development (R&D) results that are the outcomes of R&D projects or other funded activities at DOE labs and facilities nationwide and grantees at universities and other institutions.


When

Dates and time periods associated with this article.

Creation Date

  • June 1, 1996

Added to The UNT Digital Library

  • June 29, 2015, 9:42 p.m.

Description Last Updated

  • March 1, 2016, 1:56 p.m.



Citations, Rights, Re-Use

Elliott, C.J.; Pepin, J. & Gillmann, R. Super synchronization for fused video and time-series neural network training, article, June 1, 1996; New Mexico. (digital.library.unt.edu/ark:/67531/metadc671613/: accessed December 13, 2017), University of North Texas Libraries, Digital Library, digital.library.unt.edu; crediting UNT Libraries Government Documents Department.