PAC learning algorithms for functions approximated by feedforward networks

Description

The authors present a class of efficient algorithms for PAC learning continuous functions and regressions that are approximated by feedforward networks. The algorithms are applicable to networks with unknown weights located only in the output layer and are obtained by utilizing the potential function methods of Aizerman et al. Conditions relating the sample sizes to the error bounds are derived using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
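As a rough illustration of the approach described above, the sketch below applies a potential-function-style correction rule, in the spirit of Aizerman et al., to estimate only the output-layer weights of a network whose hidden layer is fixed and known. The sigmoid hidden units, the 1/n step size, and the toy sine target are illustrative assumptions, not details taken from the report.

```python
import numpy as np

# Sketch only (not the report's exact algorithm): a potential-function-style
# iterative correction for the unknown output-layer weights of a feedforward
# network whose first layer is fixed and known.

rng = np.random.default_rng(0)

def hidden_layer(x, W, b):
    """Fixed, known hidden layer: sigmoid units phi(x) = s(Wx + b) (assumed form)."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def learn_output_weights(samples, W, b, passes=5):
    """Estimate output weights from i.i.d. samples (x, y) by repeated correction."""
    k = W.shape[0]
    w = np.zeros(k)
    n = 0
    for _ in range(passes):
        for x, y in samples:
            n += 1
            phi = hidden_layer(x, W, b)
            gamma = 1.0 / n                      # assumed diminishing step size
            # potential-function-style correction: move the estimate toward the
            # observed value along the fixed feature vector phi(x)
            w += gamma * (y - w @ phi) * phi
    return w

# Toy usage: fit a one-dimensional continuous target with a small random hidden layer.
d, k = 1, 20
W = rng.normal(size=(k, d))
b = rng.normal(size=k)
xs = rng.uniform(-1, 1, size=(200, d))
ys = np.sin(3 * xs[:, 0])                        # illustrative continuous target
w_hat = learn_output_weights(list(zip(xs, ys)), W, b)
pred = np.array([w_hat @ hidden_layer(x, W, b) for x in xs])
print("mean squared error:", float(np.mean((pred - ys) ** 2)))
```

Because only the output weights are adjusted, each update is a simple linear correction in the fixed features phi(x), which is what keeps this class of algorithms computationally efficient.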

Physical Description

15 p.

Creation Information

Rao, N.S.V. & Protopopescu, V. June 1, 1996.

Context

This article is part of the collection entitled Office of Scientific & Technical Information Technical Reports and was provided by the UNT Libraries Government Documents Department to the Digital Library, a digital repository hosted by the UNT Libraries. More information about this article can be viewed below.

Who

People and organizations associated with either the creation of this article or its content.

Authors

  • Rao, N.S.V.
  • Protopopescu, V.

Provided By

UNT Libraries Government Documents Department

Serving as both a federal and a state depository library, the UNT Libraries Government Documents Department maintains millions of items in a variety of formats. The department is a member of the FDLP Content Partnerships Program and an Affiliated Archive of the National Archives.

What

Descriptive information to help identify this article.

Notes

OSTI as DE96008787

Source

  • 13th International Conference on Machine Learning, Bari (Italy), 3-6 July 1996

Language

  • English

Item Type

  • Article

Identifier

Unique identifying numbers for this article in the Digital Library or other systems.

  • Other: DE96008787
  • Report No.: CONF-960797--1
  • Grant Number: AC05-96OR22464
  • Office of Scientific & Technical Information Report Number: 244610
  • Archival Resource Key: ark:/67531/metadc673198

Collections

This article is part of the following collection of related materials.

Office of Scientific & Technical Information Technical Reports

When

Dates and time periods associated with this article.

Creation Date

  • June 1, 1996

Added to The UNT Digital Library

  • June 29, 2015, 9:42 p.m.

Description Last Updated

  • Jan. 22, 2016, 1:02 p.m.

Citations, Rights, Re-Use

Rao, N.S.V. & Protopopescu, V. PAC learning algorithms for functions approximated by feedforward networks, article, June 1, 1996; Tennessee. (digital.library.unt.edu/ark:/67531/metadc673198/: accessed August 17, 2017), University of North Texas Libraries, Digital Library, digital.library.unt.edu; crediting UNT Libraries Government Documents Department.