On sparsely connected optimal neural networks

Creation Information

Beiu, V. & Draghici, S. October 1, 1997.

Context

This article is part of the collection entitled Office of Scientific & Technical Information Technical Reports and was provided by the UNT Libraries Government Documents Department to the UNT Digital Library, a digital repository hosted by the UNT Libraries. More information about this article can be viewed below.

Who

People and organizations associated with either the creation of this article or its content.

Authors

  • Beiu, V. (Los Alamos National Lab., NM, United States)
  • Draghici, S. (Wayne State Univ., Detroit, MI, United States)

Provided By

UNT Libraries Government Documents Department

Serving as both a federal and a state depository library, the UNT Libraries Government Documents Department maintains millions of items in a variety of formats. The department is a member of the FDLP Content Partnerships Program and an Affiliated Archive of the National Archives.

What

Descriptive information to help identify this article. Follow the links below to find similar items on the Digital Library.

Description

This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These results have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., AT²-minimizing) neural networks of small constant fan-in. To estimate the area (A) and the delay (T) of such networks, the following cost functions are used: (i) the connectivity and the number of bits for representing the weights and thresholds, which give good estimates of the area; and (ii) the fan-ins and the lengths of the wires, which give good approximations of the delay. The second approach is based on implementing Boolean functions, for which the classical Shannon decomposition can be used. Such a decomposition has already been used to prove bounds on the size of fan-in-2 neural networks; the authors generalize that result to arbitrary fan-in and prove that size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-in is suggested for F_{n,m} functions.
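To make the notation concrete: the second approach rests on Shannon's expansion, the classical identity for decomposing a Boolean function on one of its variables. Below is a minimal statement of the identity, together with its generalization to expansion on k variables at once, which is, in spirit, how larger fan-ins enter such constructions; the paper's precise construction should be taken from the text itself:

    f(x_1, \dots, x_n) = \bar{x}_1 \, f(0, x_2, \dots, x_n) \lor x_1 \, f(1, x_2, \dots, x_n)

    f(x_1, \dots, x_n) = \bigvee_{a \in \{0,1\}^k} \Bigl( \prod_{i=1}^{k} x_i^{a_i} \Bigr) f(a_1, \dots, a_k, x_{k+1}, \dots, x_n), \qquad x_i^1 = x_i, \quad x_i^0 = \bar{x}_i

Expanding on k variables produces 2^k subfunctions combined by gates whose fan-in grows with k, which is why the size of the resulting network depends on the fan-in one is willing to allow.

The trade-off behind the AT² criterion can likewise be illustrated with a toy cost model. The Python sketch below is an illustration under stated assumptions, not the authors' cost functions (the paper estimates area from connectivity and the number of bits for weights and thresholds, and delay from fan-ins and wire lengths): it builds a balanced reduction tree over n inputs out of gates with bounded fan-in, and assumes per-gate area and per-gate delay both grow linearly with fan-in.

    import math

    def toy_at2(n: int, fan_in: int) -> int:
        """Toy AT^2 estimate for a balanced reduction tree over n inputs.

        Illustrative assumptions (not the paper's cost functions):
        per-gate delay and per-gate area grow linearly with fan-in,
        and wiring costs are ignored.
        """
        # Count layers by repeatedly merging groups of `fan_in` signals.
        depth, width = 0, n
        while width > 1:
            width = math.ceil(width / fan_in)
            depth += 1
        gates = math.ceil((n - 1) / (fan_in - 1))  # nodes of a fan_in-ary tree with n leaves
        delay = depth * fan_in                     # T: layers times per-gate delay
        area = gates * fan_in                      # A: gates times per-gate area
        return area * delay ** 2

    if __name__ == "__main__":
        for d in (2, 3, 4, 6, 8, 16, 32):
            print(f"fan-in {d:2d}: AT^2 = {toy_at2(1024, d):,}")

Under these coarse assumptions the minimum falls at a small constant fan-in (4 for n = 1024) rather than at fan-in 2 or at large fan-ins, echoing the paper's qualitative conclusion.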

Physical Description

9 p.

Notes

OSTI as DE97008308

Source

  • International conference on microelectronics for neural networks, evolution and fuzzy systems, Dresden (Germany), 24-26 Sep 1997

Language

  • English

Item Type

  • Article

Identifier

Unique identifying numbers for this article in the Digital Library or other systems.

  • Other: DE97008308
  • Report No.: LA-UR-97-1567
  • Report No.: CONF-970984-1
  • Grant Number: W-7405-ENG-36
  • Office of Scientific & Technical Information Report Number: 532531
  • Archival Resource Key: ark:/67531/metadc693942

Collections

This article is part of the following collection of related materials.

Office of Scientific & Technical Information Technical Reports

Reports, articles and other documents harvested from the Office of Scientific and Technical Information.

Office of Scientific and Technical Information (OSTI) is the Department of Energy (DOE) office that collects, preserves, and disseminates DOE-sponsored research and development (R&D) results that are the outcomes of R&D projects or other funded activities at DOE labs and facilities nationwide and grantees at universities and other institutions.

When

Dates and time periods associated with this article.

Creation Date

  • October 1, 1997

Added to The UNT Digital Library

  • Aug. 14, 2015, 8:43 a.m.

Description Last Updated

  • May 20, 2016, 3:04 p.m.

Citations, Rights, Re-Use

Beiu, V. & Draghici, S. On sparsely connected optimal neural networks, article, October 1, 1997; New Mexico. (digital.library.unt.edu/ark:/67531/metadc693942/: accessed October 21, 2017), University of North Texas Libraries, Digital Library, digital.library.unt.edu; crediting UNT Libraries Government Documents Department.