## Entropy-based comparison of neural networks for classification

Description:
In recent years, multilayer feedforward neural networks (NNs) have been shown to be very effective tools in many different applications. A natural and essential step in continuing the diffusion of these tools into day-to-day use is their hardware implementation, which is by far the most cost-effective solution for large-scale use. When hardware implementation is contemplated, the size of the NN becomes a crucial issue because the size is directly proportional to the cost of the implementation. In this light, any theoretical results that establish bounds on the size of an NN for a given problem are extremely important. In the same context, a particularly interesting case is that of neural networks using limited integer weights. These networks are particularly suitable for hardware implementation because they need less space for storing the weights, and fixed-point, limited-precision arithmetic is much cheaper to implement than its floating-point counterpart. This paper presents an entropy-based analysis which completes, unifies, and correlates results partially presented in [Beiu, 1996, 1997a] and [Draghici, 1997]. Tight bounds for real- and integer-weight neural networks are calculated.
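The two quantities the abstract relates can be illustrated with a short sketch. This is not the paper's method; it is a minimal, assumed illustration of (a) the Shannon entropy of a classification problem's labels, a natural information-theoretic quantity in such analyses, and (b) the weight-storage saving that motivates limited-integer weights. The function names `label_entropy` and `weight_storage_bits` and all numeric parameters are hypothetical.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a class-label distribution.

    Illustrative only: the entropy of the target labels is one
    information-theoretic measure of a classification problem's difficulty.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def weight_storage_bits(num_weights, bits_per_weight):
    """Total storage required for a network's weights at fixed precision."""
    return num_weights * bits_per_weight

# Toy 3-class problem with classes occurring 2, 4, and 2 times.
labels = [0, 0, 1, 1, 1, 1, 2, 2]
h = label_entropy(labels)

# Hypothetical network with 1,000 weights:
# 32-bit floating-point weights vs. 4-bit limited-integer weights.
float_bits = weight_storage_bits(1000, 32)
int_bits = weight_storage_bits(1000, 4)

print(f"label entropy: {h:.3f} bits")              # → 1.500 bits
print(f"float storage: {float_bits} bits")         # → 32000 bits
print(f"integer storage: {int_bits} bits")         # → 4000 bits
```

The eightfold storage reduction in this toy comparison is one concrete reason the paper singles out limited-integer-weight networks as attractive for hardware implementation.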

Date:
April 1, 1997

Creator:
Draghici, S. & Beiu, V.

Item Type:
Article

Partner:
UNT Libraries Government Documents Department