Monitoring Artificial Neural Network Performance Degradation under Network Damage


Recently, a number of research groups have presented transistor-based designs that exhibit behavior similar to biological synapses, facilitating the creation of tangible artificial neurons. Hardware neural networks would offer great advantages in information processing tasks that are inherently parallel, such as image processing; that require learning, such as handwriting recognition; or that operate in environments where the processing unit may be susceptible to physical damage. A number of different approaches to realizing hardware neural networks currently exist. This paper presents an analysis of the performance degradation of various artificial neural network architectures when subjected to neural damage. Un-optimized and optimized, feed-forward and recurrent networks, trained with uncorrelated and correlated data sets, are analyzed. Networks with single, dual, triple, and quadruple hidden layers are compared quantitatively. The main finding is that, for damage occurring to cells in the hidden layer(s), the architecture that sustains the least damage is that with a single hidden layer. However, when the damage is administered to the input layer, the opposite arrangement, distributing cells across multiple hidden layers, offers the most resilience. Additionally, recurrent networks offer improved resilience to damage compared to feed-forward networks.
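The kind of experiment the abstract describes can be sketched in a few lines: disable a fraction of hidden-layer cells and measure how far the network's output drifts from the intact network's output. The sketch below is a minimal illustration of that idea, not the authors' actual protocol; the layer sizes (8-24-2), the random untrained weights, the 25% damage fraction, and the use of output MSE as the degradation metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, hidden_masks=None):
    """Forward pass through a feed-forward net with tanh activations.

    hidden_masks, if given, holds one 0/1 vector per hidden layer; a zero
    entry simulates a dead (damaged) neuron in that layer.
    """
    a = x
    for i, W in enumerate(weights):
        a = np.tanh(a @ W)
        if hidden_masks is not None and i < len(weights) - 1:
            a = a * hidden_masks[i]
    return a

# A single-hidden-layer network: 8 inputs -> 24 hidden -> 2 outputs.
# Random weights stand in here; a real experiment would use a trained net.
weights = [0.5 * rng.normal(size=(8, 24)), 0.5 * rng.normal(size=(24, 2))]

x = rng.normal(size=(100, 8))   # a batch of test inputs
baseline = forward(x, weights)

mask = np.ones(24)
mask[:6] = 0.0                  # "damage" 25% of the hidden cells

damaged = forward(x, weights, [mask])

# Degradation measured as mean squared deviation from the intact output.
degradation = float(np.mean((baseline - damaged) ** 2))
print(f"output MSE after 25% hidden-layer damage: {degradation:.4f}")
```

Comparing architectures as the paper does would repeat this with the same total hidden-cell budget split across two, three, or four hidden layers, applying one mask per layer.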

