Fitting a Function and Its Derivative
This paper introduces a new procedure for gradient-based training of multilayer perceptron neural networks to simultaneously approximate both a function and its first derivatives. It is assumed that the true function values and the true derivatives are available at the training points. An algorithm is then derived to compute the gradient of a new performance function that combines both squared function error and squared derivative error. Experimental results show that the neural networks trained by the new procedure yield more accurate approximations for both the functions and their first derivatives than networks trained by standard methods. In addition, it is shown that the generalization capabilities of networks trained using this new procedure are better than those trained with early stopping or Bayesian regularization, even though no validation set is used.
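The combined performance index described in the abstract (squared function error plus squared derivative error) can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's algorithm: a one-hidden-layer tanh network is fit to f(x) = sin(x) and f'(x) = cos(x), and, for clarity, the parameter gradient is computed by central differences rather than by the backpropagation rules derived in the paper.

```python
import numpy as np

# Hypothetical sketch of a combined function/derivative performance index.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 41)
t, dt = np.sin(x), np.cos(x)          # true function and derivative values

n_hidden = 10
# parameter vector: [W1 (n_hidden), b1 (n_hidden), W2 (n_hidden), b2 (1)]
p = rng.normal(scale=0.5, size=3 * n_hidden + 1)

def unpack(p):
    h = n_hidden
    return p[:h], p[h:2*h], p[2*h:3*h], p[3*h]

def net(p, x):
    """Network output and its analytic derivative with respect to the input x."""
    W1, b1, W2, b2 = unpack(p)
    a = np.tanh(np.outer(x, W1) + b1)  # hidden activations, shape (N, n_hidden)
    y = a @ W2 + b2                    # network output
    dy = (1.0 - a**2) @ (W2 * W1)      # dy/dx via the chain rule
    return y, dy

def loss(p, rho=1.0):
    """Combined index: squared function error + rho * squared derivative error."""
    y, dy = net(p, x)
    return np.mean((y - t)**2) + rho * np.mean((dy - dt)**2)

def num_grad(p, eps=1e-6):
    """Central-difference gradient (stands in for the paper's gradient rules)."""
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p); e[i] = eps
        g[i] = (loss(p + e) - loss(p - e)) / (2 * eps)
    return g

loss0 = loss(p)
for _ in range(2000):                  # plain gradient descent
    p -= 0.05 * num_grad(p)
print(f"combined error: {loss0:.3f} -> {loss(p):.4f}")
```

The weighting `rho` between the two error terms is an assumed free parameter here; the key point is that the derivative error term requires differentiating the network output with respect to its *input*, as done analytically in `net`.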

  • Abstract
  • Introduction
  • Training Algorithm
  • Simulation Results
  • Conclusions
  • Acknowledgements
  • References
