Classification of Recorded Musical Instrument Sound Using Neural Network


The purpose of this paper is to automatically classify musical instrument sounds on the basis of a limited number of parameters. This task involves two issues: feature extraction and the development of a classifier using the obtained features. For feature extraction, a 5-second audio file stored in WAVE format is passed to a feature extraction function, which calculates more than 20 numerical features, in both the time domain and the frequency domain, that characterize the sample. For classification, we designed a two-layer Feed-Forward Neural Network (FFNN) trained with the back-propagation algorithm. The FFNN is trained in a supervised manner: the weights are adjusted based on training samples (input-output pairs) that guide the optimization procedure towards an optimum. After training, the neural network is validated by analyzing its response to unseen data in order to evaluate its generalization capability. Finally, the sequential forward selection method is adopted to choose the feature subset that achieves the highest classification accuracy. Our goal is to classify each sound into one of five musical instrument families, including the strings, the woodwinds, and the brass.
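The abstract does not list the 20+ features themselves, but the kind of computation it describes can be sketched with a few common time- and frequency-domain descriptors (zero-crossing rate, RMS energy, spectral centroid — illustrative choices, not the paper's actual feature set), applied to a 5-second clip:

```python
import numpy as np

def extract_features(signal, sr):
    """Compute a few illustrative time- and frequency-domain features.

    The paper's full set has 20+ features; these three are common
    examples assumed here for illustration only.
    """
    # Time domain: zero-crossing rate (sign changes per sample) and RMS energy
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    rms = np.sqrt(np.mean(signal ** 2))

    # Frequency domain: spectral centroid of the magnitude spectrum
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)

    return np.array([zcr, rms, centroid])

# Example: a 5-second 440 Hz sine, standing in for one WAVE clip
sr = 44100
t = np.arange(0, 5.0, 1.0 / sr)
features = extract_features(np.sin(2 * np.pi * 440.0 * t), sr)
```

For a pure 440 Hz tone the spectral centroid lands at 440 Hz and the RMS is 1/sqrt(2), which makes such synthetic signals convenient sanity checks for a feature extractor.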
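The supervised training loop described above — a two-layer feed-forward network whose weights are adjusted from input-output pairs by back-propagation — can be sketched in NumPy. The toy data, hidden-layer width, learning rate, and cross-entropy-style output gradient are all assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 2-feature vectors with binary labels, standing in for the
# extracted feature vectors and instrument-family labels
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Two-layer FFNN: one hidden layer of 8 sigmoid units, sigmoid output
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    # Forward pass
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    # Back-propagation (gradient of cross-entropy loss w.r.t. pre-activations)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.mean(0)

# Resubstitution accuracy after training
out = sig(sig(X @ W1 + b1) @ W2 + b2)
accuracy = np.mean((out > 0.5) == (y > 0.5))
```

In the paper's setting the same loop would run over the extracted feature vectors, with held-out clips used afterward to assess generalization, as the validation step in the abstract describes.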
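Sequential forward selection, named in the abstract as the feature-selection method, is a greedy search: start from an empty set and repeatedly add the one feature that most improves classification accuracy. A minimal sketch (the nearest-centroid scoring rule and synthetic data are assumptions for illustration):

```python
import numpy as np

def sequential_forward_selection(X, y, score_fn, k):
    """Greedily grow a feature subset of size k, at each step adding the
    single feature that maximizes score_fn (any accuracy estimate)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score_fn(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

def centroid_accuracy(Xs, ys):
    # Resubstitution accuracy of a nearest-class-centroid classifier
    c0, c1 = Xs[ys == 0].mean(0), Xs[ys == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return np.mean(pred == ys)

# Synthetic data: feature 2 is informative, the rest are noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 4))
X[:, 2] += y * 3.0

chosen = sequential_forward_selection(X, y, centroid_accuracy, k=2)
```

The greedy search evaluates far fewer subsets than an exhaustive scan (k passes over the candidates instead of all 2^d combinations), which is why it is a common choice when, as here, there are 20+ candidate features.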

  • Abstract
  • 1. Introduction
  • 2. The Database
  • 3. Preprocessing
  • 4. Feature Extraction
  • 5. Results of Automatic Classification Training Phase
  • 6. Testing Phase
  • 7. Feature Vector Selection
  • 8. Conclusion
  • Acknowledgements
  • References
