Research on Language Model Compression Combining Domain Compressing and Importance Pruning


The size of most statistical language models trained on large-scale corpora exceeds the storage capacity of many handheld devices. This paper proposes a language model compression method that combines domain compressing with importance pruning, using a unit retained rate to control the size of the language model. We use perplexity to evaluate the performance of the language model obtained from the new compression method. The experimental results show that the compressed model adapts well to the constraints of handheld devices.
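The abstract names a "unit retained rate" and perplexity-based evaluation but gives no formulas. The sketch below illustrates the general idea on a toy unigram model; the function names, the renormalization step, and the floor probability for pruned-away units are assumptions for illustration, not the paper's actual method:

```python
import math
from collections import Counter

def train_unigram(tokens):
    """Maximum-likelihood unigram model: word -> relative frequency."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def prune_by_retained_rate(model, retained_rate):
    """Keep only the highest-probability units.

    'retained_rate' (a hypothetical parameter mirroring the paper's
    unit retained rate) is the fraction of units kept; the surviving
    probability mass is renormalized to sum to 1.
    """
    keep_n = max(1, int(len(model) * retained_rate))
    kept = dict(sorted(model.items(), key=lambda kv: kv[1],
                       reverse=True)[:keep_n])
    z = sum(kept.values())
    return {w: p / z for w, p in kept.items()}

def perplexity(model, tokens, floor=1e-6):
    """PP = exp(-(1/N) * sum(log p(w_i))); pruned units fall back to
    a small floor probability. Lower perplexity is better."""
    logs = [math.log(model.get(w, floor)) for w in tokens]
    return math.exp(-sum(logs) / len(logs))

# Toy demo: prune 30% of the units and compare perplexity.
corpus = "the the the model model data".split()
full = train_unigram(corpus)
small = prune_by_retained_rate(full, 0.7)
print(perplexity(full, corpus), perplexity(small, corpus))
```

A smaller retained rate shrinks the model further but drives perplexity up on text that uses the pruned units, which is the trade-off the paper evaluates.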

  • Abstract
  • Keywords
  • 1 Introduction
  • 2 Statistical Language Models
  • 3 Language Model Compression
  • 4 The Improved Compression Method
  • 5 Experimental Results
  • Conclusion
  • Acknowledgments
  • References
