Pasula, Revanth Reddy (2025) Optimizing Speech Models with Freezing. International Journal of Innovative Science and Research Technology, 10 (6): 25jun167. pp. 69-73. ISSN 2456-2165
Adapting speech models to new languages requires balancing accuracy against computational cost. In this work, we investigate the adaptation of Mozilla's DeepSpeech model from English to German and Swiss German through selective freezing of layers. Employing a transfer-learning strategy, we analyze the performance impact of freezing different numbers of network layers during fine-tuning. Our experiments reveal that freezing the initial layers yields significant improvements: training time decreases and accuracy increases. This layer-freezing technique thus offers an extensible way to improve automatic speech recognition for under-resourced languages.
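The layer-freezing strategy described above can be sketched as follows. This is an illustrative example only: the layer names and dictionary-based model representation are assumptions for clarity, not DeepSpeech's actual (TensorFlow-based) training API, in which freezing is done by excluding variables from the optimizer's update step.

```python
# Illustrative sketch of selective layer freezing for transfer learning.
# The layer names and structure below are hypothetical, chosen to
# resemble DeepSpeech's stack of dense layers, a recurrent layer,
# and an output layer.

def freeze_initial_layers(layers, n_frozen):
    """Mark the first n_frozen layers as non-trainable and return the
    names of the layers that will still be updated during fine-tuning."""
    for i, layer in enumerate(layers):
        layer["trainable"] = i >= n_frozen
    return [layer["name"] for layer in layers if layer["trainable"]]

# Hypothetical six-layer model: three dense layers, an LSTM, a dense
# layer, and the output layer.
model = [{"name": n, "trainable": True}
         for n in ["dense1", "dense2", "dense3", "lstm", "dense5", "output"]]

# Freeze the three initial feature-extraction layers; fine-tune the rest.
trainable = freeze_initial_layers(model, 3)
print(trainable)  # ['lstm', 'dense5', 'output']
```

Freezing the early layers preserves the low-level acoustic features learned from English, so only the later, more language-specific layers need to be retrained for the target language.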