Paper Title
Improving Fast-slow Encoder based Transducer with Streaming Deliberation
Paper Authors
Paper Abstract
This paper introduces a fast-slow encoder based transducer with streaming deliberation for end-to-end automatic speech recognition. We aim to improve the recognition accuracy of the fast-slow encoder based transducer while keeping its latency low by integrating a streaming deliberation model. Specifically, the deliberation model leverages partial hypotheses from the streaming fast encoder and implicitly learns to correct recognition errors. We modify the parallel beam search algorithm for fast-slow encoder based transducer to be efficient and compatible with the deliberation model. In addition, the deliberation model is designed to process streaming data. To further improve the deliberation performance, a simple text augmentation approach is explored. We also compare LSTM and Conformer models for encoding partial hypotheses. Experiments on Librispeech and in-house data show relative WER reductions (WERRs) from 3% to 5% with a slight increase in model size and negligible extra token emission latency compared with fast-slow encoder based transducer. Compared with vanilla neural transducers, the proposed deliberation model together with fast-slow encoder based transducer obtains relative 10-11% WERRs on Librispeech and around relative 6% WERR on in-house data with smaller emission delays.
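The core idea of the deliberation step described above is that the decoder conditions not only on audio encodings but also on an encoding of the streaming partial hypothesis. Below is a minimal sketch of that fusion, not the paper's actual architecture (which encodes hypotheses with an LSTM or Conformer inside a transducer): here the hypothesis is encoded with a plain embedding lookup and fused via single-head dot-product attention, and all names (`deliberate`, `emb`, `hyp_tokens`) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def deliberate(audio_enc, hyp_tokens, emb):
    """Fuse audio encodings with an encoding of the partial hypothesis.

    audio_enc:  (T, d) streaming audio encoder output
    hyp_tokens: (U,)   token ids of the partial hypothesis so far
    emb:        (V, d) token embedding table (stand-in for the
                LSTM/Conformer hypothesis encoder in the paper)
    Returns (T, 2d): each audio frame concatenated with a context
    vector attending over the hypothesis encoding.
    """
    d = audio_enc.shape[-1]
    hyp_enc = emb[hyp_tokens]                        # (U, d)
    scores = audio_enc @ hyp_enc.T / np.sqrt(d)      # (T, U) attention logits
    ctx = softmax(scores, axis=-1) @ hyp_enc         # (T, d) hypothesis context
    return np.concatenate([audio_enc, ctx], axis=-1) # (T, 2d) fused features

# Usage: 3 audio frames of dim 4, a 2-token partial hypothesis.
rng = np.random.default_rng(0)
fused = deliberate(rng.normal(size=(3, 4)),
                   np.array([1, 7]),
                   rng.normal(size=(10, 4)))
```

In the streaming setting, `hyp_tokens` grows as the fast encoder emits tokens, so this fusion can be recomputed (or incrementally extended) per decoding step without waiting for the full utterance.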