Paper Title
Legal-Tech Open Diaries: Lesson learned on how to develop and deploy light-weight models in the era of humongous Language Models
Paper Authors
Abstract
In the era of billion-parameter Language Models (LMs), start-ups have to follow trends and adapt their technology accordingly. Nonetheless, open challenges remain, since developing and deploying large models demands substantial computational resources and has economic consequences. In this work, we follow the steps of the R&D group of a modern legal-tech start-up and present important insights on model development and deployment. We start from ground zero by pre-training multiple domain-specific multilingual LMs that are a better fit for contractual and regulatory text than the available alternatives (XLM-R). We present benchmark results for these models on a half-public, half-private legal benchmark comprising 5 downstream tasks, showing the impact of larger model size. Lastly, we examine the impact of a full-scale model-compression pipeline, which includes: a) parameter pruning, b) knowledge distillation, and c) quantization. The resulting models are far more efficient without sacrificing performance at large.
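The three compression stages named in the abstract can be sketched as follows. This is a minimal, hedged illustration in PyTorch using toy stand-in networks, not the paper's actual models or hyperparameters; the pruning ratio, temperature, and loss weighting are illustrative assumptions.

```python
# Illustrative sketch of the three-stage compression pipeline:
# a) parameter pruning, b) knowledge distillation, c) quantization.
# The tiny networks and all hyperparameters here are stand-ins, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# Toy "teacher" (larger) and "student" (smaller) classifiers standing in
# for the large and compressed legal LMs.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# a) Parameter pruning: zero out the 30% smallest-magnitude weights per layer
#    (the 30% ratio is an assumption for illustration).
for module in student:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# b) Knowledge distillation: train the student to match the teacher's
#    temperature-softened logits, mixed with the usual hard-label loss.
def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()  # one distillation step; a real run would loop over batches

# c) Quantization: post-training dynamic int8 quantization of Linear layers.
quantized = torch.quantization.quantize_dynamic(
    student, {nn.Linear}, dtype=torch.qint8
)
```

In a real pipeline these stages would be interleaved with fine-tuning (e.g. distilling after pruning, then quantizing the distilled student), which is how efficiency gains are obtained without a large drop in downstream performance.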