Paper title
Interpretability of machine learning based prediction models in healthcare
Paper authors
Paper abstract
There is a need to ensure that machine learning models are interpretable. Higher interpretability means that end-users can more easily comprehend and explain a model's future predictions. Further, interpretable machine learning models allow healthcare experts to make reasonable, data-driven, and personalized decisions, which can ultimately lead to a higher quality of service in healthcare. Generally, interpretability approaches can be classified into two groups: the first focuses on personalized interpretation (local interpretability), while the second summarizes prediction models at the population level (global interpretability). Alternatively, interpretability methods can be grouped into model-specific techniques, which are designed to interpret predictions generated by a particular model, such as a neural network, and model-agnostic approaches, which provide easy-to-understand explanations of predictions made by any machine learning model. Here, we give an overview of interpretability approaches and provide examples of practical interpretability of machine learning in different areas of healthcare, including the prediction of health-related outcomes, the optimization of treatments, and the improvement of screening efficiency for specific conditions. Further, we outline future directions for interpretable machine learning and highlight the importance of developing algorithmic solutions that can enable machine-learning-driven decision making in high-stakes healthcare problems.
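To make the model-agnostic, global category concrete, the sketch below illustrates permutation importance: shuffle one feature's values and measure how much the model's error grows. The toy "risk model", data, and feature names here are hypothetical illustrations, not from the paper; the method itself treats the model as a black box, so it applies to any predictor.

```python
import random

def model(x):
    # Toy black-box "risk model": feature 0 dominates, feature 1 barely matters.
    return 2.0 * x[0] + 0.1 * x[1]

def mse(data, targets):
    # Mean squared error of the model on the given examples.
    return sum((model(x) - y) ** 2 for x, y in zip(data, targets)) / len(data)

def permutation_importance(data, targets, feature, seed=0):
    # Model-agnostic global importance: error increase after shuffling
    # one feature's column, which breaks its link to the target.
    rng = random.Random(seed)
    baseline = mse(data, targets)
    col = [x[feature] for x in data]
    rng.shuffle(col)
    permuted = [list(x) for x in data]
    for row, value in zip(permuted, col):
        row[feature] = value
    return mse(permuted, targets) - baseline

# Synthetic patients with two features; targets come from the model itself,
# so the baseline error is zero and importances are directly comparable.
data = [[float(i), float(i % 3)] for i in range(30)]
targets = [model(x) for x in data]

imp0 = permutation_importance(data, targets, feature=0)
imp1 = permutation_importance(data, targets, feature=1)
```

Shuffling feature 0 should inflate the error far more than shuffling feature 1, matching the model's coefficients; a local method such as LIME or SHAP would instead explain one patient's prediction at a time.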