Title

Example-based Explanations with Adversarial Attacks for Respiratory Sound Analysis

Authors

Yi Chang, Zhao Ren, Thanh Tam Nguyen, Wolfgang Nejdl, Björn W. Schuller

Abstract

Respiratory sound classification is an important tool for remote screening of respiratory-related diseases such as pneumonia, asthma, and COVID-19. To facilitate the interpretability of classification results, especially ones based on deep learning, many explanation methods have been proposed using prototypes. However, existing explanation techniques often assume that the data is non-biased and the prediction results can be explained by a set of prototypical examples. In this work, we develop a unified example-based explanation method for selecting both representative data (prototypes) and outliers (criticisms). In particular, we propose a novel application of adversarial attacks to generate an explanation spectrum of data instances via an iterative fast gradient sign method. Such a unified explanation can avoid over-generalisation and bias by allowing human experts to assess the model's mistakes case by case. We performed a wide range of quantitative and qualitative evaluations to show that our approach generates effective and understandable explanations and is robust across many deep learning models.
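
The abstract names the iterative fast gradient sign method (I-FGSM) as the attack used to build the explanation spectrum. Below is a minimal PyTorch-style sketch of that attack only; the function name, step size `alpha`, perturbation budget `epsilon`, and number of steps are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, epsilon=0.03, alpha=0.005, steps=10):
    """Iterative fast gradient sign method (I-FGSM) sketch.

    Repeatedly nudges the input in the direction of the sign of the loss
    gradient and clips the accumulated perturbation to an L-infinity ball
    of radius `epsilon` around the clean input.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take one signed gradient-ascent step on the loss.
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball around the original input.
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
    return x_adv.detach()
```

One plausible reading of the "explanation spectrum" is that instances whose predictions only change after many such iterations sit toward the prototype end, while easily flipped instances sit toward the criticism end; the sketch above covers the attack itself, not the paper's selection procedure.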
