Paper Title
A Modality-level Explainable Framework for Misinformation Checking in Social Networks
Paper Authors
Abstract
The widespread dissemination of false information is a rising concern worldwide, with critical social impact, and has inspired the emergence of fact-checking organizations to mitigate the spread of misinformation. However, human-driven verification is time-consuming and creates a bottleneck: trustworthy, checked information cannot be produced at the same pace at which misinformation emerges. Since misinformation relates not only to the content itself but also to other social features, this paper addresses automatic misinformation checking in social networks from a multimodal perspective. Moreover, since simply labeling a piece of news as incorrect may not convince citizens and, even worse, may strengthen confirmation bias, we propose a modality-level explainable misinformation classifier framework. Our framework comprises a misinformation classifier assisted by explainable methods that generate modality-oriented explainable inferences. Preliminary findings show that the misinformation classifier benefits from multimodal information encoding, and that the modality-oriented explainable mechanism increases both the interpretability and the completeness of inferences.