Paper Title
Deep Frequency Filtering for Domain Generalization
Paper Authors
Abstract
Improving the generalization ability of Deep Neural Networks (DNNs) is critical for their practical use and has been a longstanding challenge. Some theoretical studies have uncovered that DNNs have preferences for certain frequency components during learning and indicated that this may affect the robustness of the learned features. In this paper, we propose Deep Frequency Filtering (DFF) for learning domain-generalizable features, which is the first endeavor to explicitly modulate, during training, the frequency components that have different transfer difficulties across domains in the latent space. To achieve this, we perform a Fast Fourier Transform (FFT) on the feature maps at different layers, then adopt a lightweight module to learn attention masks from the resulting frequency representations, enhancing transferable components while suppressing components not conducive to generalization. Further, we empirically compare the effectiveness of different attention designs for implementing DFF. Extensive experiments demonstrate the effectiveness of our proposed DFF and show that applying DFF to a plain baseline outperforms state-of-the-art methods on different domain generalization tasks, including closed-set classification and open-set retrieval.
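The core operation described in the abstract (FFT of a feature map, reweighting frequency components with an attention mask, then an inverse FFT) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: in DFF the mask is learned by a lightweight module during training, whereas here `mask_weights` is a hypothetical fixed mask supplied by the caller.

```python
import numpy as np

def frequency_filter(feature_map, mask_weights):
    """Apply a frequency-domain mask to a C x H x W feature map.

    Sketch of the DFF idea: transform to the frequency domain,
    reweight components (enhance transferable ones, suppress others),
    and transform back. `mask_weights` stands in for the learned
    attention mask described in the paper.
    """
    # 2-D FFT over the spatial dimensions, independently per channel
    freq = np.fft.fft2(feature_map, axes=(-2, -1))
    # Element-wise reweighting of frequency components
    filtered = freq * mask_weights
    # Back to the spatial domain; imaginary residue is discarded
    return np.fft.ifft2(filtered, axes=(-2, -1)).real

# Example: a crude low-pass mask that keeps only the lowest frequencies
h, w = 8, 8
fm = np.random.randn(3, h, w)          # C x H x W feature map
mask = np.zeros((h, w))
mask[:2, :2] = 1.0                     # retain low-frequency corner only
out = frequency_filter(fm, mask)
print(out.shape)                       # (3, 8, 8)
```

An all-ones mask leaves the feature map unchanged, which is a convenient sanity check; in the actual method the mask values would be produced per-instance by a trainable attention module rather than fixed.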