Paper Title

A Computational Approach to Packet Classification

Paper Authors

Alon Rashelbach, Ori Rottenstreich, Mark Silberstein

Paper Abstract

Multi-field packet classification is a crucial component in modern software-defined data center networks. To achieve high throughput and low latency, state-of-the-art algorithms strive to fit the rule lookup data structures into on-die caches; however, they do not scale well with the number of rules. We present a novel approach, NuevoMatch, which improves the memory scaling of existing methods. A new data structure, Range Query Recursive Model Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of the accesses to main memory with model inference computations. We describe an efficient training algorithm that guarantees the correctness of the RQ-RMI-based classification. The use of RQ-RMI allows the rules to be compressed into model weights that fit into the hardware cache. Further, it takes advantage of the growing support for fast neural network processing in modern CPUs, such as wide vector instructions, achieving a rate of tens of nanoseconds per lookup. Our evaluation using 500K multi-field rules from the standard ClassBench benchmark shows a geometric mean compression factor of 4.9x, 8x, and 82x, and average performance improvement of 2.4x, 2.6x, and 1.6x in throughput compared to CutSplit, NeuroCuts, and TupleMerge, all state-of-the-art algorithms.
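The abstract describes RQ-RMI as a learned index structure: a model predicts where a key's matching range lies, and a verification step keeps classification correct. Below is a minimal, hypothetical Python sketch of the recursive-model-index idea that RQ-RMI builds on (a two-stage model predicting a position in a sorted array of ranges, followed by a bounded local search). The class names, linear submodels, and error handling are illustrative assumptions, not the paper's actual RQ-RMI training or inference procedure.

```python
# Illustrative sketch of a two-stage recursive model index over
# non-overlapping integer ranges. NOT the paper's RQ-RMI: the submodel
# structure, training, and error handling are simplified assumptions.

import bisect
from dataclasses import dataclass


@dataclass
class LinearModel:
    slope: float
    intercept: float

    def predict(self, key: int) -> float:
        return self.slope * key + self.intercept


def fit_linear(keys, positions):
    """Least-squares fit of position ~ key (constant model as fallback)."""
    n = len(keys)
    if n == 1:
        return LinearModel(0.0, float(positions[0]))
    mean_k = sum(keys) / n
    mean_p = sum(positions) / n
    var = sum((k - mean_k) ** 2 for k in keys)
    if var == 0:
        return LinearModel(0.0, mean_p)
    slope = sum((k - mean_k) * (p - mean_p) for k, p in zip(keys, positions)) / var
    return LinearModel(slope, mean_p - slope * mean_k)


class TwoStageRMI:
    """Stage 1 routes a key to one of `fanout` stage-2 models; the chosen
    stage-2 model predicts an index into the sorted range array, and a
    bounded local search corrects the prediction."""

    def __init__(self, ranges, fanout=4, max_error=8):
        # ranges: non-overlapping (lo, hi, rule_id) triples
        self.ranges = sorted(ranges)
        self.starts = [lo for lo, _, _ in self.ranges]
        self.max_error = max_error
        keys = self.starts
        positions = list(range(len(keys)))
        self.root = fit_linear(keys, positions)
        # Partition keys among stage-2 models by the root's prediction.
        buckets = [([], []) for _ in range(fanout)]
        for k, p in zip(keys, positions):
            ks, ps = buckets[self._bucket(k, fanout)]
            ks.append(k)
            ps.append(p)
        self.leaves = [fit_linear(ks, ps) if ks else LinearModel(0.0, 0.0)
                       for ks, ps in buckets]

    def _bucket(self, key, fanout=None):
        fanout = fanout if fanout is not None else len(self.leaves)
        frac = self.root.predict(key) / max(len(self.ranges), 1)
        return min(max(int(frac * fanout), 0), fanout - 1)

    def lookup(self, key):
        """Return the rule_id whose [lo, hi] range contains `key`, or None."""
        leaf = self.leaves[self._bucket(key)]
        guess = int(round(leaf.predict(key)))
        lo = max(guess - self.max_error, 0)
        hi = min(guess + self.max_error, len(self.ranges) - 1)
        # Bounded search around the prediction; fall back to a full
        # binary search if the prediction error exceeds max_error.
        i = bisect.bisect_right(self.starts, key, lo, hi + 1) - 1
        if i < lo or not (self.ranges[i][0] <= key <= self.ranges[i][1]):
            i = bisect.bisect_right(self.starts, key) - 1
        if 0 <= i < len(self.ranges):
            r_lo, r_hi, rule_id = self.ranges[i]
            if r_lo <= key <= r_hi:
                return rule_id
        return None


if __name__ == "__main__":
    rules = [(0, 9, "r0"), (10, 99, "r1"), (100, 999, "r2"), (1000, 65535, "r3")]
    rmi = TwoStageRMI(rules)
    print(rmi.lookup(42))    # -> r1
    print(rmi.lookup(5000))  # -> r3
```

In this sketch the "model" is just a pair of fitted linear functions, so all lookup state fits in a few floats per submodel; the paper's point is analogous at scale, where model weights small enough for on-die caches stand in for large lookup structures in main memory, and the bounded verification step preserves correctness.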
