Paper Title

A new hope for network model generalization

Authors

Alexander Dietmüller, Siddhant Ray, Romain Jacob, Laurent Vanbever

Abstract

Generalizing machine learning (ML) models for network traffic dynamics tends to be considered a lost cause. Hence, for every new task, we design new models and train them on model-specific datasets closely mimicking the deployment environments. Yet, an ML architecture called _Transformer_ has enabled previously unimaginable generalization in other domains. Nowadays, one can download a model pre-trained on massive datasets and only fine-tune it for a specific task and context with comparatively little time and data. These fine-tuned models are now state-of-the-art for many benchmarks. We believe this progress could translate to networking and propose a Network Traffic Transformer (NTT), a transformer adapted to learn network dynamics from packet traces. Our initial results are promising: NTT seems able to generalize to new prediction tasks and environments. This study suggests there is still hope for generalization, though it calls for a lot of future research.
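The abstract describes learning network dynamics from packet traces with a transformer-style model. A minimal sketch of the data-preparation step this implies: turning a raw packet trace into fixed-length feature sequences in which one value is masked for the model to predict. The `Packet` fields, the `(inter-arrival, size, delay)` feature choice, and the function name are illustrative assumptions, not the paper's actual pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Packet:
    timestamp: float  # arrival time in seconds (assumed field)
    size: int         # packet size in bytes (assumed field)
    delay: float      # observed end-to-end delay, the quantity to predict

def trace_to_sequences(
    trace: List[Packet], seq_len: int = 4
) -> List[Tuple[List[List[float]], float]]:
    """Slide a fixed-length window over a packet trace.

    Each packet becomes an (inter-arrival time, size, delay) feature
    triple; within each window, the last packet's delay is masked
    (set to 0.0) so a sequence model can be trained to predict it.
    """
    # Per-packet features: inter-arrival time, size, delay.
    feats = []
    prev_t = trace[0].timestamp
    for p in trace:
        feats.append([p.timestamp - prev_t, float(p.size), p.delay])
        prev_t = p.timestamp

    # Fixed-length windows with the final delay masked out as the target.
    sequences = []
    for i in range(len(feats) - seq_len + 1):
        window = [row[:] for row in feats[i:i + seq_len]]
        target = window[-1][2]
        window[-1][2] = 0.0  # mask the value the model must predict
        sequences.append((window, target))
    return sequences
```

For example, a trace of five packets with `seq_len=4` yields two overlapping training sequences, each pairing a masked window with the delay of its final packet. The masking mirrors the pre-training idea from other domains the abstract alludes to: train on cheap self-supervised reconstruction first, then fine-tune for a specific task.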
