Paper Title

Coded Caching in Multi-server System with Random Topology

Authors

Nitish Mital, Deniz Gunduz, Cong Ling

Abstract

Cache-aided content delivery is studied in a multi-server system with $P$ servers and $K$ users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any $\rho$ out of the $P$ servers. Thanks to the availability of multiple servers, which model small-cell base stations (SBSs), demands can be satisfied with reduced storage capacity at each server and a reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to the single-server scenario. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery \textit{latency} is studied for both \textit{successive} and \textit{parallel} transmissions from the servers. It is shown that, with successive transmissions, the achievable average delivery latency is comparable to the one achieved in the single-server scenario, while the gap between the two depends on $\rho$, the available redundancy across the servers, and can be reduced by increasing the storage capacity at the SBSs. The optimality of the proposed scheme with uncoded cache placement and MDS-coded server storage is also proved for successive transmissions.
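To illustrate the MDS-coded server storage the abstract refers to, the following sketch (our own illustration, not the paper's exact construction) stores a file across $P = 3$ servers so that any $\rho = 2$ of them suffice for recovery. A single-parity code is used here because it is the simplest MDS code for $P = \rho + 1$; the paper's general scheme would use a stronger MDS code (e.g., Reed-Solomon) for arbitrary $(P, \rho)$.

```python
def mds_encode(file_bytes: bytes):
    """Split the file into rho=2 halves and add one XOR parity share (P=3).

    The resulting (3, 2) single-parity code is MDS: any 2 of the 3 shares
    recover the file, so a user reaching any rho=2 servers is served.
    """
    if len(file_bytes) % 2:
        file_bytes += b"\x00"  # pad to even length (assumes no trailing NUL in data)
    half = len(file_bytes) // 2
    a, b = file_bytes[:half], file_bytes[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]  # shares for servers 0, 1, 2


def mds_decode(shares: dict):
    """Recover the file from any rho=2 of the P=3 server shares.

    `shares` maps a server index (0, 1, or 2) to that server's stored share.
    """
    if 0 in shares and 1 in shares:
        a, b = shares[0], shares[1]
    elif 0 in shares and 2 in shares:
        a = shares[0]
        b = bytes(x ^ y for x, y in zip(a, shares[2]))  # b = a XOR parity
    else:  # servers 1 and 2
        b = shares[1]
        a = bytes(x ^ y for x, y in zip(b, shares[2]))  # a = b XOR parity
    return (a + b).rstrip(b"\x00")  # drop the padding byte, if any


file = b"cached content"
s = mds_encode(file)
# Any 2 of the 3 servers (the random rho-subset a user connects to) recover the file:
assert mds_decode({0: s[0], 2: s[2]}) == file
assert mds_decode({1: s[1], 2: s[2]}) == file
```

Each server thus stores only half the file plus redundancy, matching the abstract's point that multiple servers reduce the per-server storage and delivery load while the redundancy across servers covers the random connectivity.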
