GRACE: Loss-Resilient Real-Time Video Communication Using Data-Scalable Autoencoder

Author: Cheng, Yihua; Arapin, Anton; Zhang, Ziyi; Zhang, Qizheng; Li, Hanchen; Feamster, Nick; Jiang, Junchen
Year of publication: 2022
Subject:
Document type: Working Paper
Description: Across many real-time video applications, we see a growing need (especially under long delays and dynamic bandwidth) to allow clients to decode each frame once any (non-empty) subset of its packets is received, and to improve quality with each additional packet. We call this data-scalable delivery. Unfortunately, existing techniques (e.g., FEC, RS, and Fountain codes) fall short: they require delivery of a minimum number of packets before a frame can be decoded, and/or pad video data with redundancy in anticipation of packet losses, which hurts video quality when no packets are actually lost. This work explores a new approach, inspired by recent advances in neural-network autoencoders, that makes data-scalable delivery possible. We present Grace, a concrete data-scalable real-time video system. With the same video encoding, Grace's quality is slightly lower than that of a traditional codec without redundancy when no packets are lost, but with each lost packet its quality degrades much more gracefully than existing solutions, allowing clients to flexibly trade frame delay for video quality. Grace makes two contributions: (1) it trains new custom autoencoders that balance compression efficiency with resilience against a wide range of packet losses; and (2) it uses a new transmission scheme to deliver autoencoder-coded frames as individually decodable packets. We test Grace (along with traditional loss-resilient schemes and codecs) on real network traces and videos, and show that while its compression efficiency is slightly worse than that of heavily engineered video codecs, it significantly reduces tail video frame delay (by 2$\times$ at the 95th percentile) with only marginally lowered video quality.
Database: arXiv
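
To make the abstract's notion of data-scalable delivery more concrete, the sketch below shows one simple way an autoencoder can be trained to tolerate arbitrary packet losses: a frame's latent representation is split channel-wise into packet-sized groups, and a random subset of those groups is zeroed out during training, so the decoder learns to reconstruct the frame from whatever packets happen to arrive. This is only an illustrative toy in PyTorch, not Grace's actual architecture, packetization, or training procedure; the model, layer sizes, and the channel-wise split are assumptions made for the example.

# Illustrative sketch only (assumed design, not Grace's real code):
# the latent tensor is split channel-wise into "packets"; a random
# subset is zeroed during training to simulate packet loss, so the
# decoder learns to reconstruct from any non-empty subset.
import torch
import torch.nn as nn

class ToyLossResilientAutoencoder(nn.Module):
    def __init__(self, num_packets: int = 8, latent_channels: int = 64):
        super().__init__()
        assert latent_channels % num_packets == 0
        self.num_packets = num_packets
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, latent_channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, frame: torch.Tensor, keep_prob: float = 0.7) -> torch.Tensor:
        z = self.encoder(frame)                          # latent "coded frame"
        b, c, h, w = z.shape
        # Split latent channels into packet-sized groups.
        packets = z.view(b, self.num_packets, c // self.num_packets, h, w)
        # Randomly drop packets; keep at least one so the subset is non-empty.
        keep = (torch.rand(b, self.num_packets, 1, 1, 1,
                           device=z.device) < keep_prob).float()
        keep[:, 0] = 1.0
        received = (packets * keep).view(b, c, h, w)     # missing packets become zeros
        return self.decoder(received)

# Usage: train with a plain reconstruction loss over random loss patterns.
model = ToyLossResilientAutoencoder()
frame = torch.rand(2, 3, 64, 64)                         # dummy RGB frames in [0, 1]
recon = model(frame, keep_prob=0.6)
loss = nn.functional.mse_loss(recon, frame)
loss.backward()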