Deep learning-based video quality enhancement for the new versatile video coding
Authors: Olfa Ben Ahmed, Seifeddine Messaoud, Soulef Bouaafia, Fatma Ezahra Sayadi, Randa Khemiri
Year of publication: 2021
Subject: Compression artifacts, Deep learning, Convolutional neural network, Video quality, Video coding, VVC, Multimedia IoT, Quality of experience, Algorithmic efficiency, Random access, Computer architecture, Artificial intelligence, Software (S.I.: Emerging trends in AI & ML)
Source: Neural Computing & Applications
ISSN: 1433-3058, 0941-0643
DOI: 10.1007/s00521-021-06491-9
Description: Multimedia IoT (M-IoT) is an emerging class of Internet of Things (IoT) that relays multimedia data (images, video, audio, speech, etc.). The rapid growth of M-IoT devices produces massive volumes of multimedia data with differing characteristics and requirements. With the development of artificial intelligence (AI), AI-based multimedia IoT systems have recently been designed and deployed for video-based services in contemporary daily life, such as high-definition (HD) and ultra-high-definition (UHD) video surveillance and mobile multimedia streaming. These services demand higher video quality in order to meet the quality of experience (QoE) expected by users. Versatile video coding (VVC) is the new video coding standard that achieves significant coding-efficiency gains over its predecessor, high-efficiency video coding (HEVC), reaching up to 30% BD-rate savings. Inspired by rapid advances in deep learning, this paper proposes a wide-activated squeeze-and-excitation deep convolutional neural network (WSE-DCNN) for video quality enhancement in VVC. The conventional in-loop filtering in VVC is replaced by the proposed WSE-DCNN model, which removes compression artifacts to improve visual quality and thereby increase end-user QoE. The results show that the proposed in-loop filtering technique achieves -2.85%, -8.89%, and -10.05% BD-rate reduction for the luma and the two chroma components, respectively, under the random access configuration. Compared to traditional CNN-based filtering approaches, the proposed WSE-DCNN-based in-loop filtering framework achieves efficient performance in terms of RD cost.
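To make the described architecture more concrete, the following is a minimal sketch of a wide-activated residual block with squeeze-and-excitation attention used as a CNN-based in-loop filter. It assumes PyTorch, and the channel width, expansion factor, block count, and class names (`SEBlock`, `WideSEResBlock`, `WSEDCNN`) are illustrative assumptions, not the authors' actual configuration or code.

```python
# Hedged sketch of a WSE-DCNN-style in-loop filter: wide-activation residual
# blocks with squeeze-and-excitation (SE) channel attention. Hyperparameters
# are illustrative placeholders, not the paper's exact settings.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pooling followed by a channel gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class WideSEResBlock(nn.Module):
    """Wide-activation residual block: expand channels before the ReLU,
    contract afterwards, then apply SE attention and a skip connection."""
    def __init__(self, channels: int = 64, expansion: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels * expansion, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * expansion, channels, 3, padding=1),
            SEBlock(channels),
        )

    def forward(self, x):
        return x + self.body(x)


class WSEDCNN(nn.Module):
    """Toy in-loop filter: maps a decoded (artifact-laden) frame patch to a
    restored patch by predicting a compression-artifact residual."""
    def __init__(self, in_ch: int = 1, feats: int = 64, num_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feats, 3, padding=1)
        self.blocks = nn.Sequential(*[WideSEResBlock(feats) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(feats, in_ch, 3, padding=1)

    def forward(self, decoded):
        # Add the predicted residual back onto the decoded frame.
        return decoded + self.tail(self.blocks(self.head(decoded)))


if __name__ == "__main__":
    # Example: filter a single 64x64 luma patch.
    model = WSEDCNN()
    patch = torch.rand(1, 1, 64, 64)
    print(model(patch).shape)  # torch.Size([1, 1, 64, 64])
```

In such a setup the network would typically be trained on pairs of VVC-decoded and original frames, so that the predicted residual cancels the compression artifacts that the standard in-loop filters would otherwise handle.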
Database: OpenAIRE
External link: