Showing 1 - 10 of 81
for the search: "Pavez, Eduardo"
We explore the problem of sampling graph signals in scenarios where the graph structure is not predefined and must be inferred from data. In this scenario, existing approaches rely on a two-step process, where a graph is learned first, followed by sampling …
External link:
http://arxiv.org/abs/2412.09753
Author:
Vasudevan, Ekamresh, Sridhara, Shashank N., Pavez, Eduardo, Ortega, Antonio, Singh, Raghavendra, Kalluri, Srinath
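The two-step pipeline mentioned in this snippet (first infer a graph from data, then choose a sampling set on it) can be pictured with a minimal sketch. Everything below is an illustrative assumption, the synthetic data, the correlation-thresholded adjacency, and the bandlimited leverage-score selection; it is not the method proposed in the paper.

```python
# Minimal sketch of the two-step "learn a graph, then sample" baseline.
# All modeling choices here are assumptions made for illustration only.
import numpy as np

def learn_graph(X, thresh=0.6):
    """Build a crude adjacency matrix from pairwise correlations of signals X (num_signals x num_nodes)."""
    C = np.corrcoef(X, rowvar=False)
    W = np.where(np.abs(C) > thresh, np.abs(C), 0.0)
    np.fill_diagonal(W, 0.0)
    return W

def sample_nodes(W, k, bandwidth=5):
    """Pick the k nodes with the largest energy in the first `bandwidth` Laplacian eigenvectors."""
    L = np.diag(W.sum(axis=1)) - W
    _, U = np.linalg.eigh(L)
    leverage = np.sum(U[:, :bandwidth] ** 2, axis=1)
    return np.argsort(leverage)[-k:]

rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((200, 30)), axis=1)  # 200 synthetic signals on 30 nodes
W = learn_graph(X)                                      # step 1: graph inference
selected = sample_nodes(W, k=8)                         # step 2: sampling-set selection
print(sorted(selected.tolist()))
```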
We present a novel method to correct flying pixels within data captured by Time-of-flight (ToF) sensors. Flying pixel (FP) artifacts occur when signals from foreground and background objects reach the same sensor pixel, leading to a confident yet incorrect …
External link:
http://arxiv.org/abs/2410.08084
Author:
Sridhara, Shashank N., Pavez, Eduardo, Jayawant, Ajinkya, Ortega, Antonio, Watanabe, Ryosuke, Nonaka, Keisuke
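To make the flying-pixel artifact concrete: a common baseline heuristic flags depth-map pixels whose values differ strongly from both horizontal neighbours, i.e. points stranded between a foreground and a background surface. The function, threshold, and toy depth map below are assumptions for illustration, not the correction method of the paper.

```python
# Illustrative heuristic only: flag "flying pixels" as pixels whose depth is
# far from BOTH horizontal neighbours (they sit between two surfaces).
import numpy as np

def flag_flying_pixels(depth, tau=0.5):
    left = np.abs(depth[:, 1:-1] - depth[:, :-2])
    right = np.abs(depth[:, 1:-1] - depth[:, 2:])
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, 1:-1] = (left > tau) & (right > tau)
    return mask

depth = np.full((4, 8), 1.0)   # foreground plane at 1 m
depth[:, 4:] = 3.0             # background plane at 3 m
depth[:, 4] = 2.0              # in-between depths along the edge ("flying")
print(flag_flying_pixels(depth))
```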
3D Point clouds (PCs) are commonly used to represent 3D scenes. They can have millions of points, making subsequent downstream tasks such as compression and streaming computationally expensive. PC sampling (selecting a subset of points) can be used to …
External link:
http://arxiv.org/abs/2410.01027
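Two simple baselines convey what "selecting a subset of points" means in practice: a uniform random subset and voxel-grid downsampling (one representative point per occupied voxel). Both sketches below are generic assumptions, not the sampling scheme proposed in the paper.

```python
# Generic point-cloud sampling baselines for illustration only.
import numpy as np

def random_sample(points, k, seed=0):
    """Keep a uniformly random subset of k points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]

def voxel_downsample(points, voxel=0.05):
    """Keep the first point falling into each occupied voxel of side `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

pc = np.random.default_rng(1).random((100_000, 3))
print(random_sample(pc, 4096).shape, voxel_downsample(pc, 0.1).shape)
```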
Choosing an appropriate frequency definition and norm is critical in graph signal sampling and reconstruction. Most previous works define frequencies based on the spectral properties of the graph and use the same frequency definition and $\ell_2$-norm …
External link:
http://arxiv.org/abs/2409.09526
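The "spectral" frequency definition this snippet refers to is the standard one: graph frequencies are eigenvalues of the combinatorial Laplacian, and a signal's spectrum is its projection onto the eigenvectors. The $\ell_2$ least-squares reconstruction from a sample set shown below is the usual baseline, not the frequency definition or norm the paper argues for; the toy graph and sample set are assumptions.

```python
# Standard spectral frequency definition and ell_2 reconstruction baseline.
import numpy as np

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # 4-node path graph
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)                     # lam: graph frequencies, U: GFT basis

x = U[:, :2] @ np.array([1.0, 0.5])            # bandlimited signal (2 lowest frequencies)
S = [0, 2]                                     # sampled vertices
x_hat, *_ = np.linalg.lstsq(U[S, :2], x[S], rcond=None)
x_rec = U[:, :2] @ x_hat                       # ell_2 reconstruction from the samples
print(np.allclose(x_rec, x))                   # True: exact for this bandlimited signal
```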
This paper develops fast graph Fourier transform (GFT) algorithms with O(n log n) runtime complexity for rank-one updates of the path graph. We first show that several commonly-used audio and video coding transforms belong to this class of GFTs, which …
External link:
http://arxiv.org/abs/2409.08970
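The claim that common coding transforms are GFTs rests on a known fact: the DCT-II basis diagonalizes the Laplacian of the path graph. The check below verifies that fact numerically; it does not reproduce the paper's O(n log n) algorithms for rank-one updates, which are its actual contribution.

```python
# Numerical check that the orthonormal DCT-II basis diagonalizes the
# path-graph Laplacian, i.e. the DCT-II is the GFT of the path graph.
import numpy as np
from scipy.fft import idct

n = 8
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                              # path-graph Laplacian

U = idct(np.eye(n), type=2, norm="ortho", axis=0)    # columns: orthonormal DCT-II basis vectors
D = U.T @ L @ U
print(np.allclose(D, np.diag(np.diag(D)), atol=1e-8))  # True: off-diagonal terms vanish
```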
With the increasing number of images and videos consumed by computer vision algorithms, compression methods are evolving to consider both perceptual quality and performance in downstream tasks. Traditional codecs can tackle this problem by performing …
External link:
http://arxiv.org/abs/2408.07028
Author:
Watanabe, Ryosuke, Sridhara, Shashank N., Hong, Haoran, Pavez, Eduardo, Nonaka, Keisuke, Kobayashi, Tatsuya, Ortega, Antonio
Point clouds are a general format for representing realistic 3D objects in diverse 3D applications. Since point clouds have large data sizes, developing efficient point cloud compression methods is crucial. However, excessive compression leads to …
External link:
http://arxiv.org/abs/2406.10520
Point clouds in 3D applications frequently experience quality degradation during processing, e.g., scanning and compression. Reliable point cloud quality assessment (PCQA) is important for developing compression algorithms with good bitrate-quality trade-offs …
External link:
http://arxiv.org/abs/2406.09762
We present new results to model and understand the role of encoder-decoder design in machine learning (ML) from an information-theoretic angle. We use two main information concepts, information sufficiency (IS) and mutual information loss (MIL), to …
External link:
http://arxiv.org/abs/2405.20452
Current video coding standards, including H.264/AVC, HEVC, and VVC, employ the discrete cosine transform (DCT), the discrete sine transform (DST), and secondary Karhunen-Loeve transforms (KLTs) to decorrelate the intra-prediction residuals. However, the efficiency …
External link:
http://arxiv.org/abs/2402.16371
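The decorrelation role of these transforms can be seen in a toy example: a 2-D DCT applied to a smooth 8x8 "residual" block packs most of the energy into a few low-frequency coefficients, which is what makes the coefficients cheap to code. The block and the statistic printed below are made up for illustration and are not taken from the paper.

```python
# Toy illustration of transform-based energy compaction on a synthetic residual block.
import numpy as np
from scipy.fft import dctn

x, y = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
residual = 0.5 * x + 0.2 * y + 0.1 * np.random.default_rng(0).standard_normal((8, 8))

coeffs = dctn(residual, type=2, norm="ortho")        # 2-D DCT-II of the block
energy = coeffs**2
top4 = np.sort(energy.ravel())[-4:].sum()
print(f"energy in the top 4 of 64 coefficients: {top4 / energy.sum():.1%}")
```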