Showing 1 - 10 of 253 results for the search: '"Wu Zijie"'
Published in:
Zhejiang dianli, Vol 41, Iss 10, Pp 97-105 (2022)
With the continuously increasing penetration of renewable energy, the randomness of new energy sources and the benefit game between different operators pose a great challenge to the economic dispatch of power systems. For the safety and economy improvement of … (a rough illustration of the dispatch problem is sketched after this entry)
External link:
https://doaj.org/article/51e92ce00afc4c98a8448a40eb200519
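The snippet above is cut off before the method, but the economic-dispatch problem it refers to is, at its core, a constrained cost minimization. The sketch below is a minimal, hypothetical illustration (not the paper's model): a single-period dispatch solved as a linear program with SciPy, where the costs, limits, and demand are made-up numbers.

```python
# Hypothetical single-period economic dispatch: minimize generation cost
# subject to a power-balance constraint and per-unit limits.
# All numbers below are illustrative assumptions, not data from the paper.
from scipy.optimize import linprog

cost = [30.0, 50.0, 10.0]     # $/MWh for two thermal units and one renewable unit
p_min = [10.0, 10.0, 0.0]     # MW lower limits
p_max = [100.0, 80.0, 40.0]   # MW upper limits (renewable capped by its forecast)
demand = 150.0                # MW load to be served

# Equality constraint: dispatched power of all units must equal the demand.
res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
              bounds=list(zip(p_min, p_max)))
print("dispatch (MW):", res.x, "cost ($/h):", res.fun)
```

A real microgrid dispatch would add network constraints, renewable-forecast uncertainty, and multi-period coupling, which is where the randomness and inter-operator game mentioned in the abstract come into play.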
Lossy compression methods rely on an autoencoder to transform a point cloud into latent points for storage, leaving the inherent redundancy of latent representations unexplored. To reduce redundancy in latent points, we propose a sparse-priors-guided … (a minimal autoencoder sketch follows this entry)
External link:
http://arxiv.org/abs/2411.13860
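As a rough orientation only, the sketch below shows the generic autoencoder pipeline the abstract describes: a point cloud is mapped to a small set of latent points and decoded back. The layer sizes, latent count, and the simple max-pooled encoder are illustrative assumptions, not the paper's architecture.

```python
# Minimal point-cloud autoencoder sketch: encode a cloud into n_latent latent
# points of dimension d_latent, then decode a coarse reconstruction.
import torch
import torch.nn as nn

class PointAE(nn.Module):
    def __init__(self, n_latent=64, d_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.to_latent = nn.Linear(128, n_latent * d_latent)
        self.decoder = nn.Sequential(nn.Linear(d_latent, 128), nn.ReLU(), nn.Linear(128, 3))
        self.n_latent, self.d_latent = n_latent, d_latent

    def forward(self, xyz):                          # xyz: (B, N, 3)
        feat = self.encoder(xyz).max(dim=1).values   # (B, 128) global feature via max-pooling
        latent = self.to_latent(feat).view(-1, self.n_latent, self.d_latent)
        return self.decoder(latent)                  # (B, n_latent, 3) coarse reconstruction

pc = torch.rand(2, 2048, 3)
recon = PointAE()(pc)
print(recon.shape)  # torch.Size([2, 64, 3])
```

In an actual codec the latent points would additionally be quantized and entropy-coded for storage; the redundancy the abstract targets lives in that latent representation.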
We introduce Referring Human Pose and Mask Estimation (R-HPM) in the wild, where either a text or positional prompt specifies the person of interest in an image. This new task holds significant potential for human-centric applications such as …
External link:
http://arxiv.org/abs/2410.20508
Recent advances in 2D/3D generative models enable the generation of dynamic 3D objects from a single-view video. Existing approaches utilize score distillation sampling to form the dynamic scene as a dynamic NeRF or dense 3D Gaussians. However, these … (a schematic of the score-distillation update follows this entry)
External link:
http://arxiv.org/abs/2404.03736
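For readers unfamiliar with the score distillation sampling (SDS) mentioned above, here is a schematic of the update it performs. A stand-in convolutional noise predictor replaces the real text-conditioned diffusion model, and the toy noise schedule, the identity "renderer", and all tensor sizes are illustrative assumptions.

```python
# Schematic score-distillation-sampling (SDS) loop: noise a rendered image,
# let a frozen noise predictor estimate the noise, and push the difference
# back through the renderer into the scene parameters.
import torch
import torch.nn as nn

eps_model = nn.Conv2d(3, 3, 3, padding=1)   # placeholder for a frozen diffusion U-Net
scene_params = torch.rand(1, 3, 64, 64, requires_grad=True)  # stands in for NeRF / Gaussian params

def render(params):
    return params                            # a real pipeline would differentiably render here

alphas = torch.linspace(0.9999, 0.05, 1000)  # toy noise schedule
opt = torch.optim.Adam([scene_params], lr=1e-2)

for _ in range(10):
    x = render(scene_params)
    t = torch.randint(0, 1000, (1,))
    a = alphas[t].view(1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * noise   # forward-diffuse the rendered image
    with torch.no_grad():
        eps_pred = eps_model(x_t)                  # frozen model predicts the added noise
    grad = (1 - a) * (eps_pred - noise)            # SDS gradient, skipping the U-Net Jacobian
    opt.zero_grad()
    x.backward(gradient=grad)                      # route the gradient into the scene parameters
    opt.step()
```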
3D object detection is a fundamental task in scene understanding. Numerous research efforts have been dedicated to better incorporating Hough voting into the 3D object detection pipeline. However, due to the noisy, cluttered, and partial nature of real … (a minimal voting sketch follows this entry)
External link:
http://arxiv.org/abs/2403.14133
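The voting step referred to above can be pictured, in the spirit of VoteNet-style deep Hough voting, as each seed point regressing an offset toward an object centre; the votes are then grouped into proposals. The tiny MLP and the shapes below are illustrative assumptions, not this paper's design.

```python
# Minimal deep-Hough-voting sketch: each seed point predicts an offset toward
# an object centre plus a residual feature update for its vote.
import torch
import torch.nn as nn

class VotingModule(nn.Module):
    def __init__(self, d_feat=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_feat, 128), nn.ReLU(), nn.Linear(128, 3 + d_feat))

    def forward(self, seed_xyz, seed_feat):        # (B, M, 3), (B, M, d_feat)
        out = self.mlp(seed_feat)
        vote_xyz = seed_xyz + out[..., :3]         # each seed votes for an object centre
        vote_feat = seed_feat + out[..., 3:]       # residual feature carried by the vote
        return vote_xyz, vote_feat

seeds_xyz, seeds_feat = torch.rand(1, 256, 3), torch.rand(1, 256, 128)
votes_xyz, votes_feat = VotingModule()(seeds_xyz, seeds_feat)
print(votes_xyz.shape)   # torch.Size([1, 256, 3]); votes are subsequently clustered into proposals
```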
Generating realistic 3D scenes is challenging due to the complexity of room layouts and object geometries. We propose a sketch-based, knowledge-enhanced diffusion architecture (SEK) for generating customized, diverse, and plausible 3D scenes. SEK …
External link:
http://arxiv.org/abs/2403.14121
Authors:
Yang, Qitong, Feng, Mingtao, Wu, Zijie, Sun, Shijie, Dong, Weisheng, Wang, Yaonan, Mian, Ajmal
Directly learning to model 4D content, including shape, color and motion, is challenging. Existing methods depend on skeleton-based motion control and offer limited continuity in detail. To address this, we propose a novel framework that generates …
External link:
http://arxiv.org/abs/2403.13238
Recent progress in text-to-image (T2I) models enables high-quality image generation with flexible textual control. To utilize the abundant visual priors in off-the-shelf T2I models, a series of methods try to invert an image into a proper embedding … (a schematic inversion loop follows this entry)
External link:
http://arxiv.org/abs/2310.08094
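Inversion here means optimizing an embedding so that a frozen generator reproduces a given image. The sketch below is schematic and not any specific paper's method: the single linear "generator", the 768-dimensional embedding, and the plain MSE objective are illustrative stand-ins for a pretrained diffusion model and its training objective.

```python
# Schematic image-to-embedding inversion: the embedding is the only trainable
# tensor, optimised so the frozen generator reproduces the target image.
import torch
import torch.nn as nn

frozen_generator = nn.Sequential(nn.Linear(768, 3 * 32 * 32))   # placeholder for a T2I model
for p in frozen_generator.parameters():
    p.requires_grad_(False)

target = torch.rand(1, 3 * 32 * 32)                   # the image to invert (flattened)
embedding = torch.zeros(1, 768, requires_grad=True)   # the learnable "pseudo-word" embedding
opt = torch.optim.Adam([embedding], lr=1e-2)

for step in range(100):
    recon = frozen_generator(embedding)
    loss = nn.functional.mse_loss(recon, target)      # reconstruction objective on the frozen model
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```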
Diffusion probabilistic models have achieved remarkable success in text-guided image generation. However, generating 3D shapes is still challenging due to the lack of sufficient data containing 3D models along with their descriptions. Moreover, text …
External link:
http://arxiv.org/abs/2308.02874
This paper aims at a new generation task: non-stationary multi-texture synthesis, which unifies synthesizing multiple non-stationary textures in a single model. Most non-stationary textures have large-scale variance and can hardly be synthesized …
External link:
http://arxiv.org/abs/2305.06200