Showing 1 - 10 of 41 results for the search '"Gryaditskaya, Yulia"'
We propose GroundUp, the first sketch-based ideation tool for 3D city massing of urban areas. We focus on early-stage urban design, where sketching is a common tool and the design starts from balancing building volumes (masses) and open spaces. …
External link:
http://arxiv.org/abs/2407.12739
We study the underexplored but fundamental vision problem of machine understanding of abstract freehand scene sketches. We introduce a sketch encoder that results in a semantically-aware feature space, which we evaluate by testing its performance on …
External link:
http://arxiv.org/abs/2312.12463
3D shape modeling is labor-intensive, time-consuming, and requires years of expertise. To facilitate 3D shape modeling, we propose a 3D shape generation network that takes a 3D VR sketch as a condition. We assume that sketches are created by novices …
External link:
http://arxiv.org/abs/2306.10830
Author:
Berardi, Gianluca, Gryaditskaya, Yulia
Recently, encoders like ViT (vision transformer) and ResNet have been trained on vast datasets and utilized as perceptual metrics for comparing sketches and images, as well as multi-domain encoders in a zero-shot setting. However, there has been limited …
External link:
http://arxiv.org/abs/2306.08541
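As an aside, here is a minimal, illustrative sketch of the generic technique this entry alludes to: using a pretrained encoder as a perceptual metric by comparing embeddings of a sketch and an image. The model choice (torchvision's ViT-B/16), the file names, and the cosine-similarity scoring are assumptions for illustration only, not the paper's actual method.

# Illustrative only: pretrained encoder features as a perceptual similarity score.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ViT backbone; its pooled class-token feature serves as the embedding.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads = torch.nn.Identity()  # drop the classification head, keep features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Encode an image (or a rasterized sketch) into a unit-norm feature vector."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        feat = model(preprocess(img).unsqueeze(0))
    return F.normalize(feat, dim=-1)

# Cosine similarity of the two embeddings; higher means the encoder
# considers the sketch and the photo perceptually closer.
score = (embed("sketch.png") * embed("photo.jpg")).sum().item()  # hypothetical files
print(f"perceptual similarity: {score:.3f}")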
This paper, for the very first time, introduces human sketches to the landscape of XAI (Explainable Artificial Intelligence). We argue that a sketch, as a "human-centred" data form, represents a natural interface to study explainability. We focus on …
External link:
http://arxiv.org/abs/2304.11744
Published in:
2020 International Conference on 3D Vision (3DV), pp. 81-90. IEEE, 2020
The growth of free online 3D shape collections has driven research on 3D retrieval. There has, however, been active debate on (i) what the best input modality is to trigger retrieval, and (ii) the ultimate usage scenario for such retrieval. In this paper, we …
External link:
http://arxiv.org/abs/2209.10020
Published in:
2021 International Conference on 3D Vision (3DV), pp. 1003-1013. IEEE, 2021
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs of a chair category with large shape diversity. Our dataset supports the recent trend in the sketch community on fine-grained data analysis, and extends it to an active …
External link:
http://arxiv.org/abs/2209.10008
We study the practical task of fine-grained 3D-VR-sketch-based 3D shape retrieval. This task is of particular interest as 2D sketches were shown to be effective queries for 2D images. However, due to the domain gap, it remains hard to achieve strong …
External link:
http://arxiv.org/abs/2209.09043
Author:
Chowdhury, Pinaki Nath, Sain, Aneeshan, Bhunia, Ayan Kumar, Xiang, Tao, Gryaditskaya, Yulia, Song, Yi-Zhe
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO. With practical applications in mind, we collect sketches that convey scene content well but can be sketched within a few minutes by a person with any sketching skills …
External link:
http://arxiv.org/abs/2203.02113
We present the first one-shot personalized sketch segmentation method. We aim to segment all sketches belonging to the same category, provisioned with a single sketch with a given part annotation, while (i) preserving the part semantics embedded in the …
External link:
http://arxiv.org/abs/2112.10838