Showing 1 - 10
of 6,113
for search: '"Cheng TA"'
Published in:
Frontiers in Medicine, Vol 10 (2023)
External link:
https://doaj.org/article/2f5ca8fa488e425ebeffaa6cf5d57fcc
Author:
Yu, Ju-Chi, Borgne, Julie Le, Krishnan, Anjali, Gloaguen, Arnaud, Yang, Cheng-Ta, Rabin, Laura A, Abdi, Hervé, Guillemot, Vincent
Correspondence analysis, multiple correspondence analysis and their discriminant counterparts (i.e., discriminant simple correspondence analysis and discriminant multiple correspondence analysis) are methods of choice for analyzing multivariate categ…
External link:
http://arxiv.org/abs/2409.11789
Author:
Ya-Mei Bai, Mu-Hong Chen, Ju-Wei Hsu, Kai-Lin Huang, Pei-Chi Tu, Wan-Chen Chang, Tung-Ping Su, Cheng Ta Li, Wei-Chen Lin, Shih-Jen Tsai
Published in:
Journal of Neuroinflammation, Vol 17, Iss 1, Pp 1-10 (2020)
Abstract: Background: Previous individual studies have shown the differences in inflammatory cytokines and gray matter volumes between bipolar disorder (BD) and unipolar depression (UD). However, few studies have investigated the association between pr…
External link:
https://doaj.org/article/c1c19ccc71a344428e4c888475acf363
Author:
Antonio Passaro, Filippo de Marinis, Hai-Yan Tu, Konstantin K. Laktionov, Jifeng Feng, Artem Poltoratskiy, Jun Zhao, Eng Huat Tan, Maya Gottfried, Victor Lee, Dariusz Kowalski, Cheng Ta Yang, BJ Srinivasa, Laura Clementi, Tejaswini Jalikop, Dennis Chin Lun Huang, Agnieszka Cseh, Keunchil Park, Yi-Long Wu
Published in:
Frontiers in Oncology, Vol 11 (2021)
Background: Afatinib is approved for first-line treatment of patients with epidermal growth factor receptor mutation-positive (EGFRm+) non-small-cell lung cancer (NSCLC). Here, we report findings from a combined analysis of three phase IIIb studies of…
External link:
https://doaj.org/article/9f20422dd14b455088226a15dd90831a
We propose ZeST, a method for zero-shot material transfer to an object in the input image given a material exemplar image. ZeST leverages existing diffusion adapters to extract implicit material representation from the exemplar image. This representa…
External link:
http://arxiv.org/abs/2404.06425
Current state-of-the-art spatial reasoning-enhanced VLMs are trained to excel at spatial visual question answering (VQA). However, we believe that higher-level 3D-aware tasks, such as articulating dynamic scene changes and motion planning, require a…
External link:
http://arxiv.org/abs/2403.13438
Author:
Yeh, Chun-Hsiao, Cheng, Ta-Ying, Hsieh, He-Yen, Lin, Chuan-En, Ma, Yi, Markham, Andrew, Trigoni, Niki, Kung, H. T., Chen, Yubei
Recent text-to-image diffusion models are able to learn and synthesize images containing novel, personalized concepts (e.g., their own pets or specific items) with just a few examples for training. This paper tackles two interconnected issues within…
External link:
http://arxiv.org/abs/2402.15504
Author:
Cheng, Ta-Ying, Gadelha, Matheus, Groueix, Thibault, Fisher, Matthew, Mech, Radomir, Markham, Andrew, Trigoni, Niki
Current controls over diffusion models (e.g., through text or ControlNet) for image generation fall short in recognizing abstract, continuous attributes like illumination direction or non-rigid shape change. In this paper, we present an approach for…
External link:
http://arxiv.org/abs/2402.08654