Showing 1 - 7 of 7 for search: '"Lee, Hyungyung"'
Author:
Lee, Hyungyung, Lee, Da Young, Kim, Wonjae, Kim, Jin-Hwa, Kim, Tackeun, Kim, Jihang, Sunwoo, Leonard, Choi, Edward
Synthetic medical data generation has opened up new possibilities in the healthcare domain, offering a powerful tool for simulating clinical scenarios, enhancing diagnostic and treatment quality, gaining granular medical knowledge, and accelerating t…
External link:
http://arxiv.org/abs/2302.12172
Although deep generative models have gained a lot of attention, most of the existing works are designed for unimodal generation. In this paper, we explore a new method for unconditional image-text pair generation. We design Multimodal Cross-Quantizat…
External link:
http://arxiv.org/abs/2204.07537
Published in:
IEEE Journal of Biomedical and Health Informatics 2022
Recently a number of studies demonstrated impressive performance on diverse vision-language multi-modal tasks such as image captioning and visual question answering by extending the BERT architecture with multi-modal pre-training objectives. In this…
External link:
http://arxiv.org/abs/2105.11333
UniXGen: A Unified Vision-Language Model for Multi-View Chest X-ray Generation and Report Generation
Author:
Lee, Hyungyung, Lee, Da Young, Kim, Wonjae, Kim, Jin-Hwa, Kim, Tackeun, Kim, Jihang, Sunwoo, Leonard, Choi, Edward
Generated synthetic data in medical research can substitute privacy and security-sensitive data with a large-scale curated dataset, reducing data collection and annotation costs. As part of this effort, we propose UniXGen, a unified chest X-ray and r…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::9cda45c90196dea6661cd8cc04b23d06
http://arxiv.org/abs/2302.12172
Academic article
This result is not available to unauthenticated users; sign in to view it.
Published in:
Journal of Consumer Studies. 30:239-259
Published in:
International Journal of Computer-Assisted Language Learning and Teaching (IJCALLT); January 2011, Vol. 1 Issue: 1 p1-15, 15p