Showing 1 - 10 of 1,038 results for the search: '"MA Martin"'
Author:
MA Martin-Piedra, CA Alfonso-Rodriguez, A Zapater, D Durand-Herrera, J Chato-Astrain, F Campos, MC Sanchez-Quevedo, M Alamino, I Garzon
Published in:
European Cells & Materials, Vol 37, Pp 233-249 (2019)
Mesenchymal stem cells (MSCs) can differentiate toward epithelial cells and may be used as an alternative source for the generation of heterotypical artificial human skin substitutes, thus enhancing their development and translation potential to the clinic…
External link:
https://doaj.org/article/1086fbf2576145efb26f65d465c7289c
Author:
Xu, Shawn, Yang, Lin, Kelly, Christopher, Sieniek, Marcin, Kohlberger, Timo, Ma, Martin, Weng, Wei-Hung, Kiraly, Atilla, Kazemzadeh, Sahar, Melamed, Zakkai, Park, Jungyeon, Strachan, Patricia, Liu, Yun, Lau, Chuck, Singh, Preeti, Chen, Christina, Etemadi, Mozziyar, Kalidindi, Sreenivasa Raju, Matias, Yossi, Chou, Katherine, Corrado, Greg S., Shetty, Shravya, Tse, Daniel, Prabhakara, Shruthi, Golden, Daniel, Pilgrim, Rory, Eswaran, Krish, Sellergren, Andrew
In this work, we present an approach, which we call Embeddings for Language/Image-aligned X-Rays, or ELIXR, that leverages a language-aligned image encoder combined with or grafted onto a fixed LLM, PaLM 2, to perform a broad range of chest X-ray tasks… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2308.01317
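A minimal sketch of the general "grafting" pattern the abstract describes: project embeddings from an image encoder into the token-embedding space of a frozen language model so the LLM can consume them as soft prompt tokens. The class name, dimensions, and token count are illustrative assumptions, not ELIXR's actual architecture.

```python
import torch.nn as nn


class ImageToLLMAdapter(nn.Module):
    """Hypothetical adapter: map one image embedding to a short sequence of
    soft prompt tokens in the (frozen) LLM's embedding space."""

    def __init__(self, image_dim=1024, llm_dim=4096, num_tokens=32):
        super().__init__()
        self.proj = nn.Linear(image_dim, llm_dim * num_tokens)
        self.num_tokens = num_tokens
        self.llm_dim = llm_dim

    def forward(self, image_embedding):
        # image_embedding: (batch, image_dim) from an image encoder
        x = self.proj(image_embedding)
        # Reshape into (batch, num_tokens, llm_dim) soft tokens for the LLM
        return x.view(-1, self.num_tokens, self.llm_dim)
```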
Author:
Liang, Paul Pu, Deng, Zihao, Ma, Martin, Zou, James, Morency, Louis-Philippe, Salakhutdinov, Ruslan
In a wide range of multimodal tasks, contrastive learning has become a particularly appealing approach since it can successfully learn representations from abundant unlabeled data with only pairing information (e.g., image-caption or video-audio pairs)… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2306.05268
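A minimal sketch of the generic pairwise contrastive setup the abstract builds on: an InfoNCE-style loss over paired embeddings. This illustrates the general technique, not the paper's specific method.

```python
import torch
import torch.nn.functional as F


def info_nce(z_a, z_b, temperature=0.1):
    """Generic InfoNCE-style loss over paired embeddings (e.g. image/caption).

    z_a, z_b: (batch, dim) embeddings; pair (i, i) is the positive for row i,
    and all other rows in the batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric cross-entropy: match a -> b and b -> a
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```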
Author:
Kong, Lingjing, Ma, Martin Q., Chen, Guangyi, Xing, Eric P., Chi, Yuejie, Morency, Louis-Philippe, Zhang, Kun
Masked autoencoder (MAE), a simple and effective self-supervised learning framework based on the reconstruction of masked image regions, has recently achieved prominent success in a variety of vision tasks. Despite the emergence of intriguing empirical… (see the masking sketch after this entry)
External link:
http://arxiv.org/abs/2306.04898
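A minimal sketch of the core MAE ingredient the abstract names, random masking of image patches; the mask ratio and tensor layout are illustrative assumptions, and the encoder/decoder are omitted.

```python
import torch


def random_mask_patches(patches, mask_ratio=0.75):
    """MAE-style random masking of flattened image patches.

    patches: (batch, num_patches, patch_dim) tensor.
    Returns the visible patches fed to the encoder, a binary mask in the
    original patch order (1 = masked), and the permutation needed to restore
    order for reconstruction.
    """
    b, n, d = patches.shape
    num_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=patches.device)   # random score per patch
    shuffle = noise.argsort(dim=1)                     # random permutation
    restore = shuffle.argsort(dim=1)                   # inverse permutation
    keep_idx = shuffle[:, :num_keep]
    visible = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=patches.device)
    mask[:, :num_keep] = 0
    mask = torch.gather(mask, 1, restore)              # align mask to original order
    return visible, mask, restore
```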
Creating artificial social intelligence - algorithms that can understand the nuances of multi-person interactions - is an exciting and emerging challenge in processing facial expressions and gestures from multimodal videos. Recent multimodal methods…
External link:
http://arxiv.org/abs/2208.01036
Author:
Tsai, Yao-Hung Hubert, Li, Tianqin, Ma, Martin Q., Zhao, Han, Zhang, Kun, Morency, Louis-Philippe, Salakhutdinov, Ruslan
Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables. Fair contrastive learning constructs negative pairs, for example, from the same… (see the sketch after this entry)
External link:
http://arxiv.org/abs/2202.05458
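A sketch of the general idea of conditional sampling referenced above: restrict which rows may serve as negatives to those sharing a value of a conditioning variable (e.g., a sensitive attribute in the fairness setting). This is an illustrative assumption about the setup, not the paper's exact method.

```python
import torch
import torch.nn.functional as F


def conditional_info_nce(z_a, z_b, condition, temperature=0.1):
    """Contrastive loss whose negatives are drawn only from samples sharing
    the same value of a conditioning variable (hypothetical illustration).

    z_a, z_b: (batch, dim) paired embeddings.
    condition: (batch,) integer-coded conditioning variable.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    # Disallow cross-condition pairs: they contribute neither positives nor negatives.
    same_condition = condition.unsqueeze(0) == condition.unsqueeze(1)
    logits = logits.masked_fill(~same_condition, float('-inf'))
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```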
Author:
Ma, Martin Q., Tsai, Yao-Hung Hubert, Liang, Paul Pu, Zhao, Han, Zhang, Kun, Salakhutdinov, Ruslan, Morency, Louis-Philippe
Contrastive self-supervised learning (SSL) learns an embedding space that maps similar data pairs closer and dissimilar data pairs farther apart. Despite its success, one issue has been overlooked: the fairness aspect of representations learned using… (see the probe after this entry)
External link:
http://arxiv.org/abs/2106.02866
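One common way to probe the concern raised above is to check how much of a sensitive attribute a learned representation exposes. The linear-probe diagnostic below is a generic illustration and an assumption on my part, not the paper's methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def attribute_leakage_score(embeddings, sensitive_attr):
    """If a linear probe predicts a sensitive attribute from the embeddings
    well above chance, the representation encodes that attribute.

    embeddings: (n, dim) array of learned representations.
    sensitive_attr: (n,) array of group labels.
    """
    probe = LogisticRegression(max_iter=1000)
    scores = cross_val_score(probe, embeddings, sensitive_attr, cv=5)
    return float(np.mean(scores))  # mean cross-validated accuracy
```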
Given an unsupervised outlier detection task, how should one select a detection algorithm as well as its hyperparameters (jointly called a model)? Unsupervised model selection is notoriously difficult, in the absence of hold-out validation data with… (see the illustration after this entry)
External link:
http://arxiv.org/abs/2104.01422
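To make concrete what "model" means above (a detector algorithm together with its hyperparameters), here is an illustrative candidate space built from scikit-learn detectors; the specific algorithms and grids are arbitrary assumptions, and ranking them without labels is exactly the open problem the abstract refers to.

```python
from itertools import product

from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Each (algorithm, hyperparameter) combination is one candidate "model".
candidate_models = []
for n_estimators, contamination in product([100, 200], [0.05, 0.1]):
    candidate_models.append(
        ("IsolationForest",
         IsolationForest(n_estimators=n_estimators, contamination=contamination)))
for n_neighbors in [10, 20, 40]:
    candidate_models.append(
        ("LOF", LocalOutlierFactor(n_neighbors=n_neighbors, novelty=False)))

# With no labeled hold-out data, there is no direct way to score these
# candidates; choosing among them is the unsupervised model-selection problem.
```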
Author:
Tsai, Yao-Hung Hubert, Ma, Martin Q., Yang, Muqiao, Zhao, Han, Morency, Louis-Philippe, Salakhutdinov, Ruslan
This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of…
External link:
http://arxiv.org/abs/2103.11275
Author:
Tsai, Yao-Hung Hubert, Ma, Martin Q., Yang, Muqiao, Salakhutdinov, Ruslan, Morency, Louis-Philippe
Human language can be expressed through multiple sources of information known as modalities, including tones of voice, facial gestures, and spoken language. Recent multimodal learning with strong performance on human-centric tasks such as sentiment…
External link:
http://arxiv.org/abs/2004.14198