Showing 1 - 10 of 49 for the search: '"Razdaibiedina A"'
Author:
Mitra, Arindam, Del Corro, Luciano, Mahajan, Shweti, Codas, Andres, Simoes, Clarisse, Agarwal, Sahaj, Chen, Xuxi, Razdaibiedina, Anastasia, Jones, Erik, Aggarwal, Kriti, Palangi, Hamid, Zheng, Guoqing, Rosset, Corby, Khanpour, Hamed, Awadallah, Ahmed
Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance small…
External link:
http://arxiv.org/abs/2311.11045
Learning semantically meaningful representations from scientific documents can facilitate academic literature search and improve the performance of recommendation systems. Pre-trained language models have been shown to learn rich textual representations…
External link:
http://arxiv.org/abs/2305.04177
Author:
Razdaibiedina, Anastasia, Mao, Yuning, Hou, Rui, Khabsa, Madian, Lewis, Mike, Ba, Jimmy, Almahairi, Amjad
Prompt tuning is one of the successful approaches for parameter-efficient tuning of pre-trained language models. Despite being arguably the most parameter-efficient (tuned soft prompts constitute <0.1% of total parameters), it typically performs worse…
External link:
http://arxiv.org/abs/2305.03937
Author:
Razdaibiedina, Anastasia, Mao, Yuning, Hou, Rui, Khabsa, Madian, Lewis, Mike, Almahairi, Amjad
We introduce Progressive Prompts - a simple and efficient approach for continual learning in language models. Our method allows forward transfer and resists catastrophic forgetting, without relying on data replay or a large number of task-specific parameters…
External link:
http://arxiv.org/abs/2301.12314
Author:
Razdaibiedina, Anastasia; Brechalov, Alexander; Friesen, Helena; Mattiazzi Usaj, Mojca; Masinas, Myra Paz David; Garadi Suresh, Harsha; Wang, Kyle; Boone, Charles (charlie.boone@utoronto.ca); Ba, Jimmy (jba@cs.toronto.edu); Andrews, Brenda (brenda.andrews@utoronto.ca)
Published in:
Molecular Systems Biology. May2024, Vol. 20 Issue 5, p521-548. 28p.
Protein function is inherently linked to its localization within the cell, and fluorescence microscopy data are an indispensable resource for learning representations of proteins. Despite major developments in molecular representation learning, extract…
External link:
http://arxiv.org/abs/2205.11676
Author:
Razdaibiedina, Anastasia, Khetan, Ashish, Karnin, Zohar, Khashabi, Daniel, Kapoor, Vishaal, Madan, Vivek
Fine-tuning contextualized representations learned by pre-trained language models remains a prevalent practice in NLP. However, fine-tuning can lead to representation degradation (also known as representation collapse), which may result in instability…
External link:
http://arxiv.org/abs/2205.11603
Author:
Anastasia Razdaibiedina, Alexander Brechalov, Helena Friesen, Mojca Mattiazzi Usaj, Myra Paz David Masinas, Harsha Garadi Suresh, Kyle Wang, Charles Boone, Jimmy Ba, Brenda Andrews
Published in:
Molecular Systems Biology, Vol 20, Iss 5, Pp 521-548 (2024)
Abstract: Fluorescence microscopy data describe protein localization patterns at single-cell resolution and have the potential to reveal whole-proteome functional information with remarkable precision. Yet, extracting biologically meaningful representations…
External link:
https://doaj.org/article/10a4b38bbb1c488aaaefcdc1f2306ba0
Deep learning methods are becoming widely used for the restoration of defects associated with fluorescence microscopy imaging. One of the major challenges in the application of such methods is the availability of training data. In this work, we propose a uni…
External link:
http://arxiv.org/abs/1910.14207
Academic article
This result cannot be displayed to users who are not signed in; sign-in is required to view it.