Showing 1 - 10 of 20,988 results for search: '"P., Prabhu"'
Author:
Le, Thanh-Dung, Ha, Vu Nguyen, Nguyen, Ti Ti, Eappen, Geoffrey, Thiruvasagam, Prabhu, Garces-Socarras, Luis M., Chou, Hong-fu, Gonzalez-Rios, Jorge L., Merlano-Duncan, Juan Carlos, Chatzinotas, Symeon
Remote sensing image classification is a critical component of Earth observation (EO) systems, traditionally dominated by convolutional neural networks (CNNs) and other deep learning techniques. However, the advent of Transformer-based architectures…
External link:
http://arxiv.org/abs/2409.03901
Harnessing pre-trained LLMs to improve ASR systems, particularly for low-resource languages, is now an emerging area of research. Existing methods range from using LLMs for ASR error correction to tightly coupled systems that replace the ASR decoder…
External link:
http://arxiv.org/abs/2408.16542
Author:
Roth, Karsten, Udandarao, Vishaal, Dziadzio, Sebastian, Prabhu, Ameya, Cherti, Mehdi, Vinyals, Oriol, Hénaff, Olivier, Albanie, Samuel, Bethge, Matthias, Akata, Zeynep
Multimodal foundation models serve numerous applications at the intersection of vision and language. Still, despite being pretrained on extensive data, they become outdated over time. To keep models updated, research into continual pretraining mainly…
External link:
http://arxiv.org/abs/2408.14471
Author:
Tercan, Alperen, Prabhu, Vinayak S.
Lexicographic multi-objective problems, which impose a lexicographic importance order over the objectives, arise in many real-life scenarios. Existing Reinforcement Learning work directly addressing lexicographic tasks has been scarce. The few propos…
External link:
http://arxiv.org/abs/2408.13493
Author:
Prakash, Jatin, Buvanesh, Anirudh, Santra, Bishal, Saini, Deepak, Yadav, Sachin, Jiao, Jian, Prabhu, Yashoteja, Sharma, Amit, Varma, Manik
Extreme Classification (XC) aims to map a query to the most relevant documents from a very large document set. XC algorithms used in real-world applications learn this mapping from datasets curated from implicit feedback, such as user clicks. However…
External link:
http://arxiv.org/abs/2408.09585
Author:
Xue, Le, Shu, Manli, Awadalla, Anas, Wang, Jun, Yan, An, Purushwalkam, Senthil, Zhou, Honglu, Prabhu, Viraj, Dai, Yutong, Ryoo, Michael S, Kendre, Shrikant, Zhang, Jieyu, Qin, Can, Zhang, Shu, Chen, Chia-Chih, Yu, Ning, Tan, Juntao, Awalgaonkar, Tulika Manoj, Heinecke, Shelby, Wang, Huan, Choi, Yejin, Schmidt, Ludwig, Chen, Zeyuan, Savarese, Silvio, Niebles, Juan Carlos, Xiong, Caiming, Xu, Ran
This report introduces xGen-MM (also known as BLIP-3), a framework for developing Large Multimodal Models (LMMs). The framework comprises meticulously curated datasets, a training recipe, model architectures, and a resulting suite of LMMs. xGen-MM, s…
External link:
http://arxiv.org/abs/2408.08872
Author:
Prabhu, Sumanth
Self-ensembling techniques with diverse reasoning paths such as Self-Consistency have demonstrated remarkable performance gains in text generation with Large Language Models (LLMs). However, such techniques depend on the availability of an accurate a…
External link:
http://arxiv.org/abs/2408.08869
Author:
Imagen-Team-Google, Baldridge, Jason, Bauer, Jakob, Bhutani, Mukul, Brichtova, Nicole, Bunner, Andrew, Chan, Kelvin, Chen, Yichang, Dieleman, Sander, Du, Yuqing, Eaton-Rosen, Zach, Fei, Hongliang, de Freitas, Nando, Gao, Yilin, Gladchenko, Evgeny, Colmenarejo, Sergio Gómez, Guo, Mandy, Haig, Alex, Hawkins, Will, Hu, Hexiang, Huang, Huilian, Igwe, Tobenna Peter, Kaplanis, Christos, Khodadadeh, Siavash, Kim, Yelin, Konyushkova, Ksenia, Langner, Karol, Lau, Eric, Luo, Shixin, Mokrá, Soňa, Nandwani, Henna, Onoe, Yasumasa, Oord, Aäron van den, Parekh, Zarana, Pont-Tuset, Jordi, Qi, Hang, Qian, Rui, Ramachandran, Deepak, Rane, Poorva, Rashwan, Abdullah, Razavi, Ali, Riachi, Robert, Srinivasan, Hansa, Srinivasan, Srivatsan, Strudel, Robin, Uria, Benigno, Wang, Oliver, Wang, Su, Waters, Austin, Wolff, Chris, Wright, Auriel, Xiao, Zhisheng, Xiong, Hao, Xu, Keyang, van Zee, Marc, Zhang, Junlin, Zhang, Katie, Zhou, Wenlei, Zolna, Konrad, Aboubakar, Ola, Akbulut, Canfer, Akerlund, Oscar, Albuquerque, Isabela, Anderson, Nina, Andreetto, Marco, Aroyo, Lora, Bariach, Ben, Barker, David, Ben, Sherry, Berman, Dana, Biles, Courtney, Blok, Irina, Botadra, Pankil, Brennan, Jenny, Brown, Karla, Buckley, John, Bunel, Rudy, Bursztein, Elie, Butterfield, Christina, Caine, Ben, Carpenter, Viral, Casagrande, Norman, Chang, Ming-Wei, Chang, Solomon, Chaudhuri, Shamik, Chen, Tony, Choi, John, Churbanau, Dmitry, Clement, Nathan, Cohen, Matan, Cole, Forrester, Dektiarev, Mikhail, Du, Vincent, Dutta, Praneet, Eccles, Tom, Elue, Ndidi, Feden, Ashley, Fruchter, Shlomi, Garcia, Frankie, Garg, Roopal, Ge, Weina, Ghazy, Ahmed, Gipson, Bryant, Goodman, Andrew, Górny, Dawid, Gowal, Sven, Gupta, Khyatti, Halpern, Yoni, Han, Yena, Hao, Susan, Hayes, Jamie, Hertz, Amir, Hirst, Ed, Hou, Tingbo, Howard, Heidi, Ibrahim, Mohamed, Ike-Njoku, Dirichi, Iljazi, Joana, Ionescu, Vlad, Isaac, William, Jana, Reena, Jennings, Gemma, Jenson, Donovon, Jia, Xuhui, Jones, Kerry, Ju, Xiaoen, Kajic, Ivana, Ayan, Burcu 
Karagol, Kelly, Jacob, Kothawade, Suraj, Kouridi, Christina, Ktena, Ira, Kumakaw, Jolanda, Kurniawan, Dana, Lagun, Dmitry, Lavitas, Lily, Lee, Jason, Li, Tao, Liang, Marco, Li-Calis, Maggie, Liu, Yuchi, Alberca, Javier Lopez, Lu, Peggy, Lum, Kristian, Ma, Yukun, Malik, Chase, Mellor, John, Mosseri, Inbar, Murray, Tom, Nematzadeh, Aida, Nicholas, Paul, Oliveira, João Gabriel, Ortiz-Jimenez, Guillermo, Paganini, Michela, Paine, Tom Le, Paiss, Roni, Parrish, Alicia, Peckham, Anne, Peswani, Vikas, Petrovski, Igor, Pfaff, Tobias, Pirozhenko, Alex, Poplin, Ryan, Prabhu, Utsav, Qi, Yuan, Rahtz, Matthew, Rashtchian, Cyrus, Rastogi, Charvi, Raul, Amit, Rebuffi, Sylvestre-Alvise, Ricco, Susanna, Riedel, Felix, Robinson, Dirk, Rohatgi, Pankaj, Rosgen, Bill, Rumbley, Sarah, Ryu, Moonkyung, Salgado, Anthony, Singla, Sahil, Schroff, Florian, Schumann, Candice, Shah, Tanmay, Shillingford, Brendan, Shivakumar, Kaushik, Shtatnov, Dennis, Singer, Zach, Sluzhaev, Evgeny, Sokolov, Valerii, Sottiaux, Thibault, Stimberg, Florian, Stone, Brad, Stutz, David, Su, Yu-Chuan, Tabellion, Eric, Tang, Shuai, Tao, David, Thomas, Kurt, Thornton, Gregory, Toor, Andeep, Udrescu, Cristian, Upadhyay, Aayush, Vasconcelos, Cristina, Vasiloff, Alex, Voynov, Andrey, Walker, Amanda, Wang, Luyu, Wang, Miaosen, Wang, Simon, Wang, Stanley, Wang, Qifei, Wang, Yuxiao, Weisz, Ágoston, Wiles, Olivia, Wu, Chenxia, Xu, Xingyu Federico, Xue, Andrew, Yang, Jianbo, Yu, Luo, Yurtoglu, Mete, Zand, Ali, Zhang, Han, Zhang, Jiageng, Zhao, Catherine, Zhaxybay, Adilet, Zhou, Miao, Zhu, Shengqi, Zhu, Zhenkai, Bloxwich, Dawn, Bordbar, Mahyar, Cobo, Luis C., Collins, Eli, Dai, Shengyang, Doshi, Tulsee, Dragan, Anca, Eck, Douglas, Hassabis, Demis, Hsiao, Sissie, Hume, Tom, Kavukcuoglu, Koray, King, Helen, Krawczyk, Jack, Li, Yeqing, Meier-Hellstern, Kathy, Orban, Andras, Pinsky, Yury, Subramanya, Amar, Vinyals, Oriol, Yu, Ting, Zwols, Yori
We introduce Imagen 3, a latent diffusion model that generates high quality images from text prompts. We describe our quality and responsibility evaluations. Imagen 3 is preferred over other state-of-the-art (SOTA) models at the time of evaluation. I…
External link:
http://arxiv.org/abs/2408.07009
Author:
Ghorbel, Bassem, Prabhu, Vinayak S.
Researchers have previously proposed augmenting Signal Temporal Logic (STL) with the value freezing operator in order to express engineering properties that cannot be expressed in STL. This augmented logic is known as STL*. The previous algorithms fo…
External link:
http://arxiv.org/abs/2408.02460
Author:
Sainz, Oscar, García-Ferrero, Iker, Jacovi, Alon, Campos, Jon Ander, Elazar, Yanai, Agirre, Eneko, Goldberg, Yoav, Chen, Wei-Lin, Chim, Jenny, Choshen, Leshem, D'Amico-Wong, Luca, Dell, Melissa, Fan, Run-Ze, Golchin, Shahriar, Li, Yucheng, Liu, Pengfei, Pahwa, Bhavish, Prabhu, Ameya, Sharma, Suryansh, Silcock, Emily, Solonko, Kateryna, Stap, David, Surdeanu, Mihai, Tseng, Yu-Min, Udandarao, Vishaal, Wang, Zengzhi, Xu, Ruijie, Yang, Jinglin
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as situations where evaluation data is included in pre-training corpora u…
External link:
http://arxiv.org/abs/2407.21530