Showing 1 - 10 of 45,037 results for search: '"Anwer, A."'
Academic article
This result cannot be displayed to guest users; sign in to view it.
Author:
Md Arshad Anwer, Ram Niwas, Tushar Ranjan, Shyam Sundar Mandal, Mohammad Ansar, Jitendra Nath Srivastava, Jitesh Kumar, Khushbu Jain, Neha Kumari, Aditya Bharti
Published in:
Bioengineering, Vol 11, Iss 4, p 361 (2024)
In the original publication [...]
External link:
https://doaj.org/article/8bc1fbf97b1d40b5af9057658ca717c0
Author:
Anwer, Md Arshad (ansar.pantversity@gmail.com), Niwas, Ram (arshadanwer930@gmail.com), Ranjan, Tushar (mail2tusharranjan@gmail.com), Mandal, Shyam Sundar (maizebreederbau@gmail.com), Ansar, Mohammad (neha.k1392@gmail.com), Srivastava, Jitendra Nath (adityabharti0806@gmail.com), Kumar, Jitesh (jitesh1jan@gmail.com), Jain, Khushbu (khushbu3aug@gmail.com), Kumari, Neha, Bharti, Aditya
Published in:
Bioengineering (Basel). Apr2024, Vol. 11 Issue 4, p361. 5p.
Academic article
This result cannot be displayed to guest users; sign in to view it.
Author:
Mullappilly, Sahal Shaji, Kurpath, Mohammed Irfan, Pieri, Sara, Alseiari, Saeed Yahya, Cholakkal, Shanavas, Aldahmani, Khaled, Khan, Fahad, Anwer, Rao, Khan, Salman, Baldwin, Timothy, Cholakkal, Hisham
This paper introduces BiMediX2, a bilingual (Arabic-English) Bio-Medical EXpert Large Multimodal Model (LMM) with a unified architecture that integrates text and visual modalities, enabling advanced image understanding and medical applications. BiMed…
External link:
http://arxiv.org/abs/2412.07769
Author:
Vayani, Ashmal, Dissanayake, Dinura, Watawana, Hasindri, Ahsan, Noor, Sasikumar, Nevasini, Thawakar, Omkar, Ademtew, Henok Biadglign, Hmaiti, Yahya, Kumar, Amandeep, Kuckreja, Kartik, Maslych, Mykola, Ghallabi, Wafa Al, Mihaylov, Mihail, Qin, Chao, Shaker, Abdelrahman M, Zhang, Mike, Ihsani, Mahardika Krisna, Esplana, Amiel, Gokani, Monil, Mirkin, Shachar, Singh, Harsh, Srivastava, Ashay, Hamerlik, Endre, Izzati, Fathinah Asma, Maani, Fadillah Adamsyah, Cavada, Sebastian, Chim, Jenny, Gupta, Rohit, Manjunath, Sanjay, Zhumakhanova, Kamila, Rabevohitra, Feno Heriniaina, Amirudin, Azril, Ridzuan, Muhammad, Kareem, Daniya, More, Ketan, Li, Kunyang, Shakya, Pramesh, Saad, Muhammad, Ghasemaghaei, Amirpouya, Djanibekov, Amirbek, Azizov, Dilshod, Jankovic, Branislava, Bhatia, Naman, Cabrera, Alvaro, Obando-Ceron, Johan, Otieno, Olympiah, Farestam, Fabian, Rabbani, Muztoba, Baliah, Sanoojan, Sanjeev, Santosh, Shtanchaev, Abduragim, Fatima, Maheen, Nguyen, Thao, Kareem, Amrin, Aremu, Toluwani, Xavier, Nathan, Bhatkal, Amit, Toyin, Hawau, Chadha, Aman, Cholakkal, Hisham, Anwer, Rao Muhammad, Felsberg, Michael, Laaksonen, Jorma, Solorio, Thamar, Choudhury, Monojit, Laptev, Ivan, Shah, Mubarak, Khan, Salman, Khan, Fahad
Existing Large Multimodal Models (LMMs) generally focus on only a few regions and languages. As LMMs continue to improve, it is increasingly important to ensure they understand cultural contexts, respect local sensitivities, and support low-resource…
External link:
http://arxiv.org/abs/2411.16508
Author:
Ghaboura, Sara, Heakl, Ahmed, Thawakar, Omkar, Alharthi, Ali, Riahi, Ines, Saif, Abduljalil, Laaksonen, Jorma, Khan, Fahad S., Khan, Salman, Anwer, Rao M.
Recent years have witnessed a significant interest in developing large multimodal models (LMMs) capable of performing various visual reasoning and understanding tasks. This has led to the introduction of multiple LMM benchmarks to evaluate LMMs on di…
External link:
http://arxiv.org/abs/2410.18976
Author:
Awais, Muhammad, Alharthi, Ali Husain Salem Abdulla, Kumar, Amandeep, Cholakkal, Hisham, Anwer, Rao Muhammad
Significant progress has been made in advancing large multimodal conversational models (LMMs), capitalizing on vast repositories of image-text data available online. Despite this progress, these models often encounter substantial domain gaps, hinderi…
External link:
http://arxiv.org/abs/2410.08405
Recently, the Segment Anything Model (SAM) has demonstrated promising segmentation capabilities in a variety of downstream segmentation tasks. However, in the context of universal medical image segmentation, there exists a notable performance discrepan…
External link:
http://arxiv.org/abs/2410.04172
Author:
Ishaq, Ayesha, Boudjoghra, Mohamed El Amine, Lahoud, Jean, Khan, Fahad Shahbaz, Khan, Salman, Cholakkal, Hisham, Anwer, Rao Muhammad
3D multi-object tracking plays a critical role in autonomous driving by enabling the real-time monitoring and prediction of multiple objects' movements. Traditional 3D tracking systems are typically constrained by predefined object categories, limiti…
External link:
http://arxiv.org/abs/2410.01678