Showing 1 - 10 of 84 results for search: '"Rajan, Ajitha"'
This paper proposes a fully explainable approach to speaker verification (SV), a task that fundamentally relies on individual speaker characteristics. The opaque use of speaker attributes in current SV systems raises concerns of trust. Addressing this…
External link:
http://arxiv.org/abs/2405.19796
The rapidly advancing field of Explainable Artificial Intelligence (XAI) aims to tackle the issue of trust regarding the use of complex black-box deep learning models in real-world applications. Existing post-hoc XAI techniques have recently been shown…
External link:
http://arxiv.org/abs/2403.19444
Converting deep learning models between frameworks is a common step to maximize model compatibility across devices and leverage optimization features that may be exclusively provided in one deep learning framework. However, this conversion process may…
External link:
http://arxiv.org/abs/2312.15101
Authors:
Peng, Chao, Lv, Zhengwei, Fu, Jiarong, Liang, Jiayuan, Zhang, Zhao, Rajan, Ajitha, Yang, Ping
Android Apps are frequently updated to keep up with changing user, hardware, and business demands. Ensuring the correctness of App updates through extensive testing is crucial to avoid potential bugs reaching the end user. Existing Android testing to…
External link:
http://arxiv.org/abs/2309.01519
When deploying Deep Neural Networks (DNNs), developers often convert models from one deep learning framework to another (e.g., TensorFlow to PyTorch). However, this process is error-prone and can impact target model accuracy. To identify the extent of…
External link:
http://arxiv.org/abs/2306.06157
Image recognition tasks typically use deep learning and require enormous processing power, thus relying on hardware accelerators like GPUs and TPUs for fast, timely processing. Failure in real-time image recognition tasks can occur due to sub-optimal…
External link:
http://arxiv.org/abs/2306.06208
The increased utilization of Artificial Intelligence (AI) solutions brings with it inherent risks, such as misclassification and sub-optimal execution time performance, due to errors introduced in their deployment infrastructure because of problematic…
External link:
http://arxiv.org/abs/2306.01697
Authors:
Gema, Aryo Pradipta, Grabarczyk, Dominik, De Wulf, Wolf, Borole, Piyush, Alfaro, Javier Antonio, Minervini, Pasquale, Vergari, Antonio, Rajan, Ajitha
Knowledge graphs are powerful tools for representing and organising complex biomedical data. Several knowledge graph embedding algorithms have been proposed to learn from and complete knowledge graphs. However, a recent study demonstrates the limited…
External link:
http://arxiv.org/abs/2305.19979
Explainable AI (XAI) techniques have been widely used to help explain and understand the output of deep learning models in fields such as image classification and Natural Language Processing. Interest in using XAI techniques to explain deep learning-…
External link:
http://arxiv.org/abs/2305.18011
Authors:
Gema, Aryo Pradipta, Kobiela, Michał, Fraisse, Achille, Rajan, Ajitha, Oyarzún, Diego A., Alfaro, Javier Antonio
The SARS-CoV-2 pandemic has emphasised the importance of developing a universal vaccine that can protect against current and future variants of the virus. The present study proposes a novel conditional protein Language Model architecture, called Vaxf…
External link:
http://arxiv.org/abs/2305.11194