Showing 1 - 10 of 9,103
for search: '"A. Bade"'
Author:
Yigezu, Mesay Gemeda, Mersha, Melkamu Abay, Bade, Girma Yohannis, Kalita, Jugal, Kolesnikova, Olga, Gelbukh, Alexander
Published in:
ACLing 2024: 6th International Conference on AI in Computational Linguistics
The proliferation of fake news has emerged as a significant threat to the integrity of information dissemination, particularly on social media platforms. Misinformation can spread quickly due to the ease of creating and disseminating content, affecting…
External link:
http://arxiv.org/abs/2410.02609
Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks. While the impact of these attacks on model predictions has been extensively studied, their effect on the learned representations and concepts…
External link:
http://arxiv.org/abs/2403.16782
Insights into the learned latent representations are imperative for verifying deep neural networks (DNNs) in critical computer vision (CV) tasks. Therefore, state-of-the-art supervised Concept-based eXplainable Artificial Intelligence (C-XAI) methods…
External link:
http://arxiv.org/abs/2311.14435
Author:
Upadhyay, Uddeshya, Bade, Sairam, Puranik, Arjun, Asfahan, Shahir, Babu, Melwin, Lopez-Jimenez, Francisco, Asirvatham, Samuel J., Prasad, Ashim, Rajasekharan, Ajit, Awasthi, Samir, Barve, Rakesh
Published in:
Transactions on Machine Learning Research (TMLR), 2023
The automated analysis of medical time series, such as the electrocardiogram (ECG), electroencephalogram (EEG), pulse oximetry, etc., has the potential to serve as a valuable tool for diagnostic decisions, allowing for remote monitoring of patients and…
External link:
http://arxiv.org/abs/2311.13821
Agent-based models (ABMs) simulate the formation and evolution of social processes at a fundamental level by decoupling agent behavior from global observations. In the case where ABM networks evolve over time as a result of (or in conjunction with) a…
External link:
http://arxiv.org/abs/2308.05256
Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces
Safety-critical applications require transparency in artificial intelligence (AI) components, but the convolutional neural networks (CNNs) widely used for perception tasks lack inherent interpretability. Hence, insights into what CNNs have learned…
External link:
http://arxiv.org/abs/2305.07663
Analysis of how semantic concepts are represented within Convolutional Neural Networks (CNNs) is a widely used approach in Explainable Artificial Intelligence (XAI) for interpreting CNNs. A motivation is the need for transparency in safety-critical AI…
External link:
http://arxiv.org/abs/2304.14864
Author:
Bade, Sohail (sohailbade@gmail.com), Bade, Sahil, Sharma, Grishma, Bhurtel, Narayan, Singh, Yadvinder, Paudel, Sudip, Magar, Frena Pulami, Chapagain, Kshitij
Published in:
Clinical Case Reports, Sep 2024, Vol. 12, Issue 9, p. 1-6.
Author:
Bade, Sophie, Root, Joseph
We study the set of incentive compatible and efficient two-sided matching mechanisms. We classify all such mechanisms under an additional assumption -- "gender-neutrality" -- which guarantees that the two sides be treated symmetrically. All group str…
External link:
http://arxiv.org/abs/2301.13037
Published in:
Inclusive Leadership: Equity and Belonging in Our Communities