Showing 1 - 10 of 64 for search: '"Jiang, Junqi"'
Author:
Liang, Xudong, Ding, Yimiao, Yuan, Zihao, Jiang, Junqi, Xie, Zongling, Fei, Peng, Sun, Yixuan, Gu, Guoying, Zhong, Zheng, Chen, Feifei, Si, Guangwei, Gong, Zhefeng
The Drosophila larva, a soft-body animal, can bend its body and roll efficiently to escape danger. However, contrary to common belief, this rolling motion is not driven by the imbalance of gravity and ground reaction forces. Through functional imaging…
External link:
http://arxiv.org/abs/2410.07644
Author:
Zhai, Xuehao, Jiang, Junqi, Dejl, Adam, Rago, Antonio, Guo, Fangce, Toni, Francesca, Sivakumar, Aruna
Urban land use inference is a critically important task that aids in city planning and policy-making. Recently, the increased use of sensor and location technologies has facilitated the collection of multi-modal mobility data, offering valuable insights…
External link:
http://arxiv.org/abs/2406.13724
Author:
Leofante, Francesco, Ayoobi, Hamed, Dejl, Adam, Freedman, Gabriel, Gorur, Deniz, Jiang, Junqi, Paulino-Passos, Guilherme, Rago, Antonio, Rapberger, Anna, Russo, Fabrizio, Yin, Xiang, Zhang, Dekai, Toni, Francesca
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Instead, contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making…
External link:
http://arxiv.org/abs/2405.10729
Counterfactual Explanations (CEs) have emerged as a major paradigm in explainable AI research, providing recourse recommendations for users affected by the decisions of machine learning models. However, CEs found by existing methods often become invalid…
External link:
http://arxiv.org/abs/2404.13736
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed…
External link:
http://arxiv.org/abs/2402.01928
Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task. Recent studies show that models obtained under MM may produce inconsistent predictions for the same input. When…
External link:
http://arxiv.org/abs/2312.15097
Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified…
External link:
http://arxiv.org/abs/2309.12545
The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following…
External link:
http://arxiv.org/abs/2208.14878
Author:
Chen, Kenan, Zhang, Zhehao, Jiang, Junqi, Wang, Junlin, Wang, Jing, Sun, Yuchun, Xu, Xiangliang, Guo, Chuanbin
Published in:
In Heliyon July 2023 9(7)
Academic article