Showing 1 - 10 of 50 for the search: '"Mengnan Du"'
Published in:
BMC Medical Informatics and Decision Making, Vol 20, Iss S4, Pp 1-11 (2020)
Abstract: Background: Emotions after surviving cancer can be complicated. Survivors may have gained new strength to continue with life, but some may begin to deal with complicated feelings and emotional stress due to trauma and fear of cancer recurrence. …
External link:
https://doaj.org/article/a482e2f1aebd44d3a3a57acfcccc0ae5
Published in:
Applied AI Letters, Vol 2, Iss 4, Pp n/a-n/a (2021)
Abstract: Natural language processing (NLP) models have been increasingly deployed in real-world applications, and interpretation of textual data has also attracted considerable attention recently. Most existing methods generate feature importance interpretations …
External link:
https://doaj.org/article/e5e70b1f8ffa4241bbc8bc89d89bad21
Published in:
Communications of the ACM, Jan 2024, Vol. 67, Issue 1, pp. 110-120.
Published in:
Communications of the ACM, Jan 2020, Vol. 63, Issue 1, pp. 68-77.
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 36:9521-9528
Recent studies indicate that deep neural networks (DNNs) are prone to show discrimination toward certain demographic groups. We observe that algorithmic discrimination can be explained by the models' high reliance on fairness-sensitive features. …
Published in:
IEEE Transactions on Computational Social Systems. 9:458-468
Embeddings of textual data containing location names (e.g., social media posts) have essential applications in contexts such as marketing and disaster management. In these downstream applications, social biases behind location names are …
Published in:
IEEE Intelligent Systems. 36:25-34
Deep learning is increasingly used in high-stakes decision-making applications that affect individual lives. However, deep learning models might exhibit algorithmic discrimination with respect to protected groups, potentially posing negative …
Author:
Yuening Li, Zhengzhang Chen, Daochen Zha, Mengnan Du, Jingchao Ni, Denghui Zhang, Haifeng Chen, Xia Hu
Published in:
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.
Published in:
ACM SIGKDD Explorations Newsletter. 23:59-68
With the wide use of deep neural networks (DNNs), model interpretability has become a critical concern, since explainable decisions are preferred in high-stakes scenarios. Current interpretation techniques mainly focus on the feature attribution perspective. …
Published in:
2022 5th International Conference on Artificial Intelligence and Big Data (ICAIBD).