Showing 1 - 9 of 9
for search: '"Miglani, Vivek"'
Captum is a comprehensive library for model explainability in PyTorch, offering a range of methods from the interpretability literature to enhance users' understanding of PyTorch models. In this paper, we introduce new features in Captum that are spe…
External link:
http://arxiv.org/abs/2312.05491
Author:
Kokhlikyan, Narine, Alsallakh, Bilal, Wang, Fulton, Miglani, Vivek, Yang, Oliver Aobo, Adkins, David
We propose a fairness-aware learning framework that mitigates intersectional subgroup bias associated with protected attributes. Prior research has primarily focused on mitigating one kind of bias by incorporating complex fairness-driven constraints…
External link:
http://arxiv.org/abs/2212.13014
Author:
Kokhlikyan, Narine, Miglani, Vivek, Alsallakh, Bilal, Martin, Miguel, Reblitz-Richardson, Orion
Saliency maps have shown to be both useful and misleading for explaining model predictions especially in the context of images. In this paper, we perform sanity checks for text modality and show that the conclusions made for image do not directly tra…
External link:
http://arxiv.org/abs/2106.07475
Author:
Miglani, Vivek N.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Co…
External link:
https://hdl.handle.net/1721.1/123048
Author:
Miglani, Vivek, Kokhlikyan, Narine, Alsallakh, Bilal, Martin, Miguel, Reblitz-Richardson, Orion
Integrated Gradients has become a popular method for post-hoc model interpretability. Despite its popularity, the composition and relative impact of different regions of the integral path are not well understood. We explore these effects and find th…
External link:
http://arxiv.org/abs/2010.12697
We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the paddin…
External link:
http://arxiv.org/abs/2010.02178
Author:
Kokhlikyan, Narine, Miglani, Vivek, Martin, Miguel, Wang, Edward, Alsallakh, Bilal, Reynolds, Jonathan, Melnikov, Alexander, Kliushkina, Natalia, Araya, Carlos, Yan, Siqi, Reblitz-Richardson, Orion
In this paper we introduce a novel, unified, open-source model interpretability library for PyTorch [12]. The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms, also known as feature, neuro…
External link:
http://arxiv.org/abs/2009.07896
Academic article
This result cannot be displayed to users who are not logged in.
You must log in to view this result.
Published in:
Annual International Conference of the IEEE Engineering in Medicine and Biology Society [Annu Int Conf IEEE Eng Med Biol Soc], Aug 2016, Vol. 2016, pp. 804-807.