Showing 1 - 10 of 1,378 for search: '"Hitzler, P"'
Author:
Shimizu, Cogan, Stephen, Shirly, Barua, Adrita, Cai, Ling, Christou, Antrea, Currier, Kitty, Dalal, Abhilekha, Fisher, Colby K., Hitzler, Pascal, Janowicz, Krzysztof, Li, Wenwen, Liu, Zilong, Mahdavinejad, Mohammad Saeid, Mai, Gengchen, Rehberger, Dean, Schildhauer, Mark, Shi, Meilin, Norouzi, Sanaz Saki, Tian, Yuanyuan, Wang, Sizhe, Wang, Zhangyu, Zalewski, Joseph, Zhou, Lu, Zhu, Rui
KnowWhereGraph is one of the largest fully publicly available geospatial knowledge graphs. It includes data from 30 layers on natural hazards (e.g., hurricanes, wildfires), climate variables (e.g., air temperature, precipitation), soil properties, …
External link:
http://arxiv.org/abs/2410.13948
Author:
Dalal, Abhilekha, Hitzler, Pascal
ConceptLens is an innovative tool designed to illuminate the intricate workings of deep neural networks (DNNs) by visualizing hidden neuron activations. By integrating deep learning with symbolic methods, ConceptLens offers users a unique way to understand …
External link:
http://arxiv.org/abs/2410.05311
Integrating structured knowledge from tabular formats poses significant challenges within natural language processing (NLP), particularly when dealing with complex, semi-structured tables like those found in the FeTaQA dataset. These tables require advanced …
External link:
http://arxiv.org/abs/2409.14192
Depression is a common mental health issue that requires prompt diagnosis and treatment. Despite the promise of social media data for depression detection, the opacity of employed deep learning models hinders interpretability and raises bias concerns …
External link:
http://arxiv.org/abs/2407.21041
Understanding how high-level concepts are represented within artificial neural networks is a fundamental challenge in the field of artificial intelligence. While existing literature in explainable AI emphasizes the importance of labeling neurons with …
External link:
http://arxiv.org/abs/2405.09580
Published in:
Data Intelligence, Vol. 2, Iss. 3, pp. 353-378 (2020)
Ontology alignment has been studied for over a decade, and over that time many alignment systems and methods have been developed by researchers to find simple 1-to-1 equivalence matches between two ontologies. However, very few alignment systems …
External link:
https://doaj.org/article/924146f178524fdf8877837bfbdcd43c
Author:
Dalal, Abhilekha, Rayan, Rushrukh, Barua, Adrita, Vasserman, Eugene Y., Sarker, Md Kamruzzaman, Hitzler, Pascal
A major challenge in Explainable AI lies in correctly interpreting activations of hidden neurons: accurate interpretations would help answer the question of what a deep learning system internally detects as relevant in the input, demystifying the otherwise …
External link:
http://arxiv.org/abs/2404.13567
Published in:
Neural-Symbolic Learning and Reasoning, NeSy 2024, Lecture Notes in Computer Science, vol. 14980, pp. 132-148, 2024
Explainable Artificial Intelligence (XAI) poses a significant challenge in providing transparent and understandable insights into complex AI models. Traditional post-hoc algorithms, while useful, often struggle to deliver interpretable explanations.
External link:
http://arxiv.org/abs/2404.11875
Ontology alignment, a critical process in the Semantic Web for detecting relationships between different ontologies, has traditionally focused on identifying so-called "simple" 1-to-1 relationships through comparison of class labels and properties. …
External link:
http://arxiv.org/abs/2404.10329
The previously introduced Modular Ontology Modeling methodology (MOMo) attempts to mimic the human analogical process by using modular patterns to assemble more complex concepts. To support this, MOMo organizes ontology design patterns into …
External link:
http://arxiv.org/abs/2402.18715