Showing 1 - 10 of 16 for search: '"Patrick Schramowski"'
Published in:
Frontiers in Artificial Intelligence, Vol 3 (2020)
Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? In this study, we show that applying machine learning to human
External link:
https://doaj.org/article/f3d987797b9346628b60961cdbfed3ff
Author:
Anna Brugger, Jan Behmann, Stefan Paulus, Hans-Georg Luigs, Matheus Thomas Kuska, Patrick Schramowski, Kristian Kersting, Ulrike Steiner, Anne-Katrin Mahlein
Published in:
Remote Sensing, Vol 11, Iss 12, p 1401 (2019)
Previous plant phenotyping studies have focused on the visible (VIS, 400−700 nm), near-infrared (NIR, 700−1000 nm) and short-wave infrared (SWIR, 1000−2500 nm) range. The ultraviolet range (UV, 200−380 nm) has not yet been used in plant pheno
External link:
https://doaj.org/article/e8eb36855a494a748a4c650e219e410e
Published in:
Nature Machine Intelligence. 5:319-330
Author:
Nicolas Pfeuffer, Lorenz Baum, Wolfgang Stammer, Benjamin M. Abdel-Karim, Patrick Schramowski, Andreas M. Bucher, Christian Hügel, Gernot Rohde, Kristian Kersting, Oliver Hinz
Published in:
Business & Information Systems Engineering.
The most promising standard machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind the black-box
Author:
Patrick Schramowski, Stefan Paulus, Anne-Katrin Mahlein, Anna Brugger, Kristian Kersting, Ulrike Steiner
Published in:
Plant Pathology. 70:1572-1582
Published in:
Proceedings of the AAAI Conference on Artificial Intelligence. 35:9533-9540
Explaining black-box models such as deep neural networks is becoming increasingly important as it helps to boost trust and debugging. Popular forms of explanations map the features to a vector indicating their individual importance to a decision on t
Learning visual concepts from raw images without strong supervision is a challenging task. In this work, we show the advantages of prototype representations for understanding and revising the latent space of neural concept learners. For this purpose,
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5f3c86418c4951421667af61837c5f53
http://arxiv.org/abs/2112.02290
Artificial writing is permeating our lives due to recent advances in large-scale, transformer-based language models (LMs) such as BERT, its variants, GPT-2/3, and others. Using them as pre-trained models and fine-tuning them for specific tasks, resea
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::7ea8b34a3dcc472314fb30063e6f7a0c
http://arxiv.org/abs/2103.11790
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Published in:
CVPR
Most explanation methods in deep learning map importance estimates for a model's prediction back to the original input space. These "visual" explanations are often insufficient, as the model's actual concept remains elusive. Moreover, without insight
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::386347cdb16d6a186af3aea71970cbc4
http://arxiv.org/abs/2011.12854