Showing 1 - 10 of 32 results for the search: '"Yoshimasa KUBO"'
Published in:
Communicative & Integrative Biology, Vol 16, Iss 1 (2023)
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian learning (CHL) and equilibrium propagation (EP) are biologically plausible…
External link:
https://doaj.org/article/56f5c9700ddc4fca90c6f0a9eca323cb
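As a side note on the record above, the following is a minimal, illustrative sketch of the contrastive Hebbian learning (CHL) idea it refers to: the network settles in a free phase and in a target-clamped phase, and weights are updated with the difference of Hebbian co-activities between the two phases. The layer sizes, the simple settling loop, and all function names are assumptions made for this example, not the paper's implementation.

```python
# Minimal sketch of a contrastive Hebbian learning (CHL) weight update for a
# single hidden layer. All names and the simple settling loop are illustrative
# assumptions, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(x, y_clamp, W1, W2, steps=20):
    """Relax hidden/output activities toward a fixed point; clamp y if given."""
    h = np.zeros(W1.shape[1])
    y = np.zeros(W2.shape[1]) if y_clamp is None else y_clamp
    for _ in range(steps):
        h = sigmoid(x @ W1 + y @ W2.T)      # hidden gets bottom-up and top-down input
        if y_clamp is None:
            y = sigmoid(h @ W2)             # output is free only in the free phase
    return h, y

def chl_update(x, target, W1, W2, lr=0.1):
    h_free, y_free = settle(x, None, W1, W2)        # free (negative) phase
    h_clamp, y_clamp = settle(x, target, W1, W2)    # clamped (positive) phase
    # Hebbian difference of co-activities between the two phases
    W1 += lr * (np.outer(x, h_clamp) - np.outer(x, h_free))
    W2 += lr * (np.outer(h_clamp, y_clamp) - np.outer(h_free, y_free))
    return W1, W2

W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))
x = rng.random(4)
target = np.array([1.0, 0.0])
W1, W2 = chl_update(x, target, W1, W2)
```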
Published in:
Frontiers in Computational Neuroscience, Vol 16 (2022)
Backpropagation (BP) has been used to train neural networks for many years, allowing them to solve a wide variety of tasks like image classification, speech recognition, and reinforcement learning. But the biological plausibility of BP as a mechanism…
External link:
https://doaj.org/article/c5495c18cf0a43d88fa11d46c91f1243
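For context on the record above, here is a minimal, self-contained backpropagation (BP) example: a two-layer network trained on a toy regression task by propagating the output error backward through the layers. The network size, data, and learning rate are arbitrary choices for illustration only.

```python
# Minimal backpropagation (BP) example for a two-layer network on a toy
# regression task; all sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((32, 3))                  # toy inputs
Y = X.sum(axis=1, keepdims=True)         # toy target: sum of the inputs

W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.05

for step in range(500):
    # forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    err = y_hat - Y                      # error signal (gradient of MSE up to a constant)
    # backward pass: propagate the error layer by layer
    dW2 = h.T @ err / len(X)
    dh = err @ W2.T * (1 - h ** 2)       # tanh derivative
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print("final MSE:", float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)))
```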
Author:
Artur Luczak, Yoshimasa Kubo
Published in:
Frontiers in Systems Neuroscience, Vol 15 (2022)
Being able to correctly predict the future and to adjust one's own actions accordingly can offer a great survival advantage. In fact, this could be the main reason why brains evolved. Consciousness, the most mysterious feature of brain activity, also seems…
External link:
https://doaj.org/article/518ee08c5c604963a4eff558b559a5c1
Published in:
Nature Machine Intelligence, 4(1)
Understanding how the brain learns may lead to machines with human-like intellectual capacities. It was previously proposed that the brain may operate on the principle of predictive coding. However, it is still not well understood how a predictive system…
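As a rough illustration of the predictive-learning idea mentioned in this record (a unit predicts its own upcoming activity and adjusts weights in proportion to the prediction error), here is a toy sketch; it is a generic delta-rule example under assumed data, not the learning rule proposed in the paper.

```python
# Toy sketch of the predictive-learning concept: a unit predicts its upcoming
# activity from its inputs and learns from the prediction error. The data and
# target signal are stand-ins invented for this example.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=5)
lr = 0.01

for t in range(1000):
    x = rng.random(5)                              # presynaptic input at time t
    predicted = w @ x                              # unit's prediction of its future activity
    actual = 0.5 * x.sum() + 0.05 * rng.normal()   # "future" activity (toy stand-in)
    error = actual - predicted                     # prediction error drives the weight change
    w += lr * error * x

print("learned weights:", np.round(w, 2))          # should approach roughly 0.5 each
```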
Backpropagation has been used to train neural networks for many years, allowing them to solve a wide variety of tasks like image classification, speech recognition, and reinforcement learning. But the biological plausibility of backpropagation…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::63bc5755e9891037fb27892202c8231d
https://doi.org/10.1101/2022.06.21.496871
Since humans still outperform artificial neural networks on many tasks, drawing inspiration from the brain may help to improve current machine learning algorithms. Contrastive Hebbian Learning (CHL) and Equilibrium Propagation (EP) are biologically plausible…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fb2e8f542bd003724d0221812e1cb257
Published in:
Endoscopy International Open, Vol 12, Iss 06, Pp E797-E798 (2024)
External link:
https://doaj.org/article/4221c525542849ed80fc399618acec28
Author:
Tesshin Ban, Yoshimasa Kubota, Tomonori Yano, Makiko Naka Mieno, Takuya Takahama, Shun Sasoh, Satoshi Tanida, Tomoaki Ando, Makoto Nakamura, Takashi Joh
Published in:
The Turkish Journal of Gastroenterology, Vol 34, Iss 12, Pp 1212-1219 (2023)
External link:
https://doaj.org/article/25608455fab84c83ae1d65056f2461fe
Published in:
IJCNN
Adding small, well-crafted perturbations to the pixel values of input images leads to adversarial examples, so called because these perturbed images can drastically affect the accuracy of machine learning classifiers. Defenses against such attacks are…
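To illustrate the kind of attack this record describes, below is a minimal sketch of the fast gradient sign method (FGSM), which perturbs input pixels in the direction of the sign of the loss gradient. The toy logistic-regression classifier and the epsilon value are assumptions made purely for this example; the paper's own setup is not shown here.

```python
# Minimal FGSM-style adversarial perturbation against a toy logistic-regression
# classifier. The classifier, input, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=10)                  # weights of a toy linear classifier
b = 0.0
x = rng.random(10)                       # "image" as a flat pixel vector in [0, 1]
y = 1.0                                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# gradient of the cross-entropy loss with respect to the input pixels
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take a small step in the direction of the sign of the input gradient
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("clean prediction      :", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```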
Author:
Thomas Trappenberg, Yoshimasa Kubo
Published in:
Advances in Artificial Intelligence ISBN: 9783030183042
Canadian Conference on AI
Recent work has shown that neural networks are vulnerable to adversarial examples. There is a discussion about whether this problem is related to overfitting. While many researchers stress that overfitting is not related to adversarial sensitivity, Galloway et al.…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::3eed916de0e4b38177ea3d89aaaff123
https://doi.org/10.1007/978-3-030-18305-9_36