Showing 1 - 10 of 21 results for search: '"Stol, Maarten"'
Author:
Stol, Maarten C., Mileo, Alessandra
Neurosymbolic background knowledge and the expressivity required of its logic can break Machine Learning assumptions about data Independence and Identical Distribution. In this position paper we propose to analyze IID relaxation in a hierarchy of log…
External link:
http://arxiv.org/abs/2404.19485
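As an illustrative aside (not part of the catalog record), the IID assumption that the abstract above refers to is the standard factorization of the joint distribution of a sample; background knowledge that logically ties individual samples together breaks exactly this factorization.

```latex
% IID assumption for a sample x_1, ..., x_n:
%   independence:            the joint distribution factorizes over the samples
%   identical distribution:  every factor is the same marginal p
p(x_1, \ldots, x_n) \;=\; \prod_{i=1}^{n} p(x_i)
```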
With the introduction of the transformer architecture in computer vision, increasing model scale has been demonstrated as a clear path to achieving performance and robustness gains. However, with model parameter counts reaching the billions, classica…
External link:
http://arxiv.org/abs/2210.06466
Instance-level contrastive learning techniques, which rely on data augmentation and a contrastive loss function, have found great success in the domain of visual representation learning. They are not suitable for exploiting the rich dynamical structu…
External link:
http://arxiv.org/abs/2106.10137
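For orientation only: the snippet above names the two ingredients of instance-level contrastive learning, data augmentation and a contrastive loss. A minimal InfoNCE-style loss in PyTorch could look like the sketch below; the function name, temperature default, and usage line are illustrative placeholders, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of the two views; row i of z1 and row i of z2
    come from the same original sample and form the positive pair.
    """
    z1 = F.normalize(z1, dim=1)          # unit-norm rows -> dot product = cosine similarity
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Positive pairs sit on the diagonal; every other entry acts as a negative.
    return F.cross_entropy(logits, targets)

# Usage sketch (encoder and augment are placeholders):
# z1 = encoder(augment(x)); z2 = encoder(augment(x)); loss = info_nce_loss(z1, z2)
```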
Finding well-defined clusters in data represents a fundamental challenge for many data-driven applications, and largely depends on good data representation. Drawing on literature regarding representation learning, studies suggest that one key charact…
External link:
http://arxiv.org/abs/2011.01977
Modern neural networks, although achieving state-of-the-art results on many tasks, tend to have a large number of parameters, which increases training time and resource usage. This problem can be alleviated by pruning. Existing methods, however, ofte…
External link:
http://arxiv.org/abs/2009.02594
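As a generic illustration of the pruning idea mentioned above (global magnitude pruning, not the method proposed in the paper), the smallest-magnitude weights are simply zeroed out:

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights globally (illustrative sketch only)."""
    # Collect the magnitudes of all weight matrices to pick one global cutoff.
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                  # skip biases / norm parameters
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])                     # apply the mask in place
    return masks  # reuse the masks during training to keep pruned weights at zero
```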
The framework of variational autoencoders (VAEs) provides a principled method for jointly learning latent-variable models and corresponding inference models. However, the main drawback of this approach is the blurriness of the generated images. Some…
External link:
http://arxiv.org/abs/2006.05218
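For context, the VAE framework referred to above jointly trains an encoder (inference model) and a decoder (latent-variable model) by maximizing the evidence lower bound (ELBO). A minimal negative-ELBO loss in PyTorch might look like this generic sketch, not the specific variant studied in the paper:

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    """Sample z ~ q(z|x) = N(mu, diag(exp(logvar))) with the reparameterization trick."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))."""
    recon = F.mse_loss(x_recon, x, reduction="sum")                # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # closed-form Gaussian KL
    return recon + kl
```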
With the introduction of SNIP [arXiv:1810.02340v2], it has been demonstrated that modern neural networks can effectively be pruned before training. Yet, its sensitivity criterion has since been criticized for not propagating training signal properly…
External link:
http://arxiv.org/abs/2006.00896
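The sensitivity criterion mentioned above scores each connection by how much the loss reacts to switching it off, evaluated on a single mini-batch before any training. A simplified sketch of that scoring step (omitting masking and per-layer bookkeeping) is:

```python
import torch

def snip_scores(model, loss_fn, x, y):
    """Single-shot sensitivity scores in the spirit of SNIP: |dL/dw * w| on one
    mini-batch, computed before training (simplified illustration only)."""
    loss = loss_fn(model(x), y)
    named = [(n, p) for n, p in model.named_parameters() if p.dim() > 1]
    grads = torch.autograd.grad(loss, [p for _, p in named])
    # A large |g * w| means removing this connection changes the loss a lot.
    return {n: (g * p).abs() for (n, p), g in zip(named, grads)}

# Keeping the top-k scored connections and zeroing the rest yields the mask
# that the network is then trained with from scratch.
```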
Author:
Apostol, Andrei C. (apostolandrei23@gmail.com), Stol, Maarten C. (maarten.stol@braincreators.com), Forré, Patrick (p.d.forre@uva.nl)
Published in:
AI Communications. 2022, Vol. 35 Issue 2, p65-85. 21p.
Author:
Lutscher, Daniel, Hassouni, Ali el, Stol, Maarten, Hoogendoorn, Mark, Nicosia, Giuseppe, Ojha, Varun, La Malfa, Emanuele, La Malfa, Gabriele, Jansen, Giorgio, Pardalos, Panos M., Giuffrida, Giovanni, Umeton, Renato
Published in:
Lutscher, D, Hassouni, A E, Stol, M & Hoogendoorn, M 2022, Mixing Consistent Deep Clustering. In G Nicosia, V Ojha, E La Malfa, G La Malfa, G Jansen, P M Pardalos, G Giuffrida & R Umeton (eds), Machine Learning, Optimization, and Data Science: [Proceedings] 7th International Conference, LOD 2021, Grasmere, UK, October 4–8, 2021, Revised Selected Papers, Part I. Vol. 1, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13163 LNCS, Springer Science and Business Media Deutschland GmbH, pp. 124-137, 7th International Conference on Machine Learning, Optimization, and Data Science, LOD 2021, Virtual, Online, 4/10/21. https://doi.org/10.1007/978-3-030-95467-3_10
Machine Learning, Optimization, and Data Science: [Proceedings] 7th International Conference, LOD 2021, Grasmere, UK, October 4–8, 2021, Revised Selected Papers, Part I, 1, 124-137
Machine Learning, Optimization, and Data Science ISBN: 9783030954666
Finding well-defined clusters in data represents a fundamental challenge for many data-driven applications, and largely depends on good data representation. Drawing on literature regarding representation learning, studies suggest that one key charact…
With the introduction of the transformer architecture in computer vision, increasing model scale has been demonstrated as a clear path to achieving performance and robustness gains. However, with model parameter counts reaching the billions, classica…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8938081335e329b0b2ee4195bb6f4f66
http://arxiv.org/abs/2210.06466