Showing 1 - 8 of 8 results for search: '"Woisetschlaeger, Herbert"'
Open-weight large language model (LLM) zoos allow users to quickly integrate state-of-the-art models into systems. Despite increasing availability, selecting the most appropriate model for a given task still largely relies on public benchmark leaderboards…
External link: http://arxiv.org/abs/2411.00889
The European Union Artificial Intelligence Act mandates clear stakeholder responsibilities in developing and deploying machine learning applications to avoid substantial fines, prioritizing private and secure data processing with data remaining at its origin…
External link: http://arxiv.org/abs/2407.08105
Author: Woisetschläger, Herbert; Erben, Alexander; Marino, Bill; Wang, Shiqiang; Lane, Nicholas D.; Mayer, Ruben; Jacobsen, Hans-Arno
The age of AI regulation is upon us, with the European Union Artificial Intelligence Act (AI Act) leading the way. Our key inquiry is how this will affect Federated Learning (FL), whose starting point of prioritizing data privacy while performing ML…
External link: http://arxiv.org/abs/2402.05968
Author: Woisetschläger, Herbert; Isenko, Alexander; Wang, Shiqiang; Mayer, Ruben; Jacobsen, Hans-Arno
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients. However, new approaches to FL often discuss their contributions involving small deep-learning models only…
External link: http://arxiv.org/abs/2401.04472
Author: Woisetschläger, Herbert; Isenko, Alexander; Wang, Shiqiang; Mayer, Ruben; Jacobsen, Hans-Arno
Large Language Models (LLM) and foundation models are popular as they offer new opportunities for individuals and businesses to improve natural language processing, interact with data, and retrieve information faster. However, training or fine-tuning…
External link: http://arxiv.org/abs/2310.03150
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge. Heterogeneous hardware, unreliable client devices, and energy constraints often characterize edge computing systems.
External link: http://arxiv.org/abs/2306.05172
Author: Chen, Zongxiong; Geng, Jiahui; Zhu, Derui; Woisetschlaeger, Herbert; Li, Qing; Schimmler, Sonja; Mayer, Ruben; Rong, Chunming
The aim of dataset distillation is to encode the rich features of an original dataset into a tiny dataset. It is a promising approach to accelerate neural network training and related studies. Different approaches have been proposed to improve…
External link: http://arxiv.org/abs/2305.03355
Author: Geng, Jiahui; Chen, Zongxiong; Wang, Yuandou; Woisetschlaeger, Herbert; Schimmler, Sonja; Mayer, Ruben; Zhao, Zhiming; Rong, Chunming
Dataset distillation is attracting more attention in machine learning as training sets continue to grow and the cost of training state-of-the-art models becomes increasingly high. By synthesizing datasets with high information density, dataset distillation…
External link: http://arxiv.org/abs/2305.01975