Showing 1 - 10 of 547 for search: '"BACKES, MICHAEL"'
Author:
Yuan, Quan, Zhang, Zhikun, Du, Linkang, Chen, Min, Sun, Mingyang, Gao, Yunjun, Backes, Michael, He, Shibo, Chen, Jiming
Streaming graphs are ubiquitous in daily life, appearing in evolving social networks and dynamic communication systems. Because these graphs contain sensitive information, sharing them directly poses significant privacy risks. Differe…
External link:
http://arxiv.org/abs/2412.11369
Author:
Hanke, Vincent, Blanchard, Tom, Boenisch, Franziska, Olatunji, Iyiola Emmanuel, Backes, Michael, Dziedzic, Adam
While open Large Language Models (LLMs) have made significant progress, they still fall short of the performance of their closed, proprietary counterparts, making the latter attractive even for use on highly private data. Recently, vario…
External link:
http://arxiv.org/abs/2411.05818
Human Pose Estimation (HPE) has been widely applied in autonomous systems such as self-driving cars. However, the vulnerability of HPE to adversarial attacks has not received attention comparable to that given to image classification or segmentation tasks. Ex…
External link:
http://arxiv.org/abs/2410.07670
Large vision-language models (LVLMs) have been rapidly developed and widely used in various fields, but the (potential) stereotypical bias in these models is largely unexplored. In this study, we present a pioneering measurement framework, $\texttt{ModS…
External link:
http://arxiv.org/abs/2410.06967
Recent work on studying memorization in self-supervised learning (SSL) suggests that even though SSL encoders are trained on millions of images, they still memorize individual data points. While effort has been put into characterizing the memorized d…
External link:
http://arxiv.org/abs/2409.19069
Large language models (LLMs) have shown considerable success in a range of domain-specific tasks, especially after fine-tuning. However, fine-tuning with real-world data usually leads to privacy risks, particularly when the fine-tuning samples exist…
External link:
http://arxiv.org/abs/2409.11423
Machine learning has revolutionized numerous domains, playing a crucial role in driving advancements and enabling data-centric processes. The significance of data in training models and shaping their performance cannot be overstated. Recent research…
External link:
http://arxiv.org/abs/2409.03741
Adapting Large Language Models (LLMs) to specific tasks introduces concerns about computational efficiency, prompting an exploration of efficient methods such as In-Context Learning (ICL). However, the vulnerability of ICL to privacy attacks under re…
External link:
http://arxiv.org/abs/2409.01380
Text-to-image models, such as Stable Diffusion (SD), undergo iterative updates to improve image quality and address concerns such as safety. Improvements in image quality are straightforward to assess. However, how model updates resolve existing conc…
External link:
http://arxiv.org/abs/2408.17285
Despite being prevalent in the general field of Natural Language Processing (NLP), pre-trained language models inherently carry privacy and copyright concerns because they are trained on large-scale web-scraped data. In this paper, we pioneer…
External link:
http://arxiv.org/abs/2408.11046