Showing 1 - 10 of 71 for search: '"Habernal, Ivan"'
Author:
Dutta, Subhabrata, Kaufmann, Timo, Glavaš, Goran, Habernal, Ivan, Kersting, Kristian, Kreuter, Frauke, Mezini, Mira, Gurevych, Iryna, Hüllermeier, Eyke, Schuetze, Hinrich
While there is a widespread belief that artificial general intelligence (AGI) -- or even superhuman AI -- is imminent, complex problems in expert domains are far from being solved. We argue that such problems require human-AI cooperation and that the …
External link:
http://arxiv.org/abs/2408.07461
Applying differential privacy (DP) by means of the DP-SGD algorithm to protect individual data points during training is becoming increasingly popular in NLP. However, the choice of granularity at which DP is applied is often neglected. For example, …
External link:
http://arxiv.org/abs/2407.18789
Author:
Held, Lena, Habernal, Ivan
Why does an argument end up in the final court decision? Was it deliberated or questioned during the oral hearings? Was there something in the hearings that triggered a particular judge to write a dissenting opinion? Despite the availability of the f…
External link:
http://arxiv.org/abs/2312.05061
Author:
Igamberdiev, Timour, Vu, Doan Nam Long, Künnecke, Felix, Yu, Zhuo, Holmer, Jannik, Habernal, Ivan
Neural machine translation (NMT) is a widely popular text generation task, yet there is a considerable research gap in the development of privacy-preserving NMT models, despite significant data privacy concerns for NMT systems. Differentially private …
External link:
http://arxiv.org/abs/2311.14465
Although the NLP community has adopted central differential privacy as a go-to framework for privacy-preserving model training or data sharing, the choice and interpretation of the key parameter, the privacy budget $\varepsilon$, which governs the strength …
External link:
http://arxiv.org/abs/2307.06708
Protecting privacy in contemporary NLP models is gaining in importance, as is the need to mitigate the social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worse …
External link:
http://arxiv.org/abs/2305.14936
Most tasks in NLP require labeled data. Data labeling is often done on crowdsourcing platforms for scalability reasons. However, publishing data on public platforms can only be done if no privacy-relevant information is included. Textual data often …
External link:
http://arxiv.org/abs/2303.03053
Author:
Igamberdiev, Timour, Habernal, Ivan
Privatized text rewriting with local differential privacy (LDP) is a recent approach that enables sharing of sensitive textual documents while formally guaranteeing privacy protection to individuals. However, existing systems face several issues, such …
External link:
http://arxiv.org/abs/2302.07636
Recent developments in deep learning have led to great success in various natural language processing (NLP) tasks. However, these applications may involve data that contain sensitive information. Therefore, how to achieve good performance while also …
External link:
http://arxiv.org/abs/2301.09112
Author:
Yin, Ying, Habernal, Ivan
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy …
External link:
http://arxiv.org/abs/2211.02956