Showing 1 - 10 of 304 for search: '"Deursen, Arie"'
The machine learning development lifecycle is characterized by iterative and exploratory processes that rely on feedback mechanisms to ensure data and model integrity. Despite the critical role of feedback in machine learning engineering, no prior re…
External link:
http://arxiv.org/abs/2408.00153
Author:
Bogomolov, Egor, Eliseeva, Aleksandra, Galimzyanov, Timur, Glukhov, Evgeniy, Shapkin, Anton, Tigina, Maria, Golubev, Yaroslav, Kovrigin, Alexander, van Deursen, Arie, Izadi, Maliheh, Bryksin, Timofey
Nowadays, the fields of code and natural language processing are evolving rapidly. In particular, models become better at processing long context windows - supported context sizes have increased by orders of magnitude over the last few years. However…
External link:
http://arxiv.org/abs/2406.11612
Transformer-based language models are highly effective for code completion, with much research dedicated to enhancing the content of these completions. Despite their effectiveness, these models come with high operational costs and can be intrusive, e…
External link:
http://arxiv.org/abs/2405.14753
Does the training of large language models potentially infringe upon code licenses? Furthermore, are there any datasets available that can be safely used for training these models without violating such licenses? In our study, we assess the current t…
External link:
http://arxiv.org/abs/2403.15230
Author:
Siachamis, George, Psarakis, Kyriakos, Fragkoulis, Marios, van Deursen, Arie, Carbone, Paris, Katsifodimos, Asterios
Stream processing in the last decade has seen broad adoption in both commercial and research settings. One key element for this success is the ability of modern stream processors to handle failures while ensuring exactly-once processing guarantees. A…
External link:
http://arxiv.org/abs/2403.13629
Author:
Izadi, Maliheh, Katzy, Jonathan, van Dam, Tim, Otten, Marc, Popescu, Razvan Mihai, van Deursen, Arie
Transformer-based language models for automatic code completion have shown great promise so far, yet the evaluation of these models rarely uses real data. This study provides both quantitative and qualitative assessments of three public code language…
External link:
http://arxiv.org/abs/2402.16197
Due to the continuous change in operational data, AIOps solutions suffer from performance degradation over time. Although periodic retraining is the state-of-the-art technique to preserve the failure prediction AIOps models' performance over time, th…
External link:
http://arxiv.org/abs/2401.14093
Large language models have gained significant popularity because of their ability to generate human-like text and potential applications in various fields, such as Software Engineering. Large language models for code are commonly trained on large uns…
External link:
http://arxiv.org/abs/2312.11658
Counterfactual explanations offer an intuitive and straightforward way to explain black-box models and offer algorithmic recourse to individuals. To address the need for plausible explanations, existing work has primarily relied on surrogate models t…
External link:
http://arxiv.org/abs/2312.10648
Anomaly detection techniques are essential in automating the monitoring of IT systems and operations. These techniques imply that machine learning algorithms are trained on operational data corresponding to a specific period of time and that they are…
External link:
http://arxiv.org/abs/2311.10421