Showing 1 - 10 of 231 results for the search: '"CALINESCU, RADU"'
Enabling safe exploration of reinforcement learning (RL) agents during training is a critical prerequisite for deploying RL agents in many real-world scenarios. Training RL agents in unknown, black-box environments poses an even greater safety risk…
External link:
http://arxiv.org/abs/2405.18180
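For readers unfamiliar with the topic of this record, the sketch below illustrates the general idea of shielded (safety-filtered) exploration, in which an agent's exploratory actions are vetted against a safety predicate before execution. It is a minimal, generic illustration, not the method proposed in the paper; the toy grid world, the UNSAFE_STATES set, and the helper functions are all invented for the example.

```python
import random

# Hypothetical grid world: states 0..9, actions move by +1/-1; states >= 8 are unsafe.
UNSAFE_STATES = {8, 9}
ACTIONS = [-1, 1]

def is_safe(state, action):
    """Toy safety predicate: reject actions that would enter an unsafe state."""
    return (state + action) not in UNSAFE_STATES

def shielded_action(state, policy_action):
    """Override the agent's proposed action if it would violate the safety predicate."""
    if is_safe(state, policy_action):
        return policy_action
    safe = [a for a in ACTIONS if is_safe(state, a)]
    return random.choice(safe) if safe else policy_action  # fall back if nothing is safe

# Toy training loop: random exploration filtered through the shield.
state = 0
for step in range(20):
    proposed = random.choice(ACTIONS)          # exploratory action from the agent
    action = shielded_action(state, proposed)  # safety filter applied before execution
    state = max(0, min(9, state + action))
    assert state not in UNSAFE_STATES          # the shield keeps the trajectory safe
```

In practice the safety predicate is rarely known exactly, which is precisely what makes safe exploration in black-box environments hard.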
Author:
Feng, Nick, Marsso, Lina, Yaman, S. Getir, Standen, Isobel, Baatartogtokh, Yesugen, Ayad, Reem, de Mello, Victória Oldemburgo, Townsend, Bev, Bartels, Hanne, Cavalcanti, Ana, Calinescu, Radu, Chechik, Marsha
Normative non-functional requirements specify constraints that a system must observe in order to avoid violations of social, legal, ethical, empathetic, and cultural norms. As these requirements are typically defined by non-technical system stakeholders…
External link:
http://arxiv.org/abs/2404.12335
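As context for this record, normative rules of the kind described above are often written in a "when trigger, then response, unless condition" style. The Python sketch below is a minimal, hypothetical encoding of one such rule and a check of an event trace against it; the rule text, event names, and helper functions are invented and do not reproduce the authors' notation or tooling.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A toy 'when/then/unless' normative rule; all field values are invented."""
    when: str                   # triggering event
    then: str                   # required response
    unless: str | None = None   # defeating condition, if any

def violations(rule, trace):
    """Return indices of trigger events whose required response never follows."""
    bad = []
    for i, (event, conditions) in enumerate(trace):
        if event != rule.when or (rule.unless and rule.unless in conditions):
            continue
        followed = any(e == rule.then for e, _ in trace[i + 1:])
        if not followed:
            bad.append(i)
    return bad

rule = Rule(when="UserFallDetected", then="CallSupport", unless="UserDeclinedHelp")
trace = [("UserFallDetected", set()), ("DisplayMessage", set())]
print(violations(rule, trace))  # -> [0]: support was never called after the fall
```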
Author:
Carwehl, Marc, Imrie, Calum, Vogel, Thomas, Rodrigues, Genaína, Calinescu, Radu, Grunske, Lars
In its quest for approaches to taming uncertainty in self-adaptive systems (SAS), the research community has largely focused on solutions that adapt the SAS architecture or behaviour in response to uncertainty. By comparison, solutions that reduce the uncertainty…
External link:
http://arxiv.org/abs/2401.17187
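To make the contrast concrete: one way to reduce (rather than merely adapt to) uncertainty is to gather additional runtime evidence until an estimate is precise enough to act on. The sketch below shows a generic version of this idea, tightening a confidence interval on a component's reliability as more observations arrive; the reliability threshold and observation counts are invented, and this is not a description of the paper's approach.

```python
import math

def success_ci(successes, trials, z=1.96):
    """Normal-approximation 95% confidence interval for a success probability."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - half_width), min(1.0, p + half_width)

REQUIRED_RELIABILITY = 0.97  # invented requirement threshold

# With few observations the interval is wide, so the adaptation decision is uncertain...
print(success_ci(successes=48, trials=50))     # roughly (0.91, 1.00)
# ...and gathering more evidence narrows it enough to decide with confidence.
print(success_ci(successes=980, trials=1000))  # roughly (0.97, 0.99)
```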
Author:
Feng, Nick, Marsso, Lina, Yaman, Sinem Getir, Baatartogtokh, Yesugen, Ayad, Reem, de Mello, Victória Oldemburgo, Townsend, Beverley, Standen, Isobel, Stefanakos, Ioannis, Imrie, Calum, Rodrigues, Genaína Nunes, Cavalcanti, Ana, Calinescu, Radu, Chechik, Marsha
As software systems increasingly interact with humans in application domains such as transportation and healthcare, they raise concerns related to the social, legal, ethical, empathetic, and cultural (SLEEC) norms and values of their stakeholders. …
External link:
http://arxiv.org/abs/2401.05673
Published in:
2023 26th International Conference on Information Fusion (FUSION), 1-8, 2023
The superior performance of object detectors is often established under the assumption that the test samples are drawn from the same distribution as the training data. However, in many practical applications, out-of-distribution (OOD) instances are inevitable…
External link:
http://arxiv.org/abs/2310.19119
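As a rough illustration of the score-based OOD detection that this line of work builds on, the sketch below computes an energy score from per-detection class logits and flags detections whose score exceeds a calibrated threshold. The detections, logits, and threshold are invented, and this is a generic baseline rather than the method evaluated in the paper.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score: lower energy indicates more in-distribution-like inputs."""
    return -temperature * np.log(np.sum(np.exp(np.asarray(logits) / temperature)))

def flag_ood(detections, threshold):
    """Flag detections whose energy exceeds a threshold calibrated on in-distribution data."""
    return [d for d in detections if energy_score(d["logits"]) > threshold]

# Invented per-detection class logits; the threshold would normally be chosen on a
# held-out in-distribution validation set (e.g. at a target false-positive rate).
detections = [
    {"box": (10, 10, 50, 50), "logits": [8.2, 0.1, -1.3]},  # confident -> low energy
    {"box": (60, 20, 90, 70), "logits": [0.4, 0.3, 0.2]},   # diffuse -> higher energy
]
print(flag_ood(detections, threshold=-2.0))  # only the second detection is flagged
```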
Deploying deep learning models in safety-critical applications remains a very challenging task, mandating the provision of assurances for the dependable operation of these models. Uncertainty quantification (UQ) methods estimate the model's confidence…
External link:
http://arxiv.org/abs/2308.09647
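For context on UQ, one widely used technique is Monte Carlo dropout, which keeps dropout active at inference time and treats the spread of repeated predictions as a confidence signal. The sketch below is a minimal toy example of that technique (it assumes PyTorch is available); the model architecture and input are invented, and the paper may evaluate different UQ methods.

```python
import torch
import torch.nn as nn

# Toy classifier with dropout; the model is untrained and serves only to show the mechanics.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(16, 3))

def mc_dropout_predict(model, x, passes=50):
    """Run several stochastic forward passes and return predictive mean and spread."""
    model.train()  # keep dropout stochastic during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 4)                  # invented input
mean, spread = mc_dropout_predict(model, x)
print(mean, spread)                    # high spread would indicate low confidence
```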
Author:
Yaman, Sinem Getir, Cavalcanti, Ana, Calinescu, Radu, Paterson, Colin, Ribeiro, Pedro, Townsend, Beverley
Autonomous agents are increasingly being proposed for use in healthcare, assistive care, education, and other applications governed by complex human-centric norms. To ensure compliance with these norms, the rules they induce need to be unambiguously…
External link:
http://arxiv.org/abs/2307.03697