Abstract: |
In the realm of automated vehicles (AVs), the focus is predominantly on the potential of sub-symbolic, deep-learning-based artificial intelligence (AI) systems. Our study questions the suitability of this data-driven approach for AVs, particularly its capacity to embody societal values in vehicle behaviour. Through a systematic examination of sub-symbolic and symbolic AI, we identify key issues for AVs, including adaptability, safety, reliability, trust, fairness, transparency, and control. Deep learning systems’ lack of adaptability and their inherent complexity pose significant safety concerns and hinder meaningful human control. This limitation prevents humans from effectively updating AI decision-making processes to better reflect ethical values. Furthermore, deep learning systems are prone to bias and unfairness, leading to incidents that are difficult to explain and rectify. In contrast, symbolic, model-based approaches offer a structured framework for encoding ethical goals and principles within AV systems, thus enabling meaningful human control. However, they also face challenges, such as inefficiency in handling large amounts of unstructured data for low-level tasks and the cost of maintaining explicit knowledge bases. Therefore, we advocate for hybrid AI, combining symbolic and sub-symbolic models with symbolic goal functions. We propose Augmented Utilitarianism (AU) as an ethical framework for developing these goal functions, aiming to minimise harm by integrating principles from consequentialism, deontology, and virtue ethics, while incorporating the perspective of the experiencer. Our methodology for eliciting moral attributes to construct an explicit ethical goal function engages collective societal values through iterative refinement, contributing to the development of safer, more reliable, and ethically aligned automated driving systems.