Author:
M J Cook, M Thody, D Garrett, T Simpson
Year of publication:
2018
Subject:

Source:
Proceedings of the International Ship Control Systems Symposium (iSCSS).
ISSN:
2631-8741
DOI:
10.24868/issn.2631-8741.2018.016
Description:
A philosophy of technology use has developed in many safety-critical industries based upon the view that human operators are feckless and unreliable, and so, wherever possible, should not be trusted to execute safety-critical tasks. The implicit view of automation is that it invariably improves system performance and increases reliability. Yet after many decades, or even centuries, of machine and automation development, human error remains one of the dominant features in failures of modern systems. The drive towards introducing automation has claimed a larger performance envelope, lower operating costs with fewer people, less risk of hazard realisation, and a more economical development path. One of the aims of introducing automation is higher reliability, in the belief that this implicitly brings with it increases in safety. As Leveson (2011) points out, high reliability can be misleading because interactions between elements that are each working as expected may still trigger system failure through transverse consequences. The view that human operators are the weakest operational link, and the pervasive myths about the reliability of automated solutions, which afford automation the easier scenarios of task execution, need to be re-visited (Cook, Thody and Garrett, 2017). Doing so should ensure that the best capability and the optimal safety case are developed for future systems based upon operator and system working in synergy. This may be especially true if the claims made for automation are treated more aggressively in terms of liability.
Database:
OpenAIRE
External link: