Variability: for better and for worse in safety assurance

Author: Des N. D. Hartford, Ben J. M. Ale, David H. Slater
Year of publication: 2020
Source: Medical Research Archives, 8
ISSN: 2375-1924, 2375-1916
Description: Traditionally, in trying to design "safe" systems, variability has been seen as a threat, because it brings with it the possibility of an unwanted outcome. Variability of hardware was therefore rigorously controlled by, amongst other things, precise specifications. Variability of human behaviour was thought to be adequately managed by, inter alia, regulations and protocols. This philosophy is now referred to as SAFETY-I and relied on reliability to guarantee the expected system performance. In the now fashionable philosophy of SAFETY-II, on the other hand, variability is seen as unavoidable, a given in real environments, and can even be an asset: in SAFETY-II, humans are recognised as being able to cope with, and often exploit, the variability of technology and circumstances to keep systems working. This reliance on the human capacity for coping has been seen as adding a necessary element of "resilience" to the system. Thus the SAFETY-II concept of resilience engineering could be used as a way to promote safety by exploiting the ingenuity of humans to keep systems within the desired operating envelope. Recently the meaning of resilience has been stretched to include the ability to restore the operational state after an excursion into the realm of inoperability. The problem is that these approaches (SAFETY-I and SAFETY-II) could be seen as legitimate alternatives as philosophies in the design of physical and operational systems. This stretched, almost complacent interpretation of "resilience" only serves to exacerbate the problem. The mistake that is often made is to regard either of the approaches as sufficient in itself to guarantee safety in today's highly complex systems of work and decision-making organisations. As Rumsfeld [1] and Taleb [2] have so eloquently reminded us, we can no longer justify designing solely for the known knowns and white swans. Similarly, reliance on humans to cope should an unexpected situation arise can reduce the emphasis on preventive measures that limit the probability that the system may behave in an unsafe manner. In today's ever more complex and less transparent systems and workplaces, however, we obviously need both the SAFETY-I belts and the SAFETY-II braces (to paraphrase Kletz [3]), as the errors that may be introduced by over-relying on humans correctly assessing situations can be catastrophic: not just for an individual or a company, but sometimes for the wider society. So we need to formalise the human's resilient SAFETY-II abilities (to monitor, respond to, learn from and anticipate the meaning of operational variability) and incorporate them fully into the (SAFETY-I) design of the system. Enlightened training and management can then, almost as a bonus, further rely on the human's extraordinary abilities as an additional layer of security.
Database: OpenAIRE