Modeling Perspective on Human-Automation Interaction (HAI): Levels and Trust in Automation
Abstract
The advent of Maritime Autonomous Surface Ships (MASS) represents a significant leap forward for the maritime industry, promising to redefine the efficiency, safety, and economics of sea transportation. However, this technological advance brings to the fore the complex interplay between human operators and autonomous systems, particularly in the context of Shore Control Centers (SCCs), where remote operators play critical roles. The successful integration of MASS into the global shipping infrastructure depends not only on technological advancement but equally on understanding and optimizing Human-Automation Interaction (HAI). The transition to supervisory control roles introduces a paradigm shift in operational dynamics. Remote operators are tasked with maintaining oversight of multiple vessels simultaneously, each possibly facing different sea conditions and operational challenges. Such multi-vessel management can significantly amplify cognitive load, requiring operators to prioritize information effectively and make swift decisions to ensure safety and efficiency. A primary concern is the risk of over-reliance on automation, which may lead to complacency and reduced situational awareness. The remote nature of the operation may exacerbate these issues, as operators are removed from the immediate physical environment of the vessels they control. Moreover, the unpredictable and dynamic nature of maritime environments makes complete autonomy a challenging goal; remote operators must be prepared to take control in complex or emergency situations.
To address these challenges and leverage the full potential of MASS, it is imperative to develop scientifically robust models of HAI. These models should account for the unique demands of maritime environments and the specific roles of remote operators. By understanding the cognitive, psychological, and social factors that influence remote operators' performance, researchers and practitioners can design more intuitive and effective interfaces and decision-support systems. Effective HAI models can guide the development of training programs tailored to the needs of remote operators, focusing on critical skills such as situational awareness, decision-making under uncertainty, and effective communication with autonomous systems. Moreover, these models can help identify potential sources of error and cognitive overload, as well as operators' likely responses to them, enabling the design of systems that support operators' decision-making and reduce the likelihood of accidents. Two pivotal aspects of such models are Levels of Automation (LOA) and Trust in Automation (TiA). Understanding and accurately modeling these dimensions is crucial for designing systems that effectively balance human supervisory control with autonomous capabilities.
In response to growing scrutiny of the validity of Human Factors and Ergonomics (HFE) models, and to the need for flexible yet credible HAI models, this dissertation concentrates on the role of models and modeling within HAI, with TiA and LOA as the central themes of the modeling exploration. The dissertation begins by examining the significance of scientific modeling and by developing criteria for assessing the relative scientific credibility of models. Existing TiA models were then evaluated against these criteria, both to demonstrate the criteria's use and to characterize TiA modeling efforts in the literature. In parallel, epistemological accounts of modeling were investigated to establish the suitability of each approach for modeling HAI. The findings suggest simulation as a viable approach for tackling the complexities of modeling TiA and LOA within the context of HAI and supervisory control of MASS. By incorporating models of TiA and LOA, simulation offers a powerful tool for examining complex interactions and dynamics that are difficult, if not impossible, to study in real-world settings owing to safety, cost, and practicality concerns.
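As a concrete illustration of what such a simulation involves, the sketch below implements a minimal discrete-time trust-update loop of the kind commonly used in TiA simulation studies. The asymmetric update rule, the parameter names (GAIN_SUCCESS, LOSS_FAILURE, RELIABILITY), and all values are illustrative assumptions for exposition; this is not the model developed in this dissertation.

```python
import random

# Illustrative parameters (assumed values, not taken from the dissertation)
GAIN_SUCCESS = 0.10   # trust gained after observing reliable automation
LOSS_FAILURE = 0.35   # trust lost after an automation failure (losses weigh more)
RELIABILITY = 0.90    # assumed probability that the automation performs correctly


def update_trust(trust: float, automation_succeeded: bool) -> float:
    """Asymmetric trust update: failures erode trust faster than successes build it."""
    if automation_succeeded:
        trust += GAIN_SUCCESS * (1.0 - trust)
    else:
        trust -= LOSS_FAILURE * trust
    return min(max(trust, 0.0), 1.0)


def simulate(steps: int = 50, seed: int = 42) -> list[float]:
    """Simulate one remote operator's trust trajectory over repeated interactions."""
    random.seed(seed)
    trust = 0.5  # neutral initial trust
    trajectory = []
    for _ in range(steps):
        succeeded = random.random() < RELIABILITY
        trust = update_trust(trust, succeeded)
        trajectory.append(trust)
    return trajectory


if __name__ == "__main__":
    for step, value in enumerate(simulate(), start=1):
        print(f"step {step:02d}: trust = {value:.3f}")
```

In a fuller model, the operator's reliance decisions (for example, taking manual control when trust falls below a threshold) would feed back into system performance, and the active LOA would modulate how much of the task the automation handles; it is precisely these feedback loops that make simulation attractive for studying TiA and LOA jointly.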
Has parts
Article 1: Poornikoo, M., & Øvergård, K. I. (2023). Model evaluation in human factors and ergonomics (HFE) sciences; case of trust in automation. Theoretical Issues in Ergonomics Science, 1-37. https://doi.org/10.1080/1463922X.2023.2233591
Article 2: Poornikoo, M., & Mansouri, M. (2023). Systems approach to modeling controversy in human factors and ergonomics (HFE). 18th Annual System of Systems Engineering Conference (SoSE), Lille, France, pp. 1-8. https://doi.org/10.1109/SoSE59841.2023.10178634
Article 3: Poornikoo, M., & Øvergård, K. I. (2022). Levels of automation in maritime autonomous surface ships (MASS): A fuzzy logic approach. Maritime Economics & Logistics, 24(2), 278-301. https://doi.org/10.1057/s41278-022-00215-z (Omitted from online publication)
Article 4: Poornikoo, M., Gyldensten, W., Vesin, B., & Øvergård, K. I. (in review). Trust in Automation (TiA): simulation model, and empirical findings in supervisory control of Maritime Autonomous Surface Ships (MASS). International Journal of Human-Computer Interaction.