Automated Vehicles (AVs) are on the cusp of commercialization, prompting governments worldwide to prepare for the forthcoming phase of mobility. However, technological advancement alone cannot guarantee the successful commercialization of AVs without insights into accidents on real roads where Human-driven Vehicles (HVs) coexist. To address this issue, the New Car Assessment Program (NCAP) is in progress, and scenario-based approaches have been spotlighted. Scenario-based approaches offer a unique advantage by evaluating AV driving safety through carefully designed scenarios that reflect various real-world situations. While most scenario studies favor data-driven approaches, these studies have several shortcomings in terms of data, AI models, and scenario standards. Hence, we propose a holistic framework for generating functional, logical, and concrete scenarios. The framework composes explainable scenarios (X-Scenarios) from real-world driving LiDAR data and interprets visual trends using eXplainable AI (XAI). It consists of four components: (1) voxelization of LiDAR point cloud data (PCD) and extraction of kinematic features; (2) classification of critical situations and generation of attention maps with a Vision Transformer (ViT) and visual XAI to derive the parameter ranges of elements in logical scenarios; (3) analysis of the importance of and correlations among input features using SHapley Additive exPlanations (SHAP) to select scenarios based on the most relevant criteria; and (4) composition of AV safety assessment scenarios. The X-Scenarios generated by our framework cover the parameters of the ego vehicle and surrounding objects on highways and urban roads, enabling the creation of highly trustworthy AV safety assessment scenarios. This work provides an integrated solution for generating trustworthy AV safety assessment scenarios by explaining the scenario selection process.
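As a minimal illustration of component (1), the sketch below voxelizes a LiDAR point cloud into a binary occupancy grid and derives simple kinematic features from an object track via finite differences. The function names, voxel size, region-of-interest bounds, and sampling interval are illustrative assumptions and do not reflect the framework's actual configuration.

```python
import numpy as np

def voxelize_point_cloud(points, voxel_size=0.2, bounds=((-40, 40), (-40, 40), (-3, 1))):
    """Convert an (N, 3) LiDAR point cloud into a binary occupancy voxel grid.

    Illustrative only: voxel_size and bounds are assumed values, not the
    paper's configuration.
    """
    (x_min, x_max), (y_min, y_max), (z_min, z_max) = bounds
    # Keep only points inside the region of interest.
    mask = (
        (points[:, 0] >= x_min) & (points[:, 0] < x_max) &
        (points[:, 1] >= y_min) & (points[:, 1] < y_max) &
        (points[:, 2] >= z_min) & (points[:, 2] < z_max)
    )
    pts = points[mask]
    # Map each point to its voxel index and mark that voxel as occupied.
    idx = np.floor((pts - np.array([x_min, y_min, z_min])) / voxel_size).astype(int)
    grid_shape = tuple(int(np.ceil((hi - lo) / voxel_size)) for lo, hi in bounds)
    grid = np.zeros(grid_shape, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

def kinematic_features(positions, dt=0.1):
    """Estimate speed and longitudinal acceleration from a (T, 2) position
    track sampled every dt seconds, using finite differences."""
    velocity = np.diff(positions, axis=0) / dt
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(speed) / dt
    return speed, acceleration

# Example usage with synthetic data (stand-ins for a real LiDAR frame and track).
cloud = np.random.uniform(low=[-40, -40, -3], high=[40, 40, 1], size=(50_000, 3))
grid = voxelize_point_cloud(cloud)
track = np.cumsum(np.random.normal(1.0, 0.1, size=(100, 2)), axis=0)
speed, accel = kinematic_features(track)
print(grid.shape, speed.mean(), accel.std())
```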
Keywords: 3D LiDAR; Automated vehicle; SHAP; Safety; Scenario; Vision transformer.
Copyright © 2024 Elsevier Ltd. All rights reserved.