Improving Robustness of DNN-Dependent ADS: Transforming Input Distributions to Account for Inference and System State
Published:
Abstract: Autonomous Driving Systems (ADSs) are becoming more advanced and ubiquitous, enabled by increasingly sophisticated deep neural networks (DNNs). As ADSs' autonomy levels rise, so do the cost and complexity of their failures. Often, these failures arise when the DNNs are less robust than expected. In studying these systems, I found that such failures can occur due to unexplored inputs from the long tail of driving scenarios or when the system's evolution affects the input distribution. These circumstances are common but challenging to accommodate because their effects are difficult to anticipate and solutions may not generalize, leaving us with a brittle system. I postulate that the impact of input distribution shifts on the robustness of a DNN-dependent system can be manipulated through the careful design and encoding of transformations that account for their effects on DNN predictions, their compounding effects on system state, and their naturalness. To overcome these threats to robustness, I have developed two types of techniques. First, I have engineered techniques that mitigate robustness-related failures when the cause is known but the effect on the DNN prediction is not, specifically for the common scenario in which a sensor component used to collect the training dataset for a DNN onboard an ADS is swapped out. Second, I have extended adversarial test generation techniques, which produce input perturbations that cause a DNN to compute incorrect outputs and thereby estimate DNN robustness, to consider how the effects of perturbations are attenuated by other ADS subsystems and become less effective as the ADS state evolves. However, these perturbations are often out of distribution and appear unnatural, making them easy to spot and defend against, and less likely to resemble deployment conditions. I will investigate two approaches to increase naturalness while retaining perturbation strength: constraining perturbation generation techniques to reproduce only features seen during DNN training, and manipulating unconstrained perturbations with diffusion models so that they appear naturalistic to humans. Overall, the proposed family of techniques takes a significant step toward improving ADS robustness.
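To make the second family of techniques concrete, below is a minimal, illustrative sketch of a system-state-aware perturbation search, assuming a toy steering DNN and a simplified kinematic model in PyTorch. The class and function names (ToySteeringNet, lateral_deviation, system_aware_perturbation), the dynamics, and all parameters are hypothetical stand-ins, not the dissertation's actual implementation. The objective maximizes the vehicle's final lateral deviation rather than a single frame's prediction error, so the search accounts for how a per-frame perturbation is attenuated or compounded as the system state evolves.

```python
# Illustrative sketch only: a toy "system-aware" adversarial objective.
# The model, dynamics, and parameters below are assumptions for illustration,
# not the actual technique or code from the dissertation.
import torch
import torch.nn as nn


class ToySteeringNet(nn.Module):
    """Stand-in for an onboard steering DNN (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)  # steering command in [-1, 1]


def lateral_deviation(model, frames, dt=0.1, speed=5.0, gain=0.3):
    """Roll predicted steering through a simple kinematic model so that each
    frame's perturbation effect is attenuated/compounded by system state."""
    y, heading = torch.zeros(()), torch.zeros(())
    for t in range(frames.shape[0]):
        steer = model(frames[t:t + 1]).squeeze()
        heading = heading + gain * steer * dt    # low-gain (attenuated) update
        y = y + speed * torch.sin(heading) * dt  # lateral position update
    return y.abs()


def system_aware_perturbation(model, frames, eps=0.03, steps=10, lr=0.01):
    """Gradient-ascent perturbation that maximizes final lateral deviation
    (a system-level effect) rather than a single frame's prediction error."""
    delta = torch.zeros_like(frames, requires_grad=True)
    for _ in range(steps):
        loss = lateral_deviation(model, (frames + delta).clamp(0, 1))
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)  # keep the perturbation small
            delta.grad.zero_()
    return delta.detach()


if __name__ == "__main__":
    model = ToySteeringNet()
    frames = torch.rand(20, 3, 64, 64)  # 20 consecutive camera frames
    delta = system_aware_perturbation(model, frames)
    print("final |lateral deviation| under perturbation:",
          lateral_deviation(model, (frames + delta).clamp(0, 1)).item())
```

In this sketch, the low gain in the heading update plays the role of downstream attenuation: a perturbation that flips a single frame's steering prediction may barely move the vehicle unless its effect persists and accumulates across frames, which is the kind of system-level consideration the extended test generation techniques are meant to capture.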
Recommended citation: