Context

The large-scale implementation stage of autonomous (a.k.a. self-driving, driver-less or robotic) vehicles (AVs) keeps being postponed, as designing safe and reliable systems able to drive in open environments remains extremely challenging. Recent studies suggest the technologies will not be mature for several decades [Litman 2017] and that legal and ethical concerns may incur additional delays [Martinez-Diaz et al. 2018]. Research efforts have sought to make AV technologies operational by taking advantage of recent advances in Artificial Intelligence (AI). In AI-based approaches, ensuring a system's autonomy usually requires tackling three critical steps: perception, decision and control, each of which can be designed as a specific but inter-dependent algorithm [Bojarski et al. 2016; Lee et al. 2017; Zeng et al. 2019]. This allows researchers to focus on one part of the big picture while contributing to the main objective: enabling robot-cars to outperform human drivers. Before this long-term target is reached, further research is needed, and this project is specifically intended to contribute to the perception stage of AV systems. Our assumption is that the latter two steps (decision and control) are particularly tied to the output of the perception stage, which needs to provide a very accurate representation of the driving environment(s) while allowing a clear discrimination between similar but distinct contexts.
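
To make this decomposition concrete, below is a minimal sketch (in Python, with hypothetical class and function names not taken from any cited work) of how the three stages can be implemented as separate but inter-dependent modules, where each stage consumes the previous stage's output:

```python
from dataclasses import dataclass

# Hypothetical types and stage names: a minimal sketch of the
# perception -> decision -> control pipeline, not any specific AV stack.

@dataclass
class SceneRepresentation:
    """Output of the perception stage: what the vehicle 'sees'."""
    obstacles: list        # e.g., bounding boxes or segmented regions
    drivable_area: object  # e.g., a lane/road mask
    context: str           # e.g., "urban", "highway", "fog"

@dataclass
class Decision:
    """Output of the decision stage: a high-level manoeuvre."""
    manoeuvre: str         # e.g., "keep_lane", "brake", "overtake"
    target_speed: float    # in m/s

def perceive(camera_frame) -> SceneRepresentation:
    # Placeholder: in practice, a learning-based vision model
    # (detection / semantic segmentation) runs here.
    return SceneRepresentation(obstacles=[], drivable_area=None, context="clear")

def decide(scene: SceneRepresentation) -> Decision:
    # Placeholder: stop when any obstacle is perceived.
    if scene.obstacles:
        return Decision(manoeuvre="brake", target_speed=0.0)
    return Decision(manoeuvre="keep_lane", target_speed=10.0)

def control(decision: Decision) -> dict:
    # Placeholder: map the high-level decision to actuation commands.
    throttle = 0.3 if decision.target_speed > 0 else 0.0
    brake = 1.0 if decision.manoeuvre == "brake" else 0.0
    return {"steering": 0.0, "throttle": throttle, "brake": brake}

# The stages are separate algorithms, but each depends on the previous
# one's output: errors in perception propagate downstream, which is why
# this project focuses on the perception stage.
def drive_step(camera_frame) -> dict:
    return control(decide(perceive(camera_frame)))
```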

Research challenges

The project takes the perspective of vision-based embedded systems (i.e., systems relying on cameras or similar sensors), which are among the most promising perception solutions. Their underlying sensing technologies, however, make them sensitive to an important research challenge: (C1) facing adverse conditions (such as bad weather or sun glare). Designing an AI-based (and especially a learning-based) system places strong importance on the amount of data (a.k.a. experience) required for algorithms to converge to a suitable solution covering the wide range of situations (contexts or domains) the system will face. Since AV platforms (or experimentally equipped vehicles) are precious and limited resources that often lack the legal framework to operate everywhere, the amount of data gathered and the range of domains they can be tested in are inherently restricted. For this reason, development often goes through a simulation stage or a testing step on a simplified system (e.g., smaller vehicles, standalone sensors or robotic models). While this artificially (or virtually) extends the range of contexts the learning-based system is trained in, it also raises two clearly identified research challenges: (C2) the reality gap, which arises when a simulation/model fails to capture all the particularities of a real system, and (C3) extended development time, caused by the inherently repeated, iterative process of adapting an algorithm from one system/domain to a different one.
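
As an illustration of challenge (C1), the short sketch below shows one crude, hypothetical way to synthesise adverse conditions (fog, sun glare) from clear-weather camera frames; the functions and parameters are illustrative assumptions, not the project's actual models:

```python
import numpy as np

# Hypothetical sketch of (C1): synthetically degrading clear-weather camera
# frames to mimic adverse conditions, so a learning-based perception system
# can be trained/evaluated on situations rarely captured on the road.

def add_fog(frame: np.ndarray, density: float = 0.5) -> np.ndarray:
    """Blend the frame toward a uniform grey 'atmosphere' (crude fog model)."""
    fog = np.full_like(frame, 200.0)
    return (1.0 - density) * frame + density * fog

def add_sun_glare(frame: np.ndarray, center=(0.5, 0.3), radius=0.2) -> np.ndarray:
    """Overwrite a bright disc to mimic sensor saturation from sun glare."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center[1] * h, center[0] * w
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 < (radius * min(h, w)) ** 2
    out = frame.astype(float).copy()
    out[mask] = 255.0
    return out

# Usage: degraded = add_sun_glare(add_fog(clear_frame))
```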

Objectives

The main objectives of this project are:

  • Dealing with adverse conditions in vision- and learning-based AV perception: by modelling or reproducing such environments/conditions in a robotic model/simulation and reusing previously learnt strategies to cope with missing or altered images/situations.

  • Reducing the reality gap between simulation/robotic models and AVs: by introducing an intermediate environment and allowing algorithms to be transferred from one (virtual or real) world to another.

  • Enabling domain adaptation to accelerate the development and deployment of AV algorithms: by investigating state-of-the-art and novel techniques akin to Transfer Learning and domain/context transfer (see the sketch after this list).
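
As a concrete illustration of the third objective, the following is a minimal transfer-learning sketch (assuming PyTorch/torchvision; the backbone choice and class count are illustrative assumptions, not the project's actual method): a backbone pre-trained on a source domain is frozen, and only a new head is fine-tuned on data from the target domain.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: reuse a backbone pre-trained on a large
# source domain and fine-tune only a new head on a (smaller) target domain.

NUM_TARGET_CLASSES = 5  # hypothetical: e.g., scene contexts in the new domain

# 1. Load a backbone pre-trained on a large source domain (ImageNet here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the pre-trained feature extractor to preserve source knowledge.
for param in model.parameters():
    param.requires_grad = False

# 3. Replace the classification head for the target domain/task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# 4. Optimise only the new head on target-domain batches.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch from the target domain."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the whole backbone is only one point on the spectrum: unfreezing deeper layers with a small learning rate is a common alternative when more target-domain data is available.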

Methodology

In MultiTrans, we propose to tackle the development and deployment of autonomous driving algorithms jointly.

The idea is to enable data, experience and knowledge to be transferable across the different systems (simulation, robotic models, and real-world cars), thus potentially accelerating the rate at which an embedded intelligent system can gradually learn to operate at each deployment stage.
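
One hypothetical way to realise this staged transfer, sketched below with illustrative stage names and a PyTorch-style training loop, is to carry the same model weights from simulation to the robotic model to the real vehicle, fine-tuning at each stage rather than retraining from scratch:

```python
import torch
import torch.nn as nn

# Illustrative sketch (hypothetical names and stages): carry the same model,
# its weights ("knowledge") and its training data ("experience") across the
# deployment stages instead of retraining from scratch at each one.

STAGES = ["simulation", "robotic_model", "real_world"]  # assumed ordering

def finetune(model: nn.Module, loader, lr: float) -> None:
    """Fine-tune the model on one stage's data loader."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for images, labels in loader:
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

def staged_deployment(model: nn.Module, loaders: dict) -> nn.Module:
    """loaders: hypothetical dict mapping each stage name to a DataLoader."""
    lr = 1e-3
    for stage in STAGES:
        # Reuse the weights learnt at the previous stage as the starting
        # point for the next; shrink the learning rate so later, scarcer
        # real-world data refines rather than overwrites earlier knowledge.
        finetune(model, loaders[stage], lr)
        torch.save(model.state_dict(), f"checkpoint_{stage}.pt")
        lr *= 0.1
    return model
```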

The research hypothesis acting as the starting point of MultiTrans corresponds to the current state of deployment of autonomous driving technologies: AVs can be programmed (or are able to learn) to react and operate autonomously in controlled (or restricted) environments. Research is needed to help these systems during the perception stage, enabling them to be operational and safer in a wider range of situations.

Expected impacts and benefits of the project

The project is expected to contribute substantial advances with respect to the state of the art, resulting in:

  1. A novel theoretical framework and new algorithms on transfer and frugal learning in virtual and real environments

  2. Advances in multi-domain and multi-source computer vision for semantic segmentation and scene recognition applied to safe autonomous driving

  3. The development of a robotic autonomous vehicle model demonstrator combined with a virtual world model