GUIDED
Funding information
- Source: Defense-Related Research Action (DEFRA)
- Project code: DEFRA-GUIDED
- Total cost: 1.9 M€
- Start date: January 1st, 2025
- End date: December 31st, 2027
Keywords: 3D localization and mapping, virtual reality, demining, multi-spectral imaging
Context and motivation
In the field of Explosive Ordnance Disposal (EOD), robots are used to safely dismantle suspicious objects. These robots are operated remotely, over a wireless or wired link, by a human operator who controls the robot's actuators based on a 2D video stream from a camera mounted on the robot. Operators of DOVO (the Belgian Defence EOD service) currently struggle with the non-intuitive user interfaces of such systems, which provide poor situational awareness. Operating the robot is therefore difficult and demands continuous training, even for experts.
In this project, we want to help EOD operators by creating an augmented 3D reconstruction of the scene that provides useful context and 3D awareness during robot operation. The goal is to build a multi-spectral sensor demonstrator at TRL 5 that can be mounted on a DOVO EOD robot. We will develop advanced image-processing software that provides a real-time 3D overview of the scene, allowing the operator to control the robot from a third-person or bird's-eye view (BEV) rather than relying on a single 2D camera view. By presenting this overview in a head-mounted display (HMD), more commonly known as VR goggles, the operator can take in all the information at once and control the robot more intuitively.
The information from the multi-spectral cameras can provide indicators that may be vital for the successful disposal of improvised explosive devices (IEDs). One challenge is that an operator can only monitor a limited number of spectra at the same time: swapping between the different views consumes valuable time during an operation, and the associated cognitive load can distract the operator, with potentially severe consequences. To reduce this cognitive load, we will use AI to localize regions in specific spectra that contain useful information not apparent in the visible spectrum. Inside the VR interface, a user-friendly overlay will combine the information from the predicted regions of interest and present it visually whenever the operator's attention is needed, as sketched below.
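To make the intended ROI fusion concrete, the following minimal Python sketch shows how high-confidence regions predicted in individual spectral bands could be collected into a single alert list for the VR overlay. The band names, the stand-in brightness-based detector, and the attention threshold are illustrative assumptions, not project specifications; the actual system would use learned per-band models.

```python
"""Minimal sketch (not the project's actual pipeline): fuse regions of
interest (ROIs) predicted in non-visible spectral bands into one alert
list, so the operator is only prompted when a band contains information
worth attending to."""
import numpy as np

BANDS = ["long-wave IR", "near IR", "UV"]  # assumed band set, for illustration
ATTENTION_THRESHOLD = 0.8                  # assumed confidence cutoff

def detect_rois(band_image: np.ndarray) -> list[tuple[int, int, int, int, float]]:
    """Stand-in ROI detector: flags unusually bright pixels and returns
    (x, y, w, h, score) boxes. The real system would use a learned model."""
    ys, xs = np.nonzero(band_image > band_image.mean() + 2 * band_image.std())
    if xs.size == 0:
        return []
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    score = float(band_image[ys, xs].mean() / band_image.max())
    return [(int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1), score)]

def fuse_overlays(bands: dict[str, np.ndarray]) -> list[dict]:
    """Collect high-confidence ROIs from all bands into a single list
    that the VR interface can render over the visible-spectrum view."""
    alerts = []
    for name, image in bands.items():
        for x, y, w, h, score in detect_rois(image):
            if score >= ATTENTION_THRESHOLD:
                alerts.append({"band": name, "box": (x, y, w, h), "score": score})
    return sorted(alerts, key=lambda a: -a["score"])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = {b: rng.random((240, 320)) for b in BANDS}
    frames["long-wave IR"][100:120, 150:180] += 5.0  # synthetic hot spot
    for alert in fuse_overlays(frames):
        print(alert)  # only the IR hot spot exceeds the threshold
```

In this sketch the operator-facing decision reduces to a sorted list of per-band alerts, which is the information the VR overlay would need to highlight a region and name the spectrum it came from.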