ClearView 3D

Autonomous vehicles need a variety of sensors to make them robust against the many possible street scenarios they may encounter. In adverse weather conditions especially, state-of-the-art sensors show severely limited performance. The approach in ClearView 3D is based on compressed sensing.

In adverse weather conditions (fog, heavy rain, snow), state-of-the-art automotive sensors show severely limited performance. The Institute for Applied Optics (ITO) is therefore developing a sensor system to overcome these limitations in a joint research project with the Institute for Autonomous Intelligent Systems (AIS), University of Freiburg, and the Fraunhofer Institute for Physical Measurement Techniques (IPM). The sensor system comprises a Lidar sensor specifically designed for use in adverse weather conditions (IPM), as well as intelligent data analysis via multimodal neural networks (AIS). ITO is developing a so-called “time-gated single-pixel camera” to enrich the Lidar information for regions of particular interest in its point cloud.

Time gating enables detection through fog: Laboratory environment without fog (a). With fog, the chair and a person standing behind it cannot be detected by a conventional camera (b). This becomes possible with a time-gated camera (WiDySenS 640V STP, 100 ns gating) (c).

Time-gated or range-gated cameras provide excellent penetration through scattering media (see above), especially if the gated range is chosen to be small, although this simultaneously leads to low light levels. An eye-safe pulsed laser with as much power as allowable is therefore crucial; this is why we fixed the operating wavelength at 1540 nm, where eye-safety limits permit considerably higher pulse energies than in the visible range. In the infrared, however, camera technology is not as mature as in the visible range, particularly when time gating with very short gating intervals in the nanosecond regime is required. We circumvent this problem by measuring with a simple photodiode, which is fast and inexpensive. The spatial information is provided by a set of masks in front of the photodiode (see below). The number of masks is significantly smaller than the equivalent number of camera pixels, because the scene is well compressible with our sensor concept. The binary measurement matrix (the masks) is not predefined but trained with a neural network, in order to achieve the best object-recognition performance with as few masks as possible.
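
The following minimal sketch (our illustration, not the project's actual code; the mask count, layer sizes, and toy data are all assumptions) shows how such a binary measurement matrix can be trained end-to-end together with a small recognition network, using a straight-through estimator so that the binarised masks remain trainable:

```python
import torch
import torch.nn as nn

class BinaryMasks(nn.Module):
    """M learnable binary masks applied to an N-pixel scene,
    mimicking the single-pixel (photodiode) measurements."""
    def __init__(self, n_pixels: int, n_masks: int):
        super().__init__()
        self.logits = nn.Parameter(0.1 * torch.randn(n_masks, n_pixels))

    def forward(self, x):                        # x: (batch, n_pixels)
        hard = (self.logits > 0).float()         # binary masks in {0, 1}
        soft = torch.sigmoid(self.logits)
        masks = hard + soft - soft.detach()      # straight-through estimator
        return x @ masks.t()                     # (batch, n_masks) photodiode values

n_pixels, n_masks, n_classes = 64 * 64, 128, 10  # far fewer masks than pixels
model = nn.Sequential(
    BinaryMasks(n_pixels, n_masks),
    nn.Linear(n_masks, 256), nn.ReLU(),
    nn.Linear(256, n_classes),                   # object-recognition head
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on random toy data, just to show the end-to-end loop.
scenes = torch.rand(32, n_pixels)
labels = torch.randint(0, n_classes, (32,))
loss = criterion(model(scenes), labels)
loss.backward()
optimizer.step()
```

After training, the binarised rows of the measurement matrix can be read out and realised as physical masks, for example on the rotating disc shown below.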

An infrared pulsed-laser illuminates the object. The photodiode is triggered after some variable delay time (depending on the distance of interest). The masks in front of the photodiode (PD) enable spatial detection (a). Possible implementation of the masks on a rotating disc (b).
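
As a back-of-the-envelope illustration (the distance and gate width are example values only; the 100 ns gate is taken from the camera quoted above), the trigger delay and the depth of the imaged range slice follow directly from the round-trip time of light:

```python
# Illustrative relation between trigger delay, gate width and imaged range slice.
C = 299_792_458.0  # speed of light in m/s

def gate_settings(distance_m: float, gate_ns: float = 100.0):
    """Return the trigger delay (ns) and the depth of the imaged range slice (m)."""
    delay_ns = 2.0 * distance_m / C * 1e9      # round-trip time to the distance of interest
    slice_depth_m = C * gate_ns * 1e-9 / 2.0   # depth covered while the gate is open
    return delay_ns, slice_depth_m

delay, depth = gate_settings(distance_m=30.0, gate_ns=100.0)
print(f"trigger delay: {delay:.1f} ns, range slice depth: {depth:.1f} m")
# -> trigger delay: 200.1 ns, range slice depth: 15.0 m
```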

Financed by:

Baden-Württemberg Stiftung

Responsible


Claudia Bett

M.Sc.

Research Assistant
