Autonomous cars collect huge amounts of data about the world around them, but even the best computer vision systems can't see through brick and mortar. By carefully watching the reflected light of a laser bouncing off a nearby surface, though, they might be able to see around corners – that's the idea behind recently published research from Stanford engineers.
The basic idea is one we've seen before: it's possible to discern the shape of an object on the far side of an obstacle by bouncing laser light off a nearby surface and analyzing how that light scatters. Patterns emerge when certain pulses return sooner than others, or come back altered by their interaction with the unseen object.
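The core of that timing argument can be sketched with simple arithmetic: a later arrival means the photon traveled a longer path, and so probed a more distant part of the hidden object. Below is a minimal illustration of that relation; the times and the two-return setup are hypothetical, not data or methods from the paper.

```python
# Toy time-of-flight sketch (hypothetical numbers, not the paper's method):
# a pulse leaves the laser, bounces off a relay wall, hits the hidden
# object, and returns. Later arrivals imply longer light paths.
C = 299_792_458.0  # speed of light, m/s

def path_length(round_trip_s: float) -> float:
    """Total distance the photon traveled, given its round-trip time."""
    return C * round_trip_s

# Two returns from the same scan point: the later one traveled farther,
# so it probed a more distant part of the hidden object.
t_near, t_far = 20e-9, 26e-9  # seconds (illustrative values)
extra_path = path_length(t_far) - path_length(t_near)
print(f"extra path length: {extra_path:.2f} m")
```

A 6-nanosecond delay between returns corresponds to roughly 1.8 meters of extra light path, which is why nanosecond-scale timing resolution is enough to resolve room-scale hidden geometry.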
This is not easy to do. The reflected laser light can easily be lost in the noise of ordinary daylight, for example. And if you want to reconstruct a model of the object precise enough to tell whether it's a person or a stop sign, you need a great deal of data, and the processing power to crunch it.
It is this second problem that Stanford researchers in the school's Computational Imaging Group tackle in a new paper published in Nature.
“Despite recent advances, [non-line-of-sight] imaging has remained impractical due to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light,” the paper reads in part.
“A major challenge in non-line-of-sight imaging is to find an effective way to recover the 3D structure of the hidden object from noisy measurements,” said graduate student David Lindell, a co-author of the paper, in a Stanford press release.
The data collection process still takes a long time as the laser scans across a surface – think minutes to an hour, though that's still on the short side for this type of technique. The photons do their thing, bouncing around on the far side, and a few make it back to their point of origin, where they're picked up by a highly sensitive detector.
The detector sends its data to a computer, which processes it using the algorithm the researchers created. Their work lets this step happen extremely quickly, reconstructing the scene in relatively high fidelity with only a second or two of processing.
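The raw data such a detector produces is typically a histogram of photon arrival times per scan point, with most detections clustering around real returns and stray ambient photons scattered elsewhere. Here is a minimal sketch of that kind of binning, with all numbers hypothetical; the paper's actual reconstruction algorithm is far more sophisticated than this pre-processing step.

```python
# Minimal sketch of the raw data such systems work with: a per-scan-point
# histogram of photon arrival times ("time bins"). All numbers are
# hypothetical and only illustrate the general idea.
from collections import Counter

BIN_WIDTH_S = 1e-10  # 100 ps time bins (illustrative)

def bin_arrivals(arrival_times_s):
    """Count photon detections per time bin, as a sparse histogram."""
    return Counter(int(t / BIN_WIDTH_S) for t in arrival_times_s)

# Simulated detections: most photons cluster around one return time,
# plus a stray ambient-light photon (noise the algorithm must reject).
arrivals = [20.01e-9, 20.02e-9, 20.05e-9, 33.35e-9]
hist = bin_arrivals(arrivals)
peak_bin = max(hist, key=hist.get)
print(peak_bin * BIN_WIDTH_S)  # dominant return time, about 2e-8 s
```

Picking the dominant bin already separates a plausible return from a lone noise photon; a full reconstruction instead inverts the whole histogram volume into 3D geometry.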
The resulting system is also less susceptible to interference, allowing it to be used in indirect sunlight.
Of course, it's not much use to detect a person on the other side of a wall if it takes an hour to do so. But the laser patterns used by the researchers aren't that different from the high-speed scanning lasers found in lidar systems. And the algorithm they built should be compatible with those, which could bring data acquisition time down significantly.
“We believe that the computational algorithm is already ready for LIDAR systems,” said Matthew O'Toole, co-lead author of the paper (along with lab leader Gordon Wetzstein). “The key question is whether the current hardware of LIDAR systems supports this type of imaging.”
If their theory is correct, this algorithm could soon let existing lidar systems analyze their data in a new way, potentially spotting a moving car or a person approaching an intersection before it becomes visible. That's still a ways off, but at this point it may be just a matter of clever engineering.