Arthur C. Clarke once wrote, "Any sufficiently advanced technology is indistinguishable from magic." Whether it's invisibility, teleportation, or mind-reading, it seems like scientists are hard at work turning magical ideas into real-life gadgets. This latest breakthrough is no exception: a team at Stanford has created a way of seeing objects around corners. No line of sight needed.
The Stanford team isn't just developing this technology to sneak up on friends or check for quarters under the couch — they're doing it specifically to make autonomous vehicles better at sensing their surroundings. Many driverless cars already use LIDAR, a laser-based imaging system that builds a 3D map of the environment by bouncing infrared light off objects and measuring how long it takes to come back. It can pick out much more detail than cameras or radar.
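The time-of-flight principle behind conventional LIDAR is simple enough to sketch in a few lines. This toy function (the name and the example timing are made up for illustration) converts a pulse's round-trip time into a distance:

```python
# Time-of-flight ranging, as conventional LIDAR uses it: a pulse's
# round-trip time tells you how far away the reflecting surface is.
C = 299_792_458  # speed of light, m/s

def distance_m(round_trip_s):
    # Halve the total path, since the light travels out and back.
    return C * round_trip_s / 2

# A return arriving after roughly 66.7 nanoseconds corresponds to a
# surface about 10 meters away.
print(round(distance_m(66.7e-9), 2))
```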
But what if LIDAR could do one better? Instead of just seeing a cyclist after he veers out from behind a parked car, what if it could see the cyclist while he was still hidden from view? That could give the car precious extra seconds to make a life-saving decision, and that's what the Stanford team is aiming for.
Just Around the Laser Bend
The paper on the new development was published in Nature. Here's how it works: the researchers placed a laser and a photon detector next to each other and aimed them both at a wall. Next to the laser and detector sat an object, separated from the device by a partition. Next, they fired laser pulses at an angle, making them glance off the wall and hit the object.
But here's the trick: the photon detector, which is sensitive enough to register individual particles of light, wasn't there to catch the light from that first bounce off the wall. Instead, the team used it to detect the light that bounced off the object, then perhaps the partition, the floor, and the wall before returning to the detector. "We are looking for the second, and third and fourth bounces — they encode the objects that are hidden," Dr. Matthew O'Toole, co-lead author of the paper, told The Guardian.
In fact, the biggest breakthrough might be the algorithm they designed to make the computer ignore the light from that first bounce. Once the first bounce was filtered out, the computer could process the remaining light signals to construct a 3D image of whatever was around the corner. The only problem is that this entire process takes a while — anywhere from two minutes to an hour, depending on the lighting conditions and the object being detected. Unfortunately, daylight conditions are especially tough.
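To get a feel for the idea, here is a deliberately simplified 2D sketch: discard the direct wall return (the first bounce), then use the extra travel time of the later bounces to triangulate the hidden object. The paper's real method is a far more sophisticated confocal reconstruction; everything below (function names, geometry, numbers) is invented for illustration:

```python
import math

C = 1.0  # toy "speed of light" in arbitrary units

def simulate_returns(scan_points, hidden_point, wall_dist):
    # For each scanned wall spot: arrival time of the direct wall return
    # (first bounce) and of the indirect return via the hidden object.
    out = []
    for w in scan_points:
        t_direct = 2 * wall_dist / C
        t_indirect = t_direct + 2 * math.dist(w, hidden_point) / C
        out.append((t_direct, t_indirect))
    return out

def reconstruct(scan_points, times, grid, tol=0.05):
    # Gate out the first bounce, then backproject the later returns:
    # each wall spot votes for grid cells at the right distance from it,
    # and the votes pile up where the hidden object actually sits.
    votes = [0] * len(grid)
    for w, (t_direct, t_indirect) in zip(scan_points, times):
        if t_indirect <= t_direct:
            continue  # nothing arrived beyond the direct wall return
        r = C * (t_indirect - t_direct) / 2  # wall spot -> hidden object
        for i, g in enumerate(grid):
            if abs(math.dist(g, w) - r) < tol:
                votes[i] += 1
    return grid[max(range(len(grid)), key=votes.__getitem__)]

scan = [(0.0, y / 10) for y in range(-10, 11)]   # spots along the wall
hidden = (0.6, 0.3)                              # object behind the partition
grid = [(x / 40, y / 40) for x in range(4, 41) for y in range(-40, 41)]
est = reconstruct(scan, simulate_returns(scan, hidden, wall_dist=2.0), grid)
# est lands close to `hidden`
```

The gating step is the point: without throwing away the overwhelming first bounce, the faint later bounces that carry the hidden geometry would be swamped.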
But there's a big upside. Since many autonomous vehicles already use LIDAR, it could just be a matter of programming the algorithm into those systems. "We believe the computation algorithm is already ready for LIDAR systems," O'Toole said in a press release. "The key question is if the current hardware of LIDAR systems supports this type of imaging."
That's before you start exploring other uses, like helping helicopters see through dense forests or finding people trapped under rubble after a disaster. Senior author Gordon Wetzstein is optimistic. "This is a big step forward for our field that will hopefully benefit all of us," he said.