Your pathetic human eye can only see what's directly in front of it, and people are easy to fool. Robots, on the other hand, are already learning to see around corners. That will probably come in handy during the robot apocalypse, when the machines have to hunt down fugitives, but in the meantime it could be a real boon for autonomous cars and other self-driving technologies.
Researchers have used computational methods in the past to detect large objects around a corner, but Professor Ioannis Gkioulekas of Carnegie Mellon's Robotics Institute says this is the first time anyone has been able to resolve millimeter- and micrometer-scale shapes outside the direct field of view. It's an entirely new application of so-called non-line-of-sight (NLOS) technology.
Most of the light that reaches your eye or a camera has reflected off a surface just once, but a small portion scatters and bounces several times before arriving at the viewer. Ordinarily these multi-bounce signals are too faint to convey useful information, but the NLOS techniques developed at Carnegie Mellon can extract that information with the help of high-speed pulsed lasers.
The team fired the laser at a flat surface, letting the light reflect off it and illuminate a hidden object. The computer knows exactly when the laser pulse fired, so it can calculate the time the light takes to bounce off the wall, strike the object, and return to a sensor. It's similar to the time-of-flight sensing used in the lidar on today's autonomous cars.
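The time-of-flight arithmetic behind that measurement is simple: multiply the photon's round-trip travel time by the speed of light to get the total path length. Here's a minimal illustrative sketch; the numbers are hypothetical and not taken from the CMU experiments.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def round_trip_path_length(elapsed_seconds):
    """Total distance the light traveled: laser -> wall -> hidden object -> wall -> sensor."""
    return C * elapsed_seconds

# A photon that arrives back 10 nanoseconds after the pulse fired
# traveled roughly 3 meters in total.
print(round_trip_path_length(10e-9))
```

The hard part, of course, isn't this multiplication but untangling which bounce path each returning photon actually took, which is what the reconstruction algorithm does.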
The method developed by the Carnegie Mellon team reconstructs the full geometry of the hidden object. The algorithm lets them measure curvature precisely, which makes very accurate small-scale detail possible. In the lab, the team produced reasonable approximations of jugs, vases, and ball bearings hidden around a corner. You can see above how the NLOS image (left) compares with a direct scan (right); they're remarkably close.
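To get a feel for how timing measurements can pin down hidden geometry, here is a toy 2D back-projection sketch. This is the classic textbook NLOS approach, not the CMU team's exact algorithm, and the setup (wall spots, hidden point, grid) is entirely hypothetical: each photon arrival time constrains the hidden point to an ellipse around a wall spot, and votes from several spots intersect at the object's true location.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def backproject(wall_points, arrival_times, grid):
    """Accumulate votes over a grid of candidate hidden-object positions."""
    votes = np.zeros(len(grid))
    for w, t in zip(wall_points, arrival_times):
        path = C * t  # wall -> object -> wall path length for this measurement
        d = 2 * np.linalg.norm(grid - w, axis=1)  # round trip to each candidate
        votes += np.abs(d - path) < 0.01  # vote where the geometry matches
    return votes

# Hypothetical scene: hidden point at (1.0, 0.5), three illuminated wall spots.
hidden = np.array([1.0, 0.5])
wall = np.array([[0.0, 0.0], [0.0, 0.5], [0.0, 1.0]])
times = [2 * np.linalg.norm(hidden - w) / C for w in wall]

# Candidate grid; the true location collects the most votes.
grid = np.array([[x, y] for x in np.linspace(0, 2, 41)
                        for y in np.linspace(0, 1, 21)])
votes = backproject(wall, times, grid)
print(grid[np.argmax(votes)])  # the grid cell nearest the hidden point
```

Real systems face far noisier data and full 3D volumes, which is why recovering fine curvature, as the CMU work does, is a much harder problem than this sketch suggests.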
For now, the technique developed at the Carnegie Mellon Robotics Institute only works in the lab: the NLOS sensor's range is about one meter, so it isn't yet practical for real-world applications. But this is only a first attempt. The technology could eventually help autonomous vehicles avoid collisions by spotting hazards around the next turn, and the team thinks it could also aid ultrasound imaging and seismic measurements.
Top image credit: Getty Images