Tracking movement with shadows


MIT engineers have developed ShadowCam, a system that detects subtle changes in shadows to determine whether a vehicle is coming around a blind corner.

Researchers tested ShadowCam systems for autonomous vehicles in office hallways with an autonomous wheelchair and in a parking garage with a car.
All photos courtesy of MIT

Changes in shadows can indicate that something is moving before people or cameras can see anything. Massachusetts Institute of Technology (MIT) engineers have built a sensing system for autonomous vehicles around that principle.

In experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways, the shadow-sensing system beat traditional light detection and ranging (LiDAR) – which can only detect visible objects – by more than half a second.

“Where robots are moving around environments with other moving objects or people, our method can give the robot an early warning,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and co-author of a paper on the development. “The big dream is to provide X-ray vision of sorts to vehicles moving fast on the streets.”

So far, the system has been tested only indoors, where robots move slowly and lighting is consistent.

MIT professors William Freeman and Antonio Torralba, who are not co-authors on the paper, collaborated on earlier versions of the ShadowCam system, which were presented at conferences in 2017 and 2018. The system uses sequences of video frames to target a specific area, such as the floor in front of a corner, and detects changes in light intensity over time, from image to image. Some of those changes may be difficult or impossible to see with the naked eye, but ShadowCam computes the information and classifies each image as containing a stationary or dynamic object.
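To make that classification step concrete, here is a minimal sketch in Python, assuming the frames have already been registered to a common viewpoint; the function name, gain, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def classify_sequence(frames, gain=8.0, threshold=2.0):
    """Label a sequence of registered grayscale frames (H x W arrays) as
    'dynamic' or 'static' based on amplified frame-to-frame intensity change.
    The gain and threshold here are illustrative, not values from the paper."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    diffs = np.abs(np.diff(stack, axis=0))   # per-pixel change between frames
    signal = gain * diffs.mean()             # amplify the subtle shadow signal
    return "dynamic" if signal > threshold else "static"
```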

A test wheelchair equipped with MIT’s ShadowCam identified when a person might be coming around a blind corner.

Adapting ShadowCam for autonomous vehicles required a few advances. A previous version relied on lining an area with augmented-reality tags, similar to QR codes, that robots could scan to compute their precise 3D positions and orientations.

To eliminate the tags, researchers combined image registration and visual odometry. Image registration essentially overlays multiple images to reveal variations. Medical image registration, for instance, overlaps medical scans to compare and analyze anatomical differences.
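As a rough illustration of the overlay step, the sketch below aligns one frame to a reference frame with OpenCV's ECC algorithm; ECC is a stand-in assumption here, not necessarily the registration method the researchers used.

```python
import cv2
import numpy as np

def register(reference, frame):
    """Align `frame` to `reference` (both single-channel grayscale images)
    using ECC image registration with a Euclidean motion model."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(reference, frame, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    h, w = reference.shape
    return cv2.warpAffine(frame, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```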

Visual odometry, used on the Mars rovers, estimates a camera's motion in real time by analyzing pose and geometry across sequences of images. The direct sparse odometry (DSO) variant can compute feature points in environments similar to those the tags once marked, plotting the features on a 3D point cloud. A computer-vision pipeline then selects only the features that fall within a region of interest.
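To make the region-of-interest selection concrete, here is a small sketch that projects 3D feature points through a pinhole camera model and keeps only those landing inside a rectangular image region; the pose format, intrinsics matrix K, and rectangular ROI are illustrative assumptions.

```python
import numpy as np

def features_in_roi(points_3d, R, t, K, roi):
    """Project Nx3 world-frame feature points into the image using rotation R
    (3x3), translation t (3,), and camera intrinsics K (3x3); return the
    pixel coordinates that fall inside roi = (x0, y0, x1, y1)."""
    cam = points_3d @ R.T + t                # world -> camera coordinates
    cam = cam[cam[:, 2] > 0]                 # keep points in front of camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide -> pixels
    x0, y0, x1, y1 = roi
    keep = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & \
           (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return uv[keep]
```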

ShadowCam combines DSO with image registration to overlay all the images taken from the robot's current viewpoint. Even as the robot moves, it can zero in on the patch where a shadow falls and detect subtle deviations there.
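Under the same assumptions as the sketches above, the pieces might fit together like this, with frames standing in for a list of grayscale images from the robot's camera:

```python
# Hypothetical glue code; roi, frames, register, and classify_sequence
# come from the illustrative sketches above, not from the paper.
roi = (200, 300, 440, 400)                       # x0, y0, x1, y1 in pixels
x0, y0, x1, y1 = roi
reference = frames[0]
aligned = [reference] + [register(reference, f) for f in frames[1:]]
crops = [f[y0:y1, x0:x1] for f in aligned]       # keep only the ROI patch
print(classify_sequence(crops))                  # -> "dynamic" or "static"
```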

One test steered an autonomous wheelchair toward various hallway corners while humans turned the corner into the wheelchair’s path. With and without tags, ShadowCam achieved 70% classification accuracy.

Researchers also installed ShadowCam in an autonomous car in a parking garage. With the headlights off, ShadowCam detected the car turning around pillars about 0.72 seconds faster than LiDAR, accurately classifying images 86% of the time.

Massachusetts Institute of Technology