A growing fleet of smart cars may add their street camera views to the surveillance camera networks already covering many major cities. That could open the door for a new technology that enables different video cameras to “talk” with one another and track the same person across many different camera views.
The technology is based on a computer algorithm that can compare different camera views of the same person and learn to recognize the same individuals across many camera views by focusing on body color, texture and movement. Researchers envision a large-scale version of the system tracking pedestrian traffic on a virtual map—perhaps displayed on a car’s GPS screen—or enabling police to easily track fleeing suspects across multiple surveillance camera views.
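The researchers' actual algorithm is not detailed here, but the idea of matching a person across camera views using appearance cues can be sketched in a toy form. The example below, which is an illustrative assumption rather than the team's method, compares quantized color histograms (one of the cues the article mentions, body color) between two detections; the function names and threshold are invented for illustration.

```python
# Toy sketch of appearance-based cross-camera matching (assumed, not the
# researchers' actual algorithm). Matches two person detections by
# comparing quantized color histograms of their pixels.

from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels into a normalized color histogram."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def same_person(view_a, view_b, threshold=0.7):
    """Decide whether two camera views likely show the same person."""
    return similarity(color_histogram(view_a), color_histogram(view_b)) >= threshold
```

A real system would combine several such cues (texture, movement) and learn the matching function from training footage rather than using a fixed threshold.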
Researchers tested the algorithm with cameras mounted on moving cars. Individual cameras learned to work together in pairs by being trained on a common set of test footage. That testing led to the new work, which was presented at the Intelligent Transportation Systems Conference, sponsored by IEEE and held in Qingdao, China. The technology is not limited to car-mounted cameras: the team has also experimented with running the algorithm on cameras carried by flying drones.
The work was done by researchers from the University of Washington. For further details, see their ITSC 2014 publication and their website.
Part of this content is from IEEE Spectrum.