[Video] Some challenges that Google cannot handle now

For more details, please refer to the original post, Hidden Obstacles for Google’s Self-Driving Cars.

This post is about some tough situations that Google still cannot handle. As is widely known, Google’s self-driving car relies heavily on its detailed maps, which causes ambiguity or uncertainty when the car drives into a situation the map does not cover. Also, some extremely hard outdoor environments are common obstacles not only for Google, but for the whole field.

Note the image above: Google’s self-driving car can “see” moving objects like other cars in real time, but only a pre-made map lets it know about the presence of certain stationary objects, like traffic lights.

From the video below, you can better understand how Google detects dynamic information relative to the static objects. Also, from my personal point of view, Google seems to adopt a very smart trick: some obstacles or behaviors are classified with different traffic signs. Or rather, Google assigns a proper traffic sign to certain situations, in order to simplify and concisely divide the sets of different behaviors.
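The static/dynamic distinction described above can be sketched very roughly as map association: a detection that matches a pre-surveyed landmark is treated as static, anything else as potentially dynamic. This is a minimal illustration only; the map entries, match radius, and function names are all my own assumptions, not Google’s actual pipeline.

```python
from math import hypot

# Hypothetical prior map: positions (x, y in metres) of known static
# objects such as traffic lights, surveyed before the drive.
PRIOR_MAP = [(10.0, 4.0), (52.5, -3.2)]

MATCH_RADIUS = 1.5  # metres; association threshold (assumed value)

def classify_detections(detections, prior_map=PRIOR_MAP):
    """Label each detected object 'static' if it lies near a mapped
    landmark, otherwise 'dynamic' (e.g. another car)."""
    labels = []
    for (x, y) in detections:
        near_landmark = any(
            hypot(x - mx, y - my) <= MATCH_RADIUS for (mx, my) in prior_map
        )
        labels.append("static" if near_landmark else "dynamic")
    return labels

# A detection right at a mapped traffic light vs. one in open road:
print(classify_detections([(10.2, 4.1), (30.0, 0.0)]))
# -> ['static', 'dynamic']
```

This is exactly why an unmapped change (say, a newly installed traffic light) is invisible to such a scheme: the new object simply fails to associate with the prior map.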

In short, these problems, confirmed by the project leader Chris Urmson, are as follows:

  1. Snow. Google hasn’t tested the car in snowy weather.
  2. Heavy rain. For safety reasons, Google hasn’t done those tests either.
  3. Big, open parking lots or multilevel garages.
  4. Accurately recognizing the color of traffic lights when facing toward the sun.
  5. No detailed classification between a normal pedestrian and a policeman, as Google relies mostly on laser point data.
  6. Can’t tell whether a road obstacle is a rock or a crumpled piece of paper.
  7. Can’t detect potholes or spot an uncovered manhole. (IMHO, this could be addressed by checking whether the obstacle is concave or convex.)
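My concave/convex remark in point 7 could be sketched as follows: given laser points from a small road patch, expressed as heights relative to a fitted road plane (assumed to be z = 0 here), a patch that dips below the plane is a candidate pothole, and one that rises above it is a candidate rock or debris. The threshold and function name are illustrative assumptions, not a tested method.

```python
def classify_surface_patch(point_heights, threshold=0.05):
    """Classify a patch of laser points by their mean height (metres)
    relative to the road plane (assumed z = 0).

    Returns 'concave' (possible pothole / open manhole),
    'convex' (possible rock / debris), or 'flat'.
    """
    mean_h = sum(point_heights) / len(point_heights)
    if mean_h <= -threshold:
        return "concave"
    if mean_h >= threshold:
        return "convex"
    return "flat"

print(classify_surface_patch([-0.12, -0.15, -0.10]))  # -> concave
print(classify_surface_patch([0.20, 0.25, 0.18]))     # -> convex
print(classify_surface_patch([0.01, -0.02, 0.00]))    # -> flat
```

Of course, in practice this would need robust plane fitting and noise handling; the sketch only shows why the sign of the height deviation carries the concave/convex information.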

Besides the problems confirmed by the team itself, my professor Alberto Broggi and other researchers have also expressed their concerns.

Alberto Broggi, a professor studying autonomous driving at Italy’s Università di Parma, says he worries about how a map-dependent system like Google’s will respond if a route has seen changes.

Michael Wagner, a Carnegie Mellon robotics researcher studying the transition to autonomous driving, says it is important for Google to be open about what its cars can and cannot do. “This is a very early-stage technology, which makes asking these kinds of questions all the more justified.”

John Leonard, an MIT expert on autonomous driving, says he wonders about scenarios that may be beyond the capabilities of current sensors, such as making a left turn into a high-speed stream of oncoming traffic.

Another related post on MIT Technology Review is here. I quote the professors’ comments below.

Alberto Broggi: Humans make use of myriad “social cues” while on the road, such as establishing eye contact or making inferences about how a driver will behave based on the car’s make and model. Even if a computer system can recognize something, understanding the context that gives it meaning is much more difficult. For example, a fully autonomous car would need to understand that someone waving his arms by the side of the road is actually a policeman trying to stop traffic.

John Leonard: He and other academics find themselves constantly battling the assumption that all of the technology challenges associated with robotic cars have been solved, with only regulatory and legal issues remaining. “It’s hard to convey to the public how hard this is,” he says.


Researcher on Intelligent Vehicles · Guitarist/Singer/Rocker