Notes from West London

Friday, February 17, 2006

DARPA Grand Challenge

Just been to a great seminar by Nelson Bridwell of Intel about the DARPA Grand Challenge, the autonomous vehicle race run in the Mojave Desert last October. The Stanford team that won was sponsored by Intel, so Nelson had lots of leeway to walk around and interview people.

He started with Stanford team lead Sebastian Thrun, who gave no technical detail but did say that testing was key. You would think this is obvious, but the extent to which it mattered only became clear during later interviews. Whenever Nelson asked someone what their team would do with more money and time, they almost always said "more testing". Since everyone used roughly the same technology platform - hardened all-terrain suspension, laser scanners (LIDAR range finders), x86-based blades - it was a question of who could tweak the platform best for DARPA's course. And that meant testing.

Laser scanners are very cool. One team had 64 lasers in a cab-mounted unit that looked like a spinning zoetrope. It generated 640,000 data points per second covering the 10 metres of terrain immediately ahead of the vehicle, which their leader admitted was way too detailed. They were doing well until the unit shook itself free of its power cord!
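
To put "way too detailed" into numbers, here's a quick back-of-the-envelope calculation in Python (the 40mph figure is the open-section pace that comes up below; the rest is just unit conversion):

    # Rough point density for the 64-laser unit, assuming the quoted
    # 640,000 points/sec and a 40mph cruise on the open sections.
    POINTS_PER_SEC = 640_000
    SPEED_MPH = 40
    METRES_PER_MILE = 1609.344

    speed_ms = SPEED_MPH * METRES_PER_MILE / 3600    # ~17.9 m/s
    points_per_metre = POINTS_PER_SEC / speed_ms     # ~35,800 points/metre

    print(f"{speed_ms:.1f} m/s -> {points_per_metre:,.0f} points per metre travelled")

Call it roughly 36,000 points for every metre the vehicle moves forward, all packed into a 10-metre band - you can see the leader's point.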

Of the winning Stanford vehicle, Wikipedia says:
"Data from the LIDARs was fused with images from the vision system to perform more distant look-ahead. If a path of drivable terrain could not be detected for at least 40 meters in front of the vehicle, speed was slowed and the LIDARs used to locate a safe passage."

Nelson elaborated on the "could not be detected for at least 40 meters" part. First, to complete the course in time you need to average 20mph, which means going 40mph+ on the flat open sections (dry lakebeds) to compensate for the rough and twisty ones. Driving at 40mph means looking a recommended 40 metres ahead. What Stanford did was laser-scan the road immediately ahead to map its edges, sample the colour of that area with a video camera, then search for the same colour further out and assume that anything matching was still road.
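
Here's a minimal sketch of that idea as I understood it - my own reconstruction in Python, not Stanford's actual code. The mean-and-covariance colour model and the distance threshold are my assumptions; the real system was certainly more sophisticated:

    import numpy as np

    def drivable_mask(frame, near_road_mask, threshold=3.0):
        """Extend laser-confirmed road into the distance by colour.

        frame          : HxWx3 RGB image (float array)
        near_road_mask : HxW boolean mask of pixels the laser scan
                         confirmed as road (the area immediately ahead)
        threshold      : Mahalanobis distance cutoff (hypothetical value)
        """
        # Learn a colour model from the laser-confirmed road pixels.
        road_pixels = frame[near_road_mask]
        mean = road_pixels.mean(axis=0)
        cov = np.cov(road_pixels, rowvar=False) + 1e-6 * np.eye(3)
        cov_inv = np.linalg.inv(cov)

        # Classify every pixel by colour similarity to that model:
        # a matching colour is assumed to still be road, however far away.
        diff = frame.reshape(-1, 3) - mean
        dist = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
        return (dist < threshold).reshape(frame.shape[:2])

You'd refit the colour model on every frame, since desert road colour shifts constantly with lighting and surface - which is also why the whole trick boils down to near-field geometry plus far-field colour matching rather than anything fancier.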

Now, Nelson had some interesting views on this supposed "state of the art" algorithm: it's not machine learning and it's definitely not next-generation! But it was the most effective technique in the field, where the winner takes all. On a similar tack, radar "ought" to be useful because it's impervious to weather, but that doesn't help in a perfectly dry October desert; you'd be better off disabling the radar unit and focusing on stereo video image stabilisation (or whatever).

Nelson had a never-ending interview with a guy from Carnegie Mellon who described their route-planning software and techniques. Apparently their vehicle in the first Challenge drove catastrophically off the road, so they're paranoid now. They entered two vehicles this time, one designed to drive more aggressively than the other so as to finish an hour earlier, though a broken gas pedal foiled that plan. They even had someone drive 2,100 miles in the desert to gather detailed mapping data on every potential route; when DARPA announced the actual course, only 2% of CMU's driving - about 40 of those miles - turned out to be relevant :-|

The University of Florida team had air-conditioned computers - smart - while a team from Oregon had on-board computer failures in the 100°F+ heat - not so smart. Coolest vehicle: TerraMax.

Johnny told me that DARPA consider the desert driving challenge "done" and aren't going to do another. But Wikipedia says they're considering an urban version instead!

PS Thanks to Kate for remembering the word "zoetrope"!
