Google’s blunder with a bus

It had to happen sooner or later. Google’s driverless car seems to have caused a crash. It collided with a bus as it pulled out at a junction. The bus was travelling at 15 mph, the car at 2 mph. Thankfully, no one was injured. The car’s human driver saw the bus in the left-side mirror. He assumed it would slow down and let the car pull into the lane, so he didn’t override the car’s autonomous mode. But the bus didn’t do that.

Up till now Google’s driverless cars haven’t been held responsible, at least not directly, for the relatively few collisions in which they’ve been involved – they’ve often been rear-ended. There has, however, been plenty of talk about how their ultra-cautious driving style might be indirectly contributing to accidents.

With respect to this recent crash, Google was reported to have said in an official statement: “We clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”

[video_embed url="https://www.youtube.com/watch?v=neFqatFxwnw" embed_style="default"]

Footage via AP

While Google has since revised its algorithms to account for what the bus driver didn’t do, this latest collision – and admission of responsibility – raises an important point about liability. Just who is responsible in the eyes of the law for accidents caused by driverless cars? Is it the car’s human driver, the registered owner, the manufacturer or the software maker? Who would be taken to court if charges were brought? And whose insurance company would have to pay for the damage?

These questions aren’t new. They’ve been around for a while, although perhaps not on the consumer’s radar. Most modern cars have a variety of “out of sight, out of mind” technologies that can operate without the driver’s full awareness or intervention: air bags, cruise control, collision avoidance and self-parking, for instance. These technologies aren’t self-aware and they don’t learn, but they do take control away from the driver, and that’s not automatically a good thing.

Consider stability control: a welcome, potentially life-saving piece of technology. If a child darts into the road, a human driver will instinctively brake and swerve. But if the driver swerves too hard, the stability control might kick in and correct the car’s direction and speed in a way that means it still ends up hitting the child.

The car doesn’t know the child is there. In this sense, the technology is blind. It isn’t driving like a human; it’s just responding to a certain set of mechanical forces. It doesn’t have a pair of moral eyes to help it decide on the best course of action. Maybe the stability control shouldn’t kick in at all if staying out of the way would let the car avoid hitting the child.
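To make the point about “blind” technology concrete, here is a minimal sketch in Python of the kind of feedback loop a stability controller runs. The function names, units and the threshold are hypothetical, and real electronic stability control is vastly more sophisticated, but the key detail holds: the controller’s inputs are rates and forces, not obstacles.

```python
# A deliberately oversimplified sketch of the "blind" logic described above.
# Function names and the 0.3 rad/s threshold are hypothetical, for illustration
# only; real electronic stability control is far more complex.

def brake_individual_wheels(yaw_error):
    """Placeholder: brake individual wheels to counteract the yaw error."""
    print(f"braking to cancel yaw error of {yaw_error:.2f} rad/s")

def reduce_engine_torque():
    """Placeholder: cut engine torque to help stabilise the car."""
    print("reducing engine torque")

def stability_control(measured_yaw_rate, intended_yaw_rate):
    """Correct the car's path based purely on measured motion.

    Note what is *not* an input here: no camera feed, no obstacle list,
    no notion of a child in the road. The controller only sees numbers.
    """
    yaw_error = measured_yaw_rate - intended_yaw_rate
    if abs(yaw_error) > 0.3:  # hypothetical threshold for "swerving too hard"
        brake_individual_wheels(yaw_error)
        reduce_engine_torque()
    # If the hard swerve was the driver trying to miss an obstacle,
    # the system has no way of knowing that.

# Example: the driver yanks the wheel to avoid a child, the sensors report a
# violent yaw, and the controller dutifully damps the swerve.
stability_control(measured_yaw_rate=1.2, intended_yaw_rate=0.4)
```

The details don’t matter; the shape of the loop does. Everything the system knows arrives as sensor readings, so the child simply isn’t part of its world.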

So even without driverless cars being on our roads, the idea that humans are in full control of their vehicles, at all times, is hard to justify. The more technology we incorporate, the less control we have. In which case, how can we be held fully liable?

This point hasn’t been missed. The US National Highway Traffic Safety Administration (NHTSA) recently told Google it would likely give the self-driving computer the same legal treatment as a human driver. But how this will affect the final decision about who was to blame for the collision with the bus isn’t yet clear. We’ll have to wait and see whether it will be the human driver who has to accept full liability, the computer, or a combination of the two.

[Image: crashed Google car]

 

Real road experience

Blind technology and issues of liability aside, Google’s crash is also a timely example of something else many of us already know: computers don’t drive like humans. People, whatever vehicle they’re in, don’t always drive how we anticipate or assume they will. There is an ever-present element of unpredictability. We humans know this instinctively and we adjust our driving behaviour to suit.

Reliably interpreting what another human driver is going to do comes down to experience. The more we drive, the better – in theory – we become at it. Or at least, the more familiar we become with driving conditions and behaviour.

 

Simulations are not enough

Google might be running millions of autonomous driving simulations every day in an effort to improve the effectiveness of its self-driving cars, but these are no substitute for on-the-road experience. Driving rules and habits vary enormously from time to time, country to country and culture to culture – and let’s not forget about the weather! How we drive in the rain changes the rules again. So the more miles driverless cars can clock up on real roads all over the world, the quicker they’ll learn what it’s really like out there.

 

Is perfection possible?

This is why driverless cars are a work in progress – and might never be perfect driving machines. In spite of the advanced state of autonomous technology, it’s hard to imagine they’ll always know what to do in each and every situation they come across. In that respect, driverless cars are just like us. When two human drivers misinterpret or fail to anticipate what the other is going to do, accidents can happen. That said, driverless cars don’t really have to be perfect. All they have to do is be consistently better than we are at driving a car. If they do that, accidents will decrease and lives will be saved.
