Human crashes robot-driven car
A review of an incident (read ‘near-crash’) involving a self-driving car and another vehicle in Mountain View, California, has concluded that it could have been avoided had the car’s human co-pilot not taken control.
Waymo has just received permission to start testing its autonomous fleet in California, despite its Arizona testing to date having raised the ire of some suburban Phoenix residents.
The accident in California occurred when a vehicle began to move into the self-driven car’s lane. The Waymo’s human driver immediately took over, moving his charge into the lane to the right. However, in doing so, he clipped a motorcycle with the car’s rear bumper.
In a blog post on Medium, Waymo’s CEO, John Krafcik, said:
“While our test driver’s focus was on the car ahead, our self-driving system was simultaneously tracking the position, direction and speed of every object around it. Crucially, our technology correctly anticipated and predicted the future behavior of both the merging vehicle and the motorcyclist. Our simulation shows the self-driving system would have responded to the passenger car by reducing our vehicle’s speed, and nudging slightly in our own lane, avoiding a collision.”
The key word in this quote is ‘simulation’. Waymo has released no empirical data about the incident, so there is no way to prove or disprove Krafcik’s proposition. But to paraphrase: had the autonomous systems been allowed to continue without human intervention, there wouldn’t have been a problem.
Krafcik went on to explain that the co-pilots of Waymo’s self-driving fleet undergo rigorous training, especially in defensive driving, designed to rescue driver and vehicle from just the sort of scrape that took place.
But the technology’s ability to see and sense 360 degrees around the vehicle at all times would, he proposed, “[…] have responded to the passenger car by reducing our vehicle’s speed, and nudging slightly in our own lane, avoiding a collision.”
The situation faced by Waymo (a subsidiary of Google’s parent company, Alphabet) and other companies testing autonomous transport (such as Apple, and many others) raises complex ethical questions. As Krafcik noted:
“[…] some dynamic situations still challenge human drivers. People are often called upon to make split-second decisions with insufficient context.”
Whether or not machines make better split-second decisions is a matter of debate, whatever the context – ethical, legislative, or even philosophical. In this case, human error has been suggested as the fault. But there have been other incidents where the autonomous systems designed by Waymo, and others, have made mistakes. What is often forgotten is that the programmed systems used in autonomous vehicles are created by the same humans who crash cars rather a lot.
30 July 2021