How deepfake tech is speeding up autonomous vehicle development
- There’s potential in deepfake technology to expose autonomous vehicles to “near infinite” driving scenarios
- Ensuring the safety of autonomous vehicles requires miles of trials and development before mass deployment is possible
Deepfakes are video or other digital media manipulated by sophisticated artificial intelligence (AI) to produce fabricated images and sounds that appear real. This deceptive quality has earned them a shady reputation, and there is no shortage of examples of how they've been used with devious intent.
Examples of misuse include their prevalence in fake celebrity pornography and in political scams and smear campaigns (they have been called a potential “threat to democracy”), but technology leaders have also found positive and beneficial ways to employ the technology.
A UK-based autonomous vehicle software company has developed a deepfake technology that is able to generate thousands of photo-realistic images in minutes, which helps it train autonomous driving systems in lifelike scenarios.
Oxbotica says the technology is capable of exposing its autonomous vehicle systems to “near infinite variations of the same situation – without real-world testing of a location having ever taken place.”
Besides the ability to create thousands of photo-realistic images in minutes, the real advantage of enlisting deepfakes is the ability to reproduce the same scene in varied conditions, such as poor weather or unexpected hazards. The technology can replace objects in images, swapping a tree for a building, for example, a concept known as a “class switch,” and can change the lighting of an image, down to the positions of shadows and the directions of reflections.
The realism of these simulations lets the autonomous vehicle software experience countless scenarios in varied road and traffic conditions, potentially saving thousands of hours of on-the-road testing.
The mechanism behind these simulations is a pair of co-evolving AIs: one model creates convincing fake images, while the other tries to distinguish the real images from the generated ones.
Oxbotica engineers added a feedback mechanism that ensures both models evolve over time to outsmart their adversary. Once the detector can no longer tell the difference, the deepfake module is used to generate training data for other AIs.
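This pair of co-evolving models is the structure of a generative adversarial network (GAN). Oxbotica has not published its implementation, so as a purely illustrative sketch, the toy example below trains a one-dimensional GAN in plain numpy: a generator (an affine map from noise) stands in for the deepfake module, a logistic discriminator stands in for the detector, and the feedback loop is the alternating gradient updates. All names and the target distribution are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "real scenes": 1-D samples from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, size=(n, 1))

# Generator (the "deepfake" module): affine map from noise to a sample.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator (the "detector"): logistic classifier, real=1 vs fake=0.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

lr = 0.05
for step in range(2000):
    # Discriminator update: push real samples toward 1, fakes toward 0.
    fake = rng.normal(size=(64, 1)) @ g_w + g_b
    for x, label in ((real_batch(64), 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - label                      # d(BCE loss)/d(logit)
        d_w -= lr * (x.T @ grad) / len(x)
        d_b -= lr * grad.mean(axis=0)
    # Generator update: adjust to fool the discriminator (target label 1).
    z = rng.normal(size=(64, 1))
    fake = z @ g_w + g_b
    grad_logit = sigmoid(fake @ d_w + d_b) - 1.0
    grad_fake = grad_logit @ d_w.T            # backprop through discriminator
    g_w -= lr * (z.T @ grad_fake) / 64
    g_b -= lr * grad_fake.mean(axis=0)

# After training, generated samples should have drifted toward the real
# distribution's mean (4.0) from their initial mean near 0.
samples = rng.normal(size=(1000, 1)) @ g_w + g_b
print(float(samples.mean()))
```

The same adversarial loop scales up to image generators and convolutional discriminators; the feedback mechanism described above corresponds to the alternating updates, and "the detector can no longer tell the difference" corresponds to the discriminator's accuracy falling toward chance.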
Are we there yet?
Paul Newman, Co-Founder and CTO at Oxbotica, said that using deepfakes to provide “near infinite variations” of the same situation aims to challenge “miles driven” as the standard for maturity, performance, and safety in the world of autonomous technology.
“There is no substitute for real-world testing but the autonomous vehicle industry has become concerned with the number of miles traveled as a synonym for safety. And yet, you cannot guarantee the vehicle will confront every eventuality, you’re relying on chance encounter,” said Newman.
“The use of deepfakes enables us to test countless scenarios, which will not only enable us to scale our real-world testing exponentially; it’ll also be safer.”
Recently, TechHQ spoke with Stan Boland, CEO of Five, on the state of the autonomous vehicles safety framework. Boland shared that developing safe autonomous systems is a continuous process that will span the course of decades.
“There’s always going to be a validation gap between what the real world really is and what our testing environment is, and that gap is never going to be zero,” said Boland.
“Accidents will happen – but we hope that those gaps will be sufficiently small that they’ll be contained within the envelope that today looks like human driving. And so the accident rate here would be the same or less than what we see in human driving.”