The Next Web reported in September that tech startup Transdev had entered an agreement with the progressive city of Babcock Ranch to demonstrate its autonomous shuttles. At the time, The Next Web called the initiative, which saw young children riding the bus on public roadways, reckless. It looks like The Next Web wasn’t the only one to think so.
The NHTSA released a statement ordering the company to stop:
“The U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) has issued a letter directing Transdev North America to immediately stop transporting school children in the Babcock Ranch community in Southwest Florida on the EZ10 Generation II driverless shuttle.”
The document calls the program “illegal” and says Transdev failed to disclose pertinent information to the government when it requested permission to use the autonomous shuttles in Florida.
“In March 2018, NHTSA granted Transdev permission to temporarily import the driverless shuttle for testing and demonstration purposes. Transdev requested permission to use the shuttle for a specific demonstration project, not as a school bus. Transdev failed to disclose or receive approval for this use. School buses are subject to rigorous Federal Motor Vehicle Safety Standards that take into account their unique purpose of transporting children, a vulnerable population.”
Actions like Transdev’s, whether intentional or born of ignorance of federal laws surrounding the transportation of school children, give the entire field of AI a black eye. Worse, they invite regulation.
Facebook’s AI guru, Yann LeCun, recently said it was nuts to invite AI regulation at this point. His assertion is that government involvement could quickly stall progress beyond what we’ve accomplished so far with deep learning.
If the government steps in and requires companies, which already operate under guidelines concerning public safety, to clear additional hurdles before developing products, it could stifle or even cripple cutting-edge research.
There’s a fine line between necessary oversight and clunky regulation. The way we keep the needle pointed toward the former isn’t by using children as the subjects of a test involving autonomous robotics, especially when the most current research indicates that driverless vehicles may be at greater risk of collision with human drivers than human-driven vehicles are.
It’s clear that autonomous vehicle technology isn’t ready to be trusted with our children’s lives. And it’s disturbing that the federal government had to get involved before the leadership of an entire city and an AI company were made to realize that.