Drivers working for Amazon Delivery Service Partners (DSPs) are under near-constant surveillance of their driving, monitored by artificial intelligence that scores their performance and issues automated voice reminders about safety. That score is used to determine bonuses, promotions and more.
Drivers who spoke to Vice’s Motherboard complained the tech is too sensitive, often wrong and makes their jobs miserable, not to mention taking money out of their paychecks. But Amazon spokeswoman Alexandra Miller told Threatpost that if the choice is between a few unhappy drivers and improved safety, it’s an easy call: Safety wins.
Earlier this year, Amazon rolled out a pilot program for its DSPs using video surveillance and AI tech from Netradyne. The company says that so far, about half of its delivery fleet has the technology, and that since the program launched it has seen a 48-percent decrease in accidents. Instances of drivers running stop signs and red lights fell by 77 percent, and distracted driving dropped by 75 percent.
Drivers Complain about AI Video Surveillance
Miller explained that the video protects drivers too, who might be wrongly accused of something, need to be exonerated for a driving offense or experience an attack from someone in the community.
“The majority of drivers say they love it,” Miller told Threatpost.
But drivers who spoke to Vice said they have been flagged for so-called “events” that never happened. The alerts are also annoying, beeping and correcting perceived infractions all day, drivers told Vice. At the end of the week, Amazon assigns each driver a rating based on several data points, including the Netradyne data, Vice reported. If those ratings go down, both individual drivers and entire delivery companies can take a financial hit.
Drivers are made aware of the data being collected and must sign a biometric data waiver showing they’ve agreed to it.
Once an “event” like harsh braking is picked up by the system, the cameras start to record and the feed is uploaded. There’s a team at Amazon running the Netradyne system in-house and Amazon maintains control of the data, Miller confirmed.
Netradyne didn’t respond to Threatpost’s requests for more information.
Miller said that disputing whether an event the system flags is legitimate ultimately comes down to a negotiation between the delivery service and the driver. The DSP can request the video from Amazon, but an individual driver cannot, she explained.
“Each Delivery Service Partner is trained on the safety technology and are required to communicate to their teams how the events impact the DSP scorecard,” Miller added.
Asked about protecting privacy data, Miller said drivers’ faces are blurred to conceal their identities, but she didn’t provide specifics about how this potentially sensitive information is secured.
Companies like Zoom have learned hard lessons about locking down video. Zoom was just slapped with an $85 million settlement for not having end-to-end video encryption. At the same time, attacks on IoT devices like video cameras increased by more than 100 percent in the first half of 2021.
It’s not hard to imagine the kinds of damage a threat actor could do with a cache of information about specific drivers, their routes, their routines and their vehicles filled with goods.
Will Amazon AI Chill Driver Recruitment?
At a time when drivers of all kinds are scarce, and Amazon itself is facing a shortage, can the company afford to continue this kind of program without impacting hiring and retention? One of Amazon’s DSPs just made headlines by announcing it would no longer drug-test drivers for marijuana, in an effort to boost recruitment.
Miller said she isn’t aware of Amazon losing drivers because of the new program.
But Vice reported that drivers have started to find workarounds for the cameras, including stickers to cover the lens, sunglasses to obscure their eyes to avoid being dinged for “distracted driving” and more. Angry drivers looking for loopholes could ultimately work against Amazon’s intended interest in boosting safety, according to KnowBe4’s James McQuiggan.
“Humans tend to bypass controls to get what they need to complete their tasks, potentially putting their organization at risk,” McQuiggan told Threatpost. “While they work most of the time, the automation tools can fail, leading to more significant issues. Unless corrected, they can result in employee dissatisfaction and potentially the need to leave or worse be disgruntled and will work to find ways around the situation.”