Researchers at IOActive have found nearly 50 vulnerabilities in industrial collaborative robots, machines that work side by side with people in manufacturing and other settings. The flaws could be abused to cause physical harm to workers, or even to turn the machines into tools for spying on their surroundings.
The machines can be remotely tampered with to alter the safety configurations that keep them from operating outside a designated boundary, for example. Those with built-in cameras and microphones can also be accessed and used for commercial espionage.
The researchers, Cesar Cerrudo and Lucas Apa, published a paper today that complements their initial research published in February with technical details on the vulnerabilities and proof-of-concept exploits, along with demonstrations. The researchers are also scheduled to do a talk on their findings Thursday at Hack In The Box Singapore.
“These industrial robots we researched are collaborative robots (cobots). They are different than traditional robots where they are in a fixed place doing repetitive work,” Cerrudo said. “These new collaborative robots are smarter and can do a lot of different things. There the threat is different. Once they are hacked, they have a lot of people around them; you’re talking about really powerful robots that can lift a lot of weight. It’s very possible they can end up seriously hurting a person.”
Cerrudo and Apa studied publicly available firmware and software to learn how these machines work: their ecosystem, how they connect to local networks and to other robots, and how they reach their respective vendors, including cloud-based update systems. They found numerous security issues common to software in other applications.
“Most of the [vendors] did not protect against these common problems,” Apa said, adding they found a range of vulnerabilities, such as insecure communication, authentication problems, cryptographic issues and more. “Some of these vulnerabilities were very easy to exploit.”
Cerrudo and Apa looked at robots from vendors such as Rethink Robotics (Baxter/Sawyer) and Universal Robots. Rethink was responsive, patching in February a host of issues, from insecure authentication and insecure transport in protocols to default configurations and use of a known-vulnerable research framework. Universal Robots, on the other hand, has yet to patch the authentication, memory corruption and insecure communication vulnerabilities IOActive privately disclosed in January.
The researchers also point out that integrators are responsible for installing robots on-site and for eliminating safety hazards: conducting risk assessments, configuring safety settings and setting passwords that prevent users from modifying safety measures. Those safety settings include limits on force and power when the robot clamps down on something, as well as momentum limits that reduce speeds in the event of a collision. Tool-orientation limiting is also used to reduce dangers from sharp edges pointed at an operator, and speed limits ensure that robot arms, for example, operate at low speeds.
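The safety limits described above amount to a small set of configurable thresholds that the controller checks against the robot's current state. The sketch below is purely illustrative; the class, field names and values are assumptions, not any vendor's actual configuration format. It shows why an attacker who can rewrite these values remotely effectively disables the protection:

```python
from dataclasses import dataclass

# Hypothetical model of integrator-configured safety limits.
# Field names and units are illustrative assumptions only.
@dataclass
class SafetyLimits:
    max_force_n: float          # clamping-force limit, newtons
    max_power_w: float          # power limit, watts
    max_speed_mm_s: float       # tool-speed limit, mm/s
    max_momentum_kg_m_s: float  # momentum limit for collision energy

    def violations(self, force_n, power_w, speed_mm_s, momentum_kg_m_s):
        """Return the names of any limits the current state exceeds."""
        checks = {
            "force": force_n > self.max_force_n,
            "power": power_w > self.max_power_w,
            "speed": speed_mm_s > self.max_speed_mm_s,
            "momentum": momentum_kg_m_s > self.max_momentum_kg_m_s,
        }
        return [name for name, exceeded in checks.items() if exceeded]

# Illustrative values, not real vendor defaults.
limits = SafetyLimits(max_force_n=150.0, max_power_w=80.0,
                      max_speed_mm_s=250.0, max_momentum_kg_m_s=25.0)

# A state exceeding the force and speed thresholds is flagged...
print(limits.violations(force_n=200.0, power_w=60.0,
                        speed_mm_s=300.0, momentum_kg_m_s=10.0))
# ...but an attacker who can overwrite limits.max_force_n etc. over an
# insecure channel silences the check without touching the robot itself.
```

The design point is that the limits are just data: if the channel used to set them lacks authentication, as the researchers found on some robots, the safety layer can be reconfigured away.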
Cerrudo said he hopes this research will spur an uptick in security as these machines become more mainstream in industrial settings. For now, security is not a priority, a refrain similar to what is happening in IoT.
“What we saw that is very common in the robotics community is that there is a lot of research and shared tools and open-source frameworks. These open-source frameworks are really insecure; they are for research and development and they don’t have security, and that is well known,” Cerrudo said. Once these research projects are funded and ready to go commercial, he said, they are generally moved directly into production without a security audit.
“They end up with the same code base that’s very insecure because it was a prototype that was done for research with open-source tools and it doesn’t have any security,” Cerrudo said. “Then they just start making the commercial version and start to sell it. They don’t really understand that security is important. They didn’t even know how to handle the security reports we sent to them. Most of the vendors are very immature in security. They don’t understand the basic concepts or have a procedure in place to provide updates.”
In all, the researchers contacted six principal vendors in this space, and four replied. Some said they would consider fixing the vulnerabilities, while others, such as SoftBank Robotics, said they could not, likely because of a compatibility issue or design problem. Others, such as UBTECH Robotics of China, simply thanked the researchers for their notification. UBTECH later sent Threatpost a statement from North America general manager John Rhee:
“UBTECH is committed to maintaining the highest security standards in all of its products. As a result the company has conducted a full investigation into the claims made in the IOActive report regarding the Alpha 2 robot. The Alpha 2 robot was designed to be on an open-sourced platform where developers are encouraged to program their robots with code. UBTECH has fully addressed any concerns raised by IOActive that do not limit our developers from programming their Alpha 2.”
“It’s important to highlight that while we don’t see robots everywhere now, they will be everywhere in the near future,” Cerrudo said. “Right now, they are very insecure. If we don’t do anything about it and improve the security, then it will be a complete mess. They can end up doing really nasty things. The same problems you are seeing right now with IoT that are causing losses and being hacked every day will be 10 times worse with robots. They can move around, grab things, damage property, have cameras and microphones, so the threat is a lot bigger.”
This article was updated Aug. 22 with a statement from UBTECH.