Threatpost Op-Ed is a regular feature where experts contribute essays and commentary on what’s happening in security and privacy. Today’s contributor is Katherine Carpenter.
The imaginary world in which an artificial intelligence can kill a person by adjusting an insulin pump to deliver a deadly dose may not be here yet, but we do live in a world where people can hack an insulin pump to adjust the insulin to a deadly dose, or use a heart-rate monitor to send life-threatening shocks to a pacemaker.
Medical devices are critical for the lives of many patients. They save lives, or prolong or ease the lives of many people. The safety and security of medical devices have been discussed for many years, although recent revelations of their vulnerabilities brought concerns about their safety into the mainstream. This fall, two companies experienced very different situations because of alleged security issues with their medical devices. One company had a negative experience, while the other had a fairly positive one.
When Muddy Waters announced it was shorting St. Jude Medical stock based on research from MedSec Holdings, a security research firm, which found vulnerabilities in St. Jude pacemakers, the jarring news created a controversial ripple in the information security community, and underscored concerns of undoing years of progress in vulnerability reporting.
About a month later, Rapid7, a security research organization, published a blog post about a security vulnerability in Animas's OneTouch Ping insulin pump. The research coincided with a letter from Animas to its customers. The insulin pump announcement included instructions to disconnect the pump from Wi-Fi access, so that the vulnerability could not be exploited by a bad actor. It also included a thoughtful message to the diabetic community from the security researcher, himself diabetic, who discovered the vulnerability.
Controversy is Fashionable in the InfoSec Community
The security community thrives on controversy. One of the best ways to make a name for yourself in the community is to act in a publicly outrageous and controversial way. This could mean an airline passenger hacking their plane while in flight, disclosing insulin pump vulnerabilities at a renowned hacker conference, or collaborating with a short-selling firm to announce the life-threatening vulnerabilities in a pacemaker. This behavior screams for attention, and the negative attention initially received often becomes positive attention for the researcher over time.
Controversy damages the relationships that researchers and others build to create a constructive and collaborative vulnerability disclosure process. Disruption may not be a bad thing if it successfully educates and protects the public while spurring a company into action. However, the benefits of disruption can disappear if a company’s response is a lawsuit and denial of the problem. In an ideal world, there would be more cooperation, less public shaming, less litigation, and more work fixing serious vulnerabilities.
What Worked, What Didn’t, And Why?
The medical device situations above, as case studies, illustrate how the collaboration between Animas and Rapid7 was far more constructive, and less damaging to patients, the company, and the security researchers, than the public disclosure about St. Jude or its response to MedSec and Muddy Waters.
What worked: Animas had a vulnerability disclosure program and it was responsive to the researchers. Animas and Rapid7 worked together for approximately six months creating a plan to remediate the security issue and alert their customers before publicly disclosing.
What didn’t work: St. Jude also had a vulnerability disclosure program. However, it was ineffective because the company did not have a good reputation for working with security researchers, and the MedSec researchers found that security problems reported in 2013 had not been addressed. For that reason, MedSec did not disclose the vulnerabilities to St. Jude before Muddy Waters’ announcement.
The communication between MedSec’s researchers and St. Jude was adversarial from the outset. It began with negative publicity from the Muddy Waters announcement that it was shorting St. Jude stock. St. Jude’s response denied any possibility of device insecurity and stated that the company was compliant with international security standards (as though compliance meant its devices and network were secure). St. Jude threatened a lawsuit and followed through on the threat; the suit is making its way through the court system.
A better way for St. Jude to have handled this controversy would have been to respond to the researchers with concern and curiosity. St. Jude might have said, “We have a security problem? We care about the security of our devices; will you work with us to help us fix it?” To give St. Jude credit, it did say it “would have preferred the opportunity to review a detailed account of the information.” Instead of working with MedSec researchers, however, St. Jude sued them. The beginning was ugly and public with Muddy Waters’ disclosure, but negativity of this kind often leads all parties into a downward spiral.
The ethicist in me recognizes the value in the MedSec/Muddy Waters disclosure because I care about patient and customer safety. The bridge builder in me realizes that flash-in-the-pan publicity is not healthy for building positive relationships between vendors and researchers. Perhaps we can aim for a culture that combines the best of both. The goal for the future is figuring out how to create a beneficial and constructive relationship with medical device and embedded device companies.
A Path For The Future
It is important to build and maintain trust between security researchers and organizations that build products, particularly in the medical device sector and other sectors that directly impact human life. Security researchers need to trust that when they disclose vulnerabilities to organizations, those organizations will do the “right thing” and fix life-damaging security vulnerabilities. Responding to researchers and remediating vulnerabilities in a timely manner will help to build that trust.
It is important to disclose vulnerabilities in embedded systems to people who can (and will) fix them. It is also important to convince the people who can do something about the security problem to listen and fix the problem, rather than deny its existence, or be aggressive toward anyone who tells them something they didn’t want to hear.
It is time to create an environment where attention goes to the curious and considerate rather than those who misbehave. Thomas Stallkamp said: “The secret is to gang up on the problem, rather than each other.” Notoriety may be one way to build a researcher’s reputation, but our InfoSec ecosystem will prosper through cooperation rather than antagonism.
Katherine Carpenter (JD, MA) is a consultant working in Information Security, Privacy, Health, and Ethics.