SAN FRANCISCO – The slogan of this year’s RSA Conference is “Better” – and accordingly, Tuesday morning’s keynotes zeroed in on reversing some of the disturbing sociological trends that have been festering of late.
That includes “information warfare” meant to undermine citizens’ trust in media and the information they consume, the use of social media for influence campaigns, and the emergence of artificial intelligence (AI) and the internet of things (IoT) as both boons and potential menaces to society.
To move into a more hopeful future, the cyber-community must work to rebuild trust – trust in technology and trust in ourselves, according to Niloofar Razi Howe, a cybersecurity strategist and entrepreneur who took the stage during the keynote sessions. “Trust is to the economy what water is to life,” she noted. “We are facing a trust crisis.”
It’s a crisis that could leave deep sociological scars if not addressed, she added.
A Dark Vision
This theme of engendering trust and positivity was engaged when actress Helen Mirren made a surprise appearance to open the conference with a short monologue. She intoned a hopeful message about cyber-practitioners making up a community and being there to support each other, even in times of adversity. She was followed by a flash-mob-like song and dance number featuring the Oakland Interfaith Gospel Choir (this answered any questions the audience may have had about what the announcer meant when he said “no filming during the ‘performance'”). The group did a cover of Howard Jones’ “Things Can Only Get Better” – not putting too fine a point on the overarching theme.
Soon after, however, the vision darkened a bit when Howe and Rohit Ghai, president of RSA, got down to brass tacks by outlining a look into the world of 2049. By the 2040s, they noted, a fifth great age of human advancement would be in full swing. Following the agricultural, industrial, internet and digital revolutions familiar to us as historical sociological phenomena, we would enter a Biodigital Age, they said, where biology and technology have come together.
And indeed, 2049 sounds like an interesting place: There are billions online using tens of millions of connected devices. Traditional currency has been replaced by two main global virtual currencies (although “people are probably still losing money on Bitcoin,” Ghai quipped) – and the world is mainly powered by renewable energy. Oh, and we’re printing organs, there’s a space station on Mars called Elon Muskia, and everyone gets a guaranteed minimum wage, so income disparity is on the wane.
However – and here’s where the darkness comes in – Howe and Ghai pointed out that we didn’t get to the world of 2049 easily; there were some dark times in the 2020s. In describing them, they sketched what they see as our future if trust isn’t rekindled now.
After years of election-season hacking and “deep fake” false news and information perpetrated by nation-states and disseminated via social media, more than half of all Americans have lost faith in democracy by 2025, they posited. Most have lost faith in the independence and veracity of news in a world where “fact, misinformation and opinion is blurred,” Howe said. She added, “Fact-based rational discourse all but disappeared. No one knew if government could or would solve important societal problems.” There’s also a global trade war in full swing by 2025, resulting in the siloing of the internet and the building of various digital “walls” separating the global population.
Ghai added, still envisioning the future, “In the 2020s, the trust crisis emerged to shake the very foundations of our society.”
Potentially adding to this brewing trust crisis is the emergence of new technologies that introduce a new potential attack surface or that can be used for unintended, malicious consequences. Artificial intelligence (AI) has become one of the buzziest of these, and Steve Grobman, senior vice president and CTO at McAfee, took to the keynote stage to discuss AI and whether we can trust it.
AI, he outlined, is the new foundation for cyber-defense, and it will enable us to better detect threats and out-innovate our adversaries, in theory. It also can help address the talent shortage by delegating some tasks for automation, so humans can focus on more critical tasks.
The problem is that “technology doesn’t understand morality,” he said, drawing a parallel between a wing of an airplane that doesn’t know or care whether it lifts a war plane or a medical envoy. In the same vein, “the same encryption algorithm can prevent the theft of 150 exabytes of data every month on the web, or it can enable a ransomware attack,” he said. “Encryption at its core is just math, and you can’t stop someone from doing math.”
When it comes to AI, he added, “like flight, we can’t just focus on potential for helping us; we must understand the limitations and how it will be used against us.”
He noted that public safety organizations in San Francisco are using modern data science to optimize where police focus their patrols, based on crime levels and other metrics. Grobman pointed out that it’s possible to use the exact same data set to create a map that would allow a criminal to commit crime more effectively and minimize the likelihood of arrest – and showed an AI model of just that.
Celeste Fralick, McAfee chief data scientist and one of Forbes’ Top 50 Women in Tech, also took the stage to explain how AI enables the deep fakes that Ghai and Howe referred to.
“AI is suited to create fake content that’s highly believable,” she said, before running a demo that used freely available public comments from Grobman to create an AI model, which was then tasked to create a video with her words coming out of his mouth.
“It’s possible to create massive chaos with this,” she said.
On top of that, Fralick also explained how AI can be used to create automated, targeted messaging to combine “the effectiveness of spear phishing with the scale of traditional phishing,” and further, how it’s possible to attack AI engines themselves. For instance, she demoed how one could make an image classifier think that a picture of a penguin is actually a picture of a frying pan.
“Imagine using this kind of adversarial machine learning on malware classification and other cyber-defense models,” she said.
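The penguin-to-frying-pan trick Fralick demoed is an instance of adversarial machine learning. As a minimal sketch of the idea, here is the fast-gradient-sign approach applied to a toy linear model; the weights, inputs and labels below are entirely hypothetical stand-ins for a real image classifier, not what was shown on stage.

```python
# Toy illustration of an adversarial perturbation (the fast-gradient-sign
# idea) against a linear "classifier". All weights and inputs here are
# hypothetical -- the keynote demo used a real image model, not this sketch.

def classify(weights, x):
    """Return 'penguin' if the weighted sum is positive, else 'frying pan'."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return "penguin" if score > 0 else "frying pan"

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def adversarial(weights, x, eps):
    """Nudge each feature by eps in the direction that lowers the score.
    For a linear model, the gradient of the score w.r.t. x is just the
    weight vector, so the fast-gradient-sign step is exact."""
    return [xi - eps * sign(w) for w, xi in zip(weights, x)]

w = [0.6, -0.2, 0.4]           # hypothetical model weights
x = [1.0, 0.5, 1.0]            # an input the model calls "penguin"
x_adv = adversarial(w, x, eps=2.0)

print(classify(w, x))          # penguin
print(classify(w, x_adv))      # frying pan
```

On a real deep network the same step is taken along the loss gradient for each pixel, and the perturbation can be small enough that a human sees no difference, which is what makes the attack worrying for the malware-classification models Fralick mentioned.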
IoT in the Spotlight
Another arena of weakening trust centers around the increasing number of connected devices that are becoming part of the fabric of critical infrastructure, utilities and manufacturing environments.
Matt Watchinski, vice president of the Global Threat Intelligence Group at Cisco Talos, pointed out during his keynote the numerous predictions that call for an explosion of IoT devices – perhaps 250 billion of them (including sensors and the like) that will be hooked up by 2020.
“Already, there are hundreds of thousands of interconnected devices around us, all with unique and interesting threat profiles,” he said. “They’re no longer running proprietary protocols; they’re running on IP. We’re talking traffic meters, stoplights, all connected to our IP world where previously they were not. The technologies that we’re building in IoT are bleeding into our IT world. Eventually they will bleed into the operational technology (OT) world of critical infrastructure and how we deliver water and power. And we’re going to have to learn the completely new world of OT security.”
Elaborating on that, Liz Centoni, senior vice president and general manager for Cisco IoT, also took the stage to explain that a core security challenge when it comes to OT is understanding how OT teams – and their goals – are different from the more well-known IT environment.
“In the operational world a plant manager is tracking things like OEE – overall equipment effectiveness,” she said. “He’s looking at a safety incident report – because a no-safety-incident day is a very good day.”
OT managers also look at availability stats – how many customers have been impacted in a power outage, for instance, and for how long? “That’s what regulators measure them on,” Centoni noted. “They care about safety, availability and resiliency, not data loss.” She added, “and they don’t think about visibility as a means to security, they look at it as a way to get operational insights.”
Addressing security effectively, she concluded, will require leaning in. “Learn about the OT environment – make new friends. Learn how to ask the right questions so you know what’s important to them before you think about security. Partner up across the carpeted and non-carpeted space,” she added, referring to the office vs. the factory floor. “As defenders of this galaxy, we have to be the bridge between IT and OT.”
Solving the Trust Crisis
All of these challenges – the potential for information wars, a citizenry that eschews fact-based discussion, and new obstacles in AI and IoT to creating secure business and consumer environments – boil down to trust. Ghai and Howe accordingly laid out a three-pronged plan for addressing this brewing “trust crisis” – which stands between us now and the 2049 vision they laid out.
These are: First, understanding that risk and trust coexist; next, encouraging trustworthy digital twinning; and third, creating a chain of trust.
To the first point, even as trust in organizations, political and social institutions plummets, the average person is busy “trusting complete strangers, inviting them into their homes, cars and lives, thanks to platforms forming peer-to-peer trust between individuals,” Howe noted, referring to AirBnB, Uber and the like.
In this peer-to-peer scenario, people understand that it’s not about eliminating risk but understanding, prioritizing and managing it. The same approach should be taken with information of all stripes.
“In 2049, we have machines practicing DevSecOps,” Ghai said, noting the potential for AI to help in managing risk. “In theory, every piece of tech is capable of patching itself. Data has become liquid, and to cope, every piece of technology is instrumented to assess risk and adjust accordingly – it has a ‘Spidey sense,’ like human intuition.”
Also, he added that data should be labeled at the point of creation and tagged on an ongoing basis to track where it goes and who owns it.
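Ghai’s point about labeling data at creation and tagging it as it moves can be sketched as a simple provenance record. Everything below – the field names, the checksum choice, the trail structure – is a hypothetical illustration, not any product’s actual scheme.

```python
# Minimal sketch of "label data at the point of creation, tag it on an
# ongoing basis" -- a hypothetical provenance record, not a real scheme.

import hashlib
import time

def create_record(payload: str, owner: str) -> dict:
    """Label a piece of data at the point of creation."""
    return {
        "payload": payload,
        "owner": owner,
        "created": time.time(),
        "checksum": hashlib.sha256(payload.encode()).hexdigest(),
        "trail": [owner],           # every holder appends to the trail
    }

def transfer(record: dict, new_owner: str) -> dict:
    """Tag the record as ownership changes, preserving where it has been."""
    record = dict(record, owner=new_owner)
    record["trail"] = record["trail"] + [new_owner]
    return record

rec = create_record("quarterly forecast", owner="finance")
rec = transfer(rec, "analytics")
print(rec["trail"])   # ['finance', 'analytics']
```

The checksum lets a later holder verify the payload hasn’t changed since creation, and the trail answers Ghai’s two questions: where the data has gone, and who owns it now.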
The next idea, trustworthy twins, is the idea that man and machine together are more trustworthy than either is individually – which addresses some concerns around AI.
“We suck at passwords and still click on dancing cat videos,” Ghai said. “Meanwhile, AI is a lightning rod because it costs jobs and suffers from any bias in the data that fuels it.” So, each can bring its strengths to the table, using the other as a check.
“A machine can outperform humans in most tasks,” Ghai explained. “Humans are great at creative stuff, and humans are better in knowing what questions to ask and investigating. So picture an ocean of data, humans asking the questions and machines fetching answers.”
The third idea is a chain of trust, which has to do with reputation.
“We will measure reputation to measure trustworthiness,” said Howe. “Think of it as a ledger – you make a deposit when you do the right thing (especially if it’s a difficult right) and a withdrawal when you don’t (especially if it’s an easy wrong).”
She added, “Trust doesn’t require perfection, it’s more about honesty, responsibility and transparency. Report good events, disclose bad ones.” Howe also suggested labeling products with a digital trust score – a “trust quotient,” computed as trust over risk.
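Howe’s ledger metaphor – deposits for doing the right thing, withdrawals for the wrong one, with difficult rights and easy wrongs weighted more heavily – can be made concrete in a few lines. The doubling weights and the quotient formula below are hypothetical choices for illustration; the keynote specified only the metaphor.

```python
# A toy "reputation ledger" following the keynote's metaphor. The 2x
# weights and the trust/risk quotient formula are hypothetical.

class TrustLedger:
    def __init__(self):
        self.balance = 0.0

    def deposit(self, amount: float, difficult: bool = False) -> None:
        """Doing the right thing; a difficult right counts double."""
        self.balance += amount * (2.0 if difficult else 1.0)

    def withdraw(self, amount: float, easy: bool = False) -> None:
        """Doing the wrong thing; an easy wrong costs double."""
        self.balance -= amount * (2.0 if easy else 1.0)

def trust_quotient(trust: float, risk: float) -> float:
    """Trust over assessed risk, as suggested for product labels."""
    return trust / risk if risk else float("inf")

ledger = TrustLedger()
ledger.deposit(10, difficult=True)   # e.g. disclosed a breach promptly
ledger.withdraw(5, easy=True)        # e.g. shipped a default password
print(ledger.balance)                # 10.0
print(trust_quotient(ledger.balance, risk=4.0))  # 2.5
```

The asymmetric weighting captures Howe’s parenthetical: a difficult right earns more than an ordinary one, and an easy wrong costs more than an understandable mistake.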
In the end, Ghai said, he would like to look back at RSA 2019 and say, “that was the year that 40,000 cyber-practitioners had an epiphany – and began to obsess about the trust landscape. That paved the way forward.”