InfoSec Insider

Debunking AI’s Impact on the Cybersecurity Skills Gap

There is no argument artificial intelligence will have an impact on the cybersecurity skills gap. The question is how.

Artificial intelligence is the latest buzzword to take hold of the cybersecurity industry. It is being touted, among other things, as the ultimate solution to the cybersecurity skills gap. But just how accurate is this belief?

Will AI be the cure to all of our cybersecurity ailments, as human security analysts are replaced by robots powered by AI? Or will it make the skills gap even worse by changing the landscape? Only time will tell, but I feel the answer lies somewhere in the middle.

Will AI Solve All Our Cybersecurity Problems?

Enterprise IT security departments are facing a serious skills shortage. Not only are they struggling to find the right talent, but there is hardly any talent to choose from. This shortage comes at a time when security teams are also wrestling with a big data problem: there is simply too much data for any enterprise IT team to manage, let alone analyze for the anomalies that could signal a looming threat. Even if an enterprise IT team had enough manpower to manage the data at hand, there is no way for humans to analyze that much data in real time.

AI is being lauded as the long-awaited fix for these cybersecurity woes, but is that a realistic proposition? When people talk about cybersecurity’s future and the promise of AI, it conjures images of security operations centers run autonomously by robots that respond to alerts instantaneously, ingesting contextual insights and deciding on the best response within microseconds. But before we get carried away with the idea of robots taking over the world, we need to look a little closer.

I argue that our greatest danger with AI is not – as Hollywood would have us believe – robots eventually causing the downfall of humanity by taking over all available jobs, but rather our failure to fully understand both AI’s benefits and its limitations. AI will never fully replace humans. In fact, it may actually make the skills gap even worse for a time, as organizations struggle to find employees who are adept at managing these new technologies. For IT teams to successfully implement AI technologies, they will need a new category of experts to train the AI, run it, watch over its outputs and analyze the results. This will require organizations either to hire people already trained to do so or to have existing team members learn even more cybersecurity-related skills.

In this scenario, AI is not actually replacing humans but rather giving them opportunities to expand their skill sets with new tools, while it does the heavy lifting of tedious, repetitive manual processes. Organizations that accept this opportunity to help employees grow their cybersecurity skills while tempering their expectations for a fully automated workforce will make giant strides in improving their cybersecurity operations with a combination of the two.

The Dawn of AI in Identity

From an identity governance standpoint, AI has great potential to alleviate the skills gap many organizations are currently feeling, rather than making it larger. Instead of replacing humans, AI can help identity teams do their jobs better, especially when it comes to the immense volume of identity data they must sort through to find anomalies and exposure points. Business applications and data usage are growing at an exponential rate, while the volume of identities is rapidly increasing both on the human side – from employees, contractors and business partners – and the non-human side – from bots and robotic process automation. This creates an enormous amount of activity from an ever-expanding list of users that generates a tremendous amount of data.

When it comes to identity governance, enterprises want to do two things: manage risk and drive efficiency. With an identity governance program that incorporates AI and machine learning, IT teams can rely on technology to more swiftly identify ways to reduce risk and improve efficiency. In this scenario, AI and machine learning will not replace expertise, but rather these algorithms will be used as a force-multiplier for security professionals who all need to efficiently and smartly sort through an increasing amount of information to do their jobs effectively. It’s a powerful combination, and it’s a win-win scenario for organizations and their employees.
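The article does not describe a specific algorithm, but the idea of machine learning sorting through identity activity to surface anomalies can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the identity names, the event counts, and the choice of a robust modified z-score (based on the median absolute deviation) as the outlier test. Real identity-governance products use far richer signals; this only shows the force-multiplier pattern of letting code pre-filter the data a human analyst then reviews.

```python
from statistics import median

def flag_anomalous_identities(access_counts, threshold=3.5):
    """Flag identities whose access volume deviates sharply from the peer
    baseline, using the modified z-score (median absolute deviation), which
    is robust to the very outliers we are trying to find."""
    counts = list(access_counts.values())
    center = median(counts)
    mad = median(abs(n - center) for n in counts)
    if mad == 0:  # all identities behave identically; nothing stands out
        return []
    return [identity for identity, n in access_counts.items()
            if 0.6745 * abs(n - center) / mad > threshold]

# Hypothetical window of access events: one service account is far noisier
# than its human peers.
events = {"alice": 40, "bob": 35, "carol": 42, "svc-bot-17": 900}
print(flag_anomalous_identities(events))  # → ['svc-bot-17']
```

The point is the division of labor: the code does the tedious sifting across thousands of identities, and the analyst spends their time deciding whether `svc-bot-17` is legitimate automation or an exposure point.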

AI + Cybersecurity: Time Will Tell

AI is still in the early stages in cybersecurity and still needs to evolve. But now is the time to consider if and where artificial intelligence fits into cybersecurity strategy for businesses today. In order to use AI to its full potential, we must accept what AI is capable of and what its limitations are.

While AI may be great for processing large amounts of data or automating repetitive manual tasks, it will never be able to replace a security analyst’s insight into the organization behind those activities. While AI may one day help bridge the cybersecurity skills gap, employees can rest assured that it will not replace human expertise. I believe it will elevate what security analysts, identity management professionals and incident responders are capable of, helping them work smarter, making their jobs less mundane and opening the door to an even more exciting career outlook.

(A marketing veteran with more than 20 years of experience, Juliette Rizkallah brings a wealth of expertise and pragmatism to SailPoint in her role as chief marketing officer. No stranger to the world of enterprise security, Juliette leads the company’s worldwide marketing efforts, and is responsible for articulating the company vision, product solutions, technology innovations and business purpose to customers, partners and media around the globe.)


Discussion

  • Philip on

    Cyber security made complicated. One wonders why we seek to complicate. The biggest issue is not the control of data; it is the control of access to systems, servers and data. Twenty, maybe even seventy, years ago we understood that identification and control over access were the key to security. Unfortunately, frictionless and convenient have always trumped secure. Today we need to add a tiny bit of friction to remind everyone of the value of security. Multi-factor authentication and other layered approaches are now available. Why complicate the solution? Simply verify the identity properly at the first point of contact and implement FIDO.
  • Richard Steven Hack on

    Every couple of decades there is an "AI hype cycle" in which companies with AI expertise push the notion that AI is going to solve everyone's problems in one industry or another. It never happens. And it won't until the primary issue in AI - conceptual processing, i.e., the ability to really understand and process concepts - is solved. Which as far as I know is not yet the case. Pure processing power driving inadequate algorithms does not solve this issue.
  • Nils on

    AI could solve all security problems at first, but eventually, in my opinion, security would be in even more danger, since hackers could create even scarier AI of their own.

