Okta Says It Goofed in Handling the Lapsus$ Attack

“We made a mistake,” Okta said, owning up to its responsibility for a security incident that hit one of its service providers and potentially its own customers.

On Friday, Okta – the authentication firm-cum-Lapsus$-victim – admitted that it “made a mistake” in handling the recently revealed Lapsus$ attack.

The mistake: trusting that a service provider had told Okta everything it needed to know about an “unsuccessful” account takeover (ATO) attempt on that provider’s systems, and that the attackers wouldn’t reach their tentacles back to drag in Okta or its customers.

Wrong-o, it turned out: About a week ago, Lapsus$ bragged about having gotten itself “superuser/admin” access to Okta’s internal systems, gleefully posting proof and poking fun at Okta for its denials that the Jan. 20 attack had been successful.

Okta went on to discover that the attack had affected 2.5 percent, or 366, of its customers.

In an FAQ published on Friday, Okta offered a full timeline of the incident, which started on Jan. 20 when the company learned that “a new factor was added to a Sitel customer support engineer’s Okta account.”

What Happened at Sitel

The target of the Jan. 20 attack was Sykes Enterprises, which Sitel acquired in September 2021. Okta has referred to the company as Sitel – a third-party vendor that helps Okta out on the customer-support front – in its updates and FAQ.

The threat actor failed in its attempt to add a new factor – a password – to a Sitel customer support engineer’s Okta account. Okta Security had received an alert that a new factor was added to a Sitel employee’s Okta account from a new location; the target didn’t accept the resulting multifactor authentication (MFA) challenge, which Okta said blocked the intruder’s access to the Okta account.
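For defenders who want to watch for the same signal, here is a minimal sketch that polls an identity provider’s event log for new-factor enrollments. It assumes Okta’s System Log API and the `user.mfa.factor.activate` event type; the endpoint, filter syntax and field names follow Okta’s documented conventions but should be verified against current documentation, and the `OKTA_ORG`/`OKTA_TOKEN` settings are hypothetical placeholders.

```python
# Sketch: poll the Okta System Log for new-MFA-factor enrollments --
# the same class of event that tipped off Okta Security on Jan. 20.
# OKTA_ORG and OKTA_TOKEN are hypothetical placeholders for your org;
# verify field names against Okta's System Log documentation.
import os
import requests

OKTA_ORG = os.environ["OKTA_ORG"]      # e.g. "example.okta.com"
OKTA_TOKEN = os.environ["OKTA_TOKEN"]  # API token with log-read access

resp = requests.get(
    f"https://{OKTA_ORG}/api/v1/logs",
    headers={"Authorization": f"SSWS {OKTA_TOKEN}"},
    params={
        # SCIM-style filter for MFA factor enrollment events
        "filter": 'eventType eq "user.mfa.factor.activate"',
        "since": "2022-01-16T00:00:00Z",
    },
    timeout=30,
)
resp.raise_for_status()

for event in resp.json():
    actor = event.get("actor", {}).get("alternateId", "unknown")
    geo = event.get("client", {}).get("geographicalContext") or {}
    location = f'{geo.get("city")}, {geo.get("country")}'
    # A factor added from an unfamiliar location is exactly the signal
    # worth escalating rather than closing out as a routine failed ATO.
    print(f"{event['published']}  {actor}  new factor from {location}")
```

An enrollment from an unexpected location, as in this case, is the cue to dig deeper rather than simply reset the account and move on.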

Nonetheless, “out of an abundance of caution,” the next day – Jan. 21 – Okta reset the account and notified Sitel. On the same day, Okta Security shared indicators of compromise (IOC) with Sitel, which told Okta that it had retained outside support from “a leading forensic firm.”

According to the full report that Sitel commissioned, the threat actor had access to Sitel’s systems for a five-day window, from Jan. 16-21: dates that back up the screenshots that Lapsus$ posted on March 21.

During the five-day window in which it had access to Sitel’s systems, the attacker’s only action was the attempted password reset.

Timeline of Okta hack. Source: Okta.

How Okta Screwed Up

As far as why Okta didn’t notify customers when it learned of the ATO attack in January, it acknowledged on Friday that “we made a mistake.”

“Sitel is our service provider for which we are ultimately responsible,” it admitted in the Friday FAQ.

You can’t know what you don’t know, though: “In January, we did not know the extent of the Sitel issue – only that we detected and prevented an account takeover attempt and that Sitel had retained a third party forensic firm to investigate,” Okta said. “At that time, we didn’t recognize that there was a risk to Okta and our customers. We should have more actively and forcefully compelled information from Sitel.”

Coulda, woulda, shoulda, it said: “In light of the evidence that we have gathered in the last week, it is clear that we would have made a different decision if we had been in possession of all of the facts that we have today.”

It must be a painful mea culpa: Okta’s share price had dropped nearly 15 percent as of Friday. As the Wall Street Journal reported, that’s a common reaction after major cyberattacks, such as those at SolarWinds, Mimecast and Mandiant, all of which saw shares slide after they reported their own incidents.

The WSJ’s headlines say it all: “Okta Faces Long Road Back,” the business daily predicted on Friday, adding that the “identity-management company has strong market position, but business impact of recent hack won’t be clear for a while.”

Potential Extent of Compromise

In its Friday FAQ, Okta said that, as detailed in its blog, the company has already identified and contacted 366 potentially affected customers. Okta service itself was not breached, it said: “There is no impact to Auth0 or AtSpoke customers, and there is no impact to HIPAA and FedRAMP customers.”

As such, customers don’t have to reset passwords, Okta said: “We are confident in our conclusions that the Okta service has not been breached and there are no corrective actions that need to be taken by our customers.

“We are confident in this conclusion because Sitel (and therefore the threat actor who only had the access that Sitel had) was unable to create or delete users, or download customer databases.”

That lack of access is by design, Okta explained. “In assessing the potential extent of the compromise, it is important to remember that by design, Sitel’s support engineers have limited access. They are unable to create or delete users, or download customer databases. Support engineers are able to facilitate the resetting of passwords and multi-factor authentication factors for users, but are unable to choose those passwords. In other words, an individual with this level of access could repeatedly trigger a password reset for users, but would not be able to log in to the service.”
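To make that design concrete, here is a hypothetical least-privilege permission check – an illustration of the model Okta describes, not Okta’s actual code. The role and permission names are invented: the support role can trigger a password reset, but cannot choose passwords, manage users or export data.

```python
# Illustrative sketch (not Okta's implementation) of the least-privilege
# model described above: a support role may *trigger* a password reset
# but may never set or read a password, manage users, or export data.
from enum import Enum, auto

class Permission(Enum):
    TRIGGER_PASSWORD_RESET = auto()
    SET_PASSWORD = auto()
    CREATE_USER = auto()
    DELETE_USER = auto()
    EXPORT_CUSTOMER_DB = auto()

ROLE_PERMISSIONS = {
    # Support engineers get only the reset trigger; all else is denied.
    "support_engineer": {Permission.TRIGGER_PASSWORD_RESET},
    "org_admin": set(Permission),
}

def authorize(role: str, action: Permission) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

# A compromised support account can spam reset requests...
assert authorize("support_engineer", Permission.TRIGGER_PASSWORD_RESET)
# ...but cannot choose the new password or export customer data,
# so it cannot actually log in to the service as a victim.
assert not authorize("support_engineer", Permission.SET_PASSWORD)
assert not authorize("support_engineer", Permission.EXPORT_CUSTOMER_DB)
```

The point of the pattern is that even a fully hijacked support account yields nuisance-level capability – triggering resets – rather than account control.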

Besides its attack on Okta, the precocious Lapsus$ gang – a group of data extortionists potentially thinned out by London police having collared seven suspected members last week – also posted some of Microsoft’s source code and data about internal projects and systems around the same time as it shared Okta screenshots.

How Much Should We Blame Okta?

Security specialists aren’t jumping to blame Okta for its admitted “mistake.” The thinking: There but for the grace of God go we.

After all, ATO attempts are common. How is an organization supposed to know which ones merit close inspection, and when should it follow up with a deeper dive to make sure an attempt wasn’t successful?

Sounil Yu, chief information security officer at JupiterOne – provider of cyber asset management and governance technology – told Threatpost on Monday that these intrusions (or, rather, attempted intrusions, as the case may be) occur regularly, but the “vast majority” are beaten back before they have a serious impact or lead to further incidents.

“It’s easy in hindsight to understand the true severity of an incident, but hard in the present time,” he said via email.

Chris Morgan, senior cyber threat intelligence analyst at digital risk protection firm Digital Shadows, explained that ATOs are “incredibly common” due to a combination of the effectiveness and availability of brute-force cracking tools and threat actors’ ability to sell stolen accounts on cybercriminal forums.

What Should Trigger a Report?

The question of whether certain incidents are material enough to report “can be more art than science,” Yu said. But the Okta case will probably cause many organizations to reconsider what ratings and thresholds they’re applying to such incidents, he surmised, “so that we are not seen as negligent in meeting our reporting obligations.”

Knowing when to conduct a more robust investigation depends on what facts are uncovered during the incident management process, along with the risk associated with the targeted account, Morgan said via email. “An account with significant privileges should be treated with a higher priority than those that [have] limited functionality,” he advised.

Initial triage of ATO attacks aims to identify key facts about what activity the account has been involved in, to accurately determine the risk and next steps, Morgan said. “This is typically done by checking authentication logs and observing login activity and includes spotting whether the account has attempted to login to additional services, changed any passwords, or downloaded external material,” he continued. “It also includes activity that may have an impact on the overall risk, like whether the account has accessed sensitive data or attempted to establish persistence.”
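As a rough illustration of that triage flow, the sketch below scores an account’s post-compromise events against the indicators Morgan lists. The event schema and action names here are hypothetical; in practice they would map to whatever fields your SIEM or identity provider’s logs actually provide.

```python
# Hedged sketch of the triage Morgan describes: walk an account's
# audit events and flag the risk indicators he lists. Action names
# are hypothetical placeholders for your own log schema.
RISKY_ACTIONS = {
    "login.new_service",      # lateral movement into additional services
    "password.change",        # attacker locking out the real owner
    "data.download",          # downloaded external material
    "sensitive_data.access",  # touched sensitive records
    "persistence.token",      # e.g. new API token or MFA factor added
}

def triage_ato(events: list[dict], account_privileged: bool) -> str:
    hits = sorted({e["action"] for e in events} & RISKY_ACTIONS)
    if not hits:
        return "low: attempt only, no post-compromise activity observed"
    # Privileged accounts escalate straight to the top, per Morgan's advice.
    severity = "critical" if account_privileged else "high"
    return f"{severity}: investigate further ({', '.join(hits)})"

print(triage_ato([{"action": "password.change"}], account_privileged=True))
```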

No ‘God-like Access’ Was Gained

When the Okta breach first came to light, there was concern about a “superuser” app pictured in Lapsus$ screenshots. Okta clarified on Friday that this was no “Super Admin” account, as had been feared initially. Rather, it’s an in-house application – known as SuperUser or SU – used by support staff to handle most queries.

“This does not provide ‘god-like access’ to all its users,” Okta Chief Security Officer David Bradbury explained. “This is an application built with least privilege in mind to ensure that support engineers are granted only the specific access they require to perform their roles.”

Specifically, SuperUser engineers can’t create or delete users or download customer databases.

What SuperUsers can do: “Support engineers do have access to limited data – for example, Jira tickets and lists of users – that were seen in the screenshots,” Bradbury clarified. “Support engineers are also able to facilitate the resetting of passwords and MFA factors for users, but are unable to obtain those passwords.”

The fact that the Sitel account Lapsus$ took over was reportedly provisioned with the principle of least privilege in mind “should have minimized the data and services that Lapsus$ were able to view,” Morgan said, in response to Threatpost asking what Okta did right.

“Okta should also be praised for how quickly they identified and worked to lock down the compromised account,” he added.

However, clearly, that timeliness didn’t extend to the forensic reporting and communication of the incident, as Okta itself has now admitted.
