Cloud-based storage and infrastructure provide myriad benefits for organizations, such as avoiding the cost of expensive hardware and getting quick access to infrastructure as needed. Companies can use cloud services for minutes or for years, depending on their needs. There is a darker side to this picture, however: Cybercriminals can take advantage of the cloud if the infrastructure is not set up correctly and secured.
According to the Cloud Adoption & Risk Report released by McAfee earlier this year, there has been a 27.7 percent increase in cloud-related security incidents over the past year. With 65 percent of organizations using some form of infrastructure-as-a-service (IaaS), organizations need to be aware of the risks that cloud-based options bring, and ensure that security is a top priority when deploying them.
Many people believe data security is the purview of cloud-provider platforms like Amazon Web Services (AWS) and Microsoft Azure. But AWS and others use a shared-responsibility model: Amazon takes responsibility for securing its infrastructure, while the customer is responsible for configuring their environment securely. This includes ensuring that data is not shared inappropriately, identifying when a system is misused, and enforcing compliance and/or governance policies (e.g., GDPR, PCI DSS, etc.).
As the McAfee report shows, the shared responsibility model for IaaS requires organizations to secure user access, data, applications, operating systems and network traffic. This leaves just the hypervisor, infrastructure, and physical systems up to the provider to secure.
This situation has led to a number of exposures and breaches of sensitive data because of oversights on the part of cloud customers. For instance, in the Capital One data breach in July 2019, the financial giant used AWS as its platform, but a misconfiguration, coupled with an ill-intentioned former Amazon employee, resulted in a massive data breach. About 140,000 Social Security numbers, 80,000 bank account numbers and details from more than 100 million consumer-credit applications were compromised.
Capital One’s cloud-related breach isn’t uncommon. In 2010, Microsoft had a breach in its Business Productivity Online Suite (BPOS). Like the Capital One breach, Microsoft’s stemmed from a configuration issue, in that case within Microsoft’s data centers. And since that incident, the misconfiguration problem has snowballed, with multiple data exposures reported every month from companies of all stripes.
The stakes are high: One misconfigured server is all it takes to leave the door wide open for cybercriminals to steal all kinds of data. There are other dangers, too: Cybercriminals using Magecart malware have been automatically compromising domains and websites with credit-card skimmers by actively scanning for misconfigured Amazon S3 buckets.
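On the defensive side, the same misconfigurations attackers scan for can be audited internally. Below is a minimal sketch, using the boto3 AWS SDK for Python, that flags S3 buckets whose public-access settings or ACL grants deserve a closer look. It assumes boto3 credentials are already configured, and it is an illustration rather than a complete audit.

```python
# Minimal sketch: flag S3 buckets that may allow public access.
# Assumes boto3 credentials are configured; the handling here is illustrative.
import boto3
from botocore.exceptions import ClientError

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]

    # Is the bucket's "Block Public Access" feature fully enabled?
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(block.values())
    except ClientError:
        fully_blocked = False  # no public-access-block configuration at all

    # Does the bucket ACL grant access to all users or all authenticated AWS users?
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_GROUPS]

    if not fully_blocked or public_grants:
        print(f"Review bucket '{name}': block-public-access={fully_blocked}, "
              f"public ACL grants={len(public_grants)}")
```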
Setting Up the Cloud
The biggest problem is that deploying a cloud environment means configuring many pieces: the routing and firewall rules that grant access to the servers being deployed, the servers themselves, and the application-level firewalls and access rules within those servers. With so many components, and with security effectively non-existent in most default configurations, it is easy to see how one or more components can end up deployed in an insecure state.
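To make that concrete, the following sketch (again using boto3, and assuming credentials and a default region are already configured) lists security-group rules that accept inbound traffic from the entire internet, one of the most common risky settings left behind during a rushed deployment.

```python
# Minimal sketch: list security-group rules that accept inbound traffic from anywhere.
# Assumes boto3 credentials and a default region are configured; output is illustrative.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        # A 0.0.0.0/0 range means the rule is open to the entire internet.
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            port = rule.get("FromPort", "all")
            print(f"{sg['GroupId']} ({sg['GroupName']}): port {port} open to the internet")
```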
Even when administrators work through these configurations, some settings (such as access control lists, or ACLs) can be extremely long and complex to manage, which means extensive testing is required to validate each rule. When time is short, insecure settings can persist. According to the Cloud Security Alliance’s report, Top Threats to Cloud Computing: Egregious Eleven, “[a]n absence of effective change control is a common cause of misconfiguration in a cloud environment. Cloud environments and cloud computing methodologies differ from traditional information technology (IT) in ways that make changes more difficult to control.”
This is because, unlike in on-premises deployments, “infrastructure elements that were static in the corporate data center are now abstracted to software in the cloud.”
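One lightweight way to bring some change control back to those software-defined rules is to snapshot them so changes can be reviewed and diffed over time. The sketch below assumes boto3 credentials and uses an illustrative output file name; it simply captures VPC network-ACL entries for that purpose.

```python
# Minimal sketch: snapshot VPC network-ACL entries to a JSON file so rule changes
# can be diffed and reviewed over time, a lightweight form of change control.
# Assumes boto3 credentials are configured; the output file name is illustrative.
import json
import boto3

ec2 = boto3.client("ec2")
snapshot = {}

for acl in ec2.describe_network_acls()["NetworkAcls"]:
    # Sort entries so the snapshot is stable and diff-friendly.
    snapshot[acl["NetworkAclId"]] = sorted(
        acl["Entries"], key=lambda e: (e["Egress"], e["RuleNumber"])
    )

with open("network_acl_snapshot.json", "w") as fh:
    json.dump(snapshot, fh, indent=2, default=str)

print(f"Captured {sum(len(v) for v in snapshot.values())} ACL entries for review")
```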
Securing the Cloud from Stormy Weather
Misconfigurations may be common, but now that 21 percent of files in the cloud contain sensitive information, businesses must improve their data-security game with a multi-point security approach.
Generally speaking, network traffic analytics and user behavioral analysis can be used to spot anomalies that alert IT to misconfigurations, as well as to the exposures those misconfigurations cause.
Since cloud platforms are inherently network-connected deployments, network traffic is a major way to understand how data is moving across these systems. In the case of AWS, Virtual Private Cloud (VPC) flow logs provide a clear picture of how data traverses Amazon’s network to individual systems within AWS.
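For a sense of what that data looks like, here is a minimal sketch that parses a VPC Flow Log record in the default (version 2) space-separated format and pulls out the fields a traffic-analysis tool would care about; the sample record is a made-up placeholder.

```python
# Minimal sketch: parse a VPC Flow Log record in the default (version 2) format.
# The sample record below is a made-up placeholder.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_record(line: str) -> dict:
    """Split a space-separated flow-log record into named fields."""
    return dict(zip(FIELDS, line.split()))

sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
          "49152 443 6 10 8400 1600000000 1600000060 ACCEPT OK")
record = parse_record(sample)
print(f"{record['srcaddr']} -> {record['dstaddr']}:{record['dstport']} ({record['action']})")
```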
But VPC logs don’t provide a complete picture if the system can be accessed from outside the corporate network. Organizations must also ensure that access to cloud systems is restricted to individuals who are authorized to tap into specific data on the corporate network. By requiring individuals to be physically present and authenticated on the network, organizations can track the behavior of everyone on it.
When physical presence isn’t possible, such as with remote employees, organizations should require employees to log on through the corporate VPN or another service that requires proper authentication (single sign-on, token validation, valid corporate user credentials, etc.) before connecting to corporate resources.
Meanwhile, network analysis alerts organizations when employees communicate with cloud systems they don’t regularly connect to, and it allows security teams to spot potentially unauthorized access when a new connection takes place.
For example, it would be unusual for members of human resources or marketing to connect to a cloud system that holds research and development resources, especially if the individual has never connected to the system before. In such instances, security teams can identify misconfigurations, as well as additional problems like stolen employee credentials, rogue employees and malware, based on the network traffic patterns.
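A simple version of that logic is a baseline of which sources normally talk to which systems, with an alert when a never-before-seen pairing appears. The sketch below illustrates the idea with placeholder IP addresses; in practice the baseline and the new flows would come from flow logs or a traffic-analytics platform.

```python
# Minimal sketch: alert on sources connecting to cloud systems they have never
# connected to before. IP addresses are illustrative placeholders; real baselines
# would be built from flow logs or a traffic-analytics platform.
from collections import defaultdict

# Historical baseline: which destinations each source normally talks to.
baseline = defaultdict(set)
for src, dst in [("10.0.1.5", "10.0.9.20"), ("10.0.1.6", "10.0.9.20")]:
    baseline[src].add(dst)

# Newly observed flows (source, destination).
new_flows = [("10.0.1.5", "10.0.9.20"), ("10.0.3.44", "10.0.9.20")]

for src, dst in new_flows:
    if dst not in baseline[src]:
        print(f"Alert: {src} connected to {dst} for the first time; review this access")
        baseline[src].add(dst)
```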
Finding the Data Leak in the Cloud
As mentioned, network traffic can be a foundational resource for finding misconfigurations. While ACLs are crucial to stopping unauthorized connections, network traffic should also be used to verify that the rules are working as intended.
By observing how resources communicate with one another, network and security teams can see when rogue agents are connecting to privileged resources or violating firewall rules. When security protocols are in place, and network traffic confirms that no unauthorized connections are taking place, businesses can be confident that their cloud deployments are functioning as intended.
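One way to perform that verification is to compare accepted flows against the policy the ACLs and firewall rules are supposed to enforce. Here is a minimal sketch of that check, with illustrative networks, ports and flows standing in for an organization’s real rule set.

```python
# Minimal sketch: check accepted flows against the policy the firewall/ACL rules are
# supposed to enforce. Networks, ports and flows are illustrative placeholders.
from ipaddress import ip_address, ip_network

# Intended policy: (source network, destination network, destination port).
allowed = [
    (ip_network("10.0.1.0/24"), ip_network("10.0.9.0/24"), 443),
]

# Observed, accepted flows (source, destination, destination port), e.g. from VPC Flow Logs.
observed = [
    ("10.0.1.5", "10.0.9.20", 443),
    ("10.0.3.44", "10.0.9.20", 22),
]

for src, dst, port in observed:
    permitted = any(
        ip_address(src) in src_net and ip_address(dst) in dst_net and port == allowed_port
        for src_net, dst_net, allowed_port in allowed
    )
    if not permitted:
        print(f"Unexpected accepted flow: {src} -> {dst}:{port} is not in the intended policy")
```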
As the number of connections and the threat landscape grow, businesses must ensure their cloud buckets are properly configured, and that users are not abusing systems or being granted unauthorized access. Adding a few extra layers of security can go a long way in that effort.
Justin Jett is director of Audit and Compliance for Plixer.