The zero-trust concept is often (and pithily) summarized as “trust no one, verify everything.” No enterprise can stave off the myriad cyberthreats it faces as long as it assumes that any individual element can be trusted as secure. No traffic, whether internal or external, can automatically be deemed safe, so organizations must simply stop trusting anything or anyone.
Importantly, zero trust transforms the way current cybersecurity strategies are orchestrated and executed. While zero trust was first put forward almost a decade ago, interest has recently surged – and a host of new micro-segmentation solutions have been created – to actualize the model and put it into play.
Here are a few considerations to take into account when implementing a zero-trust approach.
Implementing Micro-Segmentation
Implementing micro-segmentation and authentication is a necessary but arduous process. An IT team must carefully map huge numbers of data processes and workloads across every individual, network and device present in the organization, including third-party actors. A single misstep in the configuration can set an organization back a day’s worth of productivity or more.
Meanwhile, even minute changes to authentication processes can negatively impact the user experience and significantly impede productivity. As individuals, devices and processes are added or removed, policies and permissions must be updated and maintained. As a result, zero trust requires ongoing vigilance and management.
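To make that policy-mapping and maintenance burden concrete, here is a minimal Python sketch of a default-deny micro-segmentation table; the workload names, ports and helper functions are illustrative assumptions, not any vendor’s actual configuration.

```python
# Hypothetical micro-segmentation policy table: every allowed flow must be
# listed explicitly; anything not listed is denied by default (zero trust).
ALLOWED_FLOWS = {
    # (source workload, destination workload): allowed destination ports
    ("web-frontend", "order-api"): {443},
    ("order-api", "orders-db"): {5432},
    ("backup-agent", "orders-db"): {5432},
}

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check: a flow is permitted only if explicitly listed."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

def remove_workload(name: str) -> None:
    """When a workload or device is decommissioned, every rule that references
    it must be cleaned up -- the ongoing maintenance described above."""
    stale = [key for key in ALLOWED_FLOWS if name in key]
    for key in stale:
        del ALLOWED_FLOWS[key]

# A single missing entry silently blocks legitimate traffic:
print(is_flow_allowed("web-frontend", "order-api", 443))    # True
print(is_flow_allowed("reporting-svc", "orders-db", 5432))  # False -- never mapped
```

The point of the sketch is the shape of the work, not the code itself: every legitimate flow has to be discovered, written down and kept current as the environment changes.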
Do Zero-Trust Organizations Really Trust No One?
Micro-segmentation and related solutions go a long way toward securing networks and data from everyone and everything. However, gaps remain in truly isolating all traffic within the network and from the outside. Remarkably, web browsing is a primary area of risk not covered by micro-segmentation or other related zero-trust solutions.
Browsing plays a huge role in today’s business environment and, together with malicious email (much of which links to the web), it continues to be a dominant vector through which malware penetrates organizations. Micro-segment as granularly as you like; segmentation cannot prevent browser-based malware and threats, including many ransomware variants, cross-site scripting attacks and drive-by downloads, from gaining a foothold in your network.
Many zero-trust experts suggest whitelisting trusted sites, and denying access to all others, as the answer. However, limiting access to a set of known sites hampers productivity and frustrates employees. It creates obstacles for users and burdensome busywork for IT staff: users must constantly request access and wait, while IT staff must shift their attention from more important tasks to manage, investigate and respond to those requests.
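As a rough illustration of the friction this creates, the hypothetical Python sketch below applies a default-deny URL filter and queues everything unknown for IT review; the domains and the request queue are assumptions made for the example, not a real deployment.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of approved sites; everything else is denied.
WHITELIST = {"intranet.example.com", "salesforce.com", "office.com"}

# Requests that IT staff must triage before users can proceed.
pending_access_requests: list = []

def check_url(url: str, user: str) -> bool:
    """Default-deny: allow only whitelisted domains, queue everything else."""
    host = urlparse(url).hostname or ""
    if host in WHITELIST or any(host.endswith("." + d) for d in WHITELIST):
        return True
    # The user is blocked and must wait for IT to review the request.
    pending_access_requests.append(f"{user} requested {host}")
    return False

print(check_url("https://salesforce.com/reports", "alice"))  # True
print(check_url("https://new-vendor-portal.example", "bob")) # False, queued for IT
print(pending_access_requests)
```

Every `False` in that last branch is a stalled user and another ticket in someone’s queue, which is exactly the productivity drag described above.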
Even if organizations could whitelist every site their users needed, they would still be vulnerable to malware that infiltrates by means of legitimate sites. There is no way to know with 100-percent certainty what is transpiring on the backend of even a whitelisted site, and even the most mainstream sites have been known to serve malicious ads or become infected with malware.
While URL filtering, anti-phishing software, web gateways and other detection- and signature-based solutions can stop most attacks most of the time, they cannot hermetically block all threats from the web. For truly airtight security, no website should be trusted, yet users must still be able to access the sites they need.
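To see why detection alone cannot be airtight, consider this toy Python sketch of a signature-based check; the hash list is a placeholder assumption and a real product is far more sophisticated, but the core limitation, that only previously catalogued payloads can match, remains.

```python
import hashlib

# Toy signature database: hashes of payloads already identified as malicious.
# (Placeholder value only -- a real threat feed holds millions of entries.)
KNOWN_BAD_HASHES = {
    "0" * 64,  # hypothetical entry, for illustration
}

def looks_malicious(payload: bytes) -> bool:
    """Signature check: it can only flag payloads that have been seen before."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# A novel or repacked payload hashes to something no signature matches,
# so detection-based tools alone cannot guarantee a block.
print(looks_malicious(b"previously unseen dropper"))  # False, but not necessarily safe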
Zero-Trust for the Web
The zero-trust model can accommodate the web by trusting no site to interact with vulnerable endpoint browsers and, through them, organizational networks.
One increasingly talked-about way to do this is remote browser isolation (RBI), an approach that operates on the assumption that nothing from the web is to be trusted. Every website, item of content and download is suspect. But to avoid impacting the user experience, all browsing takes place remotely, on a virtual browser.
The virtual browser can be housed in a disposable container located in a DMZ or in the cloud. Users interact naturally with all websites and applications in real time via a safe media stream that is sent from the remote browser to the endpoint browser of their choice, so no active web content touches the user device. When the user is finished browsing, the container and all its contents are destroyed.
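Conceptually, the session lifecycle described above could be modeled as in the following Python sketch; the class, method names and rendering step are purely illustrative assumptions and do not represent any vendor’s actual RBI implementation.

```python
import uuid

class DisposableBrowserContainer:
    """Conceptual stand-in for a remote browser running in an isolated,
    single-use container (in a DMZ or the cloud)."""

    def __init__(self) -> None:
        self.container_id = uuid.uuid4().hex
        self.destroyed = False

    def render_remotely(self, url: str) -> str:
        # The page is fetched and executed only inside the container; what
        # returns to the endpoint is a safe rendering stream, never the
        # site's active content.
        return f"[rendering stream of {url} from container {self.container_id}]"

    def destroy(self) -> None:
        # When the session ends, the container and everything it downloaded
        # or executed are discarded.
        self.destroyed = True

def browse_isolated(url: str) -> str:
    container = DisposableBrowserContainer()
    try:
        return container.render_remotely(url)
    finally:
        container.destroy()  # always torn down, even on error

print(browse_isolated("https://any-untrusted-site.example"))
```

The design choice that matters is disposability: because each session’s container is destroyed, anything malicious the site delivered is destroyed with it.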
Whitelisting isn’t necessary, and users can transparently interact with any sites that they need, without access requests.
Bottom line: The zero-trust objective is to implement granular security policies that let organizations control which communications are and aren’t allowed between the different entry points on the network, and which individuals are privy to each. All devices, networks and IP addresses are micro-segmented, and access to individual components is restricted in accordance with security policies and user authentication. It’s a strong approach, but to be successful, recognize that it’s a big job with some not-so-obvious considerations when it comes to effectively implementing the concept.
David Canellos is president and CEO of Ericom Software.