The “up side” of social networks like Facebook, Twitter and G+ is well known. But the down side of these networks, for both users and the organizations that employ them, is only now becoming clear. Worms, malware and spam are just the beginning of the security problems engendered by the social net. In this exclusive interview, conducted via e-mail, Threatpost editor Paul Roberts asked Joe Gottlieb, CEO of security event management firm Sensage, about the many subtle ways that social networks are eroding organizations’ online defenses.
Threatpost: You’ve spoken about the dangerous phenomenon of social networks “automating trust.” Please explain.
Joe Gottlieb: Social fabrics boil relationships down to simple transactions. By automation, I mean that simply by “liking” something or “friending” someone, you create automated associations that expose you to both good and bad social interactions. The notion of “automated trust” assumes that by taking those actions, you are prepared for all of the consequences. When it comes to their trust levels and engagements, users of social networks range from prudent to promiscuous, and every point in between. What’s worse is that we have come to trust that messages and interactions in these settings reach us because they are relevant in some way (sent by a friend, or because we liked something). That trust makes it even easier for an attack to occur – our defenses are down!
Threatpost: We recently heard about a study in which researchers created “SocialBots” – fake social networking profiles that were still able to assemble very healthy networks of real human beings. What practical steps could Facebook or other social networks take to prevent phony profiles from being created?
Joe Gottlieb: Social media vendors have enjoyed the ability to serve very indulgent communities, and only recently have they had to concern themselves with increased controls and security. It will be critical for these proprietors to take a continuous-improvement approach to their security practices, to take responsibility for educating users on the increasingly granular controls that are available, and then to push users toward safe techniques in their communities.
Threatpost: It seems as if Facebook looks at user feedback on profiles to spot suspicious activity. Is that too trusting?
Joe Gottlieb: This is a great question – worthy of a live conversation. First, do we want to assume that SocialBots care about durable identities? Or do they act long before end-user commentary triggers an alert? In fact, we have seen that these bots are very transient, so “reputation” won’t really stop them: they will act and shut down before user feedback catches up. It will be up to vendors to determine how much weight to place on “unlikes,” “unfriending,” and the like. Automating solely on that feedback is dangerous, and it tests end-user motivation…which brings us to your next question.
Threatpost: Is there any feasible way to police this type of activity, since it’s really relying on human nature to spread rather than on some platform failure?
Joe Gottlieb: Relying on end-user motivation alone is questionable. What’s required is a combination of methods – activity monitoring, historical patterns and so on. This is the approach we follow in security event management: it is never one technique alone that detects suspicious events. And you can never assume that attacks can be fully stopped – it’s about creating sustainable and repeatable processes for discovering and eradicating risk.
Threatpost: What are some steps that, say, Facebook users should take to protect their information?
Joe Gottlieb: Facebook has done a great job of making granular controls available. I am speculating that the average participant does not fully understand or leverage them. So start there – get educated about your online presence and what trust level you want to exhibit. Understand that, outside of the control sphere you create, everything else you share is available to the public.
Use the same prudence in online environments that you have had to learn to use in your inbox – knowing not to click on an email that appears to contain a bogus link – because now your social media communities can be an attack vector, too. Be careful about who you bring into your network, and don’t assume that your wall can’t be leveraged for phishing attacks.
Threatpost: Dan Geer, the CISO at In-Q-Tel, recently wondered whether having humans in the loop is a failsafe or a liability and, alternately, whether fully automated security is to be desired or to be feared. Your thoughts?
Joe Gottlieb: We all enjoy the simplicity of knowing that associations with people we trust drive other desirable interactions. However, we should not be so naïve as to believe that someone won’t capitalize on our interests if given the opportunity. Humans who either create no boundaries around their information or take unnecessary risks online will fall prey to those cyber-capitalists. In those cases, naïve or cocky behavior leads to a level of exposure that makes headlines.
“Trust” can’t be fully automated, and neither should security be. Just as you put controls around who you consider part of your trusted network, you should add a layer of human intuition and security to everything you do. If we rely solely on the social networking engines to secure our digital lives, we will lose the ability to spot scams – and that is just lazy behavior on our part. I liken it to old email phishing scams: we have become educated about what to look for and have built up a sense for emails that “just don’t seem right.” We will need to employ similar senses for social media attacks.
At the same time, automating the “computation of trust context” as best we can could yield better guidance for decisions that must remain human and manual. In the security event monitoring world, automation helps us run statistical filters on data sets far too vast to review manually. In the social networking world, we will most likely see the more security-conscious fabrics produce prompts that assemble what can be known about a looming opt-in decision…the current example is how your smartphone asks whether you want to share location information with the app you just activated.
Future examples might evolve to produce more meaningful context, with secondary considerations like: “App xyz is requesting that you share location information with a website whose security certificate does not match that on file with app xyz. Moreover, you are presently occupying a personal residence, so sharing location information may be less valuable or less appropriate at this time.”
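To make that concrete, here is a minimal sketch in Python of how a platform might assemble such a trust-context prompt. The data fields, the certificate comparison and the location-sensitivity rule are all hypothetical illustrations of the secondary considerations Gottlieb describes, not any vendor’s actual API.

```python
# Hypothetical sketch: assembling a trust-context prompt before an opt-in
# decision. All names, fields and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ShareRequest:
    app_name: str                 # app asking to share data
    destination_host: str         # where the data would be sent
    cert_fingerprint: str         # certificate presented by the destination
    expected_fingerprint: str     # certificate on file for this app
    location_is_private: bool     # e.g. the user appears to be at a residence

def trust_context_prompt(req: ShareRequest) -> str:
    """Build a human-readable prompt summarizing what can be known
    about a looming opt-in decision."""
    warnings = []
    if req.cert_fingerprint != req.expected_fingerprint:
        warnings.append(
            f"{req.app_name} is requesting that you share location data with "
            f"{req.destination_host}, whose security certificate does not "
            f"match the one on file."
        )
    if req.location_is_private:
        warnings.append(
            "You appear to be at a personal residence, so sharing location "
            "may be less appropriate right now."
        )
    if not warnings:
        return f"Allow {req.app_name} to share your location?"
    return "Caution: " + " ".join(warnings) + " Share anyway?"

# Example: a mismatched certificate plus a private location
req = ShareRequest("app xyz", "example.com",
                   cert_fingerprint="ab:cd", expected_fingerprint="12:34",
                   location_is_private=True)
print(trust_context_prompt(req))
```

The design point is the one Gottlieb makes: the automation gathers and summarizes the context, but the final opt-in decision stays with the human.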
Threatpost: What can corporations do about the human and human-plus-Facebook problem?
Joe Gottlieb: Start with education. A smart online community member will save you hours of IT support time and reduce risk overall. Employees who understand the pitfalls of open sharing, random responding and the like will exhibit better behavior, whether on the clock or not. Next, put policies in place around acceptable use – how, when and where employees can interact with social media. And don’t assume anything is understood: a launch date may seem like a wonderful thing to share with friends, but it could be devastating competitively.
Threatpost: You said in a previous response that organizations need to create “sustainable and repeatable processes for discovering and eradicating risk.” What does that mean in practical terms?
Joe Gottlieb: Security teams need to build processes that integrate all security events – not just network gear or endpoint traffic. Build a system that looks for trends – for example, the number of times each marketing employee connects to Facebook per day. Use that trend data to set a threshold you can monitor, then put an alert in place that triggers a warning when someone exceeds it. The same goes for downloads: if your sales team downloads, on average, 200GB of Internet files per day, look for spikes of two or three times that amount. And for suspicious URLs, put filters in place for activity that leads to suspicious websites. Then make sure the system you are using can correlate all of those activities, because individually they may appear innocent enough, but three or four suspicious metrics combined can point you to a possible attack.
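As a rough illustration of that threshold-and-correlation approach, here is a minimal sketch in Python. The baselines, multipliers and three-signal alert rule are assumptions chosen for the example, loosely based on the figures Gottlieb mentions, not Sensage’s actual product logic.

```python
# Illustrative sketch of threshold-plus-correlation monitoring.
# Baselines, multipliers and the alert rule are assumptions for the
# example; a real deployment would tune these empirically.

from statistics import mean

def exceeds_baseline(history, today, multiplier=2.0):
    """Flag a metric when today's value reaches the given multiple
    of its historical mean."""
    return today >= multiplier * mean(history)

def correlate(signals, min_hits=3):
    """Individually innocent signals become interesting in combination:
    alert only when several fire for the same user."""
    hits = [name for name, fired in signals.items() if fired]
    return len(hits) >= min_hits, hits

# Per-user daily metrics: Facebook connections, GB downloaded,
# and hits on URL filters for suspicious sites.
fb_history = [4, 6, 5, 7, 5]       # connections per day over the past week
dl_history = [180, 210, 190, 200]  # GB downloaded per day

signals = {
    "facebook_spike": exceeds_baseline(fb_history, today=15),
    "download_spike": exceeds_baseline(dl_history, today=600, multiplier=3.0),
    "suspicious_urls": True,  # URL filter matched a flagged site today
}

alert, reasons = correlate(signals)
if alert:
    print("Possible attack – correlated signals:", ", ".join(reasons))
```

Each individual check is noisy on its own; it is the correlation step that turns innocent-looking metrics into a meaningful alert, which is the crux of Gottlieb’s point.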