Thoughts on Treating the Root of End-User Risk


By Kurt Wescoe, Chief Architect, Wombat Security Technologies

In July, I had the opportunity to spend time with many other members of the infosec community at Black Hat USA 2017. Though I have worked in the cybersecurity space for the past ten years, this was my first time attending Black Hat. I was impressed with the breadth of topics covered and — none too surprisingly — I found a lot of interesting talks in the “Human Factors” track. It was refreshing to see how broadly this community is looking at security.

One of the points that most resonated with me during the show was one I heard during the keynote by Alex Stamos, Facebook’s Chief Security Officer. Stamos offered a number of great insights, but the one that stuck with me was the statement that we too often focus on fixing a specific issue or bug, and fail to think about the root cause and how we can address that. Frankly, this is sage advice for life as well as security. That’s not to say every problem presented to us should be regarded in a philosophical or “meta” manner; however, if you see similar things happening over and over, it’s worthwhile to take a step back and attempt to look at the situation with fresh eyes. Doing so can help reveal fundamental flaws that are causing repeated issues.

I’ve spent a lot of my time thinking about non-technical end users, and their impact on security. Many of my peers have been doing the same, and asking questions: Are we doomed? Are users unteachable? If not, why do we keep seeing the same mistakes made? I certainly do not believe we are doomed; further, in line with what Stamos discussed in his keynote, I think that, historically, we’ve not looked at and addressed the whole problem.

Infosec professionals tend to look at technology solutions as panaceas; I know many of my peers believe that, eventually, technical advancements will allow us to effectively automate problem solving so that human error can’t be introduced. While automation is improving efficiency and effectiveness in certain cases, all technical solutions are essentially augmentations of human processes; technology enhances, not replaces, the human component of security. Email filters are a prime example; these tools automatically stop a large percentage of malicious and junk messages from getting to our inboxes, preventing us from having to weed out all the bad eggs ourselves. But good as those tools are, they still miss phishing attacks. And because email is essential to daily business, users — including us — make decisions about messages every day; we couldn’t do our jobs if we didn’t.
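
To make that gap concrete, here is a deliberately naive sketch of rule-based message filtering. The sender list, phrases, and decisions below are invented for illustration and don’t reflect how any particular product works; the point is simply that whatever the rules don’t catch still lands in front of a person who has to make the call.

```python
# Toy illustration of why filtering augments rather than replaces human judgment.
# All rules, senders, and phrases here are invented for this sketch.

KNOWN_BAD_SENDERS = {"payroll-update@examp1e.com"}    # hypothetical blocklist entry
SUSPICIOUS_PHRASES = ("verify your password", "urgent wire transfer")

def filter_message(sender: str, subject: str, body: str) -> str:
    """Return 'block', 'quarantine', or 'deliver' for a single message."""
    if sender.lower() in KNOWN_BAD_SENDERS:
        return "block"                      # known-bad: automation handles it
    text = f"{subject} {body}".lower()
    if any(phrase in text for phrase in SUSPICIOUS_PHRASES):
        return "quarantine"                 # suspicious: hold for review
    return "deliver"                        # everything else reaches a human

# A well-crafted phish that avoids the listed signals sails straight through:
print(filter_message("it-desk@example.com", "Document shared with you",
                     "Please review the attached invoice."))   # -> deliver
```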

Still, when mistakes like credential compromise or the installation of malware, a virus, or ransomware happen, the infosec response typically targets only the technical side of the equation. We add a signature for the binary to our block list. We reimage the computer, restore a backup, and send the offending user on his or her way. If it was a big enough or bad enough issue, there might be an organization-wide response, including a corporate communication and/or awareness exercise. Yes, all of these things are helpful and necessary in the wake of a successful phishing attack — but in focusing on the effect, we’re missing the cause of the problem.

Whether we want to admit it or not, users are consistently showing us the root of the problem: there are things about cybersecurity they don’t understand — or that they don’t understand the consequences of. And this lack of knowledge is leading them to make poor decisions. I don’t feel that, on the whole, the infosec community is doing enough to remediate that aspect of the issue.

In my opinion, we need to take a more holistic view of incident response. Given that humans are part of the process, it’s to our detriment to simply remediate devices and look to technology to solve our problems; we also have to work to address the knowledge gaps that are putting us in hot water. Over the last decade or so, we’ve seen the proliferation of APIs, with almost every piece of software we use providing a programmatic way to connect it to something else. As security professionals, we should be asking — even pushing — vendors to help bridge the gap between systems and enable us to address our challenges more completely.
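
As a rough illustration of the kind of bridge I have in mind, the sketch below imagines an incident-response workflow calling a training platform’s API after a phishing-related compromise is cleaned up, so the user’s knowledge gap is addressed alongside the device. The endpoint, fields, and module name are all hypothetical; a real integration would follow whatever API a given vendor actually exposes.

```python
# Hypothetical sketch: after remediating a device, ask an awareness platform to
# assign targeted training to the affected user. The URL and payload fields are
# invented for illustration only.

import json
import urllib.request

TRAINING_API = "https://awareness.example.com/api/v1/assignments"  # hypothetical endpoint

def assign_followup_training(user_email: str, incident_type: str, api_token: str) -> None:
    """Request that the (hypothetical) awareness platform assign targeted training."""
    payload = json.dumps({
        "user": user_email,
        "reason": incident_type,            # e.g., "credential_phish"
        "module": "recognizing-phishing",   # invented module identifier
    }).encode("utf-8")
    request = urllib.request.Request(
        TRAINING_API,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a real integration would check status and retry on failure

# Called from the incident-response workflow once the device is remediated:
# assign_followup_training("jane.doe@example.com", "credential_phish", token)
```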

Ultimately, I left Black Hat feeling encouraged. Yes, there were many exploits discussed, and plenty of scary things to keep us up at night. And I’m certain we all have security challenges we’d like to do a better job with. That said, I think we all need to be working some time into our schedules to periodically think about the bigger picture at play in cybersecurity. I know it’s difficult; we all face pressures to minimize downtime and get compromised devices and systems back up and running as quickly as possible following end-user errors. But while we absolutely need to fix problem instances, we also need to think about root causes, particularly in the case of problems (like successful phishing scams) that present themselves with regularity.

That doesn’t mean we should set technology aside entirely when it comes to solving the root of our users’ issues. Just as we’ve used technology to enhance and advance other processes, I believe we can use it to do the same for end users’ recognition of and response to cybersecurity threats. It’s just that, too often, we focus on what’s directly in front of us and don’t consider the fundamentals. Fortunately, the technology to share information between systems is already there; we just need to identify when and where it makes sense to do so.