What Public Health Can Teach Us About How to Give Better Security Advice

A story about the philosophy behind my VPN advice post, and a much deeper elaboration of my follow-up, for security people, public health people, and interested laypeople. Neither post is required reading for this one.

And a disclaimer, as always: I no longer work for Akamai, and I speak here only in my personal capacity. I don’t represent Akamai here in any way.


For most of my time at Akamai, I worked for Michael Stone, who has greatly influenced my thinking about security and its place within the broader problem of safety.  (Even recognizing that security exists within that broader context is something I owe to him.)

One of Michael’s great strengths as a problem-solver, which he has worked consciously to develop, is a large collection of what he calls “frames” or “lenses”—ways of thinking about problems, and model problems to which to draw analogies—drawn both from within technology and outside it.  In collecting these, he recognizes that the first step to answering a question is often to ask (or frame) it in the right way. His frames are “right ways” that have proven useful to others in the past.

Michael has a family background in public health, particularly through his mother’s work, but it was only gradually, over my time working with him, that we came to recognize that those ideas might have value to us.  It was during the Heartbleed crisis, in April 2014, that I remember us first talking seriously about applying frames from public health to computer security.  As that crisis wore on, it became clear just how effective they were at helping us understand and manage the complexity of the problems we faced.

Heartbleed affected some companies a great deal, and some companies not very much at all.  For Akamai, which had tens of thousands of vulnerable servers, and thousands of potentially compromised certificates, Heartbleed was a significant crisis.  Separating servers and certificates into vulnerable populations, considering various potential courses of action as interventions, and putting together triage plans were some of the ways that the frames helped us communicate with each other, our management, and our customers.

Five months later, the Shellshock crisis showed that the frame was productive not just inside the company but outside it as well, applied to an Internet containing populations of servers running vulnerable services. We asked ourselves: what kinds of interventions could Akamai provide to protect our customers while they patched?  Could that have a material effect on the health of the whole Internet?  Would the benefits outweigh the side effects?  Since then, I have increasingly approached all security advice through the frame of public health.

To caveat what follows: I’m not a public health professional, and I have only an interested layperson’s understanding of these concepts. If you are a public health professional, and I have misused terms or misapplied concepts, I would love correction, either privately or in the comments.

The repeal of Federal broadband privacy rules, and the recommendation of VPN services that followed, which I wrote about last week, present a model opportunity to apply the frame of public health to a security advice question.  We have a population—broadband users in the United States.  We have a threat—the sale of their aggregate browsing information to marketing organizations by their ISPs.  And we have a proposed intervention—using a VPN.

By framing the problem this way, I immediately cut out a great deal of complexity.  No longer do I need to consider all possible users of VPNs, or all possible uses to which they might put them.  Users have different priorities, and different uses dictate different priorities. Sometimes these priorities are even at cross purposes! The number of combinations is large, but I only need to consider the one. As a communicator, defining my audience well helps me communicate better with everyone, not least because those to whom I am speaking can easily recognize that fact, and those to whom I am not speaking can know that they are safe to ignore me.

Then we consider the nature of the particular threat facing our population of broadband users in the United States—the sale of their browsing information to marketing organizations and perhaps other, less savory, actors, by their ISPs.  This is a concerning threat, although it must be weighed against the sale of similar information by ad networks and social media platforms, and will vary from ISP to ISP depending on what each chooses to stipulate in their particular terms of service.

And last, we consider the proposed intervention, a VPN service: its costs, its potential benefits, and its potential side effects.  VPNs can be free, although, as I’ve mentioned, that is a red flag.  Acceptably good VPNs for our purposes need not be expensive, though; they are often made, after all, out of commodity VPSes and open source software, which can be had cheaply. And as to potential benefits: a properly configured open source VPN package running on a commodity VPS will protect a vulnerable US residential broadband user from having their browsing data rolled up and sold by their ISP for marketing purposes.
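To make “commodity VPS plus open source software” concrete, here is a minimal sketch of what such a setup might look like, using OpenVPN as the open source package. This is illustrative only, not a recommendation of particular settings: the port, subnet, and file names are conventional defaults, and the certificates and keys would have to be generated separately (e.g., with easy-rsa).

```
# Hypothetical minimal OpenVPN server config (server.conf) on a commodity VPS.
# Certificate and key files are assumed to exist already.
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
server 10.8.0.0 255.255.255.0   # hand out tunnel addresses from this subnet
push "redirect-gateway def1"    # route ALL client traffic through the tunnel
keepalive 10 120
persist-key
persist-tun
```

The `push "redirect-gateway def1"` line is the directive that matters for our threat: it routes all of the client’s traffic through the VPS, so the residential ISP sees only an encrypted tunnel rather than the user’s browsing destinations.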

As to potential side effects, though, as I laid out in my previous posts, we know that many VPNs do not provide the protections they claim, and many are actively malicious. In that context, recommending a VPN is like recommending an entirely unlicensed drug in an unregulated market, where many substances sold as that drug are known to be of wildly varying purity and often laced with other, harmful substances.  Some doctors may still agree to monitor an individual patient’s use of such a drug—when they believe the patient understands the risks, when they can make efforts to test the drug for impurities before use and monitor its effects, and when the drug’s benefits to the patient still outweigh the risks.

However, few public health officials, faced with anything less than life-or-death stakes, would recommend such a drug as an intervention as a matter of public policy.  A public health official and a primary care doctor share an obligation to “first, do no harm,” and yet that obligation applies very differently to a population than to an individual.  (As an example of a case where public health officials did choose to suggest that individuals do exactly what I outline above, consider this article on the use of PrEP in Great Britain.  But even there, in a literally life-or-death situation, they did not recommend or require something they weren’t fully confident of.)  And for essentially all US broadband users, having our browsing data aggregated by our ISPs and sold for marketing purposes is not a life-or-death proposition.

So this is why I do not recommend that US residential broadband users use a VPN, as a matter of what we might call public health policy in computer security.  And I think that, as computer security professionals, we will give better, more actionable advice if we at least think through security advice questions in this frame ourselves, use it to structure our advice, and perhaps borrow terms from public health when we speak in public about security advice as well.

It’s not that there isn’t an explicit user statement or threat model here—I lay out a very clear one.  But this framing allows me to consider a broad group of users facing a narrow threat and so issue extremely specific and actionable advice.  I might even feel safe giving this advice without prefacing it with the threat model, if I feel my advice follows the “first, do no harm” principle well enough, and people outside the target population will not be hurt if they follow it.

I’ve attended a few security trainings where the leaders tried to enumerate every possible combination of user and use case and give advice for each, and the result was security advice I found very confusing. It also overrepresented the needs of the one person with challenging or interesting (often technical) needs over those of the twenty people whose needs were less obviously so.

I have also frequently seen security people respond to every security question with “well, it depends on your threat model,” which is surely a reaction to bad blanket security advice given in the past. While it is strictly true as a statement, and is something I highly encourage for 1-on-1 conversations (as much to break the advice-givers out of our technical mindsets as anything), in group settings we can find some reasonable aggregates and make some reasonable assumptions based on context and perhaps a bit of research.

Also, using this public health frame encourages me to consider not just whether the proposed solution could work, but in what ways it might introduce new problems.  This makes it clear to me, not just as an advice-giver, but as a developer of security solutions and a manager of such teams, that there is more to my work than whether the solution is narrowly effective.  I must consider how much it costs—in money, in time, in my users’ emotional and cognitive energies (sometimes referred to as ‘spoons’).  I need to consider how it might break, how it might be subverted, and how my users’ needs and habits might change as a result of its use.

When we started applying this frame to software security, even I, as a layperson, understood roughly that public health officials needed to do the work I’ve just outlined when considering their interventions.  Michael convinced me that, not only did we need to do this work in computer security as well—and in practice I do not find it onerous—but the fact that we weren’t doing it is why our interventions so regularly failed, and did so in ways which regularly hurt and sometimes killed people.  And the key insight he had, from which all of this understanding sprang, was to frame the problem as a public health problem—or even to ask, “Could we?”

I encourage you to try this frame, the next time you have cause to provide security advice to a group of people.  If you do, I would love to hear how well it applies for you: what works, what doesn’t, where you get stuck.  And finally, I hope that this idea of frames spurs us to ask the next questions, about what else this frame can teach us and what other frames exist in the world which might apply to our work. Who else has solved problems similar to the ones we face now?