If You Think Network Security is a Safety Issue, You’ll Need to Deal with Cost-Benefit Analysis

The first thing I want to do on this blog is give credit where much-deserved credit is due: a conversation I had at the wonderful Hive76 in Philadelphia inspired this first post for XRDS. (Support your local hackerspace!)

I gave a short presentation at Hive76 recently, and after it was over I hung around answering questions. One man asked me, repeatedly, whether the FDA has codified security standards for networked and wireless medical devices like insulin pumps and pacemakers. My answer, that the FDA has no such standards, satisfied neither of us: he was sincerely alarmed, and I couldn’t reassure him. He asked me why not. If the agency’s mandate is to ensure the safety of medical devices, he said, how could it reasonably neglect such an obvious security risk? I could only say that, in general, network security is thought of as categorically separate from “health and safety.” For days I tried to figure out why it had felt like we were talking past each other. It wasn’t until weeks later that I realized something important was missing from our conversation.

Many of us believe that this kind of security bears seriously on human safety and human rights. Insecure networks can pose a very real threat to people living under authoritarian regimes, or to members of persecuted minorities; both are examples of people for whom safe, private communication may be profoundly important. The man I spoke to was deeply worried that poorly designed or nonexistent security in medical devices endangers people’s lives. I think this perspective is common among engineers, designers, and hackers: people for whom network and wireless security are tangible realities, and who are accustomed to translating the abstract into the concrete.

And yet, government agencies tasked with ensuring our safety seem oblivious to the danger posed by insecure networks. That’s why the man I spoke to was so frustrated — he saw technological security as a natural extension of the FDA’s mandate to ensure the safety of medical implants and devices. It seemed to him like they were failing to adequately do their job. Look: civil servants are as able as anyone to comprehend the danger that insecure pacemakers or insulin pumps might pose. But they’re also required to prepare and present cost-benefit analysis reports to the Office of Management and Budget before they can do anything.

Cost-benefit analysis (CBA) is probably familiar to most readers of this blog. It’s a method of decision-making that aims to maximize welfare according to an economic model, and the easiest way to explain it is by illustration: A project is any action that causes a change in the status quo. To evaluate a project, we compare the future “project state of the world” (P) with the “status quo state of the world” (S); any benefits that arise from maintaining the status quo are treated explicitly as benefits that S enjoys and P lacks. After accumulating data on the benefits and costs of each state, the analysis determines whether the net benefits of P outweigh those of S. In a simplified world, the only problem is the practical challenge of collecting the data. In the real world, of course, CBA is vastly more complicated, but at its heart it always weighs the costs and benefits of action against the costs and benefits of inaction.
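The core of the method is almost embarrassingly simple once the data exist, and a small sketch makes that plain. The following Python is purely illustrative: every dollar figure is invented, and a real analysis would also discount costs and benefits over time and model uncertainty.

```python
# A minimal sketch of the core CBA decision rule.
# All figures are invented; in practice, producing defensible
# numbers for "benefits" and "costs" is the entire job.

def net_benefit(benefits: float, costs: float) -> float:
    """Net benefit of one state of the world, in dollars."""
    return benefits - costs

# Status quo state S: no new regulation, so no new costs,
# and none of the benefits the project would create.
status_quo = net_benefit(benefits=0.0, costs=0.0)

# Project state P: a hypothetical security standard. Benefits would
# come from monetized risk reduction; costs from compliance,
# enforcement, and agency overhead.
project = net_benefit(benefits=12_000_000.0, costs=9_000_000.0)

# Decision rule: pursue the project only if P beats S.
if project > status_quo:
    print(f"Economically rational: net gain ${project - status_quo:,.0f}")
else:
    print("The status quo wins; the project fails the CBA test.")
```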

Since 1981, federal agencies have been required by executive order to use cost-benefit analysis to determine how they will carry out their individual mandates. Even if they have other reasons for considering a course of action, they must show that their choice is justified by CBA. The question they need to answer is not whether a project would be useful, or a logical extension of an agency’s mandate, but whether the project is economically rational.

Economic rationality is a special value. Its boundaries are more clearly defined than those of “rationality” as it is generally understood, let alone “responsibility” or “ethicality.” My conversation in Philadelphia reached an impasse because we assumed we were talking about the same values, or perhaps because we ignored the value dimension of the discussion altogether. To use the language of economics again: I assumed the FDA wants to maximize its economic rationality; he assumed the FDA wants to maximize some other value. I still don’t know how he would have described his idea of the agency’s responsibilities and duties, but the exchange made me more careful about assumptions. Even the classic economic catchall term “welfare” isn’t enough to describe what we want to improve or maximize: people understand “welfare” in different ways, and I think that’s why the man in Philadelphia was so frustrated. Ethical, responsible behavior may coincide with economically rational behavior, or it may not. It can be hard (believe me) to understand how anyone could justify something unethical, but in the tightly bounded world of cost-benefit analysis, the only values permitted are economic.

The risk posed to human life by bad or nonexistent security is difficult to quantify. CBA requires the quantification and precise evaluation of risk, which is complicated and time-consuming even for comparatively straightforward situations (quantifying the lives saved by seatbelts in cars, for example). And each federal agency has, over time, developed some institutional competency in evaluating certain kinds of risk and harm. The FDA would need to do so much outside work just to prepare itself for these initial evaluations that the investment may never seem worthwhile compared to the status quo. It is possible that investing in security (developing standards, hiring or outsourcing experts, training employees in an area with which they are presumably unfamiliar) just isn’t an economically rational thing for the FDA to do.
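To see why, consider the standard way agencies monetize mortality risk: multiply the change in the probability of death by the size of the affected population, then by the “value of a statistical life,” a figure U.S. agencies set on the order of $10 million. Here is a minimal sketch of that calculation in Python; every input is invented, since the whole problem is that nobody has defensible numbers for devices like insulin pumps.

```python
# Illustrative monetization of a mortality risk reduction.
# Every input is hypothetical; defending numbers like these
# with real data is the hard, expensive part of CBA.

VALUE_OF_STATISTICAL_LIFE = 10_000_000.0  # dollars; U.S. agencies use figures on this order

def monetized_benefit(population: int,
                      baseline_risk: float,
                      risk_reduction: float) -> float:
    """Expected annual lives saved, converted to dollars."""
    lives_saved = population * baseline_risk * risk_reduction
    return lives_saved * VALUE_OF_STATISTICAL_LIFE

# Hypothetical scenario: 300,000 users of networked insulin pumps,
# a 1-in-100,000 annual chance of a fatal attack, and a security
# standard that cuts that risk in half. None of these figures are real.
benefit = monetized_benefit(population=300_000,
                            baseline_risk=1e-5,
                            risk_reduction=0.5)
print(f"Monetized annual benefit: ${benefit:,.0f}")  # -> $15,000,000
```

The arithmetic is trivial. What the FDA lacks, and would have to spend heavily to acquire, is any credible way to estimate a figure like baseline_risk for a networked medical device.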

There is no federal agency that regulates the network security of consumer devices, and it seems unlikely that existing agencies will find it economically rational in the near future to invest in learning how to evaluate the risk and harm that might come of bad security. If security experts want to persuade the administrative state to take a serious interest in this problem — and many of us believe that it should — the language and values of cost-benefit analysis are important to consider. If CBA can be used to make a case for device security, we should make it. If the numbers don’t add up in our favor, it’ll be all the more essential to articulate why agencies like the FDA should look at network security in a different way.

 

Lea Rosen writes about technology and law, with a focus on civil and human rights issues inherent in the creation and adoption of new technologies. She holds a JD from Rutgers School of Law, Camden, and a B.A. in Humanities from New College of Florida. She has worked with the Electronic Frontier Foundation, the ACLU, the Center for Constitutional Rights, and the National Lawyers Guild, and she tweets as @lea_rosen. 

One thought on “If You Think Network Security is a Safety Issue, You’ll Need to Deal with Cost-Benefit Analysis”

  1. Don’t you feel that this is a problem endemic to governments? Their structure makes them a more ‘reactive’ than ‘proactive’ force in society.

    Sure, militaries may appear more ‘proactive’ at points in time, but what I mean is that the new directions and possibilities offered by technology and societal change take time to be reflected in government policy and enforcement.

    Thus, we need a few critical cases demonstrating the security needs of personal and health devices before governments will react.

    I suppose I’m just saying that law evolves at a pace inherently slower than the rest of society, though I don’t think that’s good or bad; it’s more a neutral observation.
