About Lea Rosen

Some Thoughts (And Questions) About U.S. v. Cotterman – Part 2 of 2

So, in my first post about the recent Ninth Circuit opinion U.S. v. Cotterman, I introduced the opinion’s idea of a “forensic computer search” and asked some questions about what that category might include, and whether it’s a coherent threshold for a heightened level of Fourth Amendment privacy protection at the United States border.

This post is more of the “what have we learned?” side of the discussion. I think that the privacy problems identified in the opinion reveal one underlying idea:

Continue reading

Some Thoughts (And Questions) About U.S. v. Cotterman – Part 1 of 2

Last week, the Ninth Circuit released its decision in U.S. v. Cotterman, articulating a new and fascinating standard for border searches of electronic devices. An en banc majority held that government agents need “reasonable suspicion” to justify “forensic examination” of electronic devices at the border. The ruling has been characterized as a win for digital privacy rights – as a general rule, no suspicion whatsoever is required to search people and property at the border. This jump from “no suspicion required” to “reasonable suspicion required” limits when the government can do “forensic examinations,” and grants an exceptional level of protection to electronic devices.

Continue reading

Drones and the Digital Panopticon

There has been a lot of alarming speculation in the media since February about the potential consequences of the FAA Modernization and Reform Act, which requires that the FAA prepare the national airspace for the introduction, in 2015, of privately owned and operated unmanned aerial vehicles (UAVs). The airspace already hosts UAVs flown by federal, state, and local governments; the Act makes it easier for such agencies to acquire them and permits private entities to get licenses to fly them too. It is designed to get as many drones as possible in the air as quickly as possible.

Much ink has been spilled speculating about the potential effects of a widespread drone presence in this country, mostly focusing either on the ramifications for privacy or on the potential for physical injury. These observations fail to address what makes a sky full of drones so radically unsettling.

Drones are going to be used to gather data, and the data will be integrated into the marketing scan. All drones gather at least the data they need in order to function remotely, and some of them will be able to photograph in staggeringly high resolution, or track up to 65 separate people at once. They won’t all be doing this, obviously, but the FAA’s licensing process doesn’t require drone operators to go into detail about what their vehicles will carry or collect.

We also know that data about people’s movements and behavior is hugely valuable to marketers. It is already collected unobtrusively from us as we move around in the virtual space of the Internet; in an important sense, that space is already patrolled by data-collecting “drones” much like the ones that will soon be operated in the national airspace by private entities. There is every reason to think that the data collected by airborne drones will be just as interesting to the purchasers of bot-collected online behavior data.

Of course, much of our physical-space movement is monitored already, and it is possible to aggregate this information to create an eerily complete picture of a person’s movements, social circle, and preferences. Credit cards, license plate scanners, CCTV cameras, transit passes, and smartphones are all sources of this information.

To this web of information, drones will add a layer of photographic evidence. The marketing scan of the online drone will merge into the marketing scan of the physical-space drone, and the result will be that we are even more easily identified, tracked, tagged, and followed. Privacy advocates are justly concerned about the erosion of basic notions of privacy by ubiquitous monitoring.

This is a danger separate from safety hazards, because it undermines one of the most basic presumptions of freedom – the absence of arbitrary power. Conceptually, the danger potentially posed by the coming drone squadrons can be separated from privacy concerns, too. The concept of the panopticon (likely familiar to many of you) illustrates the loss of freedom that accompanies arbitrary power, and shows how distinct it is from the lack of contextual integrity that marks an absence of privacy.

The panopticon exemplifies the reality of arbitrary power. The English philosopher Jeremy Bentham designed the original Panopticon: a prison in which guards can watch prisoners without the prisoners knowing whether they are being watched. The design features a central guard tower from which a single guard can see into every cell. Bentham reasoned that this architecture would compel prisoners to behave at minimal cost to the institution, since fewer resources would need to be invested in guarding them.

Nearly two centuries later, the French philosopher Michel Foucault observed that the “panoptic mechanism” exists in the abstract, as a form of social control. A panoptic arrangement exists wherever there is ongoing subjection to a “field of visibility.” Drones do this literally: they could be watching at any time, but it will be impossible for us to know at any given moment whether we are being observed. The constant subjection to this field, coupled with the capacity for this data to be used by the government to punish or by the marketing scan to determine what information we receive, means that our rational self-interest will lead us to self-censor. We are already seeing this play out socially; people have developed strategies like maintaining separate social network identities for personal and business use, or paying cash for transit passes to avoid being traceable via credit card.

Domestic drones taking photographs and video won’t change this dynamic so much as intensify it. They will push it further toward an extreme in which it becomes harder and harder to extricate ourselves from the marketing scan, and in which the marketing scan and the eye of the State merge (because law enforcement will have ready access to privately owned and aggregated data).

My point in writing this is not to challenge anyone to come up with a “solution,” but to argue that the worst effects of a drone presence are not the drones’ security vulnerabilities or their tendency to drop out of the sky. Abstract as it might seem, the increased power and intensity of this “field of visibility” is what will affect our lives the most. It will determine what information reaches us through the marketing scan, and that is how most of us will eventually become aware of it. And as the reality of our observed status sinks in, we will rationally self-monitor in case we’re being recorded. This state of being poses a radical threat to the way we think about freedom.

If You Think Network Security Is a Safety Issue, You’ll Need to Deal with Cost-Benefit Analysis

The first thing I want to do on this blog is give credit where much-deserved credit is due: a conversation I had at the wonderful Hive76 in Philadelphia inspired this first post for XRDS. (Support your local hackerspace!)

I gave a short presentation at Hive76 recently, and after it was over I hung around answering questions. One man asked me, repeatedly, whether the FDA has codified security standards for networked and wireless medical devices like insulin pumps and pacemakers. The answer wasn’t satisfactory for either of us: when I said the agency has no such standards, he asked why not. If its mandate is to ensure the safety of medical devices, he said, how could it reasonably neglect such an obvious security risk? He was sincerely alarmed, and I couldn’t reassure him. I could only say that, in general, network security is thought of as categorically separate from “health and safety.” For days I tried to figure out why it had felt like we were talking past each other. It wasn’t until weeks later that I realized something important was missing from our conversation.

Many of us believe that this kind of security bears seriously on human safety and human rights. Insecure networks can pose a very real threat to people living under authoritarian regimes or to members of persecuted minorities, people for whom safe, private communication may be profoundly important. The man I spoke to was deeply worried that poorly designed or nonexistent security in medical devices endangers people’s lives. I think this perspective is common among engineers, designers, and hackers — people for whom network and wireless security are tangible realities, and who are accustomed to translating the abstract into the concrete.

And yet, government agencies tasked with ensuring our safety seem oblivious to the danger posed by insecure networks. That’s why the man I spoke to was so frustrated — he saw technological security as a natural extension of the FDA’s mandate to ensure the safety of medical implants and devices, and it seemed to him that the agency was failing to do its job adequately. Look: civil servants are as able as anyone to comprehend the danger that insecure pacemakers or insulin pumps might pose. But they’re also required to prepare and present cost-benefit analyses to the Office of Management and Budget before they can undertake any significant regulatory action.

Cost-benefit analysis (CBA) is probably familiar to most readers of this blog. It’s a method of decision-making that aims to maximize welfare according to an economic model, and the easiest way to explain it is by illustration. A project is any action that changes the status quo. To evaluate a project, we compare the future “project state of the world” (P) with the “status quo state of the world” (S); any benefits that arise from maintaining the status quo are treated explicitly as benefits that S enjoys and P lacks. After accumulating data on the benefits and costs of each state, the analysis determines whether the benefits of project P outweigh its costs. In a simplified world, the only problem is the practical challenge of collecting the data. In the real world, of course, CBA is far more complicated, but at its heart it always weighs the costs and benefits of action against the costs and benefits of inaction.
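To make the P-versus-S framing concrete, here is a minimal sketch in Python. Every number in it is hypothetical: the harm estimate, the assumption that a standard prevents 80 percent of incidents, and the compliance and enforcement costs are all invented for illustration, and a real analysis would also discount future values and monetize harms that resist easy quantification.

```python
# Toy comparison of a "project" state of the world (P) against the
# "status quo" state (S). All figures are hypothetical, in millions of
# dollars per year, and exist only to show the shape of the calculation.

def net_welfare(benefits, costs):
    """Net welfare of one state of the world: total benefits minus total costs."""
    return sum(benefits) - sum(costs)

# S: no device-security standard. Society keeps the expected harm from
# insecure devices but pays no compliance or enforcement costs.
status_quo = net_welfare(benefits=[0.0], costs=[120.0])

# P: adopt a security standard. Assume (hypothetically) that it prevents 80%
# of that harm, while manufacturers pay compliance costs and the agency pays
# to write and enforce the rule.
project = net_welfare(benefits=[0.0], costs=[120.0 * 0.2, 45.0, 30.0])

# CBA recommends the project only if it leaves net welfare higher than inaction.
print("Adopt the standard" if project > status_quo else "Keep the status quo")
```

Even in this toy version, every input is a guess, which is exactly where the trouble described below begins.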

Since 1981, federal agencies have been required by executive order to use cost-benefit analysis to determine how they will carry out their individual mandates. Even if they have other reasons for considering a course of action, they must show that their choice is justified by CBA. The question they need to answer is not whether a project would be useful, or a logical extension of the agency’s mandate, but whether it is economically rational.

Economic rationality is a special value. Its boundaries are more clearly defined than those of “rationality” as it is generally understood, let alone “responsibility” or “ethicality.” My conversation in Philadelphia reached an impasse because we assumed we were talking about the same values – or maybe because we ignored the value dimension of the discussion altogether. To use the language of economics again: I assumed the FDA acts to maximize economic rationality; he assumed it acts to maximize some other value. I still don’t know how he would have described his idea of the agency’s responsibilities and duties, but the exchange made me more careful about assumptions. Even if we use the classic economic catchall “welfare” to describe what we want to improve or maximize, it isn’t enough: people understand “welfare” in different ways, and I think that’s why the man in Philadelphia was so frustrated. Ethical, responsible behavior may coincide with economically rational behavior, or it may not. It can be hard (believe me) to understand how anyone could justify something unethical, but in the tightly bounded world of cost-benefit analysis, the only values permitted are economic ones.

The risk that bad or nonexistent security poses to human life is difficult to quantify. CBA requires the quantification and precise evaluation of risk, which is complicated and time-consuming even for comparatively straightforward situations (quantifying the lives saved by seatbelts, for example). Each federal agency develops, over time, institutional competence in evaluating certain kinds of risk and harm, and network security falls well outside the FDA’s. The agency would need to do so much outside work just to prepare itself for these initial evaluations that the effort is unlikely ever to seem worthwhile compared to the status quo. It is possible that investing in security — developing standards, hiring or outsourcing experts, training employees in an area with which they are presumably not familiar — just isn’t an economically rational thing for the FDA to do.
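To see why, consider a back-of-the-envelope expected-harm estimate of the kind a CBA would demand. Every input below is invented; the point is only that plausible guesses for the probability of exploitation span several orders of magnitude, so the resulting harm figure swings just as wildly.

```python
# The sticking point: every input in the harm estimate is itself a guess.
# The exploitation probabilities, device count, and fatality rate below are
# invented; the value-of-statistical-life figure is a round number in the
# general range US regulators have used.

VALUE_OF_STATISTICAL_LIFE = 9.0e6  # dollars

def expected_annual_harm(p_exploit, devices_in_use, p_fatal):
    """Expected monetized harm per year from exploitation of networked devices."""
    return p_exploit * devices_in_use * p_fatal * VALUE_OF_STATISTICAL_LIFE

# Optimistic and pessimistic guesses for the chance a given device is exploited.
low = expected_annual_harm(p_exploit=1e-6, devices_in_use=500_000, p_fatal=0.01)
high = expected_annual_harm(p_exploit=1e-3, devices_in_use=500_000, p_fatal=0.10)

print(f"${low:,.0f} to ${high:,.0f} in expected harm per year")
```

An estimate that can land anywhere between tens of thousands and hundreds of millions of dollars is not something an agency can defend to the Office of Management and Budget without a great deal of preparatory work, which is precisely the investment the FDA has little incentive to make.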

There is no federal agency that regulates the network security of consumer devices, and it seems unlikely that existing agencies will find it economically rational in the near future to invest in learning how to evaluate the risk and harm that might come of bad security. If security experts want to persuade the administrative state to take a serious interest in this problem — and many of us believe that it should — the language and values of cost-benefit analysis are important to consider. If CBA can be used to make a case for device security, we should make it. If the numbers don’t add up in our favor, it’ll be all the more essential to articulate why agencies like the FDA should look at network security in a different way.


Lea Rosen writes about technology and law, with a focus on civil and human rights issues inherent in the creation and adoption of new technologies. She holds a J.D. from Rutgers School of Law, Camden, and a B.A. in Humanities from New College of Florida. She has worked with the Electronic Frontier Foundation, the ACLU, the Center for Constitutional Rights, and the National Lawyers Guild, and she tweets as @lea_rosen.