So, in my first post about the recent Ninth Circuit opinion in U.S. v. Cotterman, I introduced the opinion’s idea of a “forensic computer search” and asked some questions about what that category might include, and whether it’s a coherent threshold for a heightened level of Fourth Amendment privacy protection at the United States border.
This post is more on the “what have we learned?” side of the discussion. I think that the privacy problems identified in the opinion reveal one underlying idea:
User Agency Is A Privacy Issue.
When we use our personal digital devices (I’ll use laptops as the default example in this post), the devices automatically collect and store data to maintain a detailed record of what we were doing. Digital information’s unique qualities – fungibility, persistence, ease of storage, recoverability, reproducibility – make these behavioral records unique in the same ways.
These qualities give rise to special privacy concerns. They create a persistent, indefinitely stored cache of behavioral data (which frequently includes information that our laws already categorize as private, like correspondence and medical history) that can be effortlessly copied and analyzed at an unprecedented level of detail in the blink of an eye.
Of course, the special qualities of digitally stored information exist whether or not the information in question is something we feel deserves to be protected from governmental intrusion. Privacy concerns arise when personal information that we ordinarily think of as private or sensitive is stored in digital form, because that form of storage enables digital search and analysis. Digital search and analysis is faster and easier than searching tangible media, and has the added consequence of being increasingly revealing of sensitive personal information – information that might fall within the ambit of our ideas about privacy.
User agency is the ability of the device user to control what the device does. On most devices, user agency is quite limited, especially when it comes to the automated collection and storage of user-generated information. (I don’t mean only information the user deliberately enters, but all the information resulting from user activity.) I’ll use an example from Cotterman to illustrate what I mean.
The opinion says that a forensic search is invasive in part because it allows the forensic technician to access information that the user can’t access herself, and goes on to say that the retention of information “beyond the perceived point of erasure” is a privacy issue.
“Perceived erasure” is a great example of how conventions in software and computer design deny user agency. “Deletion” marks the perceived point of erasure for many computer users.
Of course, deleting a file does not “erase” it from the computer’s hard drive. But there are reasons that most people perceive “deleting” a file to be the same as “erasing” it. Here’s what the Merriam-Webster online dictionary tells me “delete” means:
“to eliminate, especially by blotting out, cutting out, or erasing”
A button labeled “Delete” tells users that pressing it will eliminate or erase data. The people who design computers and computer software know better than most that this just isn’t true. The language we use to describe how computers work is misleading. That’s a user agency issue because a user who wants to erase information isn’t directly, immediately, and explicitly given the ability to accomplish that goal. Instead, she’s offered a function that looks like it will erase information, and she’s given no indication of what initiating that function actually does. She’s given a visualization of the file disappearing from view, or sliding into a trash can that she can then “empty.”
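To make that gap concrete, here’s a minimal Python sketch of my own (the function names and the overwrite approach are illustrations I’m assuming for this post, not how any particular operating system or button actually works). It contrasts what pressing “Delete” typically amounts to on a conventional filesystem – removing the file’s name while leaving its bytes on disk – with a naive approximation of what erasure would actually require:

```python
import os
import secrets

def delete(path):
    """Roughly what the familiar "Delete" amounts to: unlink the name.

    On most conventional filesystems this removes the directory entry and
    marks the file's blocks as free, but the bytes themselves stay on disk
    until something else happens to overwrite them. That lingering data is
    what forensic recovery tools go looking for.
    """
    os.remove(path)

def overwrite_then_delete(path):
    """A naive sketch of what "erase" would actually require: overwrite the
    file's contents in place, then unlink it.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))  # replace the contents with random bytes
        f.flush()
        os.fsync(f.fileno())                # push the overwrite out to the disk
    os.remove(path)
```

Even the overwriting version is no guarantee: journaling and copy-on-write filesystems, automatic backups, and SSD wear-leveling can all leave older copies of the data behind, which is part of why forensic recovery is so often possible.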
Why not design a user experience that’s less misleading? Shouldn’t some of the burden of understanding rest with the creators of these programs?
One response to this is to say that the burden rests with users. Users should familiarize themselves more with the way their computers actually function, rather than simply relying on the labels that come with their operating systems and user-facing software applications. I took what I think was a reasonable first step toward achieving this goal and looked for information online. Here’s what I found when I searched Wikipedia for information about how data gets erased (or not):
“Many operating systems, file managers, and other software provide a facility where a file is not immediately deleted when the user requests that action. Instead, the file is moved to a holding area, to allow the user to easily revert a mistake. Similarly, many software products automatically create backup copies of files that are being edited, to allow the user to restore the original version, or to recover from a possible crash (autosave feature). Even when an explicit deleted file retention facility is not provided or when the user does not use it, operating systems do not actually remove the contents of a file when it is deleted unless they are aware that explicit erasure commands are required, like on a solid-state drive. … Likewise, reformatting, repartitioning, or reimaging a system is not always guaranteed to write to every area of the disk, though all will cause the disk to appear empty or, in the case of reimaging, empty except for the files present in the image, to most software.”
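The quoted passage describes two separate layers of retention: the file is first moved to a holding area (the familiar Trash or Recycle Bin), and even after that holding area is “emptied,” the contents typically remain on disk until overwritten. Here’s a rough sketch of the first layer, with hypothetical paths and function names of my own choosing rather than any real system’s implementation:

```python
import shutil
from pathlib import Path

# Hypothetical holding area, standing in for the Trash or Recycle Bin.
TRASH = Path.home() / ".example_trash"

def move_to_trash(path):
    """What many "delete" actions do first: relocate the file, don't erase it.

    The file's contents are untouched and trivially restorable.
    """
    TRASH.mkdir(exist_ok=True)
    destination = TRASH / Path(path).name
    shutil.move(str(path), str(destination))
    return destination

def list_recoverable():
    """A more honest view of "deleted" files: everything in the holding area
    is still present, byte for byte.
    """
    if not TRASH.exists():
        return []
    return sorted(p.name for p in TRASH.iterdir())
```

Nothing in that sketch changes a single byte of the file; “deleting” it this way is closer to filing it out of sight than to destroying it.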
Well, I made a good faith effort to familiarize myself with how my computer actually works, default software interfaces notwithstanding. Where did it get me? Nowhere. I’m more aware of the limits on my user agency, but I’m no more able to erase data than I was before. In fact, I’m starting to think that it’s not really possible to erase data from my computer. And that brings me back to the same question: why does it have a “delete” button at all?
I think the most likely explanation is simple: the button says “delete” because the first systems used the word, people got used to it, and nobody wants to introduce confusion by changing the vocabulary now. It’s a skeuomorph – a design element that imitates an older object or process purely for the sake of familiarity. Designers talk about perceived affordances, about using comfortably familiar shorthand to indicate to users what something can or can’t do. The problem is that all too often, designs present perceived affordances that are misleading.
Designing For User Agency Will Improve Our Privacy Rights
I think that designers and computing engineers should bear some of the burden in the digital privacy debate, because otherwise their valuable perspective will be lost. It is inconceivable to me that the systems we currently use represent the absolute apex of usability and transparency. Designers and engineers deserve more credit than that. Increased transparency doesn’t necessarily mean unwieldy user experience, or forcing everyone to use command-line interfaces for everything. Small changes could yield meaningful results: changing defaults so that users can see a visual representation of where everything is, including files that we would otherwise call “deleted,” or changing the word “delete” to something else – “rewrite” or “scramble,” maybe. (I don’t know, I’m not a designer or an engineer; these are just two words that seem more descriptive to me of what computer systems actually do when I tell them to delete.)
Designing systems for transparency and user agency will benefit the privacy debate on two levels. Individuals will have more freedom to manipulate the information retained by their devices, and the introduction of a new vocabulary of personal computing will move us closer as a culture to a coherent understanding of what our computers can and can’t actually do.