Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens’ participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
Abstract: When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
Despite widespread agreement that privacy in the context of education is important, it can be difficult to pin down precisely why and to what extent it is important, and it is challenging to determine how privacy is related to other important values. But that task is crucial. Absent a clear sense of what privacy is, it will be difficult to understand the scope of privacy protections in codes of ethics. Moreover, privacy will inevitably conflict with other values, and understanding the values that underwrite privacy protections is crucial for addressing conflicts between privacy and institutional efficiency, advising efficacy, vendor benefits, and student autonomy. My task in this paper is to seek a better understanding of the concept of privacy in institutional research, canvass a number of important moral values underlying privacy generally (including several that are explicit in the AIR Statement), and examine how those moral values should bear upon institutional research by considering several recent cases.
Neural engineers and clinicians are starting to translate advances in electrodes, neural computation, and signal processing into clinically useful devices to allow control of wheelchairs, spellers, prostheses, and other devices. In the process, large amounts of brain data are being generated from participants, including intracortical, subdural, and extracranial sources. Brain data is a vital resource for BCI research, but there are concerns about whether the collection and use of this data generates risks to privacy. Further, the nature of BCI research involves understanding and making inferences about device users’ mental states, thoughts, and intentions. This, too, raises privacy concerns by providing otherwise unavailable direct or privileged access to individuals’ mental lives. And BCI-controlled prostheses may change the way clinical care is provided and the type of physical access caregivers have to patients. This, too, has important privacy implications. In this chapter we examine several of these privacy concerns in light of prominent views of the nature and value of privacy. We argue that increased scrutiny needs to be paid to privacy concerns arising from Big Data and decoding of mental states, but that BCI research may also provide opportunities for individuals to enhance their privacy.
In discussions of state surveillance, the values of privacy and security are often set against one another, and people often ask whether privacy is more important than national security. I will argue that in one sense privacy is more important than national security. Just what “more important” means is its own question, though, so I will be more precise. I will argue that national security rationales cannot by themselves justify some kinds of encroachments on individual privacy (including some kinds that the United States has conducted). Specifically, I turn my attention to a recent, well publicized, and recently amended statute (section 215 of the USA Patriot Act), a surveillance program based on that statute (the National Security Agency’s bulk metadata collection program), and a recent change to that statute that addresses some of the public controversy surrounding the surveillance program (the USA Freedom Act). That process (a statute enabling surveillance, a program abiding by that statute, a public controversy, and a change in the law) looks like a paradigm case of law working as it should; but I am not so sure. While the program was plausibly legal, I will argue that it was morally and legally unjustifiable. Specifically, I will argue that the interpretations of section 215 that supported the program violate what Jeremy Waldron calls “legal archetypes,” and that changes to the law illustrate one of the central features of legal archetypes and of violations of legal archetypes.
The paper proceeds as follows: I begin in Part 1 by setting out what I call the “basic argument” in favor of surveillance programs. This is strictly a moral argument about the conditions under which surveillance in the service of national security can be justified. In Part 2, I turn to section 215 and the bulk metadata surveillance program based on that section. I will argue that the program was plausibly legal, though based on an aggressive, envelope-pushing interpretation of the statute. I conclude Part 2 by describing the USA Freedom Act, which amends section 215 in important ways. In Part 3, I change tack. Rather than offering an argument for the conditions under which surveillance is justified (as in Part 1), I use the discussion of the legal interpretations underlying the metadata program to describe a key ambiguity in the basic argument and to explain a distinct concern about the program: specifically, that it undermines a legal archetype. Moreover, while the USA Freedom Act does not violate legal archetypes, and hence meets a condition for justifiability, it helps illustrate why the bulk metadata program did violate archetypes.
Artificial intelligence and brain–computer interfaces must respect and preserve people’s privacy, identity, agency and equality, say Rafael Yuste, Sara Goering and colleagues:
Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, Phoebe Friesen, Jack Gallant, Jane E. Huggins, Judy Illes, Philipp Kellmeyer, Eran Klein, Adam Marblestone, Christine Mitchell, Erik Parens, Michelle Pham, Alan Rubel, Norihiro Sadato, Laura Specker Sullivan, Mina Teicher, David Wasserman, Anna Wexler, Meredith Whittaker & Jonathan Wolpaw
Abstract: “Big Data” and data analytics affect all of us. Data collection, analysis, and use on a large scale are an important and growing part of commerce, governance, communication, law enforcement, security, finance, medicine, and research. And the theme of this symposium, “Individual and Informational Privacy in the Age of Big Data,” is expansive; we could have long and fruitful discussions about practices, laws, and concerns in any of these domains. But a big part of the audience for this symposium is students and faculty in higher education institutions (HEIs), and the subject of this paper is data analytics in our own backyards. Higher education learning analytics (LA) is something that most of us involved in this symposium are familiar with. Students have encountered LA in their courses and in their interactions with their law school or undergraduate institutions; instructors use systems that collect information about their students; and administrators use information to help understand and steer their institutions. More importantly, though, data analytics in higher education is something that those of us participating in the symposium can actually control. Students can put pressure on administrators, and faculty often participate in university governance. Moreover, the systems in place in HEIs are more easily comprehensible to many of us because we work with them on a day-to-day basis. Students use systems as part of their course work, in their residences, in their libraries, and elsewhere. Faculty deploy course management systems (CMS) such as Desire2Learn, Moodle, Blackboard, and Canvas to structure their courses, and administrators use information gleaned from analytics systems to make operational decisions. If we (the participants in the symposium) indeed care about Individual and Informational Privacy in the Age of Big Data, the topic of this paper is a pretty good place to hone our thinking and put our ideas into practice.
Abstract: In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts.
We argue that there are five crucial issues about student privacy that we must address in order to ensure that whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.
Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is due in part to the dramatic nature of those attacks, in part to significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing rationale for torture, the USA Patriot Act), and in part to the rapid development of communications and computing technologies that enable both greater connectivity among people and the greater ability to collect information about those connections.
One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the “metadata program” or “program”) involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA.
A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy in a way that rises to the level that it infringes persons’ rights to privacy? Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, people in the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability.
In this paper I argue that we need to look at these not as discrete questions, but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.
This is a study of the treatment of library patron privacy in licenses for electronic journals in academic libraries. We begin by distinguishing four facets of privacy and intellectual freedom based on the LIS and philosophical literature. Next, we perform a content analysis of 42 license agreements for electronic journals, focusing on terms for enforcing authorized use and collection and sharing of user data. We compare our findings to model licenses, to recommendations proposed in a recent treatise on licenses, and to our account of the four facets of intellectual freedom. We find important conflicts with each.
Public and research libraries have long provided resources in electronic formats, and the tension between providing electronic resources and patron privacy is widely recognized. But assessing trade-offs between privacy and access to electronic resources remains difficult. One reason is a conceptual problem regarding intellectual freedom. Traditionally, the LIS literature has plausibly understood privacy as a facet of intellectual freedom. However, while certain types of electronic resource use may diminish patron privacy, thereby diminishing intellectual freedom, the opportunities created by such resources also appear liberty enhancing. Adjudicating between privacy loss and enhanced opportunities on intellectual freedom grounds therefore requires an account of intellectual freedom capable of addressing both privacy and opportunity. I will argue that intellectual freedom is a form of positive freedom, where a person’s freedom is a function of the quality of her agency. Using this view as the lodestar, I articulate several principles for assessing adoption of electronic resources and privacy protections.
Abstract: The purpose of this chapter is to provide a conceptual framework for understanding privacy issues that can be deployed for a variety of information technologies, an overview of the different views from moral and political philosophy regarding the nature and foundations of privacy rights, and an examination of various privacy issues attendant to omnipresent surveillance technologies in light of those philosophical views. Put another way, my goal in the chapter is to pick out important themes from the philosophical literature on privacy and surveillance and explain them in light of omnipresent surveillance technologies, while at the same time providing a philosophically informed analysis of the privacy implications of those technologies. The broader purpose of providing this framework and analysis is to make it easier for people developing, implementing, and forming policy about technologies, information collection efforts, and monitoring schemes to (a) see how various possible futures implicate important moral concerns and (b) recognize a broad array of reasons and arguments about uberveillance and privacy claims.
Privacy depends on the degree to which others can access information about, observe, and make inferences regarding a person or persons. People considering the societal implications of nanotechnology recognized early on that nanotechnology was likely to have profound effects upon privacy. Some of these effects stem from increased computing power. Others result from smaller, stronger, and more energy-efficient surveillance devices and improved sensor technologies. More speculatively, nanotechnology may open up new areas of surveillance and information gathering, including monitoring of brain states. Regardless of the precise ways nanotechnology affects information gathering and analysis, there are persistent social, legal, and moral questions. These include the extent to which persons have rights to privacy, the possibility of widespread surreptitious surveillance, who has access to privacy-affecting technologies, how such technologies will be treated legally, and the possibility that developing technologies will change our understanding of privacy itself.
A suite of technological advances based on nanotechnology has received substantial attention for its potential to affect privacy. Reports of the National Nanotechnology Initiative have recognized that the societal implications of nanotechnology will include better surveillance and information gathering technologies, and there are a variety of academic and popular publications explaining potential effects of nanotechnology on privacy. My focus in this paper is on the privacy effects of one potential application of nanotechnology: sensors capable of detecting weapons agents or drugs – nanosensors, or sensors for short. Nanotechnology may make possible small, accurate, and easy-to-use sensors to detect a variety of substances, including chemical, biological, radiological, and explosive agents, as well as drugs. I argue that if sensors fulfill their technological promise, there will be few legal barriers to use and the relevant Constitutional law makes it likely that police sensor use will become pervasive. More importantly, I use the possibility of pervasive sensing to analyze the nature of privacy rights. I set forth the Legitimate Interest Argument, according to which one has no right to privacy regarding information with respect to the state if, and only if, (a) the state has a legitimate interest in the information, and (b) the state does not garner the information arbitrarily. On this view, pervasive use would not impinge rights to privacy. Rather, it presents an opportunity to protect privacy rights.
The target article argues that current efforts to ban trans fats from restaurant foods are problematic because they risk further bans on unhealthy foods, which would be an unjustified restriction of an important personal freedom: “The freedom to choose what we eat.” This, as Resnik notes, is an empirical slippery slope argument; it is based on a hypothesis regarding the likelihood that further food bans would occur in the wake of trans fat bans. This commentary argues that there are important limitations to the argument, including empirical differences between trans fat restrictions and other possible food restrictions, limitations on the regulation itself, and a proper understanding of consumer autonomy.
The current debate about labeling genetically engineered (GE) food focuses on food derived from GE crops, neglecting food derived from GE animals. This is not surprising, as GE animal products have not yet reached the market. Participants in the debate may also be assuming that conclusions about GE crops automatically extend to GE animals. But two GE animals – the Enviropig and the AquAdvantage salmon – are approaching the market; animals raise more ethical issues than plants; and U.S. regulations treat animal products differently from crops. This paper therefore examines the specific question of whether there should be mandatory labeling on all food products derived from GE animals. We examine the likely regulatory pathways, salient differences between GE animals and GE crops, and relevant social science research on consumers’ attitudes. We argue that on any of the likely pathways, the relevant agency has a democratic obligation to require labeling for all GE animal food products.
In her recent article, “Does autonomy count in favor of labeling genetically modified food?,” Kirsten Hansen argues that in Europe, voluntary negative labeling of non-GM foods respects consumer autonomy just as well as mandatory positive labeling of foods with GM content. She also argues that because negative labeling places labeling costs upon those consumers who want to know whether food is GM, negative labeling is better policy than positive labeling. In this paper, we argue that Hansen’s arguments are mistaken in several respects. Most importantly, she underestimates the demands of respecting autonomy and overestimates the cost of positive labeling. Moreover, she mistakenly implies that only a small minority of people desire information about GM content. We also explore the extent to which her arguments would apply to the US context, and argue that any discussion of the relationship between autonomy and labeling should include not just considerations of consumer autonomy but also considerations of what we call citizen autonomy.
Despite the fact that public opinion overwhelmingly supports mandatory labeling for genetically engineered foods, the FDA recently reaffirmed its original 1992 decision not to require labels, claiming that there is no scientific basis for concluding that GE foods are less healthful than other foods. In this paper, we give two arguments about how this conflict between public opinion and the FDA ought to be resolved. The first is the Consumer Autonomy Argument, which applies to the FDA and appeals to moral principles about how public agencies within a democracy should exercise their discretion. We argue that the Food, Drug, and Cosmetic Act (FDCA) gives the FDA the discretion to require labels, and that the FDA has a moral and democratic obligation to exercise that discretion so as to require labeling. The second is the Democratic Equality Argument, which applies to Congress and concerns its democratic responsibility to defer to public opinion on certain kinds of issues. We conclude that if the FDA fails to require labeling, Congress should.