Privacy and ethics in brain-computer interface research

Bioethics, Privacy and surveillance, Technology and ethics
Eran Klein, Alan Rubel
Chang S. Nam, Anton Nijholt, and Fabien Lotte (eds.), Brain–Computer Interfaces Handbook: Technological and Theoretical Advances (CRC Press, 2018)
Publication year: 2018

Neural engineers and clinicians are starting to translate advances in electrodes, neural computation, and signal processing into clinically useful devices that allow control of wheelchairs, spellers, prostheses, and other devices. In the process, large amounts of brain data are being generated from participants, including data from intracortical, subdural, and extracranial sources. Brain data is a vital resource for BCI research, but there are concerns about whether the collection and use of this data generates risks to privacy. Further, the nature of BCI research involves understanding and making inferences about device users’ mental states, thoughts, and intentions. This, too, raises privacy concerns by providing otherwise unavailable direct or privileged access to individuals’ mental lives. And BCI-controlled prostheses may change the way clinical care is provided and the type of physical access caregivers have to patients. This, too, has important privacy implications. In this chapter we examine several of these privacy concerns in light of prominent views of the nature and value of privacy. We argue that increased scrutiny needs to be paid to privacy concerns arising from Big Data and the decoding of mental states, but that BCI research may also provide opportunities for individuals to enhance their privacy.

Legal Archetypes and Metadata Collection

Privacy and security, Privacy and surveillance, Technology and ethics
Alan Rubel
Wisconsin International Law Journal 34(4) (2017): 823-853
Publication year: 2017

In discussions of state surveillance, the values of privacy and security are often set against one another, and people often ask whether privacy is more important than national security. I will argue that in one sense privacy is more important than national security. Just what “more important” means is its own question, though, so I will be more precise. I will argue that national security rationales cannot by themselves justify some kinds of encroachments on individual privacy (including some kinds that the United States has conducted). Specifically, I turn my attention to a recent, well-publicized, and recently amended statute (section 215 of the USA Patriot Act), a surveillance program based on that statute (the National Security Agency’s bulk metadata collection program), and a recent change to that statute that addresses some of the public controversy surrounding the surveillance program (the USA Freedom Act). That process (a statute enabling surveillance, a program abiding by that statute, a public controversy, and a change in the law) looks like a paradigm case of law working as it should; but I am not so sure. While the program was plausibly legal, I will argue that it was morally and legally unjustifiable. Specifically, I will argue that the interpretations of section 215 that supported the program violate what Jeremy Waldron calls “legal archetypes,” and that changes to the law illustrate one of the central features of legal archetypes and of the violation of legal archetypes.
The paper proceeds as follows: I begin in Part 1 by setting out what I call the “basic argument” in favor of surveillance programs. This is strictly a moral argument about the conditions under which surveillance in the service of national security can be justified. In Part 2, I turn to section 215 and the bulk metadata surveillance program based on that section. I will argue that the program was plausibly legal, though based on an aggressive, envelope-pushing interpretation of the statute. I conclude Part 2 by describing the USA Freedom Act, which amends section 215 in important ways. In Part 3, I change tack. Rather than offering an argument for the conditions under which surveillance is justified (as in Part 1), I use the discussion of the legal interpretations underlying the metadata program to describe a key ambiguity in the basic argument and to explain a distinct concern about the program: specifically, that it undermines a legal archetype. Moreover, while the USA Freedom Act does not violate legal archetypes, and hence meets a condition for justifiability, it helps illustrate why the bulk metadata program did violate archetypes.

Four Ethical Priorities for Neurotechnologies and AI

Bioethics, Privacy and surveillance, Technology and ethics
Rafael Yuste, Sara Goering, et al.
Nature 551 (7679) (November 9, 2017): 159-163
Publication year: 2017

Artificial intelligence and brain–computer interfaces must respect and preserve people’s privacy, identity, agency and equality, say Rafael Yuste, Sara Goering and colleagues:

Blaise Agüera y Arcas, Guoqiang Bi, Jose M. Carmena, Adrian Carter, Joseph J. Fins, Phoebe Friesen, Jack Gallant, Jane E. Huggins, Judy Illes, Philipp Kellmeyer, Eran Klein, Adam Marblestone, Christine Mitchell, Erik Parens, Michelle Pham, Alan Rubel, Norihiro Sadato, Laura Specker Sullivan, Mina Teicher, David Wasserman, Anna Wexler, Meredith Whittaker & Jonathan Wolpaw

Data Analytics in Higher Education: Key Concerns and Open Questions

Learning analytics and privacy, Privacy and surveillance, Technology and ethics
Alan Rubel and Kyle Jones
The University of St. Thomas Journal of Law and Public Policy 11(1) (2017): 25-44
Publication year: 2017

Abstract: “Big Data” and data analytics affect all of us. Data collection, analysis, and use on a large scale are an important and growing part of commerce, governance, communication, law enforcement, security, finance, medicine, and research. And the theme of this symposium, “Individual and Informational Privacy in the Age of Big Data,” is expansive; we could have long and fruitful discussions about practices, laws, and concerns in any of these domains. But a big part of the audience for this symposium is students and faculty in higher education institutions (HEIs), and the subject of this paper is data analytics in our own backyards. Higher education learning analytics (LA) is something that most of us involved in this symposium are familiar with. Students have encountered LA in their courses and in their interactions with their law school or undergraduate institutions; instructors use systems that collect information about their students; and administrators use information to help understand and steer their institutions. More importantly, though, data analytics in higher education is something that those of us participating in the symposium can actually control. Students can put pressure on administrators, and faculty often participate in university governance. Moreover, the systems in place in HEIs are more easily comprehensible to many of us because we work with them on a day-to-day basis. Students use systems as part of their course work, in their residences, in their libraries, and elsewhere. Faculty deploy course management systems (CMS) such as Desire2Learn, Moodle, Blackboard, and Canvas to structure their courses, and administrators use information gleaned from analytics systems to make operational decisions. If we (the participants in the symposium) indeed care about Individual and Informational Privacy in the Age of Big Data, the topic of this paper is a pretty good place to hone our thinking and put our ideas into practice.

Student Privacy in Learning Analytics: An Information Ethics Perspective

Learning analytics and privacy, Libraries and privacy, Privacy and surveillance, Technology and ethics
Alan Rubel and Kyle Jones
The Information Society 32(2) (Spring 2016): 143-159
Publication year: 2016

Abstract: In recent years, educational institutions have started using the tools of commercial data analytics in higher education. By gathering information about students as they navigate campus information systems, learning analytics “uses analytic techniques to help target instructional, curricular, and support resources” to examine student learning behaviors and change students’ learning environments. As a result, the information educators and educational institutions have at their disposal is no longer demarcated by course content and assessments, and old boundaries between information used for assessment and information about how students live and work are blurring. Our goal in this paper is to provide a systematic discussion of the ways in which privacy and learning analytics conflict and to provide a framework for understanding those conflicts.

We argue that there are five crucial issues about student privacy that we must address in order to ensure that whatever the laudable goals and gains of learning analytics, they are commensurate with respecting students’ privacy and associated rights, including (but not limited to) autonomy interests. First, we argue that we must distinguish among different entities with respect to whom students have, or lack, privacy. Second, we argue that we need clear criteria for what information may justifiably be collected in the name of learning analytics. Third, we need to address whether purported consequences of learning analytics (e.g., better learning outcomes) are justified and what the distributions of those consequences are. Fourth, we argue that regardless of how robust the benefits of learning analytics turn out to be, students have important autonomy interests in how information about them is collected. Finally, we argue that it is an open question whether the goods that justify higher education are advanced by learning analytics, or whether collection of information actually runs counter to those goods.

Privacy, Transparency, and Accountability in the NSA’s Bulk Metadata Program

Privacy and security, Privacy and surveillance, Technology and ethics
Alan Rubel
Privacy, Security, and Accountability (Adam Moore, ed.) (Rowman & Littlefield International, 2016)
Publication year: 2016

Disputes at the intersection of national security, surveillance, civil liberties, and transparency are nothing new, but they have become a particularly prominent part of public discourse in the years since the attacks on the World Trade Center in September 2001. This is due in part to the dramatic nature of those attacks, in part to significant legal developments after the attacks (classifying persons as “enemy combatants” outside the scope of traditional Geneva protections, legal memos by White House counsel providing a rationale for torture, the USA Patriot Act), and in part to the rapid development of communications and computing technologies that enable both greater connectivity among people and a greater ability to collect information about those connections.
One important way in which these questions intersect is in the controversy surrounding bulk collection of telephone metadata by the U.S. National Security Agency. The bulk metadata program (the “metadata program” or “program”) involved court orders under section 215 of the USA Patriot Act requiring telecommunications companies to provide records about all calls the companies handled, and the creation of a database that the NSA could search. The program was revealed to the general public in June 2013 as part of the large document leak by Edward Snowden, a former contractor for the NSA.
A fair amount has been written about section 215 and the bulk metadata program. Much of the commentary has focused on three discrete issues. First is whether the program is legal; that is, does the program comport with the language of the statute, and is it consistent with Fourth Amendment protections against unreasonable searches and seizures? Second is whether the program infringes privacy rights; that is, does bulk metadata collection diminish individual privacy in a way that rises to the level that it infringes persons’ rights to privacy? Third is whether the secrecy of the program is inconsistent with democratic accountability. After all, people in the general public only became aware of the metadata program via the Snowden leaks; absent those leaks, there would likely not have been the sort of political backlash and investigation necessary to provide some kind of accountability.
In this paper I argue that we need to look at these not as discrete questions, but as intersecting ones. The metadata program is not simply a legal problem (though it is one); it is not simply a privacy problem (though it is one); and it is not simply a secrecy problem (though it is one). Instead, the importance of the metadata program is the way in which these problems intersect and reinforce one another. Specifically, I will argue that the intersection of the questions undermines the value of rights, and that this is a deeper and more far-reaching moral problem than each of the component questions.

Privacy and Confidentiality in Pragmatic Clinical Trials

Bioethics, Privacy and surveillance
Deven McGraw, Sarah M. Greene, Caroline S. Miner, Karen L. Staman, Mary Jane Welch, and Alan Rubel
Clinical Trials 12(5) (October 2015): 520-52
Publication year: 2015

Abstract: With pragmatic clinical trials (PCTs) an opportunity exists to answer important questions about the relative risks, burdens, and benefits of therapeutic interventions. However, concerns about protecting the privacy of this information are significant and must be balanced with the imperative to learn from the data gathered in routine clinical practice. Traditional privacy protections for research uses of identifiable information rely disproportionately on informed consent or authorizations, based on a presumption that this is necessary to fulfill ethical principles of respect for persons. But frequently the ideal of informed consent is not realized in its implementation. Moreover, the principle of respect for persons, which encompasses their interests in health information privacy, can be honored through other mechanisms. Data anonymization also plays a role in protecting privacy but is not suitable for all research, particularly PCTs. In this paper we explore both the ethical foundation and regulatory framework intended to protect privacy in PCTs. We then review examples of novel approaches to respecting persons in research that may have the added benefit of honoring patient privacy considerations.

Four Facets of Privacy and Intellectual Freedom in Licensing Contracts for Electronic Journals

Libraries and privacy, Privacy and surveillance, Technology and ethics
Alan Rubel and Mei Zhang
College & Research Libraries 76(4) (May 2015): 427-449
Publication year: 2015

This is a study of the treatment of library patron privacy in licenses for electronic journals in academic libraries. We begin by distinguishing four facets of privacy and intellectual freedom based on the LIS and philosophical literature. Next, we perform a content analysis of 42 license agreements for electronic journals, focusing on terms for enforcing authorized use and collection and sharing of user data. We compare our findings to model licenses, to recommendations proposed in a recent treatise on licenses, and to our account of the four facets of intellectual freedom. We find important conflicts with each.

Privacy and Positive Intellectual Freedom

Libraries and privacy, Privacy and surveillance
Alan Rubel
Journal of Social Philosophy (special issue on Technology and New Challenges for Privacy) 45(3) (Fall 2014): 390-407
Publication year: 2014

Privacy is often linked to freedom. Protection against unreasonable searches and seizures is a hallmark of a free society, and pervasive state‐sponsored surveillance is generally considered to correlate closely with authoritarianism. One link between privacy and freedom is prominent in the library and information studies field and has recently been receiving attention in legal and philosophical scholarship. Specifically, scholars and professionals argue that privacy is an essential component of intellectual freedom. However, the nature of intellectual freedom and its link to privacy are not entirely clear. My aim in this article is to offer an account of intellectual freedom as a type of positive freedom. I will argue that a full account of intellectual freedom must involve more than an absence of constraints. Rather, intellectual freedom is at least partly a function of the quality of persons’ agency with respect to intellectual endeavors. Such an account best explains the relation between intellectual freedom and privacy and avoids problems with conceptions of intellectual freedom based solely on constraints.

Libraries, Electronic Resources, and Privacy: The Case for Positive Intellectual Freedom

Libraries and privacy, Privacy and surveillance, Technology and ethics
Alan Rubel
Library Quarterly 84(2) (April 2014): 183-208
Publication year: 2014

Public and research libraries have long provided resources in electronic formats, and the tension between providing electronic resources and patron privacy is widely recognized. But assessing trade-offs between privacy and access to electronic resources remains difficult. One reason is a conceptual problem regarding intellectual freedom. Traditionally, the LIS literature has plausibly understood privacy as a facet of intellectual freedom. However, while certain types of electronic resource use may diminish patron privacy, thereby diminishing intellectual freedom, the opportunities created by such resources also appear liberty enhancing. Adjudicating between privacy loss and enhanced opportunities on intellectual freedom grounds must therefore provide an account of intellectual freedom capable of addressing both privacy and opportunity. I will argue that intellectual freedom is a form of positive freedom, where a person’s freedom is a function of the quality of her agency. Using this view as the lodestar, I articulate several principles for assessing adoption of electronic resources and privacy protections.

Autonomy, Surveillance, and Privacy

Bioethics, Privacy and surveillance
Alan Rubel
The Routledge Companion to Bioethics (John Arras, Rebecca Kukla, and Elizabeth Fenton, eds.) (Routledge, 2014): 312-324
Publication year: 2014

A Framework for Analyzing and Comparing Privacy States

Privacy and surveillance
Alan Rubel and Ryan Biava
JASIST: The Journal of the American Society for Information Science and Technology 65(12) (December 2014): 2422-2431
Publication year: 2014

Abstract: This article develops a framework for analyzing and comparing privacy and privacy protections across (inter alia) time, place, and polity and for examining factors that affect privacy and privacy protection. This framework provides a method to describe precisely aspects of privacy and context and a flexible vocabulary and notation for such descriptions and comparisons. Moreover, it links philosophical and conceptual work on privacy to social science and policy work and accommodates different conceptions of the nature and value of privacy. The article begins with an outline of the framework. It then refines the view by describing a hypothetical application. Finally, it applies the framework to a real‐world privacy issue—campaign finance disclosure laws in the United States and France. The article concludes with an argument that the framework offers important advantages to privacy scholarship and for privacy policy makers.

Profiling, Information Collection, and the Value of Rights Argument

Privacy and security, Privacy and surveillance
Alan Rubel
Criminal Justice Ethics 32(3) (2013): 210-230
Publication year: 2013

Abstract: In the United States and elsewhere, there is substantial controversy regarding the use of race and ethnicity by police in determining whom to stop, question, and investigate in relation to crime and security issues. In the ethics literature, the debate about profiling largely focuses on the nature of profiling and when (if ever) profiling is morally justifiable. This essay addresses the related, but distinct, issue of whether states have a duty to collect information about the race and ethnicity of persons stopped by police. I argue that states in the U.S. do have such a duty on the grounds that such information collection would help secure the value of persons’ human rights against discrimination and unfair policing. Nonetheless, a large number of states do not require it. I begin by distinguishing rights from the value of rights, and arguing that under certain conditions persons have claims to the value of rights themselves, and that states have duties to secure that value. I then turn to the issue of profiling and offer the value of rights argument in favor of information collection about the race and ethnicity of persons stopped by police.

Privacy and Pervasive Surveillance

Privacy and surveillance, Technology and ethics
Alan Rubel
Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies (K. Michael and M. G. Michael, eds.) (IGI Global, 2013): 319-333
Publication year: 2013

Abstract: The purpose of this chapter is to provide a conceptual framework for understanding privacy issues that can be deployed for a variety of information technologies, an overview of the different views from moral and political philosophy regarding the nature and foundations of privacy rights, and an examination of various privacy issues attendant to omnipresent surveillance technologies in light of those philosophical views. Put another way, my goal in the chapter is to pick out important themes from the philosophical literature on privacy and surveillance and explain them in light of omnipresent surveillance technologies, while at the same time providing a philosophically informed analysis of the privacy implications of those technologies. The broader purpose of providing this framework and analysis is to make it easier for people developing, implementing, and forming policy about technologies, information collection efforts, and monitoring schemes to (a) see how various possible futures implicate important moral concerns and (b) recognize a broad array of reasons and arguments about uberveillance and privacy claims.

Justifying Public Health Surveillance: Basic Interests, Unreasonable Exercise, and Privacy

Bioethics, Privacy and surveillance
Alan Rubel
Kennedy Institute of Ethics Journal 22(1): 1-33 (2012)
Publication year: 2012

Abstract: Surveillance plays a crucial role in public health and for obvious reasons conflicts with individual privacy. This article argues that the predominant approach to the conflict—relying on a conceptual distinction between research and practice—is problematic and then offers an alternative. It outlines a basic interests approach to public health measures and an unreasonable exercise argument, which sets forth conditions under which individuals may justifiably exercise individual privacy claims that conflict with public health goals. The view articulated is compatible with a broad range of conceptions of the value of health.

The Particularized Judgment Account of Privacy

Privacy and surveillance
Alan Rubel
Res Publica 17(3): 275-290 (2011)
Publication year: 2011

Abstract: Questions of privacy have become particularly salient in recent years due, in part, to information-gathering initiatives precipitated by the 2001 World Trade Center attacks, increasing power of surveillance and computing technologies, and massive data collection about individuals for commercial purposes. While privacy is not new to the philosophical and legal literature, there is much to say about the nature and value of privacy. My focus here is on the nature of informational privacy. I argue that the predominant accounts of privacy are unsatisfactory and offer an alternative: for a person to have informational privacy is for there to be limits on the particularized judgments that others are able to reasonably make about that person.

Privacy (encyclopedia entry)

Privacy and security, Privacy and surveillance, Technology and ethics
Alan Rubel
Encyclopedia of Nanoscience and Society (David Guston, ed.) (Sage Press, 2010)
Publication year: 2010

Privacy depends on the degree to which others can access information about, observe, and make inferences regarding a person or persons. People considering the societal implications of nanotechnology recognized early on that nanotechnology was likely to have profound effects upon privacy. Some of these effects stem from increased computing power. Others result from smaller, stronger, and more energy-efficient surveillance devices and improved sensor technologies. More speculatively, nanotechnology may open up new areas of surveillance and information gathering, including monitoring of brain states. Regardless of the precise ways nanotechnology affects information gathering and analysis, there are persistent social, legal, and moral questions. These include the extent to which persons have rights to privacy, the possibility of widespread surreptitious surveillance, who has access to privacy-affecting technologies, how such technologies will be treated legally, and the possibility that developing technologies will change our understanding of privacy itself.

Nanotechnology, Sensors, and Rights to Privacy

Privacy and security, Privacy and surveillance, Technology and ethics
Alan Rubel
Public Affairs Quarterly 24(2): 131-153 (2010)
Publication year: 2010

A suite of technological advances based on nanotechnology has received substantial attention for its potential to affect privacy. Reports of the National Nanotechnology Initiative have recognized that the societal implications of nanotechnology will include better surveillance and information gathering technologies, and there are a variety of academic and popular publications explaining potential effects of nanotechnology on privacy. My focus in this paper is on the privacy effects of one potential application of nanotechnology: sensors capable of detecting weapons agents or drugs (nanosensors, or sensors for short). Nanotechnology may make possible small, accurate, and easy-to-use sensors to detect a variety of substances, including chemical, biological, radiological, and explosive agents, as well as drugs. I argue that if sensors fulfill their technological promise, there will be few legal barriers to their use, and the relevant Constitutional law makes it likely that police sensor use will become pervasive. More importantly, I use the possibility of pervasive sensing to analyze the nature of privacy rights. I set forth the Legitimate Interest Argument, according to which one has no right to privacy regarding information with respect to the state if, and only if, (a) the state has a legitimate interest in the information, and (b) the state does not garner the information arbitrarily. On this view, pervasive sensor use would not impinge rights to privacy. Rather, it presents an opportunity to protect privacy rights.

Some Questions for the Barrier Theory: Comments on ‘The Right to Privacy Unveiled’

Privacy and surveillance
Alan Rubel
San Diego Law Review 44(4): 801-808 (2007)
Publication year: 2007

In articulating his novel theory of a right to privacy, Professor Rickless takes as a starting point several paradigm cases in which privacy rights are violated and paradigm cases in which privacy rights are not violated. These include The Loud Fight, The Quiet Fight, The Pornographic Picture in the wall-safe, and The Subway Map obscured by a raincoat. Rickless allows that standard views of the right to privacy – “control” accounts, “inaccessibility” accounts, William Parent’s “information-based” account, and Judith Jarvis Thomson’s reductionist account – can successfully explain why the paradigm cases are, or are not, violations of the right to privacy. This response raises some questions for Rickless’s theory.

Privacy and the USA Patriot Act: Rights, the Value of Rights, and Autonomy

Privacy and security, Privacy and surveillance
Alan Rubel
Law and Philosophy 26(2): 119-159 (2007)
Publication year: 2007

Civil liberty and privacy advocates have criticized the USA PATRIOT Act (Act) on numerous grounds since it was passed in the wake of the World Trade Center attacks in 2001. Two of the primary targets of those criticisms are the Act’s sneak-and-peek search provision, which allows law enforcement agents to conduct searches without informing the search’s subjects, and the business records provision, which allows agents to secretly subpoena a variety of information – most notoriously, library borrowing records. Without attending to all of the ways that critics claim the Act burdens privacy, I examine whether those two controversial parts of the Act, the section 213 sneak-and-peek search and the section 215 business records gag-rule provisions, burden privacy as critics charge. I begin by describing the two provisions. Next, I explain why those provisions don’t burden privacy on standard philosophical accounts. Moreover, I argue that they need not conflict with the justifications for people’s claims to privacy, nor do they undermine the value of privacy on the standard accounts. However, rather than simply concluding that the sections don’t burden privacy, I argue that those provisions are problematic on the grounds that they undermine the value of whatever rights to privacy people have. Specifically, I argue that it is important to distinguish rights themselves from the value that those rights have to the rights-holders, and that an essential element of privacy rights having value is that privacy right-holders be able to tell the extent to which they actually have privacy. This element, which is justified by the right-holders’ autonomy interests, is harmed by the two provisions.

Claims to Privacy and the Distributed Value View

Privacy and surveillance
Alan Rubel
San Diego Law Review 44(4): 921-956 (2007)
Publication year: 2007

Consider three cases. In the first, a distracted customer leaves his credit card behind in a restaurant, where an opportunistic fraud artist finds it. In the second, an employer begins closely monitoring an employee’s home and family life without her knowledge or consent, and with only the most spurious justification. In the third, as a person walks down her city street, she passes by dozens of people who notice her presence. In each case, the person’s privacy decreases. In the first two cases, it seems clear that the loss to privacy impinges a claim. It is equally clear that in the third case the loss impinges no claim at all. Moreover, the sort of claim implicated in the first two cases appears markedly different. The claim to privacy in one’s credit card number primarily protects individuals from financial harm – it is a claim based on instrumental value. The claim to privacy in one’s home and family life with respect to one’s employer, however, is not clearly based on instrumental value, especially where the surveillance is surreptitious and unlikely to lead to adverse actions. The value of privacy in each of the three cases is different: instrumentally good, noninstrumentally good, and morally neutral.

This creates a puzzle with respect to claims to privacy. If privacy has very different value in different contexts, it will be difficult to determine where persons have claims to privacy. That is, if privacy does not have a unified value, it will not make sense to simply argue that a person has a claim to privacy solely in virtue of the fact that her privacy has been diminished. Compare this with other classic liberties: the diminution of one’s freedom of conscience or freedom of speech is not morally neutral and is problematic independent of the instrumental value of that liberty. That justifies the view that people have claims to freedom of conscience and freedom of speech, and that diminution of those liberties demands at least some justification. This is not so with privacy.

My contention is that privacy’s value is not unary, but distributed, and that whether privacy has value, and what kind of value it has, will depend on the privacy relation involved. However, I think that particular sorts of privacy relations do have value, and that such relations may be constituent parts of intrinsically valuable states of affairs. Accordingly, I think that we can articulate some general principles regarding the conditions under which the value of privacy underwrites claims to privacy.

Medical Privacy and the Public’s Right to Vote: What Presidential Candidates Should Disclose

Bioethics, Privacy and surveillance
Robert Streiffer, Julie Fagan, and Alan Rubel
Journal of Medicine and Philosophy 31(4): 417-439 (2006)
Publication year: 2006

We argue that while presidential candidates have the right to medical privacy, the public nature and importance of the presidency generate a moral requirement that candidates waive those rights in certain circumstances. Specifically, candidates are required to disclose information about medical conditions that are likely to seriously undermine their ability to fulfill what we call the core functions of the office of the presidency. This requirement exists because (1) people have the right to be governed only with their consent, (2) people’s consent is meaningful only when they have access to information necessary for making informed voting decisions, (3) such information is necessary for making informed voting decisions, and (4) there are no countervailing reasons sufficiently strong to override this right. We also investigate alternative mechanisms for legally encouraging or requiring disclosure. Protecting the public’s right to this information is of particular importance because of the documented history of deception and secrecy regarding the health of presidents and presidential candidates.