14.2. Four Principles of Research Ethics
The four principles of research ethics are:
Respect for Persons: this is the recognition that people (subjects) are autonomous and their wishes should be respected
Beneficence: research must strike the right balance (particularly for the subjects) between risks and benefits
Justice: the risks and benefits of research more generally must be distributed fairly across society
Respect for Law and Public Interest: beneficence for all stakeholders affected by research, beyond the subjects themselves
We cover each in turn.
1. Respect for Persons
The basic idea is that subjects should be treated as autonomous. By this we mean that it is the subjects themselves, not the researchers, who decide what happens to them and their lives. This is important even if the study is harmless or beneficial to the subjects. Though not identical to autonomy itself, the central practical idea here is informed consent. That is the notion that
potential subjects receive relevant information in a comprehensible format followed by a voluntary agreement to participate.
In a typical case, subjects should expect to receive information on:
the purpose of the study
what the study does/its procedures
how long the study lasts
the risks of the study
the potential benefits of the study
To the extent a study deals with subjects who have diminished autonomy, it should use greater protections for them. That is, for "vulnerable populations" (groups inherently less able to make informed and free decisions about costs and benefits to themselves), we should expect to take more steps to follow the Respect for Persons principle.
Examples of such groups include:
Minors
Prisoners
Less educated people
Non-native speakers of the language in which the informed consent information is written and the experiment is conducted
HIV/AIDS patients
Perhaps unsurprisingly, there is a troubling history of experimenting on such people without proper thought about whether they can truly consent to being subjects. See, for example, the Stateville Penitentiary Malaria Study.
2. Beneficence
Beneficence is the principle that, as regards a study, one should
maximize the possible benefits and minimize the possible harms
Notice that there is no requirement to do no harm: indeed, if one could not impose any risks at all, many important experiments (including vaccine trials) could not happen.
Generally, Beneficence is assessed in two linked stages: first, a technical assessment is made of the potential risks and benefits to subjects. Second, an ethical assessment is made as to whether that risk/benefit ratio is normatively reasonable.
Assessing Risk (Technical)
Risk is the product of
\(\text{probability of adverse event} \times \text{severity of adverse event}\)
The two parts of this equation can be affected by changes to different parts of the experiment design, and the trade-offs may not be trivial.
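As a toy illustration of this trade-off, consider two hypothetical designs: one with frequent-but-mild adverse events, and one with rare-but-serious ones. The function, designs, and numbers below are illustrative assumptions, not drawn from any real study:

```python
# A minimal sketch of risk = probability x severity, using hypothetical
# numbers on an arbitrary 0-10 severity scale.

def risk(probability: float, severity: float) -> float:
    """Expected risk of an adverse event: probability times severity."""
    return probability * severity

# Design A: frequent but mild adverse events (e.g., minor discomfort).
design_a = risk(probability=0.20, severity=1.0)  # 0.20

# Design B: rare but serious adverse events (e.g., hospitalization).
design_b = risk(probability=0.01, severity=8.0)  # 0.08

print(f"Design A risk: {design_a:.2f}")  # Design A risk: 0.20
print(f"Design B risk: {design_b:.2f}")  # Design B risk: 0.08
```

Note that Design B has the lower expected risk, yet, as discussed below, the expected value alone does not settle whether either design is ethically acceptable.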
The other important issue here is that risk is not borne simply by subjects: as in the Tuskegee case, there are potential effects on non-participants and on the social systems around the experiment.
Assessing Ethics
Once the technical assessment is done, the ethical assessment can take place. The framework we choose for this assessment is important and some simple approaches are obviously problematic. For example, it may be tempting to use a Utilitarian framework that allows the research design to go ahead if benefits are greater than costs. But this is dangerous: we generally think certain designs—even if they have large potential benefits on average—are just morally wrong (perhaps because they impose costs that are too large on some participants). To provide checks on this, IRBs bring in researchers from outside the field of the experiment to avoid “group think” on what is or is not appropriate.
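To make the worry about a pure benefits-greater-than-costs rule concrete, here is a toy calculation; all quantities, scales, and thresholds are hypothetical:

```python
# A minimal sketch of why "total benefits > total costs" is not enough.
# All numbers are hypothetical and on an arbitrary common scale.

n_participants = 100
benefit_per_participant = 5.0   # small benefit to every subject
catastrophic_cost = 400.0       # severe harm concentrated on one subject

total_benefit = n_participants * benefit_per_participant  # 500.0
total_cost = catastrophic_cost                            # 400.0

# A naive utilitarian rule approves the study...
naive_approval = total_benefit > total_cost  # True

# ...but a rule that also caps the harm any single person may bear does not.
max_individual_cost = 100.0  # hypothetical per-person harm ceiling
capped_approval = naive_approval and catastrophic_cost <= max_individual_cost  # False

print(naive_approval, capped_approval)  # True False
```

The point is that an aggregate rule needs side constraints of this kind, and supplying them is precisely the job of the ethical (as opposed to technical) assessment.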
Given the above, it is somewhat ambiguous how the modern era of data science studies is affecting levels of Beneficence. On the one hand, larger-scale data gathering and expectations of sharing may increase benefits, because the same data can be used for multiple projects across institutions. On the other hand, this also adds to the risk of subsequent exploitation by researchers with malign intentions.
3. Justice
The idea of Justice is that the broader benefits and burdens of research should fall in a fair way on society. It differs from Beneficence in that Beneficence focuses more narrowly on the cost/benefit analysis of a particular experiment for those who participate in it (or are close to those who do). Justice is about making sure that we don't exploit historically disadvantaged groups for the benefit of others.
It takes two main forms. Historically, it meant not exclusively experimenting on marginalized groups (like poor people, minority populations, or children) for the benefit of the most powerful. A representative case is that of Henrietta Lacks, a poor African American woman who visited Johns Hopkins hospital in 1951 and ultimately underwent a biopsy. Her cells were collected, preserved, and used for research. Lacks never gave informed consent for this, nor was she compensated.
Today, Justice is often more concerned with not excluding groups from studies such that the lessons of those studies can help them too. An example would be the historical exclusion of women from experiments on cardiovascular health.
In any case, Justice implies that paying subjects appropriately is a reasonable requirement.
4. Respect for Law and Public Interest
Beneficence concerns the risks and benefits to participants; Respect for Law and Public Interest broadens this idea, extending it to other stakeholders. It has two main parts: Compliance and Transparency-Based Accountability.
1. Compliance
Compliance says that, generally speaking, researchers should try to identify and follow the relevant laws, contracts and terms of service. That is, doing research does not give one the automatic right to break legal rules. Followed to the letter though, this may be quite restrictive. For example, in a 2017 study, Kevin Munger created Twitter bots to automatically ask tweeters using racially offensive terms to stop doing so, with various justifications for this request. The use of automated bots is, in fact, not allowed by Twitter’s terms of service. Nonetheless, the study was approved by the relevant IRB, presumably because the scientific merits outweighed the risks.
A more controversial example is provided by a set of researchers from Stanford and Dartmouth who sent out mailers to Montana voters with information on an upcoming election. The aim was to see if receiving such a mailer made one more likely to vote. Various concerns were raised about the study, but one particular compliance issue concerned the State of Montana's seal: using it on the mailer may technically have required explicit permission, which the researchers did not seek.
2. Transparency-Based Accountability
As its name suggests, the central idea here is to be open about the goals, methods, and results of our studies. That is, to take responsibility for our experiments, including mistakes that are made. Recent discussions of two older (but well-known) studies make this point:
In the Stanford Prison Experiment, students were recruited to act as either guards or prisoners, with a view to understanding how behaviors change after simply being assigned a given role in a social situation. Separate from the fact that the experiment was stopped early for ethical reasons, there have been allegations that the study authors in fact coached the "guards" to produce particular results. The lead investigator, Philip Zimbardo, denies these claims.
In the Rosenhan Experiment, a team of researchers acted as if they had hallucinations, such that they were admitted to mental hospitals. They then acted normally, but were diagnosed with psychiatric illnesses and given medication to treat them. The results of the experiment led to skepticism about the reliability of psychiatric diagnosis in general. In recent times, however, allegations have been made that Rosenhan in fact fabricated data (including about himself). For example, Rosenhan used himself as a pseudo-patient (itself a questionable decision for a researcher) and appears to have presented to the relevant doctor with much more severe symptoms than his subsequent write-up implies.
Ethical Frameworks
Ultimately, observing our ethical principles depends on particular frameworks: that is, particular ways of thinking about how to trade off costs and benefits. There are two broad schools of thought:
Consequentialism: in which we ask about the ends of the study
Deontology: in which we focus on the means to the ends of the study
1. Consequentialism
In Consequentialism, whether a study is ethical or not depends on its ends: its consequences. The classic example is Utilitarianism, associated with philosophers like Bentham and Mill. Here, the idea is that
an action (a study) is ethically permissible if it improves the world (net of any costs)
Beneficence, which requires us to explicitly consider costs and benefits, has a utilitarian feel. Obviously though, Consequentialism taken to extremes might lead to undesirable outcomes. For example, it might permit a study in which subjects should expect to die with high probability, because the study will lead to very large benefits for (at least) some members of society.
2. Deontology
Deontology says that a study is ethical if
we act in an ethical way while performing the study.
In that sense, it does not depend on the consequences of the experiment at all: all that matters is the morality of how it is carried out. This idea is associated with Kantianism (the work of Immanuel Kant). Respect for Persons is a deontological principle: we must respect autonomy because it is the right and moral way to treat subjects, whatever our end goals. Like Consequentialism, Deontology can be taken to extremes: it might mean, for example, that we could never use deception in a study because we deem lying to subjects to be morally impermissible.
Obviously, a given experimental practice can be justified by either or both schools of thought. For example, we might think that informed consent is good in the Consequentialist tradition because it leads to subjects (and thus experimenters) thinking carefully about costs and benefits. Meanwhile, we can justify the same practice under Deontology because giving people autonomy is axiomatically the right thing to do. That said, there are many situations where the schools clash on what is permissible, and these clashes are not easily resolved.