14.4. Algorithmic “fairness”#
It’s unlikely that you’ve heard about algorithms without also hearing at least a bit about “algorithmic fairness” and “black box AI”. But what do these terms mean, and why is this such a problem?
Ethics in the wild#
So far, our discussion has been (mostly) on planning ethical studies and interventions. But many(!) organizations are already not just using data, but also machine learning and other algorithms to make decisions that impact people’s lives, such as:
who gets approved for a credit card
who gets to take out a mortgage, and at what rate
whether someone gets granted parole
who gets hired at a company
what videos a person will get recommended
what advertisements a person is served online
It’s very tempting to think that because we are using algorithms to make these decisions, the decisions must not be biased. It is technically true that a computer on its own, simply taking in information and performing calculations on it, is not biased (as far as I know, there’s no hardware in my laptop that, say, stereotypes people based on where they’re from). But the algorithms we use to tell the computer what to do are built by humans who are biased: humans with biased ideas about which variables matter and which aspects of a person deserve attention, and who likely feed those unbiased computers training data that is itself the result and codification of (you guessed it) bias.
To build some intuition, consider the case of using algorithms to make decisions about whom to hire to work at a company.
Algorithms for hiring#
One example you may have come across is companies using machine learning algorithms to determine whom to hire. For a very large company, hiring is no trivial matter – you have to sift through thousands (if not more) of applications, and there are entire industries around recruiting, vetting applications, interviewing applicants, and making decisions. You can imagine the appeal when algorithms came along that offered the possibility of churning through all the data in those thousands of applications and deciding for us whom to extend a job offer. Not only would this be easier and cheaper than having expensive, slow humans do it, but also, we’re surely magically erasing bias – computers aren’t biased! It’s just doing math! We’re giving it data! WE DID IT!
Even if you haven’t come across this type of story, you will not be shocked to learn that, alas, it did not turn out this way. Companies that implemented these sorts of algorithms started to notice some disturbing results, such as white, male candidates being recommended far more often than other candidates. And not just because more of the candidates had those backgrounds, or even because those candidates were stronger on the basis of, for instance, previous job experience or related skills. Rather, the algorithms were fed information about what a “good” employee might look like (for example, the qualifications of those currently in leadership positions at the company), and, precisely because an algorithm has no reason not to use all the information humans give it, the models kept “learning” that the candidates most likely to succeed were those who looked most similar on paper to current leaders, including in terms of their race and gender. Even more discouragingly, removing sensitive information from the analysis, such as explicit identifiers of gender, did not solve the problem.
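To see the proxy problem mechanically, here is a minimal sketch with entirely synthetic data (not drawn from any real hiring system): the gender column is dropped before training, yet the model’s scores still split along gender lines because another feature correlates with it.

```python
# Entirely synthetic illustration (not data from any real hiring system):
# even after the "gender" column is dropped, a model can reconstruct the
# historical bias through a correlated "proxy" feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

gender = rng.integers(0, 2, n)                 # 1 or 0, synthetic
# A proxy feature correlated with gender (think hobbies, word choices, or
# affiliations that skew by gender in the historical applications).
proxy = gender + rng.normal(0, 0.5, n)
skill = rng.normal(0, 1, n)                    # genuinely job-relevant signal

# Historical "hired" labels reflect past bias: gender mattered, not just skill.
hired = (0.8 * skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

# Train WITHOUT the gender column: only skill and the proxy feature.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print("mean predicted score, gender = 1:", round(scores[gender == 1].mean(), 2))
print("mean predicted score, gender = 0:", round(scores[gender == 0].mean(), 2))
# The gap persists because the proxy feature lets the model relearn the bias.
```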
To see how this plays out in a real, high-stakes setting, it’s helpful to consider one of the most famous cases on this subject.
COMPAS#
In 2016 journalists at ProPublica published the results of a detailed investigation they conducted on an algorithm called COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions. The full report is available here and we highly recommend it. Here, we provide a brief summary of the key points.
COMPAS is an algorithm used in the criminal justice system to assess the risk of someone re-offending. It is used to inform judges when they make decisions about bail, remand, and sometimes even sentencing. This is not the only algorithm used in the criminal justice system, but it was a widely used one.
COMPAS used many data inputs to determine whether someone was likely to re-offend. Race was not one of them, and yet the journalists at ProPublica determined that there was evidence the algorithm was biased against Black people in the dataset. Specifically, they found:
Black defendants were almost twice as likely to be labeled as “high risk” but then not re-offend
White defendants were more likely to be labeled as “low risk” but then go on to commit more crimes
ProPublica’s central claim was about bias in the false positive rates: Black defendants were more likely to be falsely placed in the high-risk category when they were in fact low risk. Underlying this is a base rate problem: the actual re-arrest rates in the data are higher for Black defendants than for white defendants. There is, of course, a separate (and important) conversation to be had about the data-generating process behind this, including policing practices, resources, and other relevant context, but we will set it aside for now, other than noting that this is an example of the data itself codifying bias, which exacerbates the problem of algorithmic bias.
Thus, because Black defendants have a higher re-arrest rate in the data, they are more likely to be assigned to the high-risk category (58%) than white defendants (39%). And because more Black defendants are labeled “high risk”, the share of false positives among Black defendants is driven up: the high-risk category is simply larger for Black defendants than for white defendants.
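To make the base rate mechanism concrete, here is a small simulation with purely hypothetical numbers (this is not the COMPAS data): one risk model and one cutoff, applied identically to two groups with different underlying re-arrest rates, still produces different false positive rates.

```python
# Hypothetical simulation, not the COMPAS data: the same scoring rule and the
# same "high risk" cutoff are applied to two groups whose base rates differ,
# and the false positive rates come out different anyway.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def group_stats(shift):
    """Simulate one group whose risk-related features are shifted by `shift`;
    return (re-arrest base rate, false positive rate) under a shared cutoff."""
    x = rng.normal(shift, 1, n)                 # features the score is built from
    p = 1 / (1 + np.exp(-(x - 0.5)))            # same risk model for everyone
    reoffend = rng.random(n) < p
    high_risk = x > 0.5                         # same cutoff for everyone
    fpr = (high_risk & ~reoffend).mean() / (~reoffend).mean()
    return reoffend.mean(), fpr

for name, shift in [("group with higher base rate", 0.5),
                    ("group with lower base rate ", 0.0)]:
    base, fpr = group_stats(shift)
    print(f"{name}: re-arrest rate = {base:.2f}, false positive rate = {fpr:.2f}")
```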
All of this implies that, in order to reduce this disparity, one would need to assign higher risk scores to white defendants than to Black defendants who are otherwise equally risky.
And this reveals a deep problem for algorithmic fairness: the impossibility result, which says that when base rates differ across groups, we cannot have both group fairness and individual fairness. In order to achieve group outcome fairness – in this case, that Black and white defendants are equally likely to be labeled low risk vs. high risk – we have to sacrifice individual fairness – the principle that two defendants with identical records are treated the same regardless of race.
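A short sketch with hypothetical scores makes the trade-off visible: forcing the share labeled “high risk” to be equal across two groups requires group-specific cutoffs, and once the cutoffs differ, two people with identical scores can receive different labels.

```python
# Hypothetical scores only: equalizing the share labeled "high risk" across
# two groups with different score distributions forces different cutoffs,
# so identical individuals can be treated differently.
import numpy as np

rng = np.random.default_rng(2)
scores_a = rng.normal(0.5, 1, 100_000)    # group with higher scores on average
scores_b = rng.normal(0.0, 1, 100_000)    # group with lower scores on average

target_share = 0.40                        # label 40% of BOTH groups high risk
cut_a = np.quantile(scores_a, 1 - target_share)
cut_b = np.quantile(scores_b, 1 - target_share)
print("cutoff for group A:", round(cut_a, 2))
print("cutoff for group B:", round(cut_b, 2))

same_score = 0.5                           # two defendants with identical scores
print("group A defendant:", "high risk" if same_score > cut_a else "low risk")
print("group B defendant:", "high risk" if same_score > cut_b else "low risk")
```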
There is no easy answer here: it might be tempting to lean towards, say, individual fairness, but then we risk further reinforcing existing social disparities through our decisions. Aiming for a fairer outcome at the group level may also sound like the ethical move, but then we must hold people of different groups to different standards. This problem, by the way, is not limited to algorithms – it has been around for a long time in, for example, affirmative action lawsuits and other public debates. But seeing it play out in machines helps us, in a way, see how deep a challenge (indeed, an impossible one) it really is.
What to do?#
So what do we do about this? COMPAS didn’t even include race variables in its dataset, so how do you proceed? Do you scrap the whole thing? Do you find different data? Do you make hard decisions about what counts as “fair”? While there are no simple answers, there are some guidelines that can help.
2. Be careful with annotation and labeling#
An area where humans really exert our bias is when we label or annotate data. As we know, if we want to build a supervised machine learning algorithm, we need labeled data, which generally means someone has to label it. Labeling whether someone has Chronic Kidney Disease is (probably) not (that) problematic (though there is evidence that medical diagnoses do differ based on identity characteristics), but consider the problem of labeling a photo as, e.g., “feminine” or “successful”. Indeed, an excellent way to observe the influence of human bias in action is to conduct a Google image search for either of those words. We expect that, unless something has really changed since the time of writing, you’ll depressingly get a lot of women covered in flowers and vines for the former and people cheering at an office job for the latter.
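One modest, practical check is to audit the labels themselves before training anything: for instance, compare how often a given label is assigned across subgroups of the people or images being labeled. The sketch below uses a tiny synthetic table with hypothetical column names, just to show the shape of the check.

```python
# Hedged sketch of a simple label audit: the data and column names here are
# synthetic placeholders, not a real annotation set.
import pandas as pd

labels = pd.DataFrame({
    "image_id":      [1, 2, 3, 4, 5, 6, 7, 8],
    "subject_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":         ["successful", "successful", "successful", "other",
                      "successful", "other", "other", "other"],
})

# How often is the "successful" label assigned within each subgroup?
rates = (
    labels.assign(labeled_successful=labels["label"].eq("successful"))
          .groupby("subject_group")["labeled_successful"]
          .mean()
)
print(rates)
# Large, unexplained gaps are a flag to revisit the labeling instructions,
# the annotator pool, or how the images were sampled in the first place.
```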
3. De-biasing techniques#
While removing sensitive attributes (such as gender, race, age, and so on) from a dataset is a first step, it is often not enough to make a difference, because many other variables correlate with those attributes.
Another strategy is to include a fairness cost function, penalizing the model when its predictions (or error rates) differ too much across groups; but this again runs into some of the ethical dilemmas and difficult decisions described above. We might also use adversarial methods, in which we maximize our ability to predict the outcome of interest while minimizing the ability to predict a sensitive attribute from the model’s representations or predictions. This helps ensure that our primary model is not leaning on variables associated with, e.g., race, to predict outcomes.
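As a rough illustration of the fairness-cost idea (synthetic data, illustrative only, not a production method), here is a plain logistic regression trained with an extra penalty on the gap in average predicted scores between two groups; the hypothetical weight `lam` controls the accuracy/fairness trade-off.

```python
# Minimal sketch of a fairness penalty: logistic loss plus a term punishing
# the gap in average predicted scores between two groups (a demographic-
# parity-style penalty). Synthetic data; `lam` sets the trade-off.
import numpy as np

rng = np.random.default_rng(3)
n, d = 5000, 5
group = rng.integers(0, 2, n)                        # sensitive attribute (not a feature)
X = rng.normal(0, 1, (n, d)) + 0.7 * group[:, None]  # features correlate with the group
y = (X @ rng.normal(0.5, 0.2, d) + rng.normal(0, 1, n)) > 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(lam, steps=2000, lr=0.1):
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / n                       # ordinary logistic gradient
        gap = p[group == 1].mean() - p[group == 0].mean()   # score gap between groups
        s = p * (1 - p)                                     # derivative of the sigmoid
        grad_gap = ((X[group == 1] * s[group == 1, None]).mean(axis=0)
                    - (X[group == 0] * s[group == 0, None]).mean(axis=0))
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)    # penalized update
    p = sigmoid(X @ w)
    return (p.round() == y).mean(), p[group == 1].mean() - p[group == 0].mean()

for lam in [0.0, 5.0]:
    acc, gap = train(lam)
    print(f"penalty weight {lam}: accuracy = {acc:.2f}, score gap = {gap:.2f}")
```

Raising `lam` shrinks the score gap at some cost in accuracy, which is the trade-off described above in miniature.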
Transparency & accountability#
Transparency is another serious barrier – many modern algorithms are so complicated that even the researchers who built them cannot fully explain how they arrive at a given output. It is difficult to de-bias an algorithm if we don’t really understand how it is getting to its results.
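One common transparency aid, sketched here with synthetic data and hypothetical feature names, is permutation importance: shuffle each feature in turn and measure how much the model’s performance drops, which at least tells us which inputs the model is leaning on and which might be acting as proxies.

```python
# Sketch of permutation importance on a toy model; data and feature names
# are synthetic placeholders, not any real risk-assessment tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(0, 1, (2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000)) > 0
feature_names = ["prior_offenses", "age", "employment", "zip_code"]  # hypothetical

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
# Features that dominate the predictions deserve extra scrutiny, especially
# ones that may stand in for sensitive attributes (e.g., geography).
```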
Finally, who should be held accountable for biased algorithms? In the case of COMPAS, is it the company that made COMPAS? The actors in the justice system who used it? Someone else? No one? The debate is still very much open on who should be held responsible for the results of algorithms. For example, who should be responsible for the spread of misinformation, or hate speech, or videos that incite violence online? The platform? The maker of the content? The team that built the algorithm? The person who consumed it?
Public discourse#
All of these problems are serious and there are no easy answers. A big barrier to improvement in this area is the fact that many people still consider algorithms to be unbiased, or at least less biased than humans. The more awareness we build around both the problems and their urgency, the more we can at least have conversations about addressing them, and hopefully continue to develop methodologies, protections, and systems that make our algorithms fairer going forward.