## Highlights

Thus, evidence-based decision-making is only as reliable as the evidence on which it is based, and high quality examples are critically important to machine learning. The fact that machine learning is “evidence-based” by no means ensures that it will lead to accurate, reliable, or fair decisions. ([View Highlight](https://read.readwise.io/read/01j7a5gwdkv0dcnx40pt1f6px6))

---

Humans are also unlikely to make decisions that are obviously absurd, but this could happen with automated decision-making, perhaps due to erroneous data. ([View Highlight](https://read.readwise.io/read/01j7a5jhmz19sdk6v2s9qcky95))

---

Amazon uses a data-driven system to determine the neighborhoods in which to offer free same-day delivery. A 2016 investigation found stark disparities in the demographic makeup of these neighborhoods: in many U.S. cities, White residents were more than twice as likely as Black residents to live in one of the qualifying neighborhoods. ([View Highlight](https://read.readwise.io/read/01j7a6bcmsys6tf57tpjcs49kh))

---

When we observe disparities, it doesn’t imply that the designer of the system intended for such inequalities to arise. Looking beyond intent, it’s important to understand when observed disparities can be considered discrimination. In turn, two key questions to ask are whether the disparities are justified and whether they are harmful. ([View Highlight](https://read.readwise.io/read/01j7a6dr4m10epnj0esnkv2ggg))

---

Prediction can take the form of classification (determining whether a piece of email is spam), regression (assigning risk scores to defendants), or information retrieval (finding documents that best match a search query). ([View Highlight](https://read.readwise.io/read/01j7a7ze2grd0y27nwdhnrepkr))

---

A follow-up paper built on this idea and showed mathematically how feedback loops occur when data discovered on the basis of predictions are used to update the model (Danielle Ensign et al., “Runaway Feedback Loops in Predictive Policing,” *arXiv preprint arXiv:1706.09847*, 2017). The paper also shows how to tweak the model to avoid feedback loops in a simulated setting: by quantifying how surprising an observation of crime is given the predictions, and only updating the model in response to surprising events. ([View Highlight](https://read.readwise.io/read/01j7a9b34n2f27wrxghac99hkt)) A toy sketch of this surprise-based update follows the highlights below.

---

In many cases, we cannot achieve any reasonable notion of fairness through changes to decision-making alone; we need to change the conditions under which these decisions are made. In other cases, the very purpose of the system might be oppressive, and we should ask whether it should be deployed at all. ([View Highlight](https://read.readwise.io/read/01j7a9sgz2nb0ct3n4826h964d))

---

We can learn a lot from the so-called social model of disability, which views a predicted difference in a disabled person’s ability to excel on the job as the result of a lack of appropriate accommodations (an accessible workplace, necessary equipment, flexible working arrangements) rather than any inherent capacity of the person. A person is only disabled in the sense that we have not built physical environments or adopted appropriate policies to ensure their equal participation. ([View Highlight](https://read.readwise.io/read/01j7a9vnbawn4ycx54zw13zekk))

---

It may not be ethical to deploy an automated decision-making system at all if the underlying conditions are unjust and the automated system would only serve to reify them. Or a system may be ill-conceived, and its intended purpose may be unjust, even if it were to work flawlessly and perform equally well for everyone. The question of which automated systems should be deployed shouldn’t be left to the logic (and whims) of the marketplace. For example, we may want to regulate the police’s access to facial recognition. Our civil rights—freedom of movement and association—are threatened by these technologies both when they fail and when they work well. ([View Highlight](https://read.readwise.io/read/01j7a9wxrqnxrhzq5hs67gh5f7))

---

A talk by Kate Crawford lays out the differences (Kate Crawford, “The Trouble with Bias,” NeurIPS keynote, 2017, [https://www.youtube.com/watch?v=fMym_BKWQzk](https://www.youtube.com/watch?v=fMym_BKWQzk)). When decision-making systems in criminal justice, health care, etc. are discriminatory, they create *allocative harms*, which are caused when a system withholds an opportunity or a resource from certain groups. In contrast, the other examples—stereotype perpetuation and cultural denigration—are examples of *representational harms*, which occur when systems reinforce the subordination of some groups along the lines of identity—race, class, gender, etc. ([View Highlight](https://read.readwise.io/read/01j7a9z0kcyjr0cwag1r5gzh2q))

---
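The sketch below illustrates the feedback-loop idea from the Ensign et al. highlight: a naive model that reinforces itself with every crime it discovers, versus a variant that only keeps observations in proportion to how surprising they are given the current patrol allocation. It is a minimal toy under assumed conditions, not the paper's algorithm; the two-district setup, the `TRUE_RATE` values, and the specific acceptance rule for "surprising" observations are all assumptions made for this sketch.

```python
import random

# Toy two-district simulation of a predictive-policing feedback loop.
# Illustrative sketch only, not the algorithm from Ensign et al. (2017).
# TRUE_RATE gives the assumed per-visit chance of observing crime in each district.
TRUE_RATE = {"A": 0.30, "B": 0.25}

def simulate(rounds=10_000, surprise_correction=True, seed=0):
    rng = random.Random(seed)
    # discovered[d]: crimes the model "knows about" in district d.
    # Start with one each so the allocation probabilities are defined.
    discovered = {"A": 1, "B": 1}
    for _ in range(rounds):
        total = discovered["A"] + discovered["B"]
        # Allocate today's patrol in proportion to discovered crime
        # (urn-style: more past discoveries -> more patrols).
        district = "A" if rng.random() < discovered["A"] / total else "B"
        # Crime can only be observed where police are sent.
        if rng.random() >= TRUE_RATE[district]:
            continue
        share = discovered[district] / total
        if surprise_correction:
            # Keep the observation with probability (1 - share): crime found
            # where we already patrol heavily is unsurprising and is mostly
            # discarded, which damps the feedback loop.
            if rng.random() < 1 - share:
                discovered[district] += 1
        else:
            # Naive update: every discovered crime reinforces the very
            # allocation that produced it.
            discovered[district] += 1
    return discovered

if __name__ == "__main__":
    print("naive update:      ", simulate(surprise_correction=False))
    print("surprise-weighted: ", simulate(surprise_correction=True))
```

In this toy setting, the naive update tends to drive patrols overwhelmingly toward one district even though the underlying rates differ only slightly, which is the runaway behavior the paper formalizes, while the surprise-weighted update keeps the allocation roughly in line with the relative underlying rates.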