Popular media loves to talk about “artificial intelligence” these days. It is a hot topic, one riddled with misunderstanding.

Artificial intelligence is a misnomer in our field – practitioners tend to call this emerging area “machine learning”, because the jury is still out on whether there is any real intelligence behind it.

Ask any researcher how easy it is to bias a survey through which question we ask, how we phrase it, and which questions come before and after it, and you will begin to understand how decisions made by computers can be biased in similar ways. Simply put, I can ask you “Are you feeling well?” or I can ask you “How are you feeling?” The first question anchors your context, so your answer relates to “well”. The second, lacking that context, could elicit answers like “hungry” or “with my fingers.”

In this way, machine learning models blatantly inherit, and worse, can magnify, the systemic oppressions inherent in how the people who design them think.
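A toy sketch can make this concrete. The data, feature names, and “model” below are entirely invented for illustration: a trivial learner that memorizes historical decisions will faithfully reproduce whatever bias those decisions encode, even when qualifications are identical.

```python
# Hypothetical example: a trivial "model" that memorizes past hiring
# decisions. All data and feature names here are invented.
from collections import Counter

# Biased historical decisions: identical qualifications, different outcomes
# depending only on "group".
training_data = [
    ({"degree": True, "group": "A"}, "hire"),
    ({"degree": True, "group": "A"}, "hire"),
    ({"degree": True, "group": "B"}, "reject"),
    ({"degree": True, "group": "B"}, "reject"),
]

def train(rows):
    """Learn the majority label for each feature combination."""
    buckets = {}
    for features, label in rows:
        key = tuple(sorted(features.items()))
        buckets.setdefault(key, Counter())[label] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in buckets.items()}

model = train(training_data)

def predict(features):
    return model[tuple(sorted(features.items()))]

# Same qualifications, different "group": the model reproduces the bias.
print(predict({"degree": True, "group": "A"}))  # hire
print(predict({"degree": True, "group": "B"}))  # reject
```

Nothing in the code is malicious; the discrimination lives entirely in the training data, which is exactly why it is so easy to ship without noticing.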

That’s right, our machine learning models are racist, classist, ageist, misogynistic, heteronormative and cisnormative just like we are.

The software field is suffering a diversity crisis. Small monocultures have a tremendous impact on how we interact with the world through technology, and on the effects technology has on us: everything from the digital representations of our personhood in cyberspace, and therefore how others perceive us, to how our own memories are shaped by algorithms that remind us of, or draw our attention to, a similarly distorted view of our past.

For developers wishing to raise their awareness and avoid some of these traps, I recommend two talks by Carina Zona. “Schemas for the Real World” discusses how we codify oppression in our databases, and “Consequences of an Insightful Algorithm” discusses examples of how naive algorithms at our favourite Internet companies are violating privacy and further oppressing people in the world around us, often without our direct knowledge.

Categories: Developer