Coded Bias Film Reveals the Faults in Our Technology

If artificial intelligence is the future, why does AI so often fall short when it comes to the biases embedded in its algorithms?

Coded Bias director Shalini Kantayya took to the Spark! virtual stage this month to explore this question and more during a Q&A following an exclusive screening of her documentary, which premiered at the Sundance Film Festival.

“The startling thing that I learned in making this film is that we’re not only outsourcing our decision-making to machines, but these same machines have not been vetted for racial bias, gender bias, for discrimination, or that they’ll not do unintended harm,” she said.

The film spotlights MIT Media Lab researcher Joy Buolamwini and her discovery that facial recognition does not see dark-skinned faces accurately. It highlights not only Buolamwini’s revelation but also her push for the first-ever U.S. legislation to govern against bias in algorithms.

Fast forward to the present day, and Kantayya revealed what she believes would be the next step toward algorithmic justice: an FDA for algorithms, “some way of vetting products before they go to scale, for proving that they’ll do no harm before they’re commercially available,” she said.

As for what we can do? Kantayya’s advice is to always trust your moral compass.

“The most amazing thing about Coded Bias is that I’m reminded that everyday people can make a difference … everyday people challenged these technologies and won.”

If you missed the roundup of questions from students, faculty, staff, and alumni, you can watch the recap here.