Screening Free For Vermonters: Documentary Explores Bias Coded In Algorithms, Technology
Racial bias — implicit, subconscious or out in the open — is a serious human problem. So serious that it's been detected in an unexpected place: the world of artificial intelligence, computers and facial recognition technology. A documentary that's screening free for Vermonters through March 8 delves into the problem. VPR’s Mitch Wertlieb discussed the documentary Coded Bias with Traci Griffith, an associate professor of media studies, journalism and digital arts at St. Michael's College. She moderated a December 2020 live-streamed panel discussion (https://youtu.be/awpfq7R6p20?t=286) with Coded Bias producer/director Shalini Kantayya, cast member and NYU journalism professor Meredith Broussard and UVM philosophy professor Randall Harp. A transcript of their conversation has been edited for clarity.
Mitch Wertlieb: Coded Bias follows an MIT Media Lab researcher [Joy Buolamwini] who has discovered that many facial recognition technologies fail more often on darker-skinned faces. What prompted her to look into this problem in the first place?
Traci Griffith: [Buolamwini] was working in the MIT Media Lab as a researcher, and part of her job entailed using facial recognition technology. And she is a Black woman. She figured out that it was failing to recognize her face, and she realized this was a problem with the technology. It was inherent in the technology.
How did she find out that it was having trouble reading her face?
Well, it's interesting. When you view the documentary, she appears in front of the screen and it fails to “read” her face. She then places a white mask — it's kind of like a mime mask, that's the best way I could describe it — but she places this mask in front of her face, and then the technology reads it. It's amazing when you see it actually happening on screen.
What surprises me so much about this is — I guess it was an assumption that I had — that when human biases are taken out of the equation, things like racial bias should not be an issue. But I'm guessing that these programs are created by humans with their own biases, and I'm wondering if that's what this researcher discovered. Kind of that old “garbage in, garbage out” kind of situation, when it comes to computers. Is it that simple, or is it more complex than that?
I think your assumption is very correct, in that you would feel that maybe the use of machines might even out the playing field. The problem is that artificial intelligence is man-made, right? It's man-made! And the vast number of programmers, and those who are creating this technology, are white males. And so the technology is created through their lens.
Our inherent biases, even those we don't recognize, are then built into the process of the machine. And the machine-learned algorithms that are created by humans, are just as biased as we are.
What did this researcher do after she discovered this problem? Did she bring it to the attention of people at MIT?
Not just MIT. She actually brought it to the attention of Congress, because a lot of these systems are being used by our government.
Facial recognition technology is rampant across the United States. We are often being surveilled without our knowledge, or without our understanding of what exactly that could mean for us.
And so as we walk around town, as we walk around cities, there are biometric systems in place used for general surveillance, and we don't even recognize that it's happening. And so it doesn't require our knowledge. It doesn't require our consent. It doesn't require our participation. [Through] our simply being, walking around town, we are being surveilled. And these biometric systems — facial recognition, in this situation — are being used to identify people, and your whereabouts, and when you're there and how often you go there, et cetera.
These surveillance systems know what we're doing. And they're being used largely by our government, mainly by police, to monitor where people might go, or where they might be.
We see a lot of this after the Jan. 6 uprising at the Capitol. Facial recognition technology is being used to identify people who were there.
Now, some could say that's good. Some could say that's bad. But we need to consider the bias that is inherent in this technology if we're going to be using it for such kinds of situations.
One of the issues that Black Lives Matter protesters have continually brought up is, we need to be seen. We need to be seen as citizens of this country who have the same rights as white people. We’re not being seen.
And it seems to me, this problem here, with this facial recognition technology not even being able to acknowledge this Black woman's face, shows that this problem goes well beyond Black Lives Matter.
It absolutely does. And it's not just about being seen, but it's being seen for who you are. Because we've also found that there's a lot of fallibility in this facial recognition technology. So even if you are seen, a number of problems come about with particular groups of people — and in this situation, Black people in particular — that particular groups are not seen … [or if they are, they are seen as] “the inaccuracy.” That's part of the issue as well. So you're seen, but you're not seen for who you actually are. You're misidentified. So it compounds the problem.
Yes, the Black Lives Matter movement is pushing the idea of being seen as being recognized, but it's also about being recognized for who you are and not misidentified. [So you don't] get the knock on the door from the police officer, because your face has been recognized as being someplace that you weren't, because you've been misidentified. And so it is about accuracy, it's about being seen, but it's also about being seen for who you are, in a way that recognizes you as the individual that you are.
This film is absolutely amazing. I will tell you, there's so many aspects of it that you just don't even think about, or recognize. And it opens your eyes to some of the problems with this AI technology. It's very, very interesting.