SCIENCE FOR PEOPLE WHO GIVE A SHIT
July 8, 2024

Error 404: AI Ethics Not Found

When is a cancer scare, a rejected mortgage loan, a false arrest, or predictive grading more than a glitch in A.I.?

That's today's big question, and my guest is Meredith Broussard.

Meredith is a data journalist and associate professor at the Arthur L. Carter Journalism Institute of New York University, Research Director at the NYU Alliance for Public Interest Technology and the author of several books I loved, including More Than a Glitch: Confronting Race, Gender and Ability Bias in Tech, and Artificial Unintelligence: How Computers Misunderstand the World.

Her academic research focuses on A.I. in investigative reporting and ethical A.I., with a particular interest in using data analysis for social good.

She's a former features editor at the Philadelphia Inquirer. She's also worked as a software developer at AT&T Bell Labs and at the MIT Media Lab. Meredith's features and essays have appeared in The Atlantic, The New York Times, Slate, and other outlets.

If you have ever turned on a computer or used the internet in some way to apply for something, or literally anything, this one is for you.

-----------

Have feedback or questions? Tweet us, or send a message to questions@importantnotimportant.com

New here? Get started with our fan favorite episodes at podcast.importantnotimportant.com.

-----------

INI Book Club:

 

Links:

 

Follow us:

 

Advertise with us: importantnotimportant.com/c/sponsors

Mentioned in this episode:

Support Our Work

Transcript

Quinn: [00:00:00] When is a cancer scare, a rejected mortgage loan, a false arrest, or predictive grading more than a glitch in A.I.? That's today's big question, and my guest is Meredith Broussard. Meredith is a data journalist and associate professor at the Arthur L. Carter Journalism Institute of New York University, Research Director at the NYU Alliance for Public Interest Technology and the author of several books I loved, including More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, and Artificial Unintelligence: How Computers Misunderstand the World. Her academic research focuses on A.I. in investigative reporting and ethical A.I., with a particular interest in using data analysis for social good. Which would really just be great. Meredith has appeared in the 2020 documentary Coded Bias, an official [00:01:00] selection of the Sundance Film Festival. She's an affiliate faculty member at the Moore-Sloan Data Science Environment at the NYU Center for Data Science and a 2019 Reynolds Journalism Institute fellow.

 

She's a former features editor at the Philadelphia Inquirer. She's also worked as a software developer at AT&T Bell Labs and at the MIT Media Lab. Meredith's features and essays have appeared in The Atlantic, The New York Times, Slate, and other outlets. If you have ever gone out into the world, or even just stayed at home, and turned on a computer, or used the internet in some way to apply for something, or literally anything, this one is for you.

 

Welcome to Important Not Important. My name is Quinn Emmett, and this is science for people who give a shit. Our mission is to understand and unfuck the future, and our goal is to help you answer the question, what can I do?[00:02:00]

 

Meredith, welcome to the show.

 

Meredith Broussard: Thank you so much for having me. It's great to be with you.

 

Quinn: A hundred percent. Like I said, I will try not to spend the entire time trying to pick apart what's on your bookshelves, but it looks interesting. Is that a mushroom up there?

 

Meredith Broussard: Entirely possible. Up here?

 

Quinn: Yeah.

 

Meredith Broussard: Oh, yeah. That's a paper flower. The little girls who live down the hall from me on one rainy day decided they were having a craft sale. So I bought a paper flower from the little girls down the hall.

 

Quinn: That's wonderful. I mean, you couldn't spend money in a better way. That's amazing.

 

Meredith Broussard: Yeah, exactly. It's the best 50 cents I ever spent.

 

Quinn: 100%. Two stories and then we'll get to, you know, A.I. ethics. One, my uncle has a story about just walking out the door when he was little with all of, I mean, not many, they were pretty poor, but my grandmother's candles and I guess candelabras and such and trying to sell them to the neighbors who immediately knew what was going on because he was five.

 

I don't know, something like that. [00:03:00] So sure. And then my daughter recently, similarly she's out at camp, like in the woods somewhere in town. And her and her friend have decided they're going to start making and selling, you know, friendship bracelets at the camp store. She's gone full capitalist very quickly.

 

Meredith Broussard: Good for her.

 

Quinn: As is typical, one of my sons is trying to figure out why he doesn't get a cut of it without participating at all. And I'm like, this is the thing. This is the problem.

 

Meredith Broussard: Yeah, yeah.

 

Quinn: Pretty standard.

 

Meredith Broussard: Have you taught him about management yet? And you know, possibly about algorithmic surveillance?

 

So he can convince his sister and her friend to join a platform that he started, and he can take a cut.

 

Quinn: Right, for doing what work, in particular? Right, yeah, it's coming, they're killing me. Anyways, this is so great to have you, I love your paper flower. So, Meredith, we usually start with one ridiculous question to get things going, and I've asked it almost 200 times, and at some point, I keep [00:04:00] feeling like I should retire it because it's pretty tongue in cheek, but at the same time, it's fun, so why not?

 

Meredith, why are you vital to the survival of the species? And I encourage you to be bold and honest.

 

Meredith Broussard: Well, one of the things that I really care about is empowering people around technology. Algorithms are increasingly being used to make decisions on our behalf, and those decisions are often unfair or unjust.

 

And you need to know a little bit about algorithms and how A.I. works in order to push back against those unfair algorithmic decisions. And so one of the things that I do in my work as a data journalist and an algorithmic accountability reporter is I help people to understand the inner workings and outer limits of A.I. and algorithms.

 

So that more people can enter into the conversation and we can hopefully get rid [00:05:00] of a lot of the algorithms that are making unfair decisions about people's lives.

 

Quinn: Well, I think that's fantastic. Thank you for sharing that off the cuff. That was impressive. Some people just laugh at me and then usually we get something profound out of it, but that was great.

 

And I guess that really, that really gets to the crux of it, right? We need more people realizing they have been using versions of these tools for 10 years, and so they need to be part of the conversation because right now the conversation is mostly a very specific group of people who are making and profiting from these things and, you know, benefiting some more folks, but mostly hurting other folks.

 

Do I have that close?

 

Meredith Broussard: Yeah, absolutely. Absolutely. Algorithms have been held up for a long time as being kind of more fair or more objective or more neutral, and that is not at all the case. One of the things that Cathy O'Neil, who wrote Weapons of Math Destruction, has often said is that algorithms are opinions embedded in code.

 

Quinn: So [00:06:00] good. It sums the whole thing up.

 

Meredith Broussard: Yeah, they’re opinions. And so we do not benefit by putting all of our faith into algorithms. The ideal situation is human decision making coupled with algorithmic decision making, not one or the other.

 

Quinn: So I like to, less so for you, more for me and for our listeners who run the gamut of, you know, young students to wealthy philanthropists to policymakers and bartenders.

 

It doesn't matter. But I find it does help a little bit to set the tone. And I try to do it in my own writing as well, and in one-on-one conversations I have, with just, you know, a brief sort of cliff notes understanding of why everyday things are the way they are, at least the ones that someone might be trying to interact with, right?

 

If they're like, I want to donate to this. Why is it so hard to do this? And we talk about it briefly, how we got to the systems we have today, who has the power behind them. Like you're saying, who gets to be in the [00:07:00] room, who benefits from them. Who is hurt, and what the costs are, whether we're paying them or not, and obviously the climate is a big part of that.

 

And there was such a great specific example, I mean, among a million of them in your most recent book, More Than a Glitch. I wonder if you can talk about what Y2Gay is, and why it's such a pain in the ass, but also emblematic of these systems as a whole?

 

Meredith Broussard: The Y2Gay example is in the chapter of More Than a Glitch, where I talk about gender and databases.

 

So, one of the things that most people don't realize is that databases that we use today have these 1950s ideas about gender embedded in them, right? So computers came to prominence in the 60s really and that's when a lot of the systems that we use today, still use today, got developed, right?

 

So, computers used to be these giant machines that would [00:08:00] take up, you know, multiple rooms, and now they're, you know, so small they can fit inside your watch. That's really cool. But the thing is, computers still function the same basic way, right? It's still about electricity and gates and binary, and then there's a whole bunch of other stuff running on top of it.

 

So my last book, Artificial Unintelligence, has a whole section about how does the physical reality of a computer work, and then More Than A Glitch is about race, gender, and ability bias in tech. And I started thinking about the way that again, 1950s ideas about gender get embedded in databases.

 

So when I was taught to make databases back in the 80s and 90s, I was taught to optimize my code for a certain kind of normative aesthetic. This was not just aesthetic, it was also about memory, because memory used to be [00:09:00] expensive. Computers used to be expensive. So one of the ideas behind optimizing your code was to make it run in as small a package as possible, because that was the most economical way to do it, because if you ate up all the memory with your code, then, you know, your computer would crash and your program wouldn't work. One of the things you think about when you're trying to optimize your code is you think about how much space does a variable take up, right?

 

Because code depends on variables and the variables are of different types. You know, you can have a number, you can have a string, you can have a binary. And these variables take up different amounts of space on the computer, different amounts of memory space. So a string would take up like this much space and then a number would take up this much space and then a binary, zero or one, would take up this much space.

 

So when you thought about the entries in your database, the individual records, you had to decide, okay, what kind of [00:10:00] record is this going to be? Like what type of variable is going to go into this record and gender generally got recorded as a binary because in the 1950s and then in the 1960s, people thought that gender was binary.

 

Well, now we know that gender is a spectrum. And if you're going to have gender in a database, you're going to need to make it an editable field. But the systems that we're using, the computational systems again were designed in the 1960s. So, you know, your bank application is pretty much still running, there are some levels of your banking application that are still running on Fortran, which is a programming language that maybe you've never heard of. It's really, really old. Fortran programmers get paid so much money right now because it's such an old language that nobody uses it anymore. But there are still things that have not been rewritten in bank systems and insurance systems.

 

They're still written in Fortran. It's a, [00:11:00] you know, it's a major technical debt issue, but anyway, we are working with these very old systems. And so when we make new code, it gets stapled on top of the old systems. And you have to think about, how does the old system talk to the new system? Well, you got to think about these variables.

 

And if you try and feed a string, you know, a sequence of letters to a program that is expecting a binary, it's not going to work. Right, so the idea that gender is binary goes back a very long way and it is hard coded into a lot of systems. So, the Y2Gay issue came up as a result of marriage equality.
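To make that concrete, here is a minimal sketch in Python with SQLite of the schema choice she's describing. The table and column names are hypothetical, not taken from any real banking or transit system; the point is only the difference between a gender column locked to a one-bit flag and an editable text field, and what happens when a record that doesn't fit the old assumption hits the old schema.

```python
# A hypothetical sketch (not any real system's schema) of the difference between
# gender stored as a fixed binary flag and gender stored as an editable field.
import sqlite3

conn = sqlite3.connect(":memory:")

# 1960s-style design: gender squeezed into a single binary column to save memory.
conn.execute("""
    CREATE TABLE customers_legacy (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        gender INTEGER NOT NULL CHECK (gender IN (0, 1))  -- 0 or 1, nothing else fits
    )
""")

# A record that doesn't match the old assumption simply can't be stored:
# feeding a string to a field that expects a binary fails.
try:
    conn.execute("INSERT INTO customers_legacy (name, gender) VALUES (?, ?)",
                 ("Alex", "nonbinary"))
    conn.commit()
except sqlite3.IntegrityError as err:
    print("Legacy schema rejects the record:", err)

# A current design: gender is an editable text field the person can update.
conn.execute("""
    CREATE TABLE customers (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        gender TEXT           -- free text, nullable, and updatable
    )
""")
conn.execute("INSERT INTO customers (name, gender) VALUES (?, ?)", ("Alex", "nonbinary"))
conn.execute("UPDATE customers SET gender = ? WHERE name = ?", ("agender", "Alex"))
conn.commit()
print(conn.execute("SELECT name, gender FROM customers").fetchall())
```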

 

Because in the same way that gender was hard coded as a binary, marital relationships in software systems were encoded as being between one [00:12:00] man and one woman, right? It was a compulsory heterosexuality situation. And once gay marriage became legal across the U.S., people had to redesign their software systems because, you know, there were all these rules in place saying, nope, we're not going to have a marriage relationship between these two users unless one of them is a man and one of them is a woman.

 

Well, okay, marriage equality. Now we can have marriages between two men or two women, but we needed to redesign the software systems. So it was called Y2Gay as a kind of nod to Y2K, which was the previous time that people discovered that there was a kind of shortsighted way that our databases have been constructed.

 

So until 1999, people were encoding the date, the year, as only [00:13:00] two digits, and they realized that when we switched over to the year 2000, the prefix 19 was no longer going to be the default. And computers were going to think that the year was 1900, not 2000, and that was going to break a whole bunch of things.
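Here's a tiny illustration of the two-digit-year shortcut she's describing. The function names and the pivot value are made up for the example; the point is just that code assuming a 19 prefix turns the year 2000 into 1900.

```python
# Hypothetical example of a Y2K-style two-digit year field.
def expand_year_legacy(two_digit_year: int) -> int:
    """1960s-style assumption: every year starts with 19."""
    return 1900 + two_digit_year

print(expand_year_legacy(99))  # 1999 -- fine
print(expand_year_legacy(0))   # 1900 -- but the calendar just rolled over to 2000

# One common Y2K-era fix: pick a pivot so small values map to 20xx instead.
def expand_year_windowed(two_digit_year: int, pivot: int = 50) -> int:
    return (2000 if two_digit_year < pivot else 1900) + two_digit_year

print(expand_year_windowed(0))   # 2000
print(expand_year_windowed(99))  # 1999
```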

 

So it was a big software programming nightmare. So anyway, the big idea here is that human values are coded into software systems in all kinds of ways that we don't necessarily expect, and we need to reflect on those human values, and we need to update our computer systems accordingly.

 

Quinn: I love that.

 

Thank you for that. I remember where I was sitting for Y2K, for both Y2K and Y2Gay. I remember Y2K, sitting there in the night at a local concert or something and thinking, well, here we go, let's find out. And of course, neither was ever as simple in the buildup or in the [00:14:00] aftermath of, you know, the White House shining rainbow lights that night, right?

 

It's decades and decades and decades of work and loss and suffering and struggle to get to that point. And then again, everything, even the most mundane things that, like you said, are hard coded into how we operate as a society and an economy have to be rewritten. And I noticed early on, both in the book and in what you were just commenting on, that it's not just that we suddenly had to offer more options, right, than male or female or man and woman.

 

It was that it actually needs to be editable. And I mean, I think that's obvious, and you spend some time thinking it through in the book, but let's dive right into that: why does it need to be editable? And why is that such a difficult technical challenge as well as a societal challenge in so many places?

 

Obviously, we've got deadnaming, we've got any number of just [00:15:00] things that are kind of horrific, we've decided and understandably so on the societal scale and on the ethical scale but are difficult to change, but we need to do them. So I wonder if you can talk about that version of it because again, that is a technical challenge, is it not?

 

Meredith Broussard: I got interested in this kind of accidentally because I was trying to do something that was not particularly pro-social. So I was trying to ride the train using my husband's rail pass. So we lived in Philadelphia at the time, and he would get this monthly SEPTA pass. SEPTA is the train system, the rail system, and you could get this monthly pass to ride the train an unlimited number of times, but it would have a sticker on it that said M or F.

 

And I wanted to use my husband's train pass, but he said, no, you're probably not going to be able to pass as an M, and I was kind of annoyed by this. And then I wondered, well, all right, I'm annoyed by this, but I should think [00:16:00] about what is the experience like for people who are not like me. And so I found out that there was an activist group called Philly RAGE, Riders Advocating for Gender Equality.

 

Quinn: Incredible name

 

Meredith Broussard: I know, it's such a great acronym, right? They were trying to eliminate the gender stickers on the SEPTA passes. I thought this was a great idea. But trans and nonbinary folks and gender-nonconforming folks would have this problem when they went to get their rail passes, where they would face microaggressions because, you know, the person at the SEPTA window would give them a hard time about which sticker they were going to get, or would give them both stickers.

 

And then they had to, you know, get a hard time from the conductor. And it was just this situation that did not need to happen. There was all of this confusion and all these bad feelings, and there was no reason for people to have to go through this particular pain, because what are we talking about?

 

We're [00:17:00] talking about a totally unnecessary sticker on a train pass. And yes, more people should be riding public transit, period all the time. And yes, we need more funding for public transit in general. This was how I got interested in the issue of gender and software systems. Right. Because I was thinking about the kind of mechanisms of this rail pass.

 

I started thinking about what is it like for people who are not me? And I discovered this whole world of activism. And of course, because I think about how software systems are made, I started thinking about, okay, what are the backend changes that SEPTA would need to make? Because it's not just about the sticker.

 

There's also always some kind of backend system, computationally, that is driving people who make decisions about forms and stickers and what have you. And the thing about computers is, [00:18:00] computers are all about bureaucracy. And I'm a data journalist. I find stories in numbers. I use numbers to tell stories.

 

A lot of my stories have to do with bureaucracy and understanding it and understanding kind of technocratic decision making, which is way more interesting than it sounds.

 

Quinn: It is much more interesting. And we can dive into all that. I wonder, though, if we can move laterally in the same world, because you ran into a very similar thing. You're so good in your books and your writing at sharing your own personal stories.

 

And you wrote about, at the doctor's office, having a similar experience, but less about gender and more about racism. Could you explain that? Because again, I imagine same technical issues on the back end.

 

Meredith Broussard: Yeah, same kind of issue. One of the things that I did in one of the chapters in More Than a Glitch is I took my own mammograms.

 

And I ran them through an open source cancer detection A.I. in order to write about the state of the [00:19:00] art in A.I. based cancer detection. So if you were to read the mainstream media, you might have developed the impression that A.I. based cancer detection is right around the corner. Right. And the reality is way more complicated than that.

 

I got interested in this because I was diagnosed with breast cancer a few years ago. I'm totally fine now. I'm really grateful to all of the doctors, the medical professionals who took care of me. But one of the ways that I reacted when I got this news is I got really obsessed about reading everything I could, and especially about reading my entire electronic medical record.

 

And so, buried deep down in the electronic medical record, I found this note that said this scan was read by Dr. So-and-So and also by an A.I. I thought, what? What did this A.I. find? And who wrote this A.I.? What kind of bias is embedded in it? Right. Because all A.I. has human bias [00:20:00] embedded in it.

 

You know, what are the implications for my care? I had a lot of questions. But I kind of didn't pursue it at the time because I had cancer. I was kind of busy.

 

Quinn: In the middle of COVID, right?

 

Meredith Broussard: Yeah, I got diagnosed, I had surgery and then there was COVID and it was just like a huge nightmare.

 

Yes. Disaster on so many levels. So what I did, in the middle of all of this, when I was feeling a little better, is I decided to do this experiment. It was a replication experiment with an N of one, where I was going to take my mammograms, which I knew had cancer in them, and I was going to run them through this A.I. in order to write about the state of the art.

 

Well, it turns out that there were a couple of things going on. I was completely wrong about how I imagined that the A.I. would work. Which was really interesting and sort of instructive because many of us have completely wrong ideas about how A.I. works. Right. So [00:21:00] I thought that it would take my entire medical record, like my whole chart, all of my tests all of my notes from all of my doctors, all of my images, you know, the flat images and the videos and the ultrasounds and the mammograms that I thought it would take all of this process it and then give me like a diagnosis.

 

Totally wrong. What it does is it takes one flat image, right, which is kind of a, I don't know, it's a semicircle with kind of blobs inside of it. It's kind of X-ray looking. What it does is it takes this single flat image and draws a circle or a square around an area of concern. And then it gives a score, not a percent chance that something is malignant, but a score between zero and one.

 

And the doctor is given this image with the circle or square and the score. Importantly, the radiologist does not have access [00:22:00] to this data until after they've entered in their own diagnosis, right? So the process is, you know, you go, you get your mammogram, the mammogram data gets sent to the radiologist, the human radiologist reads it, enters in what they think is going on, and then after they press save, they get this A.I. image that has a circle or square in it.

 

Quinn: Sure.

 

Meredith Broussard: Totally different than what I expected. So if you had gotten the idea that A.I. based cancer detection is around the corner, like you would be totally justified at that based on mainstream media coverage, but that's also wrong, right? It's not actually around the corner.

 

There are all kinds of issues that have not yet been worked out. One of the issues that most people don't realize is that all A.I. systems, because A.I. is just math, right? These are mathematical systems, and the A.I. is giving you a [00:23:00] statistical likelihood that there is something malignant on this image.

 

Well, every A.I. system is set to have either a higher rate of false positives or a higher rate of false negatives.

 

Quinn: I thought this was so interesting.

 

Meredith Broussard: It's so fascinating, right? And people usually don't think about this. We think about A.I. as, you know, as being a kind of entity or we think about the Hollywood stuff around A.I., but that's not how it works.

 

It's just math. So this particular math has a setting. Now a false positive in cancer would mean that the system says, ah, maybe cancer, but there is no cancer. And a false negative would be the system says, nope, no cancer. And there is actually cancer. So people have agreed, rightfully, that the cost of a false negative in cancer detection is way higher than the cost of a false positive, right?[00:24:00]

 

We would much rather have a false positive than a false negative. And you know, both of them are terrible, by the way.
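To see what that "setting" looks like in practice, here is a small sketch with invented scores (nothing here comes from a real detection system). Moving a single decision threshold over the same zero-to-one outputs trades false negatives for false positives.

```python
# Invented scores for illustration: each pair is (AI score between 0 and 1, actually cancer?).
cases = [(0.05, False), (0.22, False), (0.35, True), (0.48, False),
         (0.55, True), (0.62, False), (0.81, True), (0.93, True)]

def confusion(threshold: float):
    """Count false positives and false negatives at a given score threshold."""
    fp = sum(1 for score, malignant in cases if score >= threshold and not malignant)
    fn = sum(1 for score, malignant in cases if score < threshold and malignant)
    return fp, fn

for threshold in (0.3, 0.5, 0.7):
    fp, fn = confusion(threshold)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

# Lowering the threshold flags more images as "maybe cancer": more false alarms,
# fewer missed cancers, which is the trade-off cancer screening deliberately makes.
```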

 

Quinn: Sure but getting a false positive doesn’t result in you basically going home and going about your day and it turns out you have cancer, right?

 

Meredith Broussard: Right. So false negative, you like, go home, you go about your day, then eventually, you know, it gets you.

 

But a false positive means they're like, oh, well, there might be something here. And then you have to go into the pipeline of additional tests, which I can tell you is a terrible pipeline, and you have several weeks usually of uncertainty and fear and terror. So both situations are really bad. We also have to think at this point about the funding, because there are economic incentives always when we're talking about A.I.

 

So let's think about who gets paid when radiologists read films and when A.I. reads films. So an A.I. [00:25:00] reads a film and nobody gets paid, right? A radiologist reads a film and the hospital gets paid, right? Because the radiologist is, you know, is a human being, is a laborer. And so let's think about, okay, well, who's incentivized to have more A.I. reading films over radiologists reading films? Well, it's not the patients, it's the insurance companies, because the insurance companies are the ones who are, you know, paying the hospitals for the radiologists to read films. The hospitals certainly are not incentivized because the hospitals get paid when the radiologists work.

 

And I can say from the perspective of a cancer patient, I do not want a medical system where, you know, you kind of step up and have your test in a machine. And then, you know, you get like a printout that tells you the result. That's terrible, I want a medical [00:26:00] professional who is interpreting the results for me, who's telling me things, who’s caring for me especially when it's a really high stakes decision like cancer.

 

So we really need to think harder about when it's appropriate to use A.I. It's not just a matter of can we, but a matter of should we? Absolutely, yes, we should use all kinds of mechanisms to try and have earlier detection and better detection, but the mechanisms that we're using right now are not necessarily going to be our salvation.

 

Quinn: I really appreciate the nuance you bring to this, and obviously your personal experiences enhance that and complement it. I've had a few really wonderful semi-related conversations over the past year. Quite a few in private and a few in public that I'm thinking about right now. And one is with an author, Professor Susan Liautaud at Stanford, who wrote a book and teaches about, basically, ethics in a rapidly changing world [00:27:00] and building and using frameworks more often in your life.

 

So that when you encounter things on the margins, like we're constantly encountering, you're more ready to apply those and be ready to tackle those. And another one with Professor Emma Pierson at Cornell Tech, I believe, who, as she put it, and I'll probably mangle that, likes to use new technology, like this and machine learning and such, to answer old, hard questions.

 

It was a great conversation. So I read most of your work, and obviously we're talking here and you've given so many wonderful interviews, but you know, I come back to this one part of, I think it was an MIT piece, which just in sort of a throwaway line said: despite, or I guess probably because, you're a working data scientist and journalist, you spend a lot of time thinking about problems that mathematics can't solve.

 

And I guess, as always, you know, with anything we can say satellites or whatever it might be, or carbon removal, we can obviously throw the [00:28:00] word yet at the end of that, because things do change and we pursue them, whether or not we're paying the cost along the way. But I wonder if you can help me understand that and how you do that while still not stepping fully into pessimism or doomerism or what it may be.

 

How do we practically push forward, but help people understand, like you were saying, this is not the thing yet, especially for something with so many stakes, like healthcare.

 

Meredith Broussard: Well, there's a certain way that privilege operates inside mathematics. And computer science is a descendant of mathematics.

 

And this privilege, the way it operates, is that when you are an elite mathematician or an elite computer scientist, nobody tells you no, right? Which is super weird because everywhere else in the world, people tell you no all the time, but for some reason [00:29:00] people, I mean, probably because mathematicians have put out this idea for centuries.

 

Like they've put out this idea that math is superior to other disciplines. And that, you know, to be a really good mathematician, you deserve to go off and kind of not be bothered with the petty concerns of real life, that you just need to live this life of the mind in the ivory tower.

 

And, you know, it's really great if you can do that, but you know, not everybody can. And it's also just not, it's not very responsible at a certain point. So this kind of self obsession that mathematics has turns into something that I call techno-chauvinism, the idea that technological solutions, mathematical solutions are superior to others.

 

And what I would argue is that we need to add nuance to that. We need to think about using [00:30:00] the right tool for the task, because sometimes the right tool for the task is undoubtedly a computer. And sometimes it's something simple, like a book in the hands of a child sitting on a parent's lap. You know, one is not inherently better than the other.

 

So, because mathematicians are rarely told no, computer scientists are also rarely told no. You know, you start to think when you're in math or computer science or physics or, you know, related disciplines, you start to think that, oh, well, I am always going to have the answer for everything.

 

And that is not a really safe way to operate. You know, I mean, let's think about Shakespeare. There's this thing called hubris, and we can then look at artificial intelligence systems, and we can look at the ways that A.I. systems are actually harming people right now. So take the case of facial recognition, for example.

 

Facial recognition is a kind of A.I. Well, [00:31:00] what was discovered is that facial recognition is better at recognizing men than women. It's better at recognizing light skin than dark skin. It is best of all at recognizing men with light skin. It's worst of all at recognizing women with dark skin. And trans and non binary folks are generally not recognized at all.

 

So this is a really biased technology. The reason for this bias sometimes has to do with the training data that is used to train the facial recognition system, right? So if you don't have a sufficient variety of skin tones in the training data, then the system itself is not going to, you know, it's not going to recognize a wide range of skin tones, because again, we're talking about mathematical systems here that are constructed, and they are constructed by people.

 

The way we make a machine learning system, like a facial recognition system is the same every single time. What we do is we take a whole bunch of data, as much data as we [00:32:00] can find, and we plunk it into the computer, and we say, computer, make a model. Computer makes a model. The model shows the mathematical patterns in the data, and then we can use that model to do all kinds of cool things, to make predictions.

 

Or decisions or generate new text or images or audio or video, right? So the facial recognition system is trained on data, you know, that shows human faces and then it recognizes human faces. And again, the problem can be that it does not recognize a sufficient range of skin tones because of the training data, right?
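Here is a toy version of that recipe, assuming scikit-learn and NumPy are available and using entirely synthetic "image features" in place of real faces. It shows only the point she's making: a model trained on skewed data reproduces the patterns in that data and performs worse on the group the data under-represents.

```python
# Toy version of "dump in the data, computer makes a model": all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(center, n):
    """Fake 'image features' for one demographic group, half faces / half not-faces."""
    faces = rng.normal(center, 1.0, size=(n, 5))
    not_faces = rng.normal(center + 4.0, 1.0, size=(n, 5))
    X = np.vstack([faces, not_faces])
    y = np.array([1] * n + [0] * n)   # 1 = face, 0 = not a face
    return X, y

# Training set heavily skewed toward group A.
Xa, ya = make_group(center=0.0, n=500)    # well represented
Xb, yb = make_group(center=10.0, n=10)    # barely represented
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: the under-represented group fares worse.
Xa_test, ya_test = make_group(0.0, 200)
Xb_test, yb_test = make_group(10.0, 200)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```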

 

So most people would look at this and say, well, okay, let's just put in, you know, better training data. Right. And yes, this would make a facial recognition system more accurate, but this is not going to solve the problem entirely because then we have to think about the ways that facial recognition is used.

 

So we have low risk and high risk uses of facial recognition. Now, a low risk use of facial recognition is probably something like [00:33:00] using facial recognition on my phone to unlock it, usually doesn't work for me, but it's not really a big deal because I just put in my passcode and you know, I go on about my day.

 

So I would call that a low risk use, but a high risk use of facial recognition might be police using facial recognition on real time video surveillance feeds. Because what is that going to do? Well, it's going to misidentify women and people of color more often, and it's going to get more people caught up in the justice system without cause.

 

Right? So that's going to have extremely negative effects. And I would argue that this is actually a case where we should not be using facial recognition in policing at all.

 

Quinn: Yeah. I mean, it's easy to say that's one example, but at least these systems that have been sold to police departments and the TSA and things like that, they're catastrophically bad at the job they're supposed to do. Like, we [00:34:00] already know that, but even the ones that aren't, I guess, as publicly evident about how bad they are. Again, like you said, these systems have been designed and implemented and used at scale for years now. Again, you've got all the pre-crime type stuff.

 

Everyone's tried to do it with facial recognition or, you know, just general policing, where they, God, who is the guy? You wrote about him, the fellow who was commissioner of police in both LA and New York, who made the whole sort of pre-crime type system.

 

Meredith Broussard: Yep. Yep. One of the ways that this manifested is it actually ended up in Hollywood.

 

So if you have watched The Wire, there's this season where they focus on the police and there's a system called CompStat. And for a data nerd like me, I really liked this plotline because you saw the absurdity of predictive policing, right? So the idea behind predictive policing is that we take crime data and put it in the computer and say, [00:35:00] okay, computer, based on crime patterns in the past, predict where there's going to be a crime in the future.

 

And it creates this really toxic feedback loop, because when you say, all right, where has crime happened in the past? Well, crime data is actually not crime data. It's arrest data, right? So crimes, like drug crimes, for example, happen in, you know, black and white and every community at about the same rate. It's just that black and brown communities are over-policed and have been for a long, long, long time. What the data shows is that, you know, there's a lot of arrests happening in black and brown communities. When you feed that into the computer, the computer says, oh, there's all of these arrests happening here.

 

So we should send more police to the black and brown communities, which exacerbates the problem. And then what happens when you deploy more police? Well, they find [00:36:00] more crimes. So there are more arrests. Then that gets fed back in and the computer says, oh, you need more police. It's not a beneficial system.
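A small simulation of that loop, with invented numbers: two neighborhoods with identical underlying offense rates, where arrest counts track patrol presence and next year's patrols follow this year's arrests. One small historical imbalance compounds on its own.

```python
# Toy feedback loop: equal underlying offense rates, unequal policing.
# All numbers are invented for illustration.
offense_rate = 0.05              # identical in both neighborhoods
patrols = {"A": 10, "B": 11}     # B starts with one extra patrol (historical over-policing)

for year in range(1, 8):
    # Arrest counts reflect where police are looking, not where offenses differ.
    arrests = {n: round(patrols[n] * offense_rate * 100) for n in patrols}

    # "Predictive" reallocation: move a patrol from the lower-arrest area to the higher one.
    low, high = sorted(patrols, key=lambda n: arrests[n])
    if patrols[low] > 0:
        patrols[low] -= 1
        patrols[high] += 1

    print(f"year {year}: arrests={arrests}, next year's patrols={patrols}")
# Each round, more patrols produce more arrests, which justify more patrols: the loop escalates.
```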

 

Quinn: No. And it's very easy. Again, they're very different, but you can swap those out. And we've seen this over and over again with mortgages, with other types of loans. Like, why do you think black people are coded into the system as not having more mortgages in the past? Like, how much, you know, historical research do we need to do on redlining and everything?

 

I don't want you to rehash your entire book, but you obviously talked about the incredible soap dispenser example, which seems so simple on its face, and yet it's so emblematic of everything else. But there's one that really horrified me. And look, my kids are young ish, though apparently older than I thought at this point every day.

 

And they did remote school during COVID, like everyone else, incredibly privileged to have Internet, mom and dad could work from home. They had their whatever, the little Google laptops that the school gave out, you know, all of those things. They were very lucky to have all that, which many children in the U.S. [00:37:00] don't, much less internationally.

 

But you told this story about a predictive grading nightmare that I'm truly, I guess, astonished was used in practice. I understand everyone was really flailing, scraping the barrel, whatever you want to call it, in COVID, to figure out how we do these things that we had taken for granted and that, again, provided the baselines for college acceptance and all this stuff for years; they had to figure out how to do something.

 

And then everybody ditches the SATs and now it swings back. But I wonder if we can talk a little bit about this predictive grading story, because again, it's emblematic of workers being monitored, students being monitored, all of these different things where, once there is this type of strike on your record, again, like with the recidivism algorithms and mortgages, it's very difficult to make those go away. Just because facial recognition gets somebody wrong, usually a black man, I mean, you told a bunch of stories about that in there and we've heard many of them, and he might be released a few hours or a few days later, that's still on his record in some way. And this predictive grading thing was so terrifying to me.

 

Could you talk a little bit about that? How that happened and why, and I don't even know, I just, I was like, we have to talk about it.

 

Meredith Broussard: It's totally horrifying isn't it? And I mean, yeah, exactly. Like I'm a professor, like I give students grades. Like I think a lot about grading. I think a lot about what is my duty of care as a professor?

 

And this particular episode seemed to violate everything that I believe about education. So what happened is during the pandemic, the International Baccalaureate Organization, which is this organization that awards a very prestigious secondary school diploma, decided that they were not going to be able to administer in person IB exams.

 

Quinn: I totally get it.

 

Meredith Broussard: Totally appropriate, because, no, we were not [00:39:00] like putting children in rooms with each other at that point. Because they couldn't administer the in-person exams, they decided they were going to use an algorithm to assign imaginary grades to real students. Now, what happened is they used a machine learning system, right?

 

So, as I said before, the way the machine learning system works is you dump in the data and then it makes a model. The model predicts the grade that the student would have gotten had they taken the exam that was canceled, right, which is already completely absurd, but you know, it's the pandemic.

 

People are making all kinds of wacky decisions, but it ran into a particular characteristic of education statistics, which, you know, if you've studied education statistics, you know that the way that it shakes out in the U.S. is that students from wealthy schools [00:40:00] tend to do well, and students from poor schools tend to do poorly.

 

Okay, and well, let's think about race and class. Who's at the wealthier schools? Well, it's white students. Who's at the poorer schools? It's black and brown students. So the machine goes in and, you know, looks at all of this data and assigns grades. What it did was it assigned a prediction of a failing grade to students who went to poorer schools and a prediction of passing grades to students who went to wealthier schools. And well, then that broke down around, you know, race and class lines. And so I wrote about the case of a young woman named Isabel Castaneda, who was a straight A student, a heritage Spanish speaker.

 

And her teachers had predicted that she was going to, you know, pass with flying colors, get great grades on all of her IB exams, and the [00:41:00] algorithm predicted that she was going to fail her Spanish exam, which is completely absurd. So, I mean, it just illustrates one of the big points that I'm trying to make in the book, which is that machines do not always make good decisions.
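As a rough sketch of how that can happen, here is a toy grade predictor driven almost entirely by a school's historical average, with invented numbers and hypothetical names; it is not the IB's actual model. A straight-A student at a historically low-scoring school inherits the school's past rather than her own record.

```python
# Toy illustration of a school-history-driven grade prediction. All data invented.
school_history = {           # made-up average past exam score by school (IB grades run 1 to 7)
    "wealthy_school": 6.4,
    "poorer_school": 2.6,
}

def predict_grade(student):
    """'Predicted' grade: dominated by the school's history, barely by the student's own work."""
    base = school_history[student["school"]]
    nudge = 0.2 * (student["coursework_avg"] - 4.0)   # tiny weight on the individual record
    return round(min(7, max(1, base + nudge)))

straight_a_student = {"school": "poorer_school", "coursework_avg": 7.0}
average_student    = {"school": "wealthy_school", "coursework_avg": 4.0}

print(predict_grade(straight_a_student))  # 3: a low grade that ignores her actual record
print(predict_grade(average_student))     # 6: inherits the school's history
```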

 

And we really need to be cautious about turning over decision making to machines. And if you are going to have a machine make a decision, one of the things that you need is you need a mechanism for redress, right? You need to design a process for when the machine goes wrong and does something wrong. And you need to, you know, have a human being who's empowered to go in, make a different decision and change the outcome and update all of the records, which by the way, is an expensive and time consuming process.

 

And you might actually be better off not firing the person who made the decision in the first place and [00:42:00] replacing them with a computer because then you're just going to have to hire a person to clean up after the computer makes a mess.

 

Quinn: Yeah, it's a horror. Obviously, any of these conditions are a nightmare.

 

Anyone being arrested, being denied a mortgage or insurance, or a loan, or whatever it might be, you know, it's so easy to go broad and be like, we should never use it. And on the other hand, to go down rabbit holes on each of these and think of the implications that come from these things.

 

Again, just the number of black men in prison, and what has that done to not just their personal lives and their ability to vote and get loans, but also to redistricting. I mean, the whole thing, like we can do it all day. The kids thing always, whether it's healthcare, Medicaid, education, whatever it might be.

 

My kids were part of the LA school system for a while and they were very lucky to go to one of the few very good public schools, public elementary schools, where the teachers and administration did everything they possibly could to make this a nourishing and a fruitful experience for these kids, but I'm under [00:43:00] no, I mean, I got as engaged as someone can be for a school system that's, you know, depending on how many kids are actually going to school, 500,000 to 600,000 students.

 

It's a behemoth. And you know, you don't get that time back, you know, and this young woman doesn't get that test back, especially, as you said, if it's bandwidth-wise and profit-wise easier for the company, or whatever it might be, the public agency, to not fix the problem, right?

 

Whether it's that individual, which then opens up a whole can of worms for them, or just in general, by changing, like you said, changing a field from Boolean to an editable string. Like, I understand that's a lot of work, but sometimes we have to do this work and that's where we are.

 

And again, it's easier for society to, it seems insane because we've had so many, society, three steps forward, seven steps back. But it seems sometimes that it's easier for the societal part to move ahead than some of these systems where it's still so hard coded. [00:44:00] And like you said, the technical debt is just there.

 

It's outrageous because some of these programmers still speak ancient languages. You know, if you've ever stood on the side of an airport desk and someone's trying to help you and you happen to peek at their monitor and you go Oh my God, why is it black and green? Like, how is that still the thing we're using to keep everyone in the air?

 

And that's the safest system we got.

 

Meredith Broussard: Yeah, no, I totally agree. And I, it's amazing to me that these systems work as well as they do. And as far as I'm concerned, they're just, they're barely functioning, like they're held together with bubble gum and duct tape. And I just, I don't think that people should have quite as much faith in them as they do, because there's not much standing between us and total chaos when it comes to computational systems.

 

Quinn: And again, I appreciate, even despite that, how you work so hard to not go all the way to doomerism on these things, you know, to Luddite, because again, [00:45:00] there are incredible examples of these things changing the world, you know, the ability for the scientists, whenever they did it, was it January or February 2020, when they uploaded, you know, the genetics of SARS-CoV-2 to the internet within a day or something. You're like, tell that to our grandparents, you know, explain that one. Good luck. It's incredible. So that the whole world could access it and we could build these revolutionary new mRNA vaccines that so many other people worked for decades to make safe.

 

You know, it can do amazing things. It will do amazing things. But like you said, the sort of blind faith, if not, by some venture capitalists, an eager, intentional, public disregard for trust and safety and things like that. It really feels like it just starts to benefit the same folks and hurt the same folks along the way, if not at more scale.

 

So, how can we help? What is most actionable to you? Where can people get most educated on these things? Where can they, whatever [00:46:00] their skills or vocation, volunteer or donate or participate in some way, where can we actually start to make some progress on these things? Is it internally at their own work?

 

Is it in other places? I'm super curious because again, you seem keen on not just saying, hey, this isn't ready yet, but also saying, these are the things we need to do. I mean, you spent the whole last chapter of the book that way.

 

Meredith Broussard: Well, the first thing that you can do is you could buy my book. That is the most important first step.

 

Quinn: Moral of the story.

 

Meredith Broussard: Yes. So Artificial Unintelligence: How Computers Misunderstand the World was my previous book. And that is about the inner workings, the outer limits of computers and then More Than A Glitch: Confronting Race, Gender, and Ability Bias in Tech picks up where Artificial Unintelligence left off and takes a deeper look at society and the way that very human problems play out inside computational systems and both books end on a high note, because, you know, it's kind of a lot to [00:47:00] deal with. I look at the technical realities of modern computer systems. I look at some of the disasters that have happened and I talk about how can we learn from these disasters, but I also talk about okay, what can we do differently?

 

So it starts with empowering ourselves, learning a little bit more about technology so that we can push back when algorithmic decisions are unjust or unfair. Also, we need some action on the policy front. You know, we need better data privacy laws. We need better tech policy overall. There have been some really great strides made in Washington around this. I point people to the Blueprint for an A.I. Bill of Rights, which came out of the Biden White House Office of Science and Technology Policy. President Biden's Executive Order on A.I. that came out recently is also a really great leap forward, and we've seen some motion in many government, U.S. government departments toward wrangling [00:48:00] A.I. a little bit better. So at NIST, we have the A.I. Risk Management Framework. The GAO has done an algorithmic audit and put out some guidelines for using algorithms, you know, in government contexts. So these are pretty impressive leaps forward.

 

We still have a ways to go though. So there's legislation like the Algorithmic Accountability Act that has not been passed, but, you know, ideally will be in the future. So it's about individual and collective action. It starts with learning more, and then it's about making different decisions than we have in the past.

 

Quinn: Yeah. It all sounds so easy when you put it that way. What other books do you recommend? Weapons of Math Destruction, anyone else? Or, I mean, it's 2024, YouTube videos, anything that you really feel will help people get up to speed on this.

 

Meredith Broussard: I designed the bibliography of both of my most [00:49:00] recent books as reading lists.

 

So I would go through the bibliographies. Ones that I usually point people to are Weapons of Math Destruction, as you said, Safiya Noble's book Algorithms of Oppression, Virginia Eubanks has a book, Ruha Benjamin's book Race After Technology, Charlton McIlwain's Black Software. A couple of influencers who I like: Avriel Epps is doing some really interesting stuff around understanding algorithms.

 

There's a guy named Joel Bervell who does social media stuff about racial bias in medicine. I've learned a lot that way. So there are many, many resources out there.

 

Quinn: Thank you. That's great. And yeah, we'll definitely point folks towards your bibliographies. Is it Automating Inequality?

 

Meredith Broussard: Yes. That's the one. Perfect. By Virginia Eubanks.

 

Quinn: Fantastic book. Well, we'll use A.I. to sub that in. I'm kidding. We'll not do that.

 

Meredith Broussard: Virginia, I'm so sorry. I [00:50:00] lost the thread there for a second.

 

Quinn: You're a monster. Okay. Last couple of questions we ask everyone, and then I'm going to get you out of here if that's okay.

 

Does that work for you? Meredith, when was the first time in your life when you realized you had the power of change or the power to do something meaningful? Folks tell us about running for student council as a kid or, you know, inventing something for the first time. It could be, could be anything.

 

Meredith Broussard: So I think one of the first times that I realized that I had power was a story that I tell in the beginning of Artificial Unintelligence, which is when I got an erector set for Christmas or my birthday or something when I was little and I wanted to build this robot and I thought the robot was going to be my new best friend and we were going to like dance and party together and it was going to, you know, be alive.

 

And I built this robot with my little, you know, little tiny, teeny tiny kid sized wrenches and what have you, and it had a motor. I was very excited about the motor, and I put the motor in, I flipped it on, and nothing [00:51:00] happened. I started tinkering with it. I got my mom. I was like, Mom, the, you know, the robot is not working.

 

She said, well, did you turn it off and turn it on again? I said, yes. She said, did you flip the batteries? I said, yes. And so then she went and did the same thing. So she was like, Oh, it's broken. So I realized at that point, there was a really big gap between what I imagined the technology would do and what the technology could actually do.

 

And so I've gone back to this moment many times over the years. And I use it as a way of thinking through what the technology can and can't do, and as a way of thinking about the power that humans have, the power that humans have when they're assisted by tools like motors and, you know, little kid sized metal wrenches but also the limit to that, the fact that, you know, that power is not unlimited.

 

Quinn: I love that. It turns out [00:52:00] most of being an adult is managing expectations, almost no matter what your job is or whether you have kids, or you're a teacher or not. It really, boy, does that come in handy. Thank you for sharing that. I love erector sets so much. Those things were amazing, but you're right, expectations out the window for who this little friend was going to be. Meredith, who is someone in your life that has positively impacted your work in the past six months?

 

Meredith Broussard: I have been talking to a student group, they are called Encode Justice, and they are doing a bunch of really great work about surveillance in schools.

 

They're opposed to surveillance in schools. And so I am really grateful that they have been sharing their wisdom with me. I am doing a project about school surveillance. And I have just learned a lot from their efforts. They have a really interesting platform that they have put out [00:53:00] around things that young people think can be done right now to change our A.I. driven world for the better.

 

So I would definitely recommend checking out Encode Justice.

 

Quinn: I will do that immediately. I think that's great. And, you know, just such a travesty. It is such an unforced error of what we need to do. I mean, even just down to the mental health of these kids, it's such a nightmare. Meredith, last one.

 

What is a book you've read in the past year or so that has changed you in some way? Changed your thinking or opened your mind to some topic you hadn't considered before? Or frankly, I mean, I just basically read about dragons at night at this point because it turns my brain off. So it could be a coloring book.

 

I don't care. We have a whole list on Bookshop, and the people love it.

 

Meredith Broussard: The most spectacular book that I've read recently is Chain-Gang All-Stars. And it is a kind of post-apocalyptic book about the carceral state and about a terrible possible future. And it just, it moved me and it made me think [00:54:00] differently about the world.

 

Quinn: I love that. Thank you for sharing that. I'm so impressed you can remember the last book you read. I mean, it truly goes right through me. If I don't write it down, it didn't happen. Meredith, where can our listeners and readers and such follow you and your team's work?

 

Meredith Broussard: I'm on all the social platforms at MeriBroussard.

 

My website is meredithbroussard.com and you can find More Than A Glitch or Artificial Unintelligence anywhere that books are sold.

 

Quinn: Perfect. Thank you so much for all this and for writing these books and talking so honestly about your own experiences and how, again, they are part and parcel of everything else.

 

Not that they're not obviously damaging on their own, but this is where we are and what we need to dig out of to use these tools in a much healthier, equitable, useful way for more folks. So I really, really appreciate everything you do and for your time today.

 

Meredith Broussard: Thank you so much.