SCIENCE FOR PEOPLE WHO GIVE A SHIT
Aug. 16, 2021

122. We Are The Algorithm

In Episode 122, Quinn has big questions about AI ethics and, like many other situations, is left wondering: was Dr. Ian Malcolm right all along?

Our guest is Dr. Rumman Chowdhury. She is the director of the Machine Learning Ethics, Transparency and Accountability (META) team at Twitter, where she's helping to build a new ethical backbone into Twitter from the inside out.

On every social media platform you interact with on a regular basis, there is some type of machine learning or algorithm determining what you see and how you interact with it. Obviously, that is quite a responsibility to bear. We’ve seen algorithms in the past be straight-up racist, suck people into alt-right conspiracy funnels, and cater to the worst of our human tendencies.

These are all running on systems designed by humans – mostly white men – and that shows in the ways that they work. It's Dr. Chowdhury's job to clamp down on the worst of these and to make algorithms that perform ethically – and she's certainly got her work cut out for her.

Have feedback or questions? Tweet us, or send a message to questions@importantnotimportant.com

Important, Not Important is produced by Crate Media

Transcript
Quinn:
Welcome to Important, Not Important. My name is Quinn Emmett, and this is Science for People Who Give a Shit. Folks, there's a lot going on out there. Our world is changing every single day, from the iPhone in your hand to the wildfires outside. We give you the tools you need to feel better, and to fight for a better future for everyone: the context, straight from the smartest people on Earth, people who are on the frontlines of the future working on this stuff, and the action steps you can take to not only feel better and get involved yourself, but to support them to drive change. Our guests are data scientists, nurses, journalists, CEOs, founders, investors, educators, engineers, business leaders, you name it, astronauts; we even had a Reverend.

Quinn:
Some quick housekeeping, folks. You can send questions, thoughts and feedback to us on Twitter @importantnotimp, or you can email us at questions@importantnotimportant.com. You can also join tens of thousands of other smart people and subscribe to our free weekly newsletter at importantnotimportant.com. Once a week, every Friday, in 10 minutes or less, you're going to get all caught up on the most important science news affecting you and your family and your business and your investments, our country, the world, plus some analysis and action steps you can take. Also, you can hunt for a new, impactful job on the frontlines of the future with these folks at importantjobs.com.

Quinn:
And, if you work for a company, or you run a company or organization or lab that's already doing that work, you can list your open roles there for free and get them in front of our entire community. And finally, if you haven't been listening recently, folks, go back and check out our recent conversations about the future of mosquitoes, how to protect yourself against wildfire smoke, how the universe is going to end and what that means for you, and people of color in science. And finally, how you can help beat poverty by giving people money. And of course, make sure to hit that subscribe button on your phone so you're ready when our next conversations drop. We're talking to senators, we're talking to José Andrés about disaster relief. We've got so much awesome stuff coming, so subscribe now so you're ready to go.

Quinn:
This week's episode: we're asking big questions about artificial intelligence ethics. And, believe me when I say, this applies to you. And also, it's kind of about Jurassic Park. Was Dr. Ian Malcolm right the whole time, since 1994, when I asked for the Jurassic Park soundtrack for Christmas? Will life always find a way? Is that bad? Well, we've got someone here today who can help shed some light on that. Our guest today is the, maybe, actually, irreplaceable Dr. Rumman Chowdhury. She is helping to build a new ethical backbone into Twitter, from the inside out. And obviously, we are so thankful for her efforts to do that. I learned so much from this conversation. And so, if you're curious about how your social networks work, and what you're seeing, and what you're clicking on, if you work in data science, or if you're a founder who might implement some machine learning at some point, which is basically everybody at this point, this conversation is for you. Here we go.

Quinn:
My guest today is Dr. Rumman Chowdhury and together we're going to try to help you understand how quickly the fields of artificial intelligence and machine learning are advancing and some of the practical, ethical considerations involved. I feel like there is a light on the horizon with these things. I think there's a lot of good folks and good companies out there trying to do the right thing, but also trying to move quickly. It's complicated and I'm excited to dig into that today with Dr. Chowdhury. Welcome, doctor.

Dr. Rumman Chowdhury:
Thank you so much. Thanks for having me on, Quinn.

Quinn:
Absolutely. Rumman, tell us quickly if you could, who you are and what you do.

Dr. Rumman Chowdhury:
Sure. I am the director of a team at Twitter known as META: Machine Learning Ethics, Transparency, and Accountability. So if you go on a social media platform, Twitter or really almost any other device or platform, you will probably have some sort of algorithm or machine learning output impacting what you're seeing and how you're interacting with things. My job on that team is to ensure that the algorithms we're building at Twitter are responsibly and ethically built, so that people aren't being unintentionally harmed.

Dr. Rumman Chowdhury:
My background: I'm a data scientist and a social scientist. I'm what you would call, in the social science field, a quant. My PhD is in political science. Before I started at Twitter, I had a startup that did algorithmic audits, called Parity. And before that, I was at Accenture, a big tech consulting firm, where I created their practice for responsible AI and was its first lead. I worked with really interesting folks all around the world for about four years.

Quinn:
That is rad. All right, so you've got some game in this thing.

Dr. Rumman Chowdhury:
A little. Just a little.

Quinn:
Okay. All right. Well, from all of my research, it seems like the world should be very grateful for you and for what you're trying to do so I'm excited to help folks understand that today. Right before we get going, Rumman, we'd like to start with one important question. A little tongue in cheek, a little, why are we all here type of thing? Instead of asking what is your entire life story, as fascinating as I'm sure that is, we'd like to ask, Rumman, why are you vital to the survival of the species?

Dr. Rumman Chowdhury:
Why am I vital to the survival of the species? Can I give you a very nihilist answer?

Quinn:
Most people start with laughing at me and then I get something profound. Sometimes people just cackle and they don't stop. You can do whatever comes to mind.

Dr. Rumman Chowdhury:
I actually think that none of us are vital to the survival of the species. I do think, though, that individuals could be extremely harmful to the species. It's very unfortunately imbalanced. I have a very feminist perspective of what it means to have positive change: it requires a lot of people in a lot of roles. So I don't think of myself as someone vital. But, I do think that there are plenty of people, and I could be one of them, who are helpful, or trying to be helpful.

Quinn:
I'll take that. We'll take it. If that's what gets you out of bed every day then, great, we'll roll with it. Like you said, it's a little hard out there some days, so we'll roll with that. Here is why, I guess to reiterate, for myself, and for you, and for everybody out there, I wanted to have this conversation, and where we'll go with it. I feel like at least everybody who listens to this show, but really across the wide spectrum, everyone's increasingly aware that some version of this thing we're calling artificial intelligence comes, at least on the consumer level, in a lot of forms of machine learning, right? These neural networks.

Quinn:
You've got these massive datasets, whether it's pictures or words, and algorithms for everything from your Instagram images, your Twitter feed, hospital records, autonomous driving, dating matches, mortgages, right? All these things, booking flights. These systems are everywhere, and they're efficient, and they're relatively cost-effective, and they are useful internally and often externally. But, it's also pretty easy to see, with most tech advances, and particularly this one, thanks to some incredible chips and data processing, that misuse and disuse do happen. And, in some ways, in some fields, they are more prominent than others. Quite a lot of that falls in the social sciences. You said your PhD is in political science; I'm like a pagan atheist, but I was a religious studies major. Not the most lucrative majors anyone's ever had, but at the same time, integral to being paired with the quote-unquote hard sciences. And, they have often been, at least as far as I can tell, pretty left out of a lot of this development, history and anthropology and collaboration of any sort.

Quinn:
But, we are all interacting with these systems on the daily, whether it's on our phone, or whatever it might be. So, I wanted to explore some real-world practical philosophies and mechanisms and guidelines in AI and machine learning, and try to better understand, for myself and everyone else, how they are being, or could be, developed and implemented in a more inclusive way, with hopefully the cooperation of not just folks internally, but users or patients or whoever it might be. And, I couldn't think of anyone better to talk to than you.

Quinn:
Dr. Chowdhury, by all accounts, Facebook, and YouTube, and TikTok, and Snapchat, and a horde of massive international social networks are in desperate need of folks like yourself, and I'm sure they do have some wonderful folks. But, why did you choose Twitter?

Dr. Rumman Chowdhury:
Well, first of all, I like to call myself a Twitter power user.

Quinn:
Okay.

Dr. Rumman Chowdhury:
Of all the social media platforms, it is actually the one I use the most. But, in seriousness, it is personally the one I've gotten the most value out of. I think a lot of people will say this: Twitter, for some reason, as a medium, is a place where a lot of us find our community. And for me, working in the field of responsible AI, it is actually how I've gotten to know so many people who work all around the world in my field. It's where we share papers, it's where we share our findings, it's where we share our thoughts. There are so many people, and I think a lot of people can attest to this, that we have met on Twitter that we consider to be actual friends. So if we are ever in a situation where we meet physically, I have literally hugged people. Pre-COVID, I have hugged people.

Quinn:
I was like, wait, no, don't do that.

Dr. Rumman Chowdhury:
But, yeah, I literally... I say, we're old friends catching up, even though in retrospect, I'm, wait, I've never actually physically met this person before. The other thing is, social media platforms like Twitter, like Facebook, etc., have a huge impact on society. And, it is worth talking about and understanding and wrapping our heads around. Now, everyone in the world is not on social media, everyone in the world is not on Twitter, but many people in the world are impacted by it, even if they are not a participant on the platform. And, we can think through political situations, social situations, what media gets highlighted. Even things like people getting de-platformed, or removed from the platform, it's incredibly impactful. It has ramifications.

Dr. Rumman Chowdhury:
The political scientist in me is thinking of all the papers one can write about socio-political impacts of actions on social media platforms, and certain people do write these papers. The other thing I liked about Twitter is, it is actually a pretty small company. It is 6000 people. So it is-

Quinn:
Is that it?

Dr. Rumman Chowdhury:
Yeah. Compared to that, Facebook is 80,000, I want to say. So, it's interesting, we're actually a medium-sized company with a profound footprint and a profound impact. So, I could, in this position, theoretically be somebody who could really drive this platform, along with my counterparts and other folks in the company, in a really positive direction, because it's not a giant behemoth we need to move, it's actually a fairly agile organization. And, you kind of see on the platform, there's lots of experimentation that happens, a lot of open conversation back and forth. Just to highlight a specific example, this is before I even joined Twitter. I joined in late February/early March. In October of last year, Twitter users highlighted what they perceived to be gender-based bias in our algorithm that automatically cropped images for you as you posted on Twitter.

Dr. Rumman Chowdhury:
And what I took away, what I really loved, was that there was this conversation that happened between, literally, Twitter leadership, and the general public, where people did things like pull together datasets, and test it out on their own. They were funny ones, like someone made one of all these different Simpsons characters, like Apu versus Homer, etc, etc. But, it was all done in the spirit of collaboration. I don't know if I've seen another company interact so honestly. These were not tweets and conversations that were fabricated, or written for them by other people. And now, that I've gotten to know some of these people, I 100% know that these tweets were not written for them and nobody vetted them via some group. It was unfiltered and honest conversation. I love that.

Quinn:
That makes so much sense. It's nice to see that you are excited about that agility. It does seem like, for those of us who've been spending an inordinate amount of time on Twitter for a long time, that at least over the past year and a half or so, even just on what would seem like a basic product side, to even a power user, there has been some progress and an increasing rate of development and transparency. And, if not exactly, literally, open source with some things, then real conversations about how it should be used, and how we can be better.

Quinn:
And, I think you'd find a lot of people who think that the picture cropping is just a setting and not understanding that it's intelligent. And, that means with anything that we have programmed, which I feel like is the entire theme of this thing, it's going to either purposely or inherently, or even accidentally bring some of our human biases into them. And, a lot of that comes down to who's involved with designing it and implementing and all that stuff. And so, it's nice to see this little microcosm of something and realize, like, wait, this isn't quite working correctly and it's actually working systematically in a way that's damaging to some folks.

Quinn:
And, if, like you said, the folks internally are writing their own tweets and interacting in an honest and progressive way, then that's awesome, because Twitter does have a very outsized impact. We know that, for sure.

Dr. Rumman Chowdhury:
Absolutely. Yeah, no, it's an ethos that I've seen throughout the company. There's been this openness to think through the issues of machine learning ethics. One thing I love about this field in general: we're very young, and we're very new. I started my role at Accenture in 2017. I was one of the first people in the world to have a job like that. There have been researchers for quite some time, but not really in an applied setting.

Dr. Rumman Chowdhury:
Fundamentally, at my core, I'm a builder, I'm a tinkerer, I'm a data scientist. I really love making things and I like solving problems. That's frankly, why 99% of people become data scientists, they like solving problems. And, you solve it using code. What I love about the work I do inside a company is I get to go do that, I get to go fix something, and I get to go make something good. But, you need leadership buy-in, you need people to be excited about it. And, in the few short months I've been here, I've been allowed to be very audacious. I can talk through some of the stuff we're doing, but it's made me really happy to see that kind of buy-in.

Dr. Rumman Chowdhury:
And, if anything, people are like, tell us what we can do, give me your wildest idea. And, we're going to go give it a shot. Why not? And again, I do think it boils down to some of Jack Dorsey's thoughts on transparency, and decentralization, which he's talked about quite a bit. He's talked a bit about this concept of algorithmic choice, which my team is actually tinkering with. What does it mean to give people meaningful choice over the algorithms that are in the system? And, you're right, most people don't even know how much of their experience is being curated for them. And, I will say, there are definitely many ways in which personalization is nice, and we want that curation. At Twitter, you have the option of going into the reverse chronological timeline or doing the curated timeline. And, a high majority of people actually end up with a curated timeline, because it filters out a lot of stuff you don't really want to see.
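[Ed. note: the reverse-chronological vs. curated distinction Dr. Chowdhury describes can be sketched in a few lines. This is a toy illustration only; the "score" field is an invented stand-in for whatever a real curation model might predict, and none of this reflects Twitter's actual ranking system.]

```python
from datetime import datetime

# Hypothetical tweets; "score" is a made-up predicted-relevance value.
tweets = [
    {"text": "breaking news",  "posted": datetime(2021, 8, 16, 9, 0),  "score": 0.4},
    {"text": "friend's photo", "posted": datetime(2021, 8, 16, 8, 0),  "score": 0.9},
    {"text": "spam-ish promo", "posted": datetime(2021, 8, 16, 10, 0), "score": 0.1},
]

# Reverse chronological: newest first, no model involved.
reverse_chron = sorted(tweets, key=lambda t: t["posted"], reverse=True)

# Curated: an algorithm's predicted score decides the order,
# filtering low-value posts toward the bottom.
curated = sorted(tweets, key=lambda t: t["score"], reverse=True)

print([t["text"] for t in reverse_chron])  # newest post leads, even if it's junk
print([t["text"] for t in curated])        # model's favorite leads
```

The point of "algorithmic choice" is exactly which of these two sort keys (or others) the user gets to pick.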

Dr. Rumman Chowdhury:
But also, it is worth being knowledgeable and understanding how algorithms can be impactful. And, for example, creating filter bubbles or highlighting certain information and not highlighting others. And, it's really important for people to be knowledgeable. But, yeah, what I love is that from the top-down, this company's really supportive.

Quinn:
That's awesome. Again, as someone who, I think like you, has found this incredibly significant value, as a cohort, but also across a bunch of different groups and individuals, in the ability to interact with folks you wouldn't otherwise have in any other scenario, really, since I started this thing. And we don't just cover data privacy, or artificial intelligence, or neural networks, or climate change, or COVID. For every one of these conversations, I try to get basically a 301 in the topic, just hustling through audiobooks and highlighting as fast as I can. But, it's also, we're trying to reach out to folks who are the best in the world, many of whom are on Twitter, to have conversations, to understand how they're doing the work and how they're thinking.

Quinn:
Twitter might be the smallest in raw users of the big networks, but, like you said, the outsized influence of the folks that are on there who are willing to engage is something that I think we can all get a lot from, if the tool is designed in an inclusive, useful way. I had this thought this morning, because I'm, again, a nerd who likes to read. I've been going back through all my all-time favorite sci-fi and fantasy books and things like that. I'm not sure how familiar you are with Asimov's Three Laws of Robotics. They've got to be almost 80 years old now. I think they're both prescient and probably horrifically outdated in a lot of ways if you look under the hood.

Quinn:
They obviously don't cover everything, but they're also this incredible foundation. And, they exist in the context of robots, or androids, as servants, primarily. But, it's also compelling, because it implies, for our conversation today, that we understand what these robots or servants are doing and why, and that we retain control over that, which is, I think, one of these issues you see a lot when it's the black box, as they describe it, for a lot of these algorithms, right? The laws are: a robot may not injure a human being or, through inaction, allow a human being to come to harm. It has to obey the orders given to it by human beings, except where they conflict with the first one. And, it must protect its own existence, right, as long as that doesn't conflict with the other two.

Quinn:
And, you reminded me of this, basically, trying to get ahead of, oh, we've built this thing, let's set up some rules for it. It reminds me of some of the work that Dr. Jennifer Doudna and her crew did in the early days of CRISPR, too, when they took this step back and they recognized very quickly, holy shit, this thing that we have discovered/made, could be just incredibly powerful in so many ways. And very quickly, they're, no CRISPR babies, right? No germline edits. We can't do this. And of course, there's going to be folks who ignore that and there have been some, but, for the most part, there's been this interesting community adherence to these fundamental tenets.

Quinn:
I think it seems like, to a lot of folks, that the field of artificial intelligence, which again is super broad, is a little more behind there. I wonder, is that true? And if so, is it because it's so much more broad and encompassing? Or is it because we've been working on versions of AI for decades now? Or because it's moving quickly? Where do you fall on that spectrum, as one of the first people to really practically work on this side of it?

Dr. Rumman Chowdhury:
Yes. So I am so glad you framed this question the way you did. I feel like too few people make that link and make that connection. I was getting a little scared when you started with the Asimov's laws because I'm like, oh, God, is he going to ask me about Terminator? Please don't ask me about Terminator.

Quinn:
No.

Dr. Rumman Chowdhury:
Do I have to launch into my spiel about how we're not in an AGI world? But, your connection's actually... not only is it the right one to make, it's funny, because I was rewatching Jurassic Park, the original, the 90s Jurassic Park-

Quinn:
So good.

Dr. Rumman Chowdhury:
Right, which actually is a film version of the problem that you're talking about. And, very specifically, in the first half of the movie, there's this... I had forgotten. So, I was a kid, right? I had forgotten so much of this stuff, watching it as an adult. And, especially in this field, I'm like, they could actually be talking about the uses of machine learning and AI right now. So they're sitting at this table... this is sort of when they've lost power in the park and they don't know where the kids are, etc., and they're sitting at the [inaudible 00:21:27] table... No, sorry, this is before it starts, before they go off, and they're debating whether or not he should have actually made this park.

Dr. Rumman Chowdhury:
Is it the right thing to do? The paleontologists are, no, this is terrible. You have no respect for nature, and you have no respect for chaos. And, interestingly, again, watching as a kid, right, so Jeff Goldblum's character is weird... He's Jeff Goldblum [crosstalk 00:21:53]. It's like Jack Nicholson, right?

Quinn:
Yeah, that's right.

Dr. Rumman Chowdhury:
Yeah, he's like the dude. But, he is supposed to be a chaos theory mathematician. And, his famous line is, life always finds a way. But, he's actually correct, in that all the worst-case-scenario things start to happen, because entropy is real. So how does this link to things like AI, right? And, you're right, I think there can be a lot of ego in this field, a lot of assumption of control. And in a sense, Asimov's rules can't apply, because we don't have control. And, it's not in the scary, Terminator, it's-going-to-be-sentient kind of way. Actually, a lot of it comes from just not doing a lot of the basics, and not really understanding how society works, and how human beings interact, right?

Dr. Rumman Chowdhury:
This is where someone like myself, and all of the other folks who do the kind of work I do, comes in. It is unsurprising that so many of us come from a background in the social sciences or sociology. And, we've also learned programming and development, AI development. But, you get a respect for the complexity of the human experience once you've actually tried to build a model to do some sort of prediction of human behavior. It's very confusing, it's very paradoxical. It's very complex. So to say that we're building these systems that, quote, replicate or understand the human brain, it's shorthand that actually has quite a lot of hubris and ego behind it.

Dr. Rumman Chowdhury:
And you're right, it often goes unchecked, in part because people love the provocative narrative. It's going to get you all the, quote, tweets, it's going to get you on all the shows, and whatever. And, it's harder to say, well, we should be more thoughtful about the kind of data we use, and we don't want to bake in the endemic social harms that exist in society. There is no way an AI system can be built by a human that is not reflective of these harms, unless you have very thoughtfully looked at your data and looked at your modeling and modeling assumptions. And, there's a lot of demystifying that folks like myself do. This stuff is not magic. There's a whole conversation of, should we stop using the term artificial intelligence? Because to some people it's really scary, and to other people it's really alienating. They automatically assume all this stuff they've seen in movies about AI, when actually what we're talking about are algorithmic decision-making systems.

Dr. Rumman Chowdhury:
It's just deciding what pair of shoes to show you. It's not awake. It's not going to go talk to [crosstalk 00:24:19], right? But, this is what people think. The interesting thing is, it's not because the tools are so complex, it's because they're so blunt. What surprises me sometimes is just how simplistic they actually are; they just work at a colossal scale, so it seems bigger than it is. Yeah, no, and a lot of it's actually just math. As a quant social scientist, my background is actually in quantitative methods and lots of stats. The thing is, a lot of machine learning especially, and also AI, is a lot of complex-ish math. And, you can really think through and have a lot of fun thinking about the mathematical assumptions that are made in building this code and how they manifest themselves in reality. It's a fun thought exercise, to be honest.

Quinn:
I'm sure. Thank you for sharing all that. It's a little bit of the Wizard of Oz pulling back the curtain, when you're like, look, these are actually fairly, I want to say, simplistic but, like you said, blunt tools, and a collection of blunt tools, but the design and the implementation and who is doing that... I'm a nerd, I love all of this stuff, I love all of the progress. I don't want to say I celebrate all of the technological progress we've made, but I'm excited about it, while also cognizant of and wary of the fact that there's so much real-world shit that we haven't fixed; we can't even have a conversation in this country to acknowledge it. Again, we're not anywhere close to reparations for what we did in slavery and Reconstruction, or with the indigenous population.

Quinn:
We can barely talk about the fact that we redlined cities, and now climate change is here. And, you see reports that are just, oh, these cities, in those areas, it's eight degrees hotter already. So we can't even begin to deal with this stuff, but we're designing systems that are taking those biases. And, they're fairly blunt. Yeah, sure, it could just be showing you shoes, right? Relatively harmless.

Dr. Rumman Chowdhury:
Absolutely. Well, and also, that was maybe a facetious example, but we do have AI systems or ML systems that literally replicate redlining. My colleague, Dr. Chris Gilliard, coined this term, digital redlining. And we've actually seen this happen; the very direct example would be in the city of Chicago... I forget, I think it was an insurance company, I actually don't remember what company it was... made a lending algorithm to determine riskiness. All companies in this field do this now, by the way. They create some sort of riskiness algorithm based on a wide... You would be surprised at the data that is pulled into this, it's actually way more than the obvious data. It pulled in a bunch of data to determine an individual applicant's likelihood to default on a loan.

Dr. Rumman Chowdhury:
One of the inputs is zip code, because, guess what, zip code is highly correlated with someone's financial situation, and whether they might default. But, also, in many places, it is associated with race. So, because of, literally, the history of redlining, and how the city of Chicago and the surrounding area are structured, as well as many other cities, we actually, literally, replicate redlining in our code, and in the output of the code. Another example, which was interesting/sad, was Amazon delivery trucks in the Boston area. So if you know Boston, you know that there's sort of the affluent core of the city, there's sort of the outer ring, which is less affluent, then the very affluent suburbs, right? So when Amazon was figuring out where to offer next-day delivery, in part, it's a function of where the richer neighborhoods were, where the people who are most likely to be Prime subscribers are.
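[Ed. note: the proxy-variable mechanism behind digital redlining can be shown with a tiny, fully fabricated sketch. No real lender works this simply; the zip codes, records, and approval threshold below are all invented for illustration. Note that the protected attribute ("group") is never an input to the score, yet the disparity comes out anyway.]

```python
from collections import defaultdict

# Fabricated historical lending records: (zip_code, group, repaid).
# "group" stands in for a protected attribute the model never sees.
history = [
    ("60601", "A", True), ("60601", "A", True),
    ("60601", "A", True), ("60601", "B", True),
    ("60621", "B", False), ("60621", "B", False),
    ("60621", "B", True), ("60621", "A", True),
]

# "Riskiness" per zip code: the historical default rate.
counts = defaultdict(lambda: [0, 0])  # zip -> [defaults, total]
for zip_code, _, repaid in history:
    counts[zip_code][0] += (not repaid)
    counts[zip_code][1] += 1
risk = {z: d / n for z, (d, n) in counts.items()}
# risk == {"60601": 0.0, "60621": 0.5}

# Approve any applicant whose zip-level risk is under 25%.
pool = [("60601", "A"), ("60601", "A"), ("60621", "A"),
        ("60601", "B"), ("60621", "B"), ("60621", "B")]
approved = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for z, g in pool:
    approved[g][0] += risk[z] < 0.25
    approved[g][1] += 1
rates = {g: ok / n for g, (ok, n) in approved.items()}
# Group "A" is approved 2/3 of the time, group "B" only 1/3,
# because group membership correlates with zip code in the data.
```

Race is nowhere in the code, but because the fabricated applicants cluster geographically the way real redlined cities do, scoring on zip code reproduces the historical disparity.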

Quinn:
Sure.

Dr. Rumman Chowdhury:
And, what would happen was, so they would offer them in these more affluent areas, so center of Boston, affluent suburbs, and not offer it in the low income areas. But, these trucks are literally driving through these other neighborhoods to get from the core center to the outer suburbs, but they're not offering that service. Yeah, sure, whatever, next day delivery-

Quinn:
That's literally what we did with highways in the 50s, 60s, and 70s. It's the exact same thing, but with Amazon trucks. Now, we're like, wait a minute, do we just need to get rid of these highways? It's, fucking, maybe. That's what you did.

Dr. Rumman Chowdhury:
Right. No, yeah. Right. No. We're seeing, in some situations, a very clear line from the mistakes of the past being codified into mistakes of the future... well, actually, mistakes of today. What's worse is, there's this veneer of objectivity. That is the thing I worry about the most, this veneer of objectivity that technology applies: oh, it's an algorithm. How many times have you heard the phrase, oh, the data said so?

Quinn:
Sure.

Dr. Rumman Chowdhury:
Right, and that is such a common refrain. So, people hide behind the output of code; they are scared of the output of the code. There are all these interesting studies about whether or not people feel like they can refute an algorithmic response, versus a person saying something to them. It's a very different and scary feeling we have when an AI is telling us something. We're also seeing it in the criminal justice space. Algorithms have been used to determine parole, and it was found by ProPublica that these algorithms were racist.

Dr. Rumman Chowdhury:
And then, interestingly, it wasn't because of some complex coding thing. If you dug into it... and, thank you, Julia Angwin, for making all this work public... the survey that they used to collect data to create this model was, in my opinion, as somebody who wrote surveys for many years and is a social scientist, one of the most biased things I have read in my life. So it would ask you what I call the Jean Valjean question: is it ethical to steal bread if you're hungry? How is that a valid question in determining if someone should get parole? It asked whether your parents were divorced, whether or not you grew up as the... How is this valid? Who your friends were, whether your friends have been arrested.

Quinn:
Sure.

Dr. Rumman Chowdhury:
Right? It was very scary. And, all of this gets codified. And, this parole board, they just see that an algorithm, theoretically, in all of the normative assumptions we make about data and algorithms, the algorithm says this person should or should not get parole. And, it absolutely influences decision-making.

Quinn:
Right, from step one. Like you said, before we even get to the algorithm, it is the design of the harvesting device itself, which again, somebody designed, and then someone signed off on, and then someone published. That's where it feels like we have to really consider this, and why I'm so happy and excited about the work you're doing and that it's you doing this work. So I'm curious, are there, in your experience so far, not just at Twitter, any significant examples, or maybe things we haven't heard of, of more prominent tech companies or larger companies or someone like Accenture backing off a feature... or, I guess, having the self-awareness, right, to find it difficult, if not impossible, to build and implement something equitably, recognizing none of this stuff is perfect.

Quinn:
But, for probably a million reasons, we still see Facebook, recommending political groups to people even after they said we're not doing that anymore, right? We still see so much dis and misinformation around vaccines. And, there's been so much excellent journalism lately about, these are literally the 12 people driving most of it and this is the one guy driving even the most of it. It's like, how does one person have so much power, right? You look at YouTube, which is just this incredible resource in the world. The things you can learn from this, it's insane. But, it's also so well documented that a few straight clicks, and you are down the rabbit hole of false medicine or racism or hate groups or whatever it might be.

Quinn:
What am I missing? Where have there been some examples that we can learn from, of folks and companies taking a step back and going, oh, hold on, maybe this isn't working the way we thought it should, besides, like you were saying, some of the stuff that Twitter's done with the photo cropping and such?

Dr. Rumman Chowdhury:
Yeah, I think it's an interesting question, because it's a tough time right now, and I'm going to acknowledge it's a tough time for a lot of folks working in the responsible machine learning and responsible AI field. Earlier this year, late last year, Google infamously fired the two leads of its responsible AI research team, Dr. Timnit Gebru and Dr. Meg Mitchell. And, that was very disheartening for a lot of people. A lot of those folks still work on that team. They built a really amazing team of folks, many of whom I consider my friends. And, it was hard to see such a big company take such a drastic and painful step backwards, to be honest.

Dr. Rumman Chowdhury:
Frankly, I feel like the industry hasn't had a lot of wins. I can only speak to the stuff I built. What I will say, because a lot of this is client-

Quinn:
Of course. No, absolutely.

Dr. Rumman Chowdhury:
Yeah, no, it's totally fine. But, what I'll say is, I think sometimes there is a lot of attention paid to the big tech companies, and there should be, right? Like the Microsofts and Amazons and Facebooks and Googles and Twitters of the world. Where I found the most potential for doing this stuff right was actually in the companies that adopt AI systems but are not, at their core, AI companies. So what does it mean when Coca-Cola is adopting AI, right? And, how do they understand how these things are used? Or, McDonald's, for example, or your favorite soap brand, or something like that?

Dr. Rumman Chowdhury:
What's really interesting is, a lot of these companies and these brands are very consumer-conscious. They've been around for a really long time. And, sure, they see AI and ML as a way of expanding their business, but are they willing to do that at all costs? No. Frankly, they do a risk calculation in their heads and they're like, well, if we are some company that has built our brand on being family-friendly, do we want facial recognition cameras identifying children? People are not going to like that, right? They're not going to like targeted ads that we give to babies or whatever to buy our products. And, we don't want parents saying, I'm not bringing my kids here. That, to them, matters more than some fancy tech.

Dr. Rumman Chowdhury:
I'll give you a very specific example. During my time at Accenture we had a client that's a prominent makeup brand. So if you have ever followed the makeup world at all, you know one thing that has happened... and, thank you, Rihanna, among other folks, is that-

Quinn:
I have so much love.

Dr. Rumman Chowdhury:
Right? One thing that has happened is, the diversity of shades and colors that now exist is impressive, and these were colors that did not exist for people even a few years back. So this company was looking at an image detection, or facial skin detection, algorithm that would match you with your foundation shade. And, I pointed them to Joy Buolamwini's work at MIT, called Gender Shades. She actually worked with Dr. Timnit Gebru, who was the lead of the team at Google, and they uncovered that these algorithms did not correctly identify faces of color. Do you really want a darker-skinned Black customer going in front of your system saying, ooh, I want to match my foundation shade, and it says no face detected? You don't want that.

Dr. Rumman Chowdhury:
These are considerations that these companies are willing to make. They would rather not adopt the technology than adopt a faulty or biased technology. And honestly, that's where I see there being a lot of potential to do good. And, one way to push that good forward and nudge the needle on tech companies... one of the ways is for companies that are adopting these technologies to be more critical and ask better questions. And, that was one of the things I used to help companies do.

Quinn:
It feels like that's what so much comes down to, right, just asking better questions. Start with asking questions, but try to learn how to ask better ones, because if it even makes you pause, then that's something, right? Because that feels like something we're lacking. On the other hand, one of America's most enduring, established credos and arguments is that the market can fix everything, right? And, I love the market and I love competition when we have it. But quite literally, the way companies in America are codified or formalized is to provide value to shareholders, to stockholders. That is baked into the way you're formed as a corporation, which immediately, off the bat, before you can do anything, provides a base layer of incentives that are often at odds with what's best for society, or even that company's customers sometimes, right?

Quinn:
Again, you talked about Coca-Cola, or McDonald's. And, if artificial intelligence didn't exist, they would still have to ask these questions about, we need to advertise to kids if we're going to drive sales of soda, because that's what provides value to the stockholders, right? I'm curious, what role do you feel incentives play in this rush to design and then implement machine learning and artificial intelligence across so many different systems and fields? What are the primary ones you run into most often?

Dr. Rumman Chowdhury:
Yes. And, it's like you're reading my mind. So, about a year ago... You're like [inaudible 00:38:07], also [inaudible 00:38:08]. So a little over a year ago, some colleagues and I did some research and wrote a paper on responsible AI practice. To that end, we interviewed about 25 people who have my kind of role, specifically in corporations. So we were looking at people who had a role to either deliver or create responsible AI practices. We asked them, what would it take for this to be successful in your organization? What are you seeing today? Where do you see it heading? What is the ideal state for you? What are your big blockers?

Dr. Rumman Chowdhury:
There's a few things we uncovered. So first of all, the background literature we rely on and the research that we're starting from is actually the theory of organizational change management. So really, for companies to adopt responsible AI, the kind of work we need to think about is, how have we driven organization-wide change before? This could be something as simple... and it's actually not silly, as companies moving to more casual dress codes from having to wear a suit. I specifically remember when my dad, who is... I love my dad to pieces, but he's a very old-school, conservative guy, literally one day was like, how do I dress for work now? Because he'd been used to wearing shirts and ties and slacks his whole life, right? My dad is also the type that cuts the lawn in a short-sleeved button-down. I love my dad. I take him shopping for polo shirts and khakis, right?

Dr. Rumman Chowdhury:
A facetious example, but how do you spark organization-wide change? So, we highlighted four or five things that would be really critical to making responsible AI successful. And, one of those things actually was incentives. This is the biggest thing: incentives for people, and incentives for the AI models that we're building. We could do a whole other very mathy, very nerdy podcast about how we measure success for our algorithms. I could rant all day about how machine learning people are obsessed with a statistical value called accuracy. And, unfortunately, it is only one of many ways one measures what we call robustness, or the correctness, essentially, of a model, but it just got the best name.

Dr. Rumman Chowdhury:
So accuracy becomes a thing, right? But, it plays this game of telephone, where if I'm talking to a policy person, or a CEO of a company, or someone who's not a stats person or a machine learning person, I go, oh, the model's accuracy, it's 99%. This person's going to think, wow, it is so correct at predicting the real world, when actually that is patently untrue. So we need better ways of measuring the success of our models, but we also need better ways of measuring success in people. So, a more down-to-earth example is, the average data scientist is not incentivized to think of ethical concerns when building a model. You are paid based on how quickly you push up your code, how quickly your model is trained, and, quote unquote, metrics and things like accuracy, right?
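A minimal sketch of how that 99% figure can mislead, with invented numbers rather than anything from the episode: on an imbalanced dataset, a model that always predicts the majority class reports sky-high accuracy while catching none of the cases that actually matter.

```python
# Hypothetical example: 1000 loan applicants, only 10 defaults.
labels = [1] * 10 + [0] * 990   # 1 = default, 0 = repaid
predictions = [0] * 1000        # a "model" that predicts "repaid" for everyone

# Accuracy: fraction of predictions that match the labels.
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"accuracy: {accuracy:.1%}")  # 99.0%, despite the model doing nothing

# Recall on the minority class: how many actual defaults were caught.
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / 10
print(f"recall on defaults: {recall:.1%}")  # 0.0%
```

So "the model's accuracy is 99%" and "the model is useless" can both be true at once, which is exactly the game of telephone described above.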

Dr. Rumman Chowdhury:
So, being the person who would raise your hand and say, I spent a half-day deep-diving into this code, and I realized we have massive gaps, and this extrapolation is going to lead to some output biases... for a lot of people, the answer to that is, nobody asked you to do that, right? It's not incentivized. I'll make a parallel here to diversity, equity and inclusion initiatives, where the place we see the needle being moved is when managers are incentivized in their performance reviews: they are assessed on the diversity of their teams. That is one of the biggest drivers to actually sparking diversity. Not just inclusive programs, and having interns and having a lot of nice programming; the thing that moves the needle is making it quantifiable and making your job, in part, count on having a diverse team.

Dr. Rumman Chowdhury:
The same applies to responsible ML. How do we incentivize people in a company to do the right thing? How do we make it easy for them to adopt? And, that's my job: I make the tools and the things that make it easier for ML modelers to do work responsibly and ethically. But then also, how do we start building the right kinds of incentives, i.e. metrics, for our models, so that we're able to say, sure, that's the accuracy, but, by the way, this is the impact, and this is how it's affecting this community, with actual measurable harm, rather than the specter of harm kind of floating over it, right?
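One hedged sketch of what "how it's impacting this community" could look like as a metric, using invented group names and toy numbers: disaggregate accuracy by subgroup and report the gap, rather than one headline number.

```python
from collections import defaultdict

# Hypothetical records: (group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += (truth == pred)

# Accuracy computed separately for each subgroup.
per_group = {g: hits[g] / totals[g] for g in totals}
print(per_group)  # {'group_a': 1.0, 'group_b': 0.25}

# The gap between best- and worst-served groups is the harm signal
# a single aggregate accuracy number would hide.
gap = max(per_group.values()) - min(per_group.values())
print(f"accuracy gap between groups: {gap:.2f}")  # 0.75
```

The overall accuracy here is a respectable-sounding 62.5%, yet one group is served perfectly and the other barely at all; surfacing the gap is the point.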

Dr. Rumman Chowdhury:
At the end of the day, folks like myself, we're here to make better products. And, sometimes that means not using machine learning, but sometimes it means using machine learning wisely. But, that's really all we're here to do.

Quinn:
It's a small order.

Dr. Rumman Chowdhury:
Just a [crosstalk 00:42:56].

Quinn:
It's a small ask, right? It's not some existential examination of the human race.

Dr. Rumman Chowdhury:
No, it's just a side project.

Quinn:
That's awesome. Again, I appreciate you sharing so much perspective on that. Again, I've got three tiny, very curious children. There's two things. One, there's this amazing GIF out there... I'm a moron... it's Yoda in Empire Strikes Back right before he disappears. And, it just says, Luke asked so many questions that Yoda finally just gave up and died. And, that's what it feels like having an eight-year-old, a seven-year-old and a six-year-old. And, I couldn't love it more. I'll ask my six-year-old, what do you think about the Big Bang?

Quinn:
Okay, so it's endless questions. But, it keeps me in this element of the five whys, or whatever you want to call it: but why, but why, but why? And, sometimes the answer, that first principle, or the practical implementation of getting to the bottom of that system or problem, can be as simple as, like you were saying... and I apologize if I misconstrue this, but building an incentive into a manager's performance review: literally, how inclusive and diverse is your team? Forget the programs, forget all that other stuff; it's a quantifiable, very basic measurement and metric. And, I think, to some people, I can see how that would seem elementary, but it's not. And if we don't fix that root, elementary problem, then all that other shit just isn't going to do anything.

Dr. Rumman Chowdhury:
Yeah, absolutely. I do think here's where, yes, the problem is daunting. And there are some really big issues to work through, right? Quite literally, the systemic biases embedded in the systems that are being constructed. At the same time, that is not always a helpful narrative because as we were talking about climate change, sometimes the big, scary narrative that everything is on fire, and we're all going to die, it just makes people give up and they don't want to try. I do genuinely think that we're at a time where there is, as you said in the beginning of the podcast, there is hope, we can build better systems. There's so much positive potential.

Dr. Rumman Chowdhury:
Sometimes the folks who work in my field are painted as naysayers. And, I find that kind of ridiculous. I think we are the biggest optimists, because we don't just see these systems as being able to recreate the status quo or make the status quo incrementally easier; we actually see them as fundamentally changing things positively. So if folks like myself get frustrated, it's because we're thinking, this could be so much better.

Quinn:
Got you.

Dr. Rumman Chowdhury:
Not that it shouldn't exist. Sometimes, yes, it shouldn't exist, or there are definitely things that shouldn't exist. But, at the same time, education is such a great example. I used to be so excited about Ed Tech. And Ed Tech is actually what got me into responsible AI, because I was like, oh, there's all these really cool things you could do with education and remote education and adjusting education to a child's specific needs. And instead, what Ed Tech has become is this horrific, panoptic, punitive system that reinforces terrible power structures. And, as somebody who spent many years as a teacher, it would literally suck all the joy of teaching out of me.

Dr. Rumman Chowdhury:
Plenty of the folks in this field are academics, and a lot of my friends have said they've gotten so burned out because of these punitive, panoptic, broken systems that are being built, instead of the educational tools that would actually facilitate good communication in an era of COVID.

Quinn:
Sure, we've had a pretty incredible opportunity over the past 18 months to look at, and of course, many of these things had to be rushed to market and suddenly scaled up so that every American child who has internet, which is not every American child, could use them. So understandably, not going to be perfect systems, but, again, we've had this opportunity to look at it and go, wait a minute, how are we thinking about this? Who is included? Who are we having conversations with about this? Who's going to be using it? It's frustrating.

Quinn:
And again, so much of it comes down to who's designing it and implementing it. Who is not there to ask questions, but also, literally, the way companies are designed to function as an entity. Let's say I'm an up-and-coming data scientist, or an engineer, or a founder who will probably implement some machine learning, or some version of AI if we want to call it that, across my products, internally or customer-facing, consumer or B2B or SaaS. What do you feel, at this point in your career, are some fundamental, transferable... not specific things for each case, but transferable best practices, like Asimov's three laws type of things, that you feel are helpful and valuable to start with, that people can apply?

Dr. Rumman Chowdhury:
Yeah, I guess we'll start with the data scientists looking for a job. So everything, of course, is a function of privilege. If you have the privilege of interviewing at a bunch of different places, and being able to weigh your options, etc., ... I'm going to start off, by actually, by saying that I very much don't think the burden of making this change is on the people who are getting their first job. Getting a job in tech is very hard. You should get your foot in the door, right? And, the goal is, hopefully, to put yourself in a position where you can be a change maker. That being said, there are things that I would suggest people do if they have the opportunity to really vet companies and have the luxury of weighing multiple options. So, I would specifically ask about, how do they do work in algorithmic ethics? And how do they build systems responsibly? What is their take on having diverse teams?

Dr. Rumman Chowdhury:
Even specifically, when you have a technical interview, I would actually ask the person interviewing you, what are some technical ways in which you audit or assess your models? Do you have this kind of functionality at your company today or are you planning on having one? To be honest, most of the time, the answer is going to be something like, ah, we don't know and, ah, we're trying, which is actually what should be the answer. Where I would worry is if you have an overly confident person who's saying, oh, yeah, we got this sorted out, don't worry. Like, oh, yeah, absolutely. I would worry then. And, I would also worry if they just laugh at you. They're like, oh, that's not a problem for us. So those two answers I would worry about. A good answer would be something in the middle that's along the lines of... that would be the answer I would give is, we are trying our best and here are the things we are doing to try, but we know that we haven't figured it out yet. Because, this is not a field where anybody has figured anything out. So, that would be one.

Dr. Rumman Chowdhury:
Founders, it's a really great question. I'm glad you brought up founders, because there's a big difference between punching up at the Googles and Amazons of the world and punching down at a founder, right? Because, having been a founder, it's very scary and it's very precarious. And, to your comment earlier about shareholders versus customers, nowhere is that more evident than when you talk to founders. You are beholden to the people investing in your company. So if you are a founder, number one, I would say, choose your investors wisely, and choose people who will allow you to actually say no to unethical potential clients, for example, and who won't push you towards scale at all costs or growth at all costs, or towards being invasive with people's data and just make money, don't worry about it, right? And, you can definitely find such people; they exist.

Dr. Rumman Chowdhury:
I will also plug for a second that I started a VC fund on responsible innovation for exactly this purpose, so that founders actually can get funding from people who do believe in responsible and ethical technology and are willing to invest in it. The other thing for founders is, there's this group that two colleagues of mine, Lyel Resner and Wilneida Negron, have started. It's called Startups & Society. And, the purpose of Startups & Society is to give founders positive examples of, and resources and connections to, VCs that are all really involved in the creation of better technology. So, there is promise for founders; there are resources out there. Sure, it is much easier to go get a pile of cash from a big-name VC who doesn't care, and all they will do is aggressively push you to grow and make money. And, that's a choice. But, there are options.

Dr. Rumman Chowdhury:
So, it's hopefully going to be a better playing field when it comes to funding and growing companies. And, consumers are also more conscious as well. So it's really important that founders think about this kind of thing.

Quinn:
That's really helpful. I appreciate it. And yeah, there's so much VC money out there, and the stakes when you take those checks are just so high that sometimes... on the one hand, you want to cheer when it's a product or a company or founders that you love, and they get their big checks, and they can go build and scale something amazing. On the other hand, I've invested in some things, I've done startups, I've worked on the inside of big companies. I also look at it as, oh shit, what's being asked of them now is, in some shades, pretty untenable, if not impossible, to reach without, necessarily... I don't want to say shortcuts, but without pausing to ask some questions sometimes about, why haven't you hit your growth marks?

Quinn:
Yeah, so that's really cool to hear about those. We'll definitely put all those groups in the show notes. Before we get into the last sort of few questions we ask everybody, I'd love to hear about Twitter's... about your Bias Bounty challenge that you've launched.

Dr. Rumman Chowdhury:
Yes. I'm so excited. I'm doing a happy dance just because we're on audio.

Quinn:
I want to nerd out about this for one second. Tell us what it is and why is it so groundbreaking that you guys are doing this?

Dr. Rumman Chowdhury:
So two years ago, I was at DEF CON, and I had a really great time and I learned a lot. I saw this parallel world to my world of algorithmic ethics in the field of privacy and security, in infosec. I was on a panel, we were talking about deep fakes. And, I remember, we ended up in a whole conversation about this idea of an algorithmic bias bounty. So, the way a bug bounty works in the security and privacy space is, a company will put up a piece of software or some product that they have and say, if you make these bad things happen, we will pay you.

Quinn:
Sure.

Dr. Rumman Chowdhury:
So, basically identifying vulnerabilities in what we're doing. And, that stuck in my brain for literally two years. And, until Twitter, I didn't really work at a company or have a situation where I could do that, because the thing is, you have to be the owner of an algorithm, you have to be a company that has an algorithm, and the company has to say, yeah, you know what, put this out there. And, the first person I ran this by was our CTO, Parag. I was like, Parag, this is a wild idea, Parag, but tell me what you think? He was like, okay, go ahead, go do this. So we pulled together this challenge. It's the first of its kind. We know we're experimenting; we're figuring it out.

Dr. Rumman Chowdhury:
So, we launched officially today. It's part of DEF CON. We're using HackerOne, which is a vulnerability platform, as the core platform we're hosting on. Our rubric and everything is up online. People have a week. It's open to anybody in the world. And, that's actually a very intentional thing. Often our field is very Western. It's actually very specifically a particular type of person, right? Most of us were raised middle class or upper middle class. We all have a very Western education, almost all of us college educated. And, well, I wanted different perspectives from people all around the world who are going to tell us, you know what, you need to think about caste-based discrimination, because you don't talk about caste in America, right? Or, you need to think about this particular sub-community that doesn't exist in Western countries and is very specific to some part of the world.

Dr. Rumman Chowdhury:
So yeah, the challenge is up. We're offering money, cash prizes, for the people who come in first, second, and third place, as well as most innovative and most generalizable. We also have an amazing panel of judges. So for anyone listening to the podcast who's a bit of an infosec nerd: Mudge works at Twitter. If you follow the OG security world, you know Mudge. Mudge and I have been in many meetings together, but yesterday was my first time meeting him one-on-one. I had to take a minute to fangirl.

Quinn:
100%.

Dr. Rumman Chowdhury:
100. But, he gets so embarrassed. My colleague on the team, I promised her I wouldn't fangirl too long because she gets embarrassed about it. But, oh, I have to. Come on. Anyway, so I fangirled for a minute. But, Mudge is super excited. He's one of our judges. Ariel Herbert-Voss is at OpenAI; she's another judge. Patrick Hall, who's a data scientist running a consultancy firm on algorithmic audits, as well as Matt Mitchell from the Ford Foundation. He's sort of a public-interest cybersecurity hacker. So, we have a wide range of super interesting people. We're really excited about people's submissions, and what they're going to-

Quinn:
So, that's awesome. It's a murderers' row of OGs, like you said, judging this thing. This will probably come out after that's complete, so let's go back to the future for just a second before we get out of here. What don't you know that you're hoping to find out from this?

Dr. Rumman Chowdhury:
Oh, tons. Tons. I think the way we sometimes identify harms can be incorrect. I think the ways we are framing these problems can be problematic. I think, where we choose to investigate and how we choose to investigate can be limited and flawed. And, we're really hoping for a rubric that's generalizable, that we can start using and the public can start using across a lot of different models.

Quinn:
All right, last one, and then you're out of here. What is a book you've read this year that has opened your mind to something you hadn't considered before or changed your thinking in some way?

Dr. Rumman Chowdhury:
Oh, I love this one. There's so many. A book that I recommend to almost everybody who works in my field, or in any field that's trying to drive change, is this book called Against Purity by Alexis Shotwell. And, it tackles a notion that a lot of us struggle with, this idea that there's some pure, ideal form of something. Climate change is a good example: are you really a climate change activist unless you're living in a shack in the woods, drinking rainwater? What she talks about is how that is a colonialist and derivative mindset. So, it's really helpful for framing how we do this kind of work effectively.

Quinn:
I love that. The climate world's legion of gatekeepers is just a bit of an issue that we're trying to wrestle with. So I love that perspective. Rumman, I know you've got to run. Thank you so much for your time and your perspective, and all the work you're doing there. I know you're just getting started, at least at Twitter. So I can't wait. I was clicking on something to make sure I put you in the show notes, and I saw that you're not verified. And, it's the funniest thing I've ever seen. Thank you so much. I really appreciate it. Please give Jack my love. And we're going to turn this baby off and I'll send you a little follow-up. But, thank you again; we really appreciate what you're doing out there. Twitter's important, and we're lucky to have you.

Dr. Rumman Chowdhury:
Thanks, Quinn.

Quinn:
Thanks to our incredible guest today. And, thanks to all of you for tuning in. We hope this episode has made your commute or awesome workout or dishwashing or fucking dog walking late at night, that much more pleasant. As a reminder, please subscribe to our free email newsletter at importantnotimportant.com. It is all the news most vital to our survival as a species.

Brian:
And, you can follow us all over the internet. You can find us on Twitter @importantnotimp. It's just so weird. Also, on Facebook and Instagram @importantnotimportant. Pinterest and Tumblr, the same thing. So check us out. Follow us, share us, like us, you know the deal. And, please subscribe to our show wherever you listen to things like this. And, if you're really fucking awesome, rate us on Apple Podcasts. Keep the lights on. Thanks.

Quinn:
Please.

Brian:
And, you can find the show notes from today right in your little podcast player and at our website, importantnotimportant.com.

Quinn:
Thanks to the very awesome Tim Blane for our jamming music. To all of you for listening. And finally, most importantly, to our moms for making us. Have a great day.

Brian:
Thanks, guys.