March 8, 2022

Abhishek vs Terminator


There are these metaphors that sum up a lot of what we’re trying to do here, and what needs to be done on planet Earth, from climate change to COVID to AI ethics, which is something you definitely need to know and care about before it's too late. Also: there are like seven Terminator movies, and only a third of them are any good.

Anyways!

Re: today’s conversation, we need to design and implement standardized AI ethics regulations across everything AI touches, so, everything, while also asking questions like: what is “ethical”?

And who gets to decide?

And why do they get to decide?

And how are they incentivized to decide, in today’s society?

And who provides those incentives?

Who gets to regulate all of this?

Who elects the regulators?

And how do we make sure companies actually implement all of this?

These are among the most important questions of our time, because AI touches everything you do.

The phone in your hand, your insurance, your mortgage, your flood risk, your wildfire risk, your electronic health record, your face, your taxes, your police record, those Instagram ads for the concerningly comfortable sweatpants, your 401k –

– some version of AI, whether it’s the AI we always thought was coming or not – is integrated into every part of your life.

My guest in Episode #132 is Abhishek Gupta.

Abhishek is the founder and principal researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy.

He works in machine learning, serves on the CSE Responsible AI Board at Microsoft, chairs the Standards Working Group at the Green Software Foundation, and is the author of the widely read AI Ethics Brief and the State of AI Ethics Reports, the most recent of which just dropped.

Abhishek helps me ask better questions every single week, and his work is instrumental in helping society build not only more powerful and equitable AI, but AI that somehow improves on the most important element of all: us.

-----------

Have feedback or questions? Tweet us, or send a message to questions@importantnotimportant.com

New here? Get started with our fan favorite episodes at podcast.importantnotimportant.com.

-----------

INI Book Club:

 

 

Links:

 

 

Follow us:

 

Transcript

Episode 132

Quinn:
Welcome back shit givers. This is Important, Not Important. And I'm Quinn Emmett. There's these metaphors that sum up a lot of what we're trying to do here and what needs to be done. So walk and chew gum at the same time, or build an airplane and fly it at the same time. To use some more practical real world examples, for climate change, we need to mitigate further damage and also adapt to baked in damage.

Quinn:
We need to decarbonize as fast and as comprehensively as possible, but also find the most transparent and measurable and effective ways to actually suck the existing carbon out of the sky. We need to find new and better antibiotics, but also cut down drastically on the number of ones we use today in people and animals. We need to feed more people healthier food on less land while we prevent deforestation and reforest where we can and with native carbon sucking trees.

Quinn:
We need to train millions more Black doctors and nurses while finding ways to drastically improve biases and outcomes throughout the current healthcare system today. We need to remove dark money from the political system while also making sure the candidates who are committed to doing so from top to bottom are well funded enough to actually win elections and get into office.

Quinn:
We need to deal with this pandemic while we plan and prepare for the next one. We need to replace every vehicle on the planet with an electric one, but also increase public transportation participation with more lines and dedicated lanes and more reliable service to more places. We need to hand out scholarships and green cards to every scientist and graduate who wants one, but also make sure current citations reflect a wider canvas of lived experiences than we already do.

Quinn:
And for today's conversation, we need to design and implement standardized AI ethics regulations across everything AI touches. So everything, while also asking questions like what is ethical and who gets to decide? Why do they get to decide? And how are they incentivized to decide and write those ethics in today's society? Who provides those incentives? Who gets to regulate all of this? Who elects the regulators and how do we make sure companies actually implement all of this when they're designing tools on deadlines?

Quinn:
So one, yes, in case you're wondering, it's super fun to be married to me. And two, these are among the most important questions of our time because AI, like climate, touches everything you do. And we're going to get into some of those examples today. Today's a broader conversation about this. The beginning of hopefully a series of conversations like these. Think about it. The phone in your hand, your insurance, your mortgage, your flood and your wildfire risk, which both affect your mortgage and your insurance.

Quinn:
Your electronic health records, your face, your fingerprint, your taxes, your police records, those Instagram ads for the upsettingly comfortable sweatpants, your 401k. The point is some version of AI, whether it's the AI we always thought was coming or not. Or more likely today, some version of machine or deep learning is integrated into every part of your life. You are training those algorithms every day and you are at the behest of them.

Quinn:
So I don't think it would be some groundbreaking reveal to you folks that these incredible technologies are being implemented and profited from at light speed without very much say from you about any of it. And when I really need to go deep on this subject, on AI, on the ethical front, on the practical implementation side, there's one place I turn, and that's the Montreal AI Ethics Institute.

Quinn:
They help me ask better questions. And because like climate again, AI touches everything, their excellent analysis has really actually helped me level up so much on all of this stuff. So my guest today is Abhishek Gupta. And Abhishek is the founder and principal researcher at the Montreal AI Ethics Institute. And they are an international nonprofit research institute in one of my favorite cities, Montreal. This whole thing is just a lure for me to be able to visit him.

Quinn:
And they've got a mission to democratize AI ethics literacy. Abhishek works in machine learning. He serves on the CSE Responsible AI board at Microsoft, where his work helps solve among the toughest technical challenges at Microsoft's biggest customers. In case you're wondering how I'm going to make this even more interdisciplinary than it already is, through his work as the chair of the standards working group at the Green Software Foundation, Abhishek leads the development of the Software Carbon Intensity standard towards a more comparable and interoperable measurement of the environmental impacts of AI systems.

Quinn:
We didn't even get to that today. So we'll do that another time. His work focuses on applied technical and policy measures for building ethical, safe and inclusive AI systems, and truly has been recognized by governments across the world. Lastly, and more pertinent to you guys here, and we've noted them in the newsletter a thousand times, he is the author of the widely read AI Ethics Brief and the State of the AI Ethics Reports, the most recent of which, the sixth volume, has just dropped, which I've annotated heavily and which is a part of our discussion today.

Quinn:
As a reminder, you can always reach the show at questions at importantnotimportant.com. Or you can reach me on @Importantnotimp or @quinnemmett. Let's go talk to Abhishek. Abhishek, welcome to the show.

Abhishek Gupta:
Hey, great to be here, Quinn. Excited to talk about AI and a whole host of other things with you today.

Quinn:
Yeah, I imagine we'll dive right into Star Trek. It'll be great. Abhishek, we like to start with one important question to try to set the tone for this fiasco. Instead of asking for your entire life story, as exciting as I'm sure that is, I like to ask the question, Abhishek, why are you vital to the survival of the species? And I encourage you to be bold and honest. You're here for a reason.

Abhishek Gupta:
So I can be a little bit like our last defense against the Terminator both metaphorically and hopefully realistically too. I mean, the machines are marching on us, right? Aren't they? Or at least it seems to be with all the hype around AI out there in the wild. So I hope to be that last defense, that last gun slinger out in the wild protecting us before the machines completely turn us into little batteries. And now these are all mixed metaphors, right? I'm mixing up Terminator, Matrix and all the other cool sci-fi that we have going on for us.

Quinn:
That's perfect. I just started watching the other day the new Matrix movie without giving it away. I don't know if you've seen it yet, but they do the thing where he's in the goo and they've got him plugged in 50 different ways. And I was like, "I feel like we're closer to that than we've ever been before." It's just a matter of time.

Abhishek Gupta:
I mean, think about all the hype around putting on a HoloLens, putting on an Oculus. That's getting us there. Isn't it? I mean, I was reading this very scary quote, which is that once you are inside a womb, right, with all your needs met, why would anyone ever want to leave, right? And that is a scary thought. I mean, we have a biological example for that, right?

Abhishek Gupta:
A baby is perfectly comfortable having all their needs met, perfect temperature, everything. That's what's scary about all this is that we're numbing ourselves into that with the perfect set of entertainment, hand curated stuff. Here's what you should buy. Here's what you should listen to. Here's who you should be friends with. In fact, I just finished reading this past weekend Fahrenheit 451.

Abhishek Gupta:
And I feel like I missed the boat on it the first time around, right? I guess I wasn't born then. But that had a fascinating thing with the Parlor, where you get fed this perfect mix of propaganda and entertainment and connection. And people are just numb, right? They've forgotten what it means to think critically apart from all the book burning and everything else that goes on there. As you said, we're closer to all of this than we think we are.

Quinn:
It's pretty wild. Well, I couldn't think of anybody better suited in a thousand different ways to be our last line of defense against the Terminator. So I'm so excited it's you and I will be right behind you. As I talked about in the intro, one of the reasons besides just being a super nerd, right? Why I feel like this conversation, which could be an infinite number of conversations going down a number of different rabbit holes and branches of the tree is so important is because however you want to define AI, artificial intelligence.

Quinn:
Whether it looks like what we thought it would look like Star Trek or anything else, it is in every part of our lives. Which is a lot like when I have these climate discussions. Which is like, there's no climate discussion. It applies to everything everywhere. It impacts your life in every different way in ways you can't even imagine. Like the jet stream down to the air you breathe and the water you drink, and whether your street floods on a sunny day.

Quinn:
But AI has been interesting, right? Again, however you want to define it in the past or science fiction or in reality or processing power. It's had these moments over the decades, right, of being this mirage that's constantly out of reach. It would come forward and then it would hibernate and we'd have these AI winters as they describe it, right? But now besides just already being a part of all of our lives the past five, six, seven years, tech is leaving research labs and the hands of the folks who develop it and being capitalized on immediately.

Quinn:
And in ways that the original folks who'd worked on them couldn't even have imagined. And maybe they agree with, or they don't. Maybe they're the ones bringing it out of labs. The point is, I'm so curious because your group has been so helpful to me. But I'm curious, when did it become clear to you? I wonder if there's this moment of intuition of, "Hey, we're not talking about the ethics side of this enough. I've got to do something about this." When was it, "I need to start putting this conversation out in the world more." And how did you come upon the idea of building the Montreal group?

Abhishek Gupta:
I think the moment was pretty clear for me, right? It was in the summer of 2017, I was at the UN ITU, the International Telecommunication Union, for the inaugural AI for Good Global Summit in Geneva. And this was, to contextualize the landscape, a year before the GDPR came into effect, right? Which was May 2018. So Europe was all abuzz with privacy, data rights and all of those associated things.

Abhishek Gupta:
I think for those of us who remember, we got a thousand email notifications asking us to consent to things and for services that we didn't even remember signing up for in the first place. So there's that. But also for those of us who were involved in helping transition our products and services to become GDPR compliant, right? So there was a big buzz and momentum around it.

Abhishek Gupta:
But when I came back to Canada after that trip to Geneva, I found that the Canadian landscape was highly fragmented. And there were very few and far between pockets where conversations on the societal impacts of technology, and more specifically AI, were taking place. And this was right around the time when Canada was also, at least in the public consciousness, bursting onto the scene with AI.

Abhishek Gupta:
And that came on the heels of the announcement of Element AI, which was one of the biggest success stories at that time on the Canadian AI scene. It was backed by Yoshua Bengio and some of the other well known founders in the Montreal startup ecosystem. And that was when I realized, "Hey, by the way, why don't we have a coherent national discussion around some of these impacts." Clearly this technology is very powerful. There isn't that much doubt about that, right?

Quinn:
Sure.

Abhishek Gupta:
It seems odd now to think that, "Well, how was it that the ethical discussions weren't mainstream." They just weren't. That was the state of affairs. And we've had, I think, a significant push over the last I would say two and a half years where we've seen a lot more popular media coverage, a lot more discussions groups around the world. Back then it wasn't the case.

Abhishek Gupta:
And another thing that I realized was, in accessing those small, few and far between pockets, there were quite a few barriers for people to participate in those discussions. Even those who came in with specialized knowledge or with backgrounds in this space such as myself. There were two kinds of barriers. There were self-erected barriers, for folks who self-selected themselves out because they thought that, "Hey, I need a PhD in math or computer science to be able to meaningfully contribute to these conversations."

Abhishek Gupta:
And on the other side there were barriers erected by those who were holding those conversations, "the gatekeepers," where you needed to come with a wanted set of credentials, or be from certain backgrounds, to be able to participate. And so it started really as an experiment in that summer where I thought, "Hey, let's band together with a few other folks who are interested and open these discussions up for people to come and articulate why they're concerned, what their hopes, aspirations, fears and concerns are."

Abhishek Gupta:
And it grew into a movement from there, right? We started to just through word of mouth, become this place that was safe haven, a welcoming embrace for people coming from all walks of life really with either a very deep background in this space or with no background in this space. But what they did find was that we were willing to work together to elevate the level of discourse to provide nuance to the conversation. So we weren't talking about Skynet and Terminators coming after you, right? Even though I would love to be the last defense.

Quinn:
100% we've made that clear.

Abhishek Gupta:
But we were talking about very real concerns, right? And we were talking about not hypotheticals. We were talking about systems that were currently deployed and are currently being deployed and what those realistic impacts are. Because the impacts were very real on people who were sometimes even present in the room, right? They had experienced algorithmic discrimination. They had been denied, for example, credit, amongst other things.

Abhishek Gupta:
And that was really, I think, a moment where we realized that, "Hey, what we're trying to do here is bringing value and is an effective mechanism." To borrow the term from the tech ecosystem to scale, right? Not in a gimmicky way, but in an impactful and meaningful way in the sense that the nuance that we were bringing, people had the opportunity to take this back with them to their communities, to their organizations, to their work, their colleagues, their families and help everybody be more informed.

Abhishek Gupta:
Because we wanted to create these local champions, these advocates for this conversation. So that really for me was the time when I realized, "Hey, there are lots of other people who've thought about some of these issues." For example, clinicians thinking about informed consent. So we don't always have to reinvent the wheel.

Quinn:
Sure.

Abhishek Gupta:
Let's stand on the shoulders of giants.

Quinn:
I love that. And I thank you for having that moment of clarity and realizing you needed to put something together. It's remarkable that firewall of people who self select themselves out of these things because they don't believe that they have the credentials or the skills or the corporate rank or whatever it might be to contribute to this discussion. And on the other side, there's the gatekeepers going like, "You're exactly right. You don't have the credentials and you don't have the positions to contribute to that discussion."

Quinn:
Holding on that for a second, because your other point was, we have this history of, for instance, the healthcare segment saying, "We've got consent. We've got HIPAA. We've got these things. We don't have to reinvent the wheel." And at the same time as much as those processes are there, we know that two things. One, they certainly don't work perfectly. We've got all these ethical issues across so many different systems in our world. But at the same time we can build on them and we can do better to make those more inclusive. Because, again, AI touches everything and is only going to continue to be that way.

Quinn:
I think back and I feel like you and I connected very quickly on how nerdy we both are and how deep we go on this pseudo utopian idea of Star Trek and all these different things. And I feel like your moment of clarity, where you're like, "I have to start these conversations." It's a similar feeling I had when I stumbled upon your work and was like, "Oh, this is for me. I need this as, like I said, a liberal arts major who's doing this job of trying to help folks ask better questions and then provide actionable measurable advice in a direction."

Quinn:
It's so helpful because you guys are trying to do both, right? You're trying to instigate and nourish the conversations about what are these ethics and what does it mean? And what can we learn from prior sectors and prior examples of something coming to fruition so quickly and so overwhelmingly. But also trying to say like, "Yes, but also what are the practices we have to put into place while we're having these discussions because it's rolling down the hill?"

Quinn:
And so I've learned so much from you guys and there's some great books that have come out in the past couple years. We had on Dr. Carissa Véliz who wrote a book called Privacy is Power, which is fantastic. And Atlas of AI, I really enjoyed. And The Alignment Problem, which speaks to basically our entire society, which is the whole thing, right?

Quinn:
But I also think back to these books that I grew up with, like Heinlein's The Moon Is a Harsh Mistress, and I, Robot in some discussions. And the new stuff like Klara and the Sun, which if you haven't read is fantastic. Or super weird stuff like Hyperion, right? It was interesting to me thinking about those, right?

Quinn:
Which can often paint this portrait over a few hundred pages of this is how things go wrong not because of a machine or algorithm, but because of us. And so it was interesting in the recent volume six of the report learning about Sean McGregor's work with the AI incident database, which I loved. And of course, first, because my way of dealing with things that are scary is to make jokes.

Quinn:
But it reminded me of those signs you see in offices, like "three days since the last workplace safety violation," right? And someone has to wipe off the number and set it back to zero because someone splashed coffee. But in Sean's case, we're talking about someone hacking an entire continent's, like, eyeball records, right? And you wipe the number and go back to zero.

Quinn:
But you can tell from the data, right? That there is, as he put it, this uneven distribution of harms. And it just comes back over and over again, whatever we're talking about, to the teams that designed these tools not being inclusive enough or even at all. And if that's by design or by accident or just how we're built. And I wonder how much of that thing, the data behind that and the questions behind those, how much that informs what you guys are trying to do. Is that a fundamental building block of how you have these conversations?

Abhishek Gupta:
Yeah, absolutely, right? And I think it's great that you pointed to Sean's work. Sean and I have known each other for many, many years now and he's such a sharp mind, right?

Quinn:
It seems like it.

Abhishek Gupta:
Yeah. It's a practical tool, right? That's I think probably the direction that we were headed in, which is that it's great to get these overarching themes and pictures of what might happen. But what is it that we can do today, right, or tomorrow or next week? And that's, I think, what the incident database helps with. It helps to bridge that gap to action so that we can start to act on some of these insights.

Abhishek Gupta:
And that does inform to a great extent the work that we do at the Institute as well, especially on the fundamental research side of things or some of the consulting that we do for large public entities, which is, well, why are some of these things still happening? And now we're talking present moment 2022, right? It's not that we're strangers to the societal impacts of AI. At least I would hope not with all the coverage that we get.

Abhishek Gupta:
Especially as you said, Carissa's book on Privacy Is Power. A fantastic book. I read it. It makes it very, very clear that this is something that is impacting our daily lives multiple times a day. It's not just that it's a single service that we use and get impacted by, right? But for us what's been interesting is, well, what is the ecosystem within which we're operating, right?

Abhishek Gupta:
And how is that actually shaping or failing to shape rather these concerns and why action is not being taken? And I think one of the things that I've realized as a part of doing research, as a part of doing some of these consulting engagements is that we're really not paying enough attention to the organizational aspects of where these products and services are being built and how they're being deployed, how they're being procured, right?

Abhishek Gupta:
Because, again, let's also not forget that it's not just some companies who are building and selling products and services, but it's also other organizations who are choosing to procure and then deploy these products and services. Because not everybody's got 10 brilliant AI engineers, ML engineers I should say rather and PhD folks who are developing new stuff, right? So then the question is, okay, well we are aware that these things are happening. Then what gives? Where is the gap?

Abhishek Gupta:
And I think it comes from the fact that we've got a fundamental gap today in terms of organizational behavior analysis, in terms of what the organizational incentives are, how employees are functioning within these large organizations, what the sticks and carrots for them are? What are the stated goals and what are the implicit goals, right? So the stated goal might be that, hey, we've put out the set of ethics principles or guidelines. And my gosh, there are...

Abhishek Gupta:
I think I was just checking this weekend. I think over 700 of those sets of principles and guidelines. At least if you go and check out OECD.AI, they've got a little compilation. Well, little is an understatement. Over 700, right? So that's a lot. And let's be practical here. If you are someone who's developing a product or service, you are on a business deadline.

Quinn:
Sure.

Abhishek Gupta:
I'm not going to review 700 sets of guidelines to figure out which one works for me. And even if my organization has a set of guidelines, if it's overly broad and vague, me as someone who's writing code, let's get very practical and concrete here, and I have to make a pull request against our code base and have someone review it, what are some practical things that I should be looking out for, right?

Abhishek Gupta:
As the person who's doing the code review, for example, what are the benchmarks that I should be evaluating against? And those are things that I think are getting left out. And it's not farfetched to say, "Well, hey, we're going to put some of this by the way aside because we still have to meet our deliverables."

Abhishek Gupta:
Because wouldn't I love to get the bonus at the end of the year? Because what am I getting evaluated on, right? As an employee, as a manager of a team, as the head of a business unit, right? What are the things that I'm getting evaluated on? And are ethics and other guiding principles front and center, when we're making those evaluations and determinations?

Quinn:
Sure.

Abhishek Gupta:
I don't think so. At the moment that doesn't seem to be the case that they're explicitly outlined in any of those evaluations.

Quinn:
And if they are, they're not practical in any way. I mean, we've all worked at companies where you look at the mission statement on the wall in printed letters and you're like, "Okay, but what does any of that actually mean to my day-to-day job?" Much less, again, creating these tools that, like you said, are on ridiculous deadlines and affecting millions of people. And you're going, "That's how I'm being judged."

Quinn:
It makes a lot of sense. I mean, I know I harped on it so much and accidentally dominated our discussion the other day with the group about incentives. But if there's anything that I keep coming back to in my work here across all the different things we talk about whether it's decarbonization or offsets or infectious disease or whether to drop masks or AI ethics, you're just constantly going, what are the incentives to employ these things, these tools, these people, these regulation, this legislation, this procurement in an equitable way?

Quinn:
Who is designing those incentives? Who gets to design those incentives? Who gets to even be in the discussion of them? And you can just keep backing up and going like... I mean, that's the whole point of the alignment problem, right? Which has been around since before the book, but the book does such an eloquent job of describing it's us, right? It's about the operators. And that's where we're not just...

Quinn:
Again, we can call out the technology all day and how it might be broken for mortgages or policing or whatever. But whatever. The internet is the mirror of society, right? And without this wide variety of shared values being incorporated, without cooperation. If we don't have those and then seek to employ them in a practical, specific way like you're saying, you just inherently get, because we've always gotten, massively biased tools and systems that just hurt people or reinforce existing marginalizations. It's the story of what we do. And I do believe we can do better. And that's why I love what you guys are doing because you're constantly asking these questions going like, "Why? And we can do better. And how can we help?"

Abhishek Gupta:
It's like children, right?

Quinn:
Sure.

Abhishek Gupta:
It's like small children asking why and why again.

Quinn:
Sure.

Abhishek Gupta:
I think we need that a little bit because it's a sad state of affairs where I think there is this overall diminishment of attention spans on really, really difficult long term problems, right? Because they require a persistent focus, a commitment to see them through. And with just so much happening in the world... And it's not to say that that hasn't always been the case, right? But I think now we are exposed to all of that constantly.

Abhishek Gupta:
And so it's almost like a driver to bring our attention back again and again and say, "Hey, by the way, let's not forget that these are concerns, that these are the questions that we need to be asking." I mean, think about at the start of the pandemic where some of the surveillance technology was being rolled out to essentially help curb the spread of the pandemic, right? But we also needed to ask questions in parallel around, well, hey, are we going to dismantle this once the pandemic is over? It isn't over yet.

Quinn:
Not easy to do.

Abhishek Gupta:
Not easy to do. In fact, even if you look at the history of passports themselves, they did have their origins in protecting against some of these concerns, right? And then did they ever get rolled back? No. Now everybody needs to carry one if you're traveling internationally, right? And we've completely forgotten about the history there because, well, it's just an accepted part of society.

Abhishek Gupta:
One of the other things is that we as humans are tremendously adaptive creatures, right? We can normalize anything very, very quickly. And so we forget and it fades into the background. So we need folks who ask these critical questions, right? Who keep bringing up and questioning what forms the fabric of society. And that it's not a given. Nothing is a given. It originated at some point for some reason.

Quinn:
Sure.

Abhishek Gupta:
It's just that we've forgotten what the reasons were and why this was put into place. So I think every so often getting a kick in the back and saying, "Hey, by the way, why do we have this again?" And seeing if there's something better out there, sure, is a worthy thing to pursue.

Quinn:
Sure. I mean, I feel like there's so many ways to agree and expand on that. You go to a public pool and there's 10 rules that are common to every public pool. And then there's maybe an 11th that seems totally obscure. And you're like, "That's there because someone did something that was wholly inappropriate or someone got hurt, and now we have to make a rule because of it."

Quinn:
And I think also the other day someone said, "Oh, so we're going to drop masks on planes. But 20 years later we're still putting shampoo in bottles that are one ounce big and taking off our shoes." Our sense of risk balance and overall again, assessing and going, like you said, "We not only can, but we will normalize anything and everything." And so quickly whether it's data sharing or Snapchat maps or any of these things. It's incredible.

Quinn:
So I am inherently, obviously a White guy born in the late 20th century in the US. And we know that the US doesn't really have any recent or substantive data privacy or AI legislation. Europe, like you were alluding to, has GDPR or as most people probably understand it, the question "Accept all cookies?" that you see every day and drives you crazy. But that's what that is.

Quinn:
I know they've got new insurance regulation. But I'm curious outside the West, are there countries that are handling these conversations and these standards or regulations if they're there in a more progressive or more... There's got to be someone I hope.

Abhishek Gupta:
Yeah. I mean, for example, India has the PDP coming out, right? So the Personal Data Protection Bill. That is now being debated and talked about and will probably come into effect. Vietnam has got something around privacy as well. And there are lots of other countries that are exploring what legislation in this space can look like. Data privacy being, I think, a natural entry point to talking about the more pervasive impacts that technology, and not just technology but AI, is going to have, right? So you now have got the Algorithmic Accountability Act from Senator Wyden and colleagues in the US, which we had some contributions there to make.

Quinn:
Hey now.

Abhishek Gupta:
Yeah. We were very happy to be invited into the process and we fully endorse it as something that will help to move the industry forward or the ecosystem forward rather. Both assessing, but also mitigating the negative impacts of technology. You've got similarly the EU AI Act. But, again, those are Western examples. Speaking of non-Western examples then.

Abhishek Gupta:
If you look at places like India and Vietnam, what's interesting is that you've got this inheritance that's coming from GDPR as a starting point, right? So for better or for worse, the GDPR did set a precedent, right? It woke up in the broadest public consciousness that, hey, privacy is important. Here's how it's being intruded upon. You need to think about it. You need to take action, right?

Abhishek Gupta:
But I think one thing that's been lost as a part of it is that other countries now almost always default to things that are in the GDPR as a starting point. Not to say that the GDPR is bad. It did set a lot of great precedents. But in some situations we are forgetting that notions of privacy might also have certain cultural specificities, certain localizations that we need to be aware of, cognizant of.

Abhishek Gupta:
And just wholesale copying that and making it your own privacy legislation isn't the smartest way to go about it. And I think that's where really we are... One of the papers that we have under review at the moment is with a colleague of mine over at the Center for Legal Policy in India and a colleague from the City University of New York, who is from Vietnam. We've been doing a comparison of the upcoming privacy legislation in India and Vietnam against the "global standard" at this point, which is GDPR, right?

Abhishek Gupta:
The idea is, well, what are some things that are great from inheriting that as a standard, as a starting point? But what are also some other things that we need to consider? As you were talking about it earlier, who are the people who have been invited to the table as these pieces of legislation are being drafted? Who's being left out? What do we need to think about? What are some localizations that we need to think about? And also what is going to be the impact if we have slightly differing versions of privacy legislation for companies that...

Abhishek Gupta:
Let's face it, right? All of them are based out of a very small part of the world and technology emanates from there and spreads everywhere else, right? What does that mean then for operating across these jurisdictional boundaries, where different legislations apply? Are we going to then have some sort of a lowest common denominator? Lowest in this sense being, I guess-

Quinn:
Sure.

Abhishek Gupta:
Perhaps the most stringent common denominator maybe is the way to put it, where you just don't even think about anything else. You're like, "Okay, well, what's the most stringent set of legislation and regulation?" Let's just comply with that standard.

Quinn:
Sure.

Abhishek Gupta:
And hopefully, that works out for the rest.

Quinn:
What is one thing that could be shared, acknowledging an enormous number of differences, again, like you said, from privacy to ethics, the whole anthropology of it? Is there a baseline that we can at least start with that everyone can agree on? And I imagine even that is incredibly difficult, even if it's just the right to delete your data. I mean, again, I don't envy your job consulting on these things.

Quinn:
But at the same time, it's so necessary to keep asking those questions. Again, as we're trying to decide... If decide is the right word. As we're trying to incorporate more and more nuance to these ethical discussions, which you can do all day. Philosophers have been doing it for thousands of years, but at the same time trying to point them towards practice because we have to start somewhere.

Quinn:
And, like you said, GDPR is nowhere near perfect, and it can be a sledgehammer and can make competition more difficult in some ways. But none of them are going to be perfect. So what are these levers that we can hopefully all agree on, if at all?

Abhishek Gupta:
And I love that idea that you bring up that it makes competition difficult. Because I think that's one thing that also doesn't get talked about often enough, which is how does it alter the market dynamics, right? For organizations that have tons of money, if you have 25 different legislations, sure. They'll just throw more lawyers and more programmers at it and comply with each of those legislations and regulations and continue to have their position in the market.

Abhishek Gupta:
But that forces out smaller actors, smaller players who then have to restrict themselves to perhaps one jurisdiction because they just don't have the resources to be able to comply with legislation and regulations across 25 different jurisdictions. So it does have an impact on the market dynamics as well. One of the things that I've been thinking about as we discuss all of these issues is, well, can we create a set of commons, right?

Abhishek Gupta:
Can we create a set of shared patterns, shared tools that are perhaps funded by some of the biggest organizations who have resources, but then who make them open source and accessible and maintained? And then there's a lot to say on the open source front. But continuously maintained so that smaller actors can take those on. And we continue to have a competitive market for these products and services.

Abhishek Gupta:
And it's not that this is a pipe dream and doesn't have any precedent. There is this fantastic group called Tech Against Terrorism, which actually coordinates several actors who have resources. So big organizations who have resources. They create tools for content moderation, for detection of hate speech, for detecting, for example, if criminal activity is being organized.

Abhishek Gupta:
They make those tools available for organizations who don't have those resources and then they can use those tools to empower and equip themselves to protect against some of these malicious actors as well. So all that to say that this is not without precedent and I think it just requires perhaps some willingness, some momentum.

Abhishek Gupta:
A coordinating body that brings together some of these folks and says, "Hey, by the way, we're not going to continue to fragment the space by bringing in more proprietary tools, services and compliance mechanisms. But instead we're going to work towards building a set of commons that any and all of us can use." And let's also not forget that one of the greatest powers of the open source ecosystem is that anybody can go and analyze what's being done.

Quinn:
Sure.

Abhishek Gupta:
So I would say this carefully. There is a potential for building consensus. Because let's also be realistic here that in most cases open source software also tends to then cluster around a small set of core maintainers who have the resources and time to continue investing in it. And the rest of the community may or may not fully engage in that consensus process.

Quinn:
Sure. In your day-to-day, week-to-week, month-to-month, again, working on these discussions and publishing your research and the conversations and the analysis, but also doing the consulting work and legislative stuff, do you feel like without substantive carrots and sticks as we talked about, without those incentives that a commons like that is possible? Or do you feel like we need to crack down on those carrots and sticks first?

Quinn:
And I'm not trying to be too pessimistic about it, but, again, you look at the massive amount of wealth being created off these tools every day by the biggest companies we've ever seen. How incentivized are they to work on these things, to establish a commons like this, to backtrack on some of the things they've done? And, by the way, that's acknowledging all of its many, many, many warts.

Quinn:
For Facebook, they are seemingly walking away from facial recognition in some ways. Again, it remains to be seen how that goes. I'm just trying to be a little skeptical about it, I guess.

Abhishek Gupta:
No. And perhaps you're right, right? I mean, at what point did that come around in their transformation, right? And what were the triggers behind... Those actions are opaque to us, right? So I think attributing any intention, good or bad is difficult. It's really unclear how they arrived at that decision and why they arrived at the decision. And maybe they just weren't getting enough business value out of it that they said, "Hey, it doesn't hurt. We'll get a quick PR win. And let's just say that we're not going to use it because what were we really getting out of it, right?" So that could totally be the case.

Quinn:
Sure.

Abhishek Gupta:
I mean, I'm not privy to any of that. So the skepticism is perhaps warranted given everything else that we've seen. Let's also not forget that there are well-meaning humans at each and every one of these organizations-

Quinn:
Of course. Yeah-

Abhishek Gupta:
... who are trying to do good stuff. And it's just to what extent are they shackled, constrained by organizational structures around them? And to what degree do they have agency over it, right? But to answer your question of whether we do need those carrots and sticks: I think we do, right? The natural state of the ecosystem is always going to be one where you're trying to grab power. You're trying to grab a chunk of that market landscape.

Quinn:
Sure.

Abhishek Gupta:
If we're talking about compliance tools, bias mitigation tools, other things, I would rather not hope that people are doing that to gain market share so that they can make some money off of it. Because I think these are fundamental tools and compliance services, for example, that we would need if we want to ensure that the state of the ecosystem is actually good, right?

Quinn:
Sure.

Abhishek Gupta:
And perhaps without directed substantial investment that either gets triggered by carrots and sticks or otherwise, we're just not going to get there, right? Startups are going to still keep coming up with their products and services to mitigate bias, to provide privacy, to do this and that. But to what extent are we actually engaging in capacity building, right? This is something that we hear in the world of nonprofits a lot, which is, okay, well, how do we build capacity, right, in NGO work?

Abhishek Gupta:
And that's really how I think we need to think about this, is how do we build capacity for everybody to be able to enact their duties of building responsible AI? Not just those who have the resources, because then that's just repeating the same pattern again, which is centralization of power and marginalization. And so then what did we really achieve?

Quinn:
Folks out there, again, you might say, "I don't use Facebook." That's fine. They've got WhatsApp as the predominant chat tool across the world. You've got Instagram and all these things. But also it's important. And I can't make this clear enough. This is touching every part of your life, whether you're on Facebook or not. I mean, we reported a few months ago on how early in the pandemic, Epic, which is one of the largest electronic health record providers.

Quinn:
I can't remember what the other one is that's the second biggest. They implemented some "AI, really machine learning" tools over the past year and just fundamentally realized. And, again, it might have been The Markup who reported on this. They just do truly tremendous work there. And I'll find the link and I'll put it in the show notes again. But the point was, they just realized, like, it doesn't work.

Quinn:
And they rolled it out and spent an enormous amount of money and forced it on these healthcare systems and hospitals and nonprofit hospitals and whatever it may be, who are all entirely fragmented. Which is an entirely different discussion, data collection on that front. At enormous cost, and made decisions on billing and made decisions on how healthcare systems should provide their services and how insurance can be looped in, on a tool that just fundamentally didn't work.

Quinn:
And it feels like you have to have those checks and balances along the way, however they're incentivized or mandated from on top, whether it's internally or externally from a commons or from regulation or both. It's got to trickle down to someone who goes, "Before we do this, let's acknowledge the power we have and ask, does this even work?" Forget, is it biased for mortgages or policing or whatever that may be?

Quinn:
That's a different kind of doesn't work. That is, it's working in a way that is dangerous to a segment of people. This is, does this actually do what it's supposed to do? Or, of course, an entirely different discussion we can have is, do we even understand how it's supposed to work? When it comes to this black box stuff, folks, it touches your everyday life. And I'm not trying to do this in a scary way.

Quinn:
Some of these technologies are incredible and they're powerful. And the things they can unlock along the way will be helpful and magnificent and predictive and can do things for us. But it's always important to ask these questions.

Abhishek Gupta:
To your point around the use of AI and healthcare. I mean, look at the massive failure of IBM Watson, right?

Quinn:
Oh, my gosh. Yeah.

Abhishek Gupta:
That was out there. They sucked up a lot of oxygen in terms of funding, in terms of engaging with hospitals, in terms of consuming media cycles, et cetera. And what we realized was that, well, none of that actually works at all, right? And hospitals just chose to disengage. And it has long term consequences now, right? Because if anybody else comes along with a product or service that actually works, they're going to be met with a great degree of skepticism because people are going to say, "Hey, we tried out this IBM Watson thing and they promised a whole bunch of things.

Abhishek Gupta:
They were this giant company. So if they couldn't do it, can you? What is to make us believe that you can do it, right?" And so these failures also have long term impacts that go beyond just the company or that bilateral relationship, right? Because it impacts the rest of the ecosystem. And it keeps coming back to this question of organizational incentives and behavior. Because AI is this exciting new thing, right? At least for a lot of organizations, it continues to be.

Abhishek Gupta:
And for someone who brings this idea forward as the head of a business unit or some other organizational unit and says, "Hey, we should engage with this external provider for getting these AI services and products into our organization." It's a quick win for them, in the sense of trying to show that they're being innovative, that they're thinking out of the box, or whatever other jargon you want to throw at it that folks use to evaluate the effectiveness or the innovativeness of an organization, right?

Abhishek Gupta:
And I think something that you were saying earlier, which was, we should maybe have a chief question asker at any organization who comes in and asks just those pestering questions. Just like, "Why are we doing this? What's going to be the impact? Have you really thought this through? Let's take a second, right? Let's not jump in."

Quinn:
Sure.

Abhishek Gupta:
And we need that, right? I mean, heck, I work in writing these systems day in and day out. So it's like I'm trying to chop off my own legs and take away my own employment by saying, "Hey, we don't need people who write code, right?" I'm not saying that. I'm not anti-technology.

Quinn:
No.

Abhishek Gupta:
I'm just saying that, "Hey, we just need to ask a few questions before integrating it so deeply into our lives that it becomes almost impossible to extricate ourselves from it."

Quinn:
Sure. And we do a very good job as humans of making it very difficult to back ourselves out of these corners once something becomes inherently a part of the fabric of life. So I guess on that note, I was thinking recently about how... I'm not sure how old you might be. I grew up on Prodigy and IRC and Instant Messenger and then digital cameras. And then there was the Facebook and you had to have your college EDU address to sign up.

Quinn:
But then there was all this talk of the generation younger than me, who was so much more comfortable online and just shared more stuff. Less worried about privacy. Sharing their lives. We were the old ones. And now I feel very old every day for a thousand different reasons. But I wonder now with everything that's come out on Facebook and the decision making inside and how that's contributed to certain things, whether it's elections or misinformation around vaccines, whatever it might be.

Quinn:
I wonder now if that generation is starting to fully understand how all of that sharing and their data is being capitalized on, and whether that has moved the ethical needle for them. And not even, again, just by the biggest companies in the world. And you think about how The Markup the other day was talking about the $12 billion location data market that's out there. It's astonishing, right?

Quinn:
Anyone with any amount of money can get it from those brokers. So I want to pivot all of that towards action, and I guess it's two sides of the coin. The first is inherently, what can our community do to protect themselves online, to make sure that they're able to use these tools, which can be powerful and helpful? But at the same time, not feel like they're being taken advantage of every time they drop a marker in a map.

Quinn:
And then secondarily, what would you recommend to non-AI people? Again, people who aren't writing this code or implementing it, to most successfully instigate conversations like these about a commons, internally or externally, about internal AI ethics principles and practices inside their own companies. So on the one hand, how do we protect ourselves? On the other, how do we acknowledge we're going to use these tools, people are going to use these tools. We're going to buy these tools even if we didn't develop them. What are the practical ethics we should be abiding by to use them, but also make sure we're doing it in a way that is equitable and safe? Easy questions. I know.

Abhishek Gupta:
And if I had the answer to all of them, right?

Quinn:
Right.

Abhishek Gupta:
I think the first thing really there is to look at some actionable tools and things that have been put out there, right? So the folks at Mozilla have this great list of things called Privacy Not Included, right? And it's a list of products and services and their privacy ratings, which is super helpful. Presents it with an emoji, right? So it's smiling, not smiling and whatever the gradations between those are.

Abhishek Gupta:
So very modern. For the younger folks out there, relatable. So that's one way to step forward, really take action and understand, "Hey, I got this product X from company Y. Where on this list does it rank? What are its privacy implications, right?" So that's one great tool for folks who are perhaps not deep into the AI world.

Abhishek Gupta:
I gave a talk a while ago on building civic competence in AI, where I proposed a set of three questions that can help folks understand the impacts that these technologies have. So the first question is, does this use AI? Second is, does it serve me or the creator of that system? And the third is, could it do better? And it might seem very basic. But, right, going with the theme that you mentioned before, it's very direct and actionable, something that asks us to think about these issues critically.

Abhishek Gupta:
I think this helps to elucidate that very clearly in terms of centering a little bit the locus on yourself in terms of what the impacts are and what actions you can take. Because I think that's been one of the things that as much as we would like to hold these powerful actors accountable, while that happens, we also need to take action on our end, right?

Quinn:
Sure.

Abhishek Gupta:
To protect ourselves. And for example, if we're looking at folks who are, let's call them digital natives, right? Because they've perhaps grown up with these technologies much younger than I guess both you and I. One thing that we need to think about is this point that you alluded to, which is that it's not just big powerful actors who are shaping our behaviors. It's also I think about local influencers, right?

Abhishek Gupta:
Small influencers who are on Instagram, who are on TikTok and who are driving and shaping our ideas about issues, right? And those are, I think, also equally subtle. Because we tend to build what is called a parasocial relationship with these folks, right? And over time we forget that they are also putting up a facade of sorts. They are acting, right?

Quinn:
Sure.

Abhishek Gupta:
They're consistent with a persona that they have online and they have an agenda that they're trying to drive, right? Which may be explicit, may not be explicit. If your only way of getting news and understanding of issues relies on what influencers are telling you in a 25-second TikTok video, maybe that's something where you need to question yourself. Because they are not experts, right? Say we are talking about the current conflict between Ukraine and Russia.

Abhishek Gupta:
If you don't have prior experience in international conflict and international law, in history, maybe you don't have the most informed opinion. I'm not saying that's always the case, but maybe you don't, right? And so just relying on influencers and not relying on primary sources, folks who have deep expertise and background can be problematic. And I think that's... To answer your question more succinctly.

Abhishek Gupta:
For people who are digital natives who are coming up with this technology, just the very critical skill of questioning, where you're getting your information from and not letting these parasocial relationships as an example overcome your ability to ask those questions I think is very important.

Quinn:
I love that. I think that's all incredibly helpful, as we alluded to at the beginning. I feel like you and I could and maybe should have 10,000 conversations, and you can go down all these rabbit holes. And, again, branches of healthcare privacy, right? Or mortgages, or folks' insurance being used for flood risk, and artificial intelligence, machine learning, deep learning, all these things being used for fire risk, right?

Quinn:
They're being used in all these different ways that you just don't realize. And, again, that affects your mortgage. So it's really important to just keep asking questions and exploring the nuance of these as much as possible. And, Abhishek, as we're talking about trying to find, across the West and the East and everywhere, from these smaller but very tech-heavy countries like a Vietnam or...

Quinn:
Look what's happening in Taiwan with chips, right? Seeking to find this, is there a sliver? Is there, like you said, a common denominator that we could start with? And maybe that's just awareness. I mean, that's how this whole thing started for me, was I was trying to have conversations with friends who were inherently invested in and interested in these topics, right?

Quinn:
The macro science driven but still anthropological questions of the biggest issues of our time. And they weren't getting the information that I was getting. Because I had just curated a fire hose of the most reputable content to come my way. And they were getting their news from Facebook. No fault of their own. And it turns out that didn't work out very great for everybody. But also that's why I eventually just slapped together an email and said, "Hey, here's the five or 10 things you missed."

Quinn:
Because selfishly I just wanted to be able to have informed conversations with folks about these things. And you just might not be aware of why you don't get that mortgage or why policing works the way it does, or why Instagram wants to build a tool for under 13 year olds. The more we are just all aware of every action we're putting in, again, you as a user training that algorithm every day. It matters.

Quinn:
Every touchpoint you put into that is pointing you towards that Matrix, plugged into the goo, where it says, "Live your life this way." And sometimes it can be really helpful and sometimes it can just be too much, or used in ways that the person in the research lab that designed it didn't intend for it to be. Or it can be sold without your knowledge. We talked in the newsletter about how reportedly 96% of people, when iOS 14.5 came out and asked, "Do you want this app to track you?", said, "No, absolutely not," across all these other apps.

Quinn:
I didn't realize that really doesn't apply to location in a lot of ways. And that's why there's still this $12 billion market of your location data being sold to anyone who wants to pay for it at any time from these data brokers. And you just have to be aware of that because people aren't going to have conversations, much less take action, if they just don't have any idea that it's going on.

Quinn:
So I appreciate your commentary of, look, some of these folks on TikTok and Instagram, some of them are smart and incredible science communicators, or political science communicators. You saw it when the CDC got on, et cetera, et cetera. It can be helpful, but you've got to question your sources and you've got to question the incentives behind why they're putting themselves out there.

Quinn:
Anyways, I'm going to give you the last few questions, then we're going to get you out of here. It's been long enough. These are questions we ask everybody, Abhishek. I don't call it a lightning round, but feel free to not spend all day on them because you've got more important things to do. First one, when was the first time in your life when you realized you had the power of change or the power to do something meaningful, either by yourself as a child or with a group or a club or at a company, whatever it might be? When were you like, "Oh, shit, I can do something here."

Abhishek Gupta:
I can give a recent example. I think in 2018 when we were invited to participate in the G7 AI Summit, or rather I should say I was invited by the Canadian federal government to participate in the G7 AI Summit, I realized that the community that we had built up in Montreal to discuss AI ethics had grown so large and was so diverse and had so many great ideas that I could just tap into all of them and carry 400 voices with me into the room.

Abhishek Gupta:
And the real holy smokes moment for me was when I was in that room discussing these issues and the future of work, we realized that the power of those 400 voices was very strong, very well informed as a collective. Because the insights that we were able to bring were on a lot of occasions on par with, if not better than, what the other experts in the room had to bring. So that was a wake up moment for me that the community that we've built really had the power to bring informed discussions to the front.

Abhishek Gupta:
And at the same time, a realization that, yes, I'm in a privileged position where I do get invited to these sorts of things. But it's also my responsibility to then carry those voices with me into the room so that we can all share in shaping those technical and policy measures.

Quinn:
I love that. That's awesome. I mean, and that is the reason to build these communities because those communities are out there and those folks are out there just yearning to be part of something like this, to contribute in some small way and say, "This is my lived experience or this is how I'm trying to implement a tool like this or asking questions like this." And I love that you're the figurehead for all of it. The man standing against a Terminator. Abhishek, who is someone in your life that has positively impacted your work in the past six months?

Abhishek Gupta:
Can we talk about authors who have positively impacted us? I mean, they're not directly in my life.

Quinn:
Yeah, absolutely.

Abhishek Gupta:
So I would say Shane Parrish.

Quinn:
Sure.

Abhishek Gupta:
The guy from Farnam Street. So he's written this set of books called The Great Mental Models. And, man, they've been transformative in the way I think. We were talking about questioning the fabric of society and how things come about. I think by far that has been the single biggest impact for me over the last six months. I don't get any commission for this whatsoever, but it's a fantastic set of books. And he runs this community, which is the Farnam Street community. Absolutely fantastic place to be in to learn from folks. So I would say Shane Parrish, probably.

Quinn:
I love that. I too am an acolyte and I've tried to just suck up as much knowledge and perspective from that community. And I've got literally those hardcover books, red books, over there on my bookshelves right now. They're fantastic. Even if you just dip in and out of them, I mean, what you can pull away to apply to your everyday life is truly helpful. Abhishek, what's your self-care? What are you doing after I've tormented you with this conversation? How do you take the load off?

Abhishek Gupta:
Good home cooked meal, man.

Quinn:
All right.

Abhishek Gupta:
Every day.

Quinn:
Are you cooking?

Abhishek Gupta:
Well, my parents are visiting, so it's my mom. So even better.

Quinn:
Oh, nice. Very nice. That's a real win. I love that. And last one, coming back to authors, and we have a whole list up on Bookshop of all the recommendations. A book in the past year that has opened your mind to a topic you hadn't considered before, or has actually changed your thinking in some way.

Abhishek Gupta:
I guess I'll have to repeat what I said before, but it's got to be The Great Mental Models-

Quinn:
Awesome-

Abhishek Gupta:
... have just been so fantastic, right? And I know I'm a little late to that game because those books have been out for a little bit, but they're just straight up fantastic.

Quinn:
They're timeless. I love it. That's awesome. Well, we will definitely give Shane a shout out and throw those on our bookshop list. I don't even know if they're available there. If they're not available there, then we'll put a link to them on the Farnam Street website. Abhishek, where can the people follow you and your crew of X-Men in Montreal?

Abhishek Gupta:
So the best way to stay in touch with the work that we do is the AI Ethics Brief. If you punch that into your favorite search engine, it hopefully should bring it up. But for those who want to know, it's at brief.montrealethics.ai. And I'm sure Quinn's going to throw in a link. Personally, for me, you can find me on Twitter, on LinkedIn and all of those places. If you can't find me on the internet, I don't know. Just shoot me a message, let me know and I'll get on there. Because I think I'm pretty searchable on the internet. But if I'm not let me know.

Quinn:
I love it. Well, I'm so delighted that you're in charge of the internet. It's the best decision humanity could make at this point. Thank you so much for your time today. I sincerely appreciate it. And for all your work, truly, I have selfishly gained so much and feel like I've leveled up quite a bit on this stuff from reading your work. Not just the big reports, but every week with the newsletter, which I inhale and annotate as much as possible. So thank you guys for everything you do.

Abhishek Gupta:
It's our pleasure. And thank you, Quinn, for being such a fantastic communicator. I think we need folks like you who bring the message out to everybody in the world about asking those critical questions and taking action. It's not just about asking, but it's also about acting. Thank you for all that you do as well.

Quinn:
Trying. Thanks to coffee. Thanks to our incredible guest today. And thanks to all of you for tuning in. We hope this episode has made your commute or awesome workout or dishwashing or a fucking dog walk late at night that much more pleasant. As a reminder, please subscribe to our free email newsletter at importantnotimportant.com. It is all the news most vital to our survival as a species.

Brian:
And you can follow us all over the internet. You can find us on Twitter @Importantnotimp. It's so weird. Also on Facebook and Instagram @importantnotimportant. Pinterest and Tumblr, the same thing. So check us out, follow us, share us, like us. You know the deal. And please subscribe to our show wherever you listen to things like this. And if you're really fucking awesome, rate us on Apple Podcasts. Keep the lights on. Thanks.

Quinn:
Please.

Brian:
And you can find the show notes from today right in your little podcast player and at our website importantnotimportant.com.

Quinn:
Thanks to the very awesome Tim Blane for our jam and music, to all of you for listening, and finally most importantly, to our moms for making us. Have a great day.

Brian:
Thanks guys.