Program transcript:
Grant Reeher: Welcome to the Campbell Conversations, I’m Grant Reeher. My guest today is Nathan Sanders. He's a research affiliate at the Berkman Klein Center for Internet and Society at Harvard University. And he's also published a new book with Bruce Schneier titled, "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship", and he's here with me today to talk about that book. Mr. Sanders, welcome to the program.
Nathan Sanders: So glad to join you, thank you.
GR: We appreciate you making the time. So, we'll get into the specific topics that are covered in your book, but first I wanted to ask you just a couple of basic, big picture items to set the context here, and the first one is the most basic of all. All of us have heard by now of artificial intelligence. Most of us have probably intentionally used AI to answer some question or another, but to just start us off, could you give us a very, very concise definition of what artificial intelligence is and what distinguishes it from high powered computing more generally?
NS: Oh, it’s such a great and foundational question, and I acknowledge that almost everybody, especially those who are experts in the field, defines AI a little bit differently. In our book, we try not to get bogged down in the technical distinctions and all the variations of how AI models are built, and instead we adopt a very broad definition: that, for our purposes, AI is any technology that replaces a cognitive function that used to be the exclusive domain of humans. So that does include technologies like large language models, which are used to do things like generate text that you might use in writing an email, or also to control external systems, like software systems that maybe used to have required a human in control. But it does include other forms of AI as well. Think of, for example, Google Maps, or navigation software that's helping you plan a route from point A to point B. That's a very hard, challenging computing problem, and it used to be something only humans could do, so you can think of that as AI as well, along with computer vision models, other forms of natural language processing and predictive machine learning.
GR: All right. Well, I'm glad I asked, and it is already complicated. Now, your book is about the application of AI in the politics and the public policy arena. Let me just start with kind of the end if I can, and that is, could you give me, say, your two biggest hopes for how AI could improve what we do in the politics and public policy arena and on the other hand, your two biggest concerns or worries about it?
NS: Sure. In terms of hopes, one thing I am optimistic about is the ability of citizens, of people, to leverage any new technology, including AI, to enhance our power as citizens of democracies. We have some great case studies and examples of what that looks like around the world in our book. We talk about citizens groups using AI to watchdog the government and uncover instances of public corruption. We write about a great example of a group in Brazil that's been using AI for that purpose for about a decade, long preceding the current modern developments in large language model technologies. We talk about groups of citizens in the US and elsewhere around the world who are using AI to improve how citizens can have their voices heard in policymaking processes, helping people to articulate their views about the laws that should govern all of us and communicate that in an impactful way to the legislature. So there's obviously a huge risk, and we see this as the central risk that AI poses to democracy, of the technology being used to concentrate power and make the already powerful more powerful. But I am optimistic about citizens groups leveraging the technology to switch those dynamics. In terms of fears, I already mentioned the biggest one, that it will be used to concentrate power. And of course, we do see examples of that around the world. We see AI as a fundamentally power magnifying technology, and that means that pro-democracy advocates who want to use the technology to advance good governance can use it for that purpose, and it can be effective. But just as well, elected officials and others with authoritarian tendencies who want to use AI to control citizens and to enforce unjust policies, AI will magnify their power as well, and we can see examples of that around the world today also.
GR: We'll get into some of those a little bit later. I wanted to ask you more of a historical question as a follow on to that. I was thinking about this, and I was wondering how AI would compare with technological innovations of the past that have also had great impacts on politics. I thought of three right away. One very simple one is simply the ability to amplify a voice, you know, voice amplification. And then of course, you have radio, then you have opinion surveying and polling. And then an obvious one would be television. Those are all the ones that came to mind for me. Do you think that AI is going to have a more dramatic effect than those kinds of technological innovations, or will it be comparable or less? I don’t know.
NS: Well, I think you're right to locate AI along a historical spectrum of new technologies that, when they're introduced, affect not just politics, not just government and administration, but really everything. I'm not a historian, but I know historians and economists have a classification of what are called general purpose technologies, technologies that really do change and transform everything. Maybe some people differ about what belongs in that category, but it includes things like telecommunications and includes things like railroads and includes things like universities as knowledge generation technologies. And I do think we can locate AI along that spectrum. In terms of political applications, you know, we see AI transforming the way that people interact with candidates and elected officials in a way that extends the progression of the introduction of whistlestop tours on the back of railroad cars, moving into broadcast communications like radio, moving into visual communications with television, moving into the internet and social media, and now it's getting the ability to not just broadcast opinions, but to individually respond to questions, tailor conversations to an individual person, and even exchange information in a two way communication with many, many individuals at mass scale in a way that previous technologies just wouldn't have made possible. And once again, that capability can be applied to good purposes that most of us and most of your listeners would probably think are beneficial. And it can definitely be applied in negative ways as well.
GR: I'm Grant Reeher, you're listening to the Campbell Conversations on WRVO Public Media, and my guest is the data scientist Nathan Sanders. And we're discussing his new book. It's titled, "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship". So, let's get into some of these things more specifically and you have given us clues about the arguments that you, the two of you make in your book. But how would AI affect democracy specifically? You talked about the concerns about concentrating power in the way that citizens might do things, but how will it affect democracy as a system of government? And I'm going to define democracy here very broadly in the way that you defined AI as being a system in which the people, usually through elected representatives and their leaders, give laws to themselves. And I think that's sort of the fundamental essence of it. How do you think AI is going to affect us?
NS: It's a great question and of course, it's a broad one. In our book, we do go through in-depth applications across all the major branches and systems of governance, politics, citizenship, the judiciary, the administrative branch.
GR: And I want to get into some of those, each with you, as we go on. But go ahead.
NS: And maybe what I can do to start that conversation is just to share an example that we think is really interesting. We've just written an update to a story we shared in the book, covering the most recent happenings, that specifically talks about how citizens and candidates, voters and candidates, engage with each other. We just wrote a story about what's happening in Japan right now with a brand new political party called Team Mirai that is really reimagining how politics works and using AI as a tool in that reimagination. This is a party that was founded by a 30-something software engineer named Takahiro Anno. At the time we wrote the book, he had just run for office for the first time. He had run for governor of Tokyo, and he, as an individual software developer, had made some tools to help him get the word out about his platform and views, and to get feedback from people, that were novel and helped him reach hundreds of thousands of people in the Tokyo electorate in a way that an individual candidate traditionally couldn't do. Since we published the book, he's been elected to the upper chamber of the Japanese national legislature, the Diet. So, he's effectively a senator now in Japan, and he has founded a political party, and he's now receiving public funding that is distributed to Japanese parties, and he's using it to advance that vision. He's hiring engineers and using it to build new political technologies, not just for his party, but open source tools that the entire Japanese electorate can use. Let me just give you one example of what that looks like, to make clear how this is so different from how politics is practiced today elsewhere. He's developed an AI interviewer tool that allows many, many individual voters to have a conversation with a representative that can explain a policy position from this party's platform and, crucially, get feedback on it.
All of these dialogs are available for anyone to see online, and my coauthor Bruce Schneier and I have reviewed some of those. We've seen really interesting examples of voters learning about proposals to reform the structure of the Diet, to change the structure of the legislature, being informed about those and reacting to them, saying, well, I like this aspect and I think it could lead to this, but I don't like that. And then the outcome of this interview process is a recommended change to the policy proposal, to the candidate's own written political platform. That change is something the candidate reviews and either says yes or no to. And then the voter gets a response, they get to see: was the suggestion that I made in this interview accepted? Is that now part of this party's platform or not? This is being done on a scale right now of thousands of voter conversations and thousands of responses. But the technology is so scalable, we can see it being applied at a scale of millions. And I think that kind of individual voter interaction and the responsiveness to voter inputs and preferences could really change how politics is done.
GR: Interesting, interesting. Two things come to my mind listening to that story of yours. I want to put these things to you as follow up questions. One is, transparency seems to be absolutely the key, though, in having that process be trusted and not feel like it's being manipulated. Would you agree with that?
NS: I do for sure.
GR: Yeah. And the other thing that popped up in my mind has to do with age and politics. We see in the United States right now, I think, a collective frustration among many younger voters, and I'm going to define younger sort of in political terms, being like under 55, with the age of the leadership. It's certainly been a big conversation in the Democratic Party. And there's also the fact that older citizens vote in higher proportions than other citizens, and so therefore they get more attention paid to them. I mean, it's no accident that, you know, Medicare and Social Security are often called the third rails of American politics. You don't touch them without risking your own political death. Do you think the effect of this technology might get at that? That somehow this is going to provide more openings for younger people to have more power?
NS: A great question. First of all, I agree with your premise. I think that age is emerging as an increasingly important dynamic that's controlling political outcomes, not just in the US, but around the world, especially in places with an even more skewed age distribution in their electorate like Italy and Japan.
GR: Right.
NS: The political scientist David Runciman has written really powerfully about some of those dynamics, and I think they do intersect with AI. Bruce and I have been watching the polling on voter understanding and preferences about how AI is used in government, and it's changing over time. But one thing that has remained a steady trend is that younger voters tend to be more informed about AI, not necessarily more favorable or open to its applications, but at least more informed about its potentials and risks. I think that as that changes over time, it will manifest in our politics. One thing that we're already seeing in the US is a huge local response to the siting of data centers, the compute infrastructure, the physical infrastructure that's used for AI models. Many communities across the country have really risen up to reject the idea that the negative externalities, the environmental effects, the impacts on energy prices of that infrastructure would hit them locally in their communities. I think that's really just the starting point of a larger grappling with how AI is changing society, one that will happen at a national level. You know, to me, the biggest political implication of the growth of AI is the concentration of wealth and power. The fact that we have a very small number of companies, mostly located in the US, mostly in one city in the Bay Area, that are concentrating wealth at just an enormous, unprecedented scale. Multitrillion dollar valuations built on the future potential of AI. The political implications of that concentration of power, I think, are really severe, and I haven't yet seen either of our major political parties or most of our major political leaders in the US really identify solutions to that growing concentration of power or organize a political response to it. I'm confident that will happen, and I hope that parties will adopt a position that it is bad to have that concentration of power.
It seems clearly an existential democratic risk to me.
GR: You're listening to the Campbell Conversations on WRVO Public Media. I'm Grant Reeher and I'm talking with Nathan Sanders. The data scientist is a research affiliate at Harvard University's Berkman Klein Center for Internet and Society. And he's the coauthor with Bruce Schneier of, "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship", and we've been discussing his book. So, Nathan, right before the break, you really set out a pretty serious, specific concern about concentrating power and the way it might play out in the United States. And one of the things that popped into my mind as a political scientist, when you said neither party has really started a real conversation that's critically examining this, is that no one wants to go after the goose that's laying the golden eggs when it comes to campaign financing. I mean, these folks give a lot of money. And so that's one thing that I think is going on. But how did the two of you see this problem being tamed, what could we do?
NS: Well, if you don't mind, I'd love to just pause for a moment on that question of campaign financing.
GR: Sure.
NS: And then I'd love to go on to talk about solutions to the broader problem. I think you're absolutely right. We're seeing really a capture of government and elected officials by the wealthiest industries in our country. And increasingly, that lobbying spending is being funneled through AI technology companies, and it's having a real effect. When we talk about solutions to mitigate the risks that AI is posing to democracy in our book, we note that a lot of the solutions are not really specific to AI or specifically about AI. And we do think renovation of our campaign finance system is critical to control and mitigate the risks of the development of the technology. Campaign finance reform is a solution that is responsive to the risks of how AI is manifesting in our politics today, even though it's not a solution that is itself inherently about technology, so I'm really glad you raised that.
GR: Yeah, yeah, it's interesting. You know, now that you say that, the Supreme Court has been pretty firm on its view about how money is equivalent to speech and how corporations are equivalent to individuals, and thinking about that connection between money and speech. AI may change that, I mean, AI may be the thing that, kind of the straw that breaks the camel's back on that. I hadn't thought of that before, very interesting.
NS: I hope we do grapple with that and it does lead to some change, I agree with you.
GR: So, you've got a part of your book, and this gets a little bit more into the weeds, but, about how AI could affect and improve how government itself works, the actual administration of government. Tell us a little bit about that.
NS: Yes, that's right. We see examples in the US and around the world of government agencies leveraging AI to change how they operate, in some cases to improve efficiency, in some cases to augment their capabilities, to help them do things they couldn't do before. In fact, in the later days of the Biden administration, about a year ago, they published an inventory of AI use cases across the federal government. And it may surprise people how many active applications of AI already existed in US agencies even at that time, we're talking about reporting from 2024. It was more than 2,000, it's really happening at a very large scale. And it ranges from everything from agency officials trying to write emails faster by using AI assistants like Grammarly, you know, really kind of small scale granular use cases, to agencies that administer billions and billions of dollars in government benefits trying to automate that process using AI. And we see, especially as we've transitioned from the Biden to the Trump administration, a pretty clear change in the policy and guardrails around how the technology is being implemented in agencies, one that we're concerned about. If you don't mind, I can share a specific example of what that looks like.
GR: Absolutely.
NS: So in the US, one of our largest agencies is the Centers for Medicare and Medicaid Services (CMS), which administers health care benefits for millions of Americans and represents a massive amount of money. Those benefits are often lifesaving, critical to people's lives. And, you know, CMS and other health care administrators face a life or death question, often thousands of times a day, which is: will you authorize a health care provider to perform a medical service, or will you say, no, I'm not going to pay for it? So under the Biden administration, there was an initial policy laid out that allowed CMS and the insurers that work with it to start using AI in that administration process. And they laid out some pretty clear guardrails for when human review is required, when it's appropriate to use automated decision making, when you disclose that, etc. The Trump administration has come in and now issued new guidance with a pretty different posture. They've done two things that concern us and that we've written about. One is really to peel back those guardrails and to leave more of those decisions in the hands of corporate insurers and the technology providers that those insurers use. And the second, they've put in place some financial incentives that some people have criticized as effectively a bounty on denying care, saying that you will get a financial incentive if you build an AI system that says no. That's where the incentive is being placed. To us, this is a form of what we would think of as tech washing: the idea that if a computer says it, it's true, or it's objective, or it's okay. We urge people not to think of that kind of use case as an objective use of technology, but rather as asking technology to encode a policy decision, to encode a set of values. The technology can be used to encode the value that our goal is to save money by saying no to people's healthcare needs.
It can also equally be used to encode to say, we need to say yes as quickly as possible to people who need care, so they don't have to wait for a decision. Those are two totally different uses of the same technology.
GR: That's really interesting, that distinction. You've given me a phrase now that I'm going to commit to memory, ‘tech washing’, I love that. If you've just joined us, you're listening to the Campbell Conversations on WRVO Public Media. I'm Grant Reeher and my guest is Harvard University data scientist Nathan Sanders. So, you've also got a very intriguing chapter about how AI could affect the court system. And, you know, there's so much that is subjective in the criminal justice system. So, I’m really curious for you to share with our listeners a bit how you see that playing out.
NS: Well, you know, we see transformative effects of AI being used in judicial processes around the world. And one of them has grown since we wrote about it in the book. We wrote about the use of AI in the Brazilian judicial system. Brazil is a society that's maybe even more litigious than the US, which is shocking. They have millions and millions of court cases processed per year, and many of them are cases brought by citizens against the government, they're effectively accountability cases. The Brazilian court system spends about 1% of GDP every year just administering cases, and it spends another roughly 1% of GDP paying out penalties assessed to the government when they're found responsible in those cases. So, it's a really significant large scale issue in Brazil. Part of the problem that they've been facing is an ever growing backlog. There are so many of these cases, they just can't process them all. And the problem gets worse and worse every year. And so starting several years ago, again, before the advent of technologies like ChatGPT, the Brazilian judiciary had started adopting AI technologies to improve the efficiency of that process, to automate decisions such as how should we distribute our litigators who represent the government across cases, to put the best people in the cases where they can have the biggest impact? Using it really for administrative procedures as opposed to making decisions on behalf of judges. And they found that and reported that to be effective, they've actually turned around that backlog. So now it's shrinking instead of growing, which is a big deal for that judicial system. But the response that has been reported is also interesting. Litigants are also using AI to automate filing cases and to write court documents. So now they have an even faster growth of new cases. And it's created this arms race, both sides using AI for opposing purposes. And the question is, is that good or bad and how should we change that? 
You know, I think it's clear to see what the risk is. If ultimately the legal system is just machines talking to machines, that sounds disastrous. On the other hand, we hope that there's a potential upside to this, which is that there's a real accountability function for the judicial system to play. And if more people are able to represent their concerns to the government by bringing litigation, and if the judicial system is more efficient in processing those concerns, that's a good thing for democracy. We need to see how this plays out, but those dynamics, I think, will be important in other countries too.
GR: That's very fascinating. We've got about three minutes left and I want to try to squeeze in a couple of questions if I can in that time. I want to go back to the citizens here, because you talked about that a lot at the beginning of our conversation. The average citizen is not going to wrap their mind around all of this. So for the average citizen out there, what should they be paying most attention to when it comes to AI and politics? What is the thing that they've got to keep their eye on, do you think?
NS: I urge people to think about that problem of concentration of power and to demand that their political parties and representatives present real solutions. And we present an alternative vision in the book for how AI can be developed. There's nothing inherent about AI as a technology that says that it has to be developed by a few companies with trillion dollar valuations. There's nothing about the technology that says that it has to be trained at such a large scale, and so repeatedly, that it uses enormous energy and environmental resources. There's nothing about the technology itself that says that only the richest individuals and companies can profit from it and capture value from it. And there's nothing that says that only that small group of people can decide how models are trained and what biases they may be subjected to. Instead, we present an alternative vision of AI developed not by big corporations, but by people and the systems that we put in place to represent us in government: a public AI model. You know, when we started writing about this several years ago, the idea of a public alternative to corporate AI didn't exist in the real world yet, but today it does. We have examples like the Apertus model in Switzerland. This is effectively a modern large language model that was trained by government institutions in Switzerland and offered as a public good for everybody to use. This is an alternative vision of how AI can be developed that can be more sustainable. By the way, the Swiss model was trained on national compute infrastructure that is hydro-powered, running on renewable energy. It doesn't require value to be captured by corporations; instead, value is captured by people, and the technology can be built for public benefit.
GR: So, my last question, we got about a minute left, kind of a bottom line question for you to return to where I started. I want to have you put yourself on a scale here. And then maybe just say a couple quick words about it, in terms of how optimistic or pessimistic you are about the future here in this regard. If 1 is the end of civilization as we know it, and 10 is Doctor Pangloss, this is going to create the best of all possible worlds, where are you? What's your number?
NS: Great question. I think I'm right in the middle. I think I'm a 5. You know, I'm an American, I see a lot of things to be pessimistic about in our government today, a lot of things. But I'm very optimistic ultimately about us as citizens steering our government and exercising our democratic control. And I'm optimistic about us leveraging new technologies to increase our power as the public.
GR: That's great. It's a great place to end. That was Nathan Sanders. And again, his new book is titled, "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship". It's incredibly illuminating. And it's also, as Nathan just suggested, a call to action to all of us. Nathan, thanks again for talking with me, I really learned a lot in this conversation.
NS: My great pleasure. Thank you for having me.
GR: You've been listening to the Campbell Conversations on WRVO Public Media, conversations in the public interest.