In the largest global election year yet, generative AI is already being used to trick and manipulate voters around the world. Will this growing trend have real impact? Today on WIRED Politics Lab, we talk about a new online project that will be tracking the use of AI in elections around the world. Plus, Nilesh Christopher dives into the lucrative industry of deepfakes, and how politicians are using them to bombard Indian voters.
Leah Feiger is @LeahFeiger. Vittoria Elliott is @telliotter. Write to us at [email protected]. Be sure to subscribe to the WIRED Politics Lab newsletter here.
Mentioned this week:
AI Global Elections Project
“Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve,” by Nilesh Christopher
“A Far-Right Indian News Site Posts Racist Conspiracies. US Tech Companies Keep Platforming It,” by Vittoria Elliott and David Gilbert
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for WIRED Politics Lab. We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Leah Feiger: Welcome to WIRED Politics Lab, a show about how tech is changing politics. I’m Leah Feiger, the senior politics editor at WIRED. Around the world, strange things are happening in politics. The rapper Eminem has come out in support of an opposition party in South Africa. Leaders who died years ago in India and Indonesia have been, all of a sudden, resurrected and are actually telling people to vote for candidates currently on the ballot. And in the United States, Joe Biden called a lot of people in New Hampshire and told them to stay home during the primary. To be clear, none of this actually happened. Eminem has never said anything about South African politics, at least as far as we know. But generative AI can manipulate the images and voices of pretty much everyone and make them say pretty much anything, including something about an election or campaign. Today on the show, we’re going to talk about how AI is already being used to try and sway this year’s political races. WIRED has just launched a massive new project tracking the use of AI in elections around the world. We’re going to be collecting and sharing examples all year long. Reporter Vittoria Elliott is leading the project and is here today to tell us all about gen AI. Tori, I can tell you’re real because I can reach out and touch you.
Vittoria Elliott: Thank you so much. I am real. I am three raccoons in a trench coat. The middle raccoon is the one that controls my hands.
Leah Feiger: Tori, why did you originally pitch me this project?
Vittoria Elliott: Yeah. This election year is huge. It is the biggest election year around the world. More people are voting than ever, just in terms of local, parliamentary and then big general elections. We’re seeing more than 60 countries vote and more than 50 of those elections are general elections, meaning we’re looking at tons of different levels of voting and sometimes national leadership being elected.
Leah Feiger: It’s the most elections ever.
Vittoria Elliott: Yes. It’s also the most people voting and the biggest election since the invention of social media, and probably the most people on the internet ever during an election season.
Leah Feiger: Right.
Vittoria Elliott: It’s massive. Social media companies have been struggling for years with how to deal with the sticky issues of elections and politics, particularly around mis- and disinformation. Now we’re adding a new layer on top of that, which is generative AI. It’s really tough right now because deep fakes, video fakes, things like that are sometimes really obvious. But that is just the tip of the iceberg. That’s just the stuff that is most evident to us.
Leah Feiger: Okay, so how much will generative AI actually be used in elections? Or is it still too early to tell?
Vittoria Elliott: It’s interesting because last week, Nick Clegg, who’s president of global affairs at Meta, and Meta owns Facebook and Instagram, said that the generative AI stuff is sort of, he was saying, “We’re not seeing this as a trend. These are aberrations, one-off instances.” But the thing is that they don’t really know and neither do we. We launched this project to track all the instances we can find of generative AI’s use in politics around the world. We’re launching with about 50 examples, and that’s just off the bat. That’s not even all the stuff I currently have on a spreadsheet, and we’re going to keep adding to this. Because we don’t really know how generative AI is going to be used in our elections in the US and everywhere else, we think it’s really important to start getting a sense of what that may actually look like. This technology is moving really fast and we’ve already seen how it’s been used in places that have already had their elections this year. What it might look like by the end of the year, the techniques people are using, that could change.
Leah Feiger: We are already seeing, in elections around the world that are currently taking place or have taken place, that gen AI is playing a big role, like in India. I know that we’re going to get into that in the next segment, so stay tuned. But clearly, this is happening.
Vittoria Elliott: Yeah. I think the scarier part is that, when I say we don’t know, it means that we don’t know the extent of it. A lot of times when people think about generative AI, they think about ChatGPT giving them stupid answers and maybe some of Google’s very bad new search returns that we’re seeing.
Leah Feiger: Telling you to put glue in your pizza.
Vittoria Elliott: I literally saw that one this morning and I was like, “My friends.”
Leah Feiger: So good. Really good recipe advice from Google. Don’t do that, you guys.
Vittoria Elliott: Yeah. It’s like five-year-old paste eating. So they think of ChatGPT or they think of deep fakes, where you have politicians saying things that they definitely never said. But those are just two small things. Generative AI can manipulate something that already exists, so audio, video, photo. It can also make something entirely new, like ChatGPT being asked to write a speech, which is a thing we’ve seen. The problem is, right now it’s much easier to tell with something like a video, which generally has a lot of moving parts and tends to be of a thing; you can verify whether that thing happened.
Leah Feiger: Right.
Vittoria Elliott: It’s much harder with audio or text. We are seeing deep fakes already, especially in India, especially in Indonesia. We’ve already seen a ton of that in Pakistan, in Bangladesh, tons of that. But again, that’s the visual stuff that is a little easier for people to pick up on. That may just be the tip of the iceberg.
Leah Feiger: Who’s actually creating this stuff and putting it online?
Vittoria Elliott: Great question. A lot of times, we don’t even know. In the AI world, there’s a thing called watermarking where you can stamp something and say, “This has been created by generative AI.”
Leah Feiger: Right. We did that for our project that we just put out. For all of the generative AI photos that we include, we have a big old watermark that says, “This is generated content. This isn’t real.”
Vittoria Elliott: Exactly. But that is our choice to do that. That is goodwill on our part because we don’t want people to use it for the wrong reasons.
Leah Feiger: We’re so good.
Vittoria Elliott: We are. We’re great people. A lot of new tools don’t do that. There’s not a lot of regulation around this. Even big platforms are self-policing. They are building the bridge as they’re crossing it. Meta, Google, all these companies are putting policies into place as they’re realizing there are problems that might cause real issues. A lot of times, we don’t necessarily know who’s making these. Sometimes people will claim it, if it’s made on behalf of a campaign for instance. Sometimes they’ll be like, “We made this and it’s part of the campaign.” For instance, the DNC made an AI-generated parody of Lara Trump, who is now the co-chair of the RNC. They made a parody of a song that she released. They were very open, “This is AI generated,” whatever. There’s that. We know they’ll claim that. But sometimes, the stuff that we’re seeing in the wild, no one’s necessarily claiming it. It’ll be shared on Twitter accounts or on Facebook, or WhatsApp, or whatever, and we can’t necessarily confirm that that is the account that made it. We just know that that is the account that shared it.
Leah Feiger: Sometimes you are able to link it back to specific companies-
Vittoria Elliott: Yes.
Leah Feiger: That are doing the generative AI itself.
Vittoria Elliott: Yeah, totally. For instance, there was a deep fake made of the former Prime Minister of Pakistan, Imran Khan, who’s been in jail on corruption charges. His party was disqualified from running in the general election earlier this year. He was able to make campaign speeches using generative AI.
Leah Feiger: Wild.
Vittoria Elliott: To do that, they used ElevenLabs, which is the same company that was used for the fake Joe Biden robocall earlier this year. Sometimes we do know the companies involved, a lot of times we don’t.
Leah Feiger: How have these companies said that they’re going to approach elections this year?
Vittoria Elliott: Well, more legitimate companies like Midjourney, OpenAI, Google, et cetera, they’ve said, “We’re going to put guardrails on. We’re not going to allow generating political images.” For ChatGPT, which is text based, OpenAI has said, “It’s not cool to use our tool to generate political stuff for campaigns,” or whatever, “You can’t run a chatbot on top of our interface,” basically. But they’re not doing great at enforcing it. There was a report from the Center for Countering Digital Hate that we covered in March, where they went into all these image generators and were just like, “Give us an image of Trump doing this, give us an image of Biden doing this.” And they did it a lot of the time. With ChatGPT, Dean Phillips, who was a congressman who was briefly running for President.
Leah Feiger: Formerly running for President, Congressman Dean Phillips.
Vittoria Elliott: Built a chatbot called Dean.bot on top of OpenAI’s ChatGPT interface and it didn’t get taken down until the press was like, “Hey, isn’t this against your policies?”
Leah Feiger: I remember that very well. Something that I also remember from that moment is that Dean Phillips actually had a lot of Silicon Valley backers. It feels a little bit hazy. It’s like, “Yes, you shouldn’t use Dean.bot, but also we still kind of love and support you.” There’s a weird back-and-forth there. The stuff that Dean, for example, was saying about generative AI and legislating against it, Sam Altman was into it.
Vittoria Elliott: Yeah. That’s just in the US. For instance, in Indonesia, there was a company that built an app called Pemilu for the Indonesian elections. The founder of that app claimed that they had built something on top of ChatGPT that allowed them to write campaign speeches in a bunch of local languages.
Leah Feiger: Wow.
Vittoria Elliott: That was pulling in information to allow them to tailor messages to particular demographics, whether that was young people, women, whatever.
Leah Feiger: Well, talk about effective when you have a country with so many languages.
Vittoria Elliott: Yeah. It’s dispersed across islands, with different needs.
Leah Feiger: Absolutely.
Vittoria Elliott: Literally, this man spoke to Reuters and was like, “Yeah, we built it on top of ChatGPT.” Would OpenAI have ever caught that usage of it?
Leah Feiger: Right.
Vittoria Elliott: We don’t know.
Leah Feiger: Yeah, like you said, it’s very hard to ID these things.
Vittoria Elliott: Yeah. The techniques we have for them are really nascent and they’re voluntary most of the time.
Leah Feiger: Yeah, they rely on self-admission and good actors, people who are willing to say, “Yeah, I used this,” or, “Is this okay for me to use in this way?” Interesting. Well, those are a lot of different examples, and some of them actually sound pretty scary, but others sound less scary. Translating a speech actually doesn’t sound like the worst thing to me. That’s just about informing voters.
Vittoria Elliott: Totally. I think a lot of the conversation around generative AI has been along the lines of mis- and disinformation. People are really worried about how the technology can be used to deceive.
Leah Feiger: Absolutely. It can be. As we talked about at the top of this, maybe there are some voters in South Africa that were just so, so pumped to hear that Eminem was supporting their candidate.
Vittoria Elliott: It’s also powerful in other ways, like it can be used for satire. Which is actually, as we’ll talk about in the second segment, how it kicked off in India, as a satire thing. It can help rally people. But the problem is that even if people know something’s fake, it can still be really emotional. If you’re seeing an image or hearing someone say something that feels important to you, even if you are like, “I know this isn’t real-“
Leah Feiger: Sure.
Vittoria Elliott: There’s something that’s very emotive about that.
Leah Feiger: Like what? Give me an example there.
Vittoria Elliott: In a story we actually just released this week, we looked at the AfD, the far-right party in Germany that a German court recently designated as potentially anti-democratic and extremist. They ran an ad on Meta, on Facebook and Instagram, that showed an image of a white woman with injuries to her face that appeared, based on a researcher’s findings, to probably be manipulated by generative AI. The text said, “Crime from immigrants has gone up.” You get things like that where maybe someone could look at that image of this woman and be like, “That seems like a stock photo.” But if you have those fears or those views, and you’re seeing that image and that text, and a politician is saying, “Yeah, we are really concerned about immigrant crime.”
Leah Feiger: You’d be scared.
Vittoria Elliott: That can still be very emotional because it may tap into something you already feel or believe.
Leah Feiger: Of course. Even with these dead politicians rising from the grave in India and Indonesia, to speak to their electorate, people know they’re dead. People know that their old leaders are dead. But how lovely for them to make a comeback and say, “Please vote for XYZ person.” Even though you know that it’s fake, there is some emotional pull, some resonance there.
Vittoria Elliott: Yeah. A really great example of this also is, in American Samoa, this no-name candidate, Jason Palmer. He’s an American businessman who’s not been in politics before. He won the Democratic primary in American Samoa because he had an avatar of himself that he had made to answer people’s questions.
Leah Feiger: With generative AI?
Vittoria Elliott: With generative AI.
Leah Feiger: Wow.
Vittoria Elliott: He used generative AI to send out personalized text and email messages to people. He won the primary there. I’m sure people, looking at his avatar, maybe didn’t think that that was a real representation of him on their computer, answering their questions in real time. I think people understand that, when they’re getting messages from an official campaign, it is a campaign.
Leah Feiger: Right.
Vittoria Elliott: It’s meant to sway them.
Leah Feiger: Sure.
Vittoria Elliott: But again, there’s still something emotive about feeling that kind of connection.
Leah Feiger: Absolutely. It’s hard to forecast months ahead. But what does the future of generative AI and elections look like to you right now? Having just compiled all of this data and all of these examples, what do you think is to come?
Vittoria Elliott: First off, if we look at what’s happening in the Global South, particularly in Indonesia and India, in Bangladesh, in Pakistan, with the rampant use of this stuff, where it’s really tailored to specific constituencies, the use of avatars and deep fakes, and really localized messaging, I think that’s really what we’re going to see. Again, when it comes out of a campaign, sometimes it’s easier to tell. But for instance, in Bangladesh, a deep fake went out of a local politician, a woman who was running in a parliamentary race, that said she was conceding the election. That is the type of thing that we need to be really, really careful about. We’re really going to be relying a lot on researchers, of whom there are not enough at this point to fact-check all this stuff, and on self-admission. We have a list of all these examples, and that’s just what fact-checkers and researchers have been able to verify. Given the amount of work it takes to verify, I’m really worried that we’re not going to be able to keep up.
Leah Feiger: On that lovely, happy note, Tori, what can people do to help out with our project?
Vittoria Elliott: If you see an instance of generative AI out in the wild with regards to politics and elections, we have a submission form in our show notes and also on our project page, and we’d love for you to send it to us. We want to be able to track all of this for the rest of the year. And you can send us stuff you’re not sure about, because we’ve talked to a couple of researchers who have offered to help us verify. If you get a weird video, weird voice note, weird message and you’re not sure, you can still send it and we’ll take a look at it for you.
Leah Feiger: Thanks so much, Tori. We’ll be right back. Welcome back to WIRED Politics Lab. Tori’s still in the studio with me. We are joined today by Nilesh Christopher, a WIRED contributor who has just put out a piece about deep fakes in the Indian elections. Nilesh, thank you so much for joining us today.
Nilesh Christopher: Thank you for having me.
Leah Feiger: Tell us about your story. What are you seeing? How are Indian politicians and their campaigns using generative AI?
Nilesh Christopher: It’s been fascinating to see the evolution of the use of generative AI. Initially, we saw gen AI being used in jest. The Prime Minister, Narendra Modi, his voice was cloned and was used to make him sing in languages he doesn’t speak.
Leah Feiger: Right.
Nilesh Christopher: To personalize voter outreach in different regional languages. That has happened over the past six months. In our story, we specifically looked at the rise of deep fake service providers, businesses that have come up that do sanctioned deep fakes of politicians and offer personalized outreach. It has developed into a $60 million industry, from creating digital avatars of politicians for outreach to AI calls in the voices of politicians. More recently, in the lead-up to the elections in April, 15 million calls were made to individual voters, either canvassing votes or reaching them on specific occasions.
Leah Feiger: That’s so wild. It’s happened with politicians from all levels, right? They’re hiring these companies to provide them with deep fake avatars of themselves.
Nilesh Christopher: Yes. That was the surprising part for me, Leah. We had an example of a local politician, Shakti Singh Rathore, who was quite low on the totem pole. He belongs to the state of Rajasthan, a North Indian state, and to the Bharatiya Janata Party, which is Modi’s Hindu nationalist party. He wanted to effectively talk more about everything that Modi has done over the past 10 years, so he got in touch with his friend, Divyendra Singh Jadoun, who’s 31 years old and also lives in the small city of Ajmer. They got together and wanted to clone him and send personalized recordings in his voice, telling voters in his constituency about Modi’s schemes, such as Digital India, which looks at development schemes for Indians and stuff. So making voters aware of everything Modi and his party have done, and in effect wanting their allegiance in the upcoming elections.
Leah Feiger: It’s so personal. I was so struck in the piece about how voters were really responding to the fact that these calls had their names in the beginning, and they felt like politicians were recognizing them and hearing them. That’s so fascinating when you have such a big country, the world’s largest democracy, so many elections. This is a month-and-a-half of campaigning to be able to then make it feel personal. They’re doing this, the videos, the translations and even songs, right?
Nilesh Christopher: Yeah, absolutely. One of the fascinating things is most people living in Indian metro cities like Bangalore or Delhi, when there is a spam call or a robocall coming their way, they just don’t pick it up. They cancel it.
Leah Feiger: Same. Yeah, that’s universal I think.
Nilesh Christopher: Yeah, absolutely. But the interesting bit is people in rural areas, especially those in smaller tier-two and tier-three towns, feel a sense of validation, of being heard by their own politicians, whenever they get these calls. The more calls, the more outreach you do, the more validated they feel. That is the key insight these personalized voter outreach campaigns are built on. We spoke to an individual who runs this company by the name of iToConnect; they did almost 25 million personalized calls-
Leah Feiger: Wow!
Nilesh Christopher: During these elections. These are not even in the voices of big-name politicians that we know, like Modi or someone. These are local politicians reaching out to the voters in their constituency, and canvassing for votes.
Vittoria Elliott: Nilesh, India has such a robust local tech sector. I wanted to know if you could dive into that. How does that play a role here, the fact that India has so much tech talent to build this stuff out? Give us a sense of who’s getting into this deep fake industry, where there does seem to be a lot of demand.
Nilesh Christopher: There’s two parts to this. It’s been fascinating to watch this evolve, Tori. One is, because of the number of languages, 22 official languages and thousands more, I witnessed the rise of regional systems. For instance, a specific startup in a state would cater to the Tamil language; they were building a cache of Indian politicians speaking the Tamil language and doing outreach in that. Then you have certain Hindi language service providers, and Telugu language service providers.
Leah Feiger: That’s so interesting.
Nilesh Christopher: Yeah. They do not have a technology moat, that does not exist, so they’re building a linguistic moat in some sense. What we have witnessed is those who are doing it at scale, 50 or 60 million personalized calls in different politicians’ voices, try to stay behind the scenes and don’t want to reveal their identity, because they have external investors who have put in a lot of money.
Vittoria Elliott: Oh, interesting.
Nilesh Christopher: The dude who has become the face of this industry is Divyendra Singh Jadoun, who we profiled in the piece, a former politician who is now doing it for his friends. He doesn’t have any external investors, so it effectively has allowed him free rein to take up projects, to work with multiple different politicians across party lines, across languages, and scale his solutions.
Leah Feiger: The way that you write about him, this guy who kind of almost fell into this. He was a politician, and then COVID hit and he was sitting at home, picking up different hobbies, figuring out what to do. It seemed like doing generative AI for politicians and their campaigns became a hobby, which is such a wild way to get into something so impactful and frankly, manipulative.
Nilesh Christopher: Yeah. Talk about being profiled by The New York Times, The Washington Post, and everyone, just because of a hobby you picked up during COVID.
Leah Feiger: Yeah, but a really influential hobby. Tell us more about him. What was it like to interview this guy who’s leading the charge here?
Nilesh Christopher: He’s got a wild backstory. The first picture in his Instagram grid is of him and his friends with garlands around their shoulders, campaigning for elections. He’s surrounded by four gunmen, carrying rifles and shotguns. It’s basically-
Leah Feiger: Serious.
Nilesh Christopher: Yeah. They just want to portray themselves as really powerful people. This was when he was contesting student elections in early 2018, 2019. Think of them as young kids wanting to be taken seriously by everyone. That was the origin of his student politics. That is where he picked up on voter psychology, how to connect with people, how to ensure you communicate your ideas effectively and build a brand for yourself. During COVID, he’s stuck at home. This dude wants to experiment with a bunch of things. He played the flute for us when we visited. He learned the flute in a month.
Leah Feiger: He could have become a concert flutist, but instead went down the generative AI pathway as one does.
Vittoria Elliott: So many people, the world would be so different if we let them pursue art.
Nilesh Christopher: Yeah, fascinating. Then he tried being ambidextrous. I asked him to write with both his hands. He wanted to use these open source, freely available tools to just create fun stuff for couples who had recently gotten married, like a video of them as Superman. He would effectively take these requests online, on Instagram, graft their faces onto Superman footage, and send it to their partners. That’s how he got his start, which has effectively veered into this political generative AI space he is in right now.
Vittoria Elliott: I think it’s so appropriate for how big the wedding industry is in India that he would start out doing romantic and wedding memes, actually.
Nilesh Christopher: So much money there, yes.
Vittoria Elliott: I think one of the things I was really interested in: you said there’s a lot of money pouring into this industry. Do you have a sense of where that money is coming from? Tech companies expect growth. What’s next?
Nilesh Christopher: The individual that we’ve talked about, that we profiled, Divyendra Singh, doesn’t have external investors, but his appetite for growth is huge. He and his partner have been pitching multiple international politicians, especially those in Canada, which has a huge Indian diaspora. His partner reached out to Mr. Poilievre, a politician in Canada who has an Indian voter base of sorts. They wanted to translate some of his videos using generative AI for voter outreach. They’re also angling for the US elections, one of the most lucrative markets in the world, and they want to expand. He specifically wants to pitch to politicians in the United States, create digital avatars of them, and send personalized video messages to their voter bases. I asked him what they are going to do about audio clones in voice calls, since those were banned in the US. He said, while India is a huge market for personalized audio calls from politicians, they are not going to be selling that solution in the US since it’s banned.
Leah Feiger: Wow. That global expansion of this industry feels like something we need to keep an eye on. We’re around five months out from the United States election and there’s so many other elections happening around the world. Yeah, once the India election is over, it makes sense that they would be expanding to other campaigns.
Vittoria Elliott: Nilesh, you had mentioned that in the US, obviously, we’ve started panicking, especially after the Biden robocall, about what the role of this technology is in our elections. India has also made some attempts to regulate AI in the lead-up to the elections. I’m curious if you have a sense of how we square the boom in the industry with at least some of the government lip service towards trying to have some guardrails around it?
Nilesh Christopher: Yeah. I think the way you phrased it is right: there has been a lot of lip service and there hasn’t been action from the Indian government. Notoriously, the Indian government needs one specific example to blow out of proportion before they will take action. That instance of a nefarious use of deep fakes happened last December, when an Indian actress’s face was grafted onto a video of a woman wearing a low-cut dress. As soon as that went viral, there was a moral panic about the use of this tech. After that, one advisory that came in was overreaching; it said that any generative AI model produced by Indian startups or international ones would have to be licensed by the Indian government. Which is not something anyone would want, and given Indian state capacity, it would take years for them to do it. That advisory thankfully was rolled back. We’ve had some of these misinformed rules put in place, but there hasn’t been any definitive action taken against the rise of this nefarious use of deep fakes yet.
Leah Feiger: What do you think the effect of all of these deep fakes has been on India’s elections, on how people might vote? What is it looking like for the changing election landscape?
Nilesh Christopher: It’s a lot of chaos. It has been a lot of chaos. One of the things that Indian fact-checkers especially were worried about entering this election was the lack of access to forensic AI-testing capabilities. Every single fact-checker in India would have to write to a consultant or an academic at a US university to test these suspicious clips before they put out a fact check. Moving into these elections, there was no defense. Slowly, over the past three or four months, this detection equity gap, as experts call it, has narrowed a bit. We witnessed the emergence of local startups who have created a few solutions to test for AI-generated audio and video in India. But still, just in the last couple of days, we have witnessed three audio and video deep fakes that have been launched, some calling the elections already, saying that a local party has won the election in specific constituencies. These are not high-definition deep fakes but audio fakes, which are convincing enough and can go viral.
Vittoria Elliott: Nilesh, I wanted to take a step back and talk about what AI experts call the Liar’s Dividend: the idea that when everything is potentially fake, people can claim that even real things, real video, real audio, have been made by AI, too. You’ve done some previous reporting on this and I’d love for you to tell us a little bit about what that’s looking like in the Indian context.
Nilesh Christopher: That’s one of the most worrying things that we have witnessed, the erosion of trust in any material we see online, be it audio or video. Our first instinct, in some sense, is to question if it’s fake or not. That has afforded tremendous leverage to politicians who say vile and unbelievable stuff on record and sometimes get caught. Even in this election, we witnessed a politician from the BJP, the Bharatiya Janata Party, the Hindu nationalist party which Modi belongs to, go ahead and say that Modi still remains single because he wants to serve the country. Which seems like a fairly benign statement, but once it went viral, he wanted to backtrack. A couple of social media handles even claimed that this was a deep fake video.
Leah Feiger: Wow.
Nilesh Christopher: Fact-checkers then hunted down the reporter who took the bite, the soundbite, and proved that it wasn’t the case; it was real.
Leah Feiger: That’s so wild. It’s really a “what is reality, we are in the Matrix, all trust online eroded” moment.
Nilesh Christopher: While this is developing on one side, what has also happened, from my understanding, is that the early phase of the use of generative AI was in memes and satire. We had videos of the Prime Minister himself singing in multiple languages. People heard that so much that the use of AI became normalized in that sense, which has led to the current Liar’s Dividend problem that we see.
Leah Feiger: As we’re heading into covering the United States election and everything, what lessons … I know you’re still in the middle of it. But what lessons do you have for us? What can the United States learn from India in all of this?
Nilesh Christopher: That’s an interesting question, and I’ve been trying to wrap my head around it. One is, while we have seen a bunch of really convincing nefarious deep fake audio and video emerge, we have also witnessed instances of any simple edit of an audio or a video being claimed as a deep fake. Calling even a simple splice of a video a deep fake causes this problem of the Liar’s Dividend, where politicians can simply get away with claiming deep fake on any audio or video. So one thing is media literacy. As journalists and fact-checkers, we need to be able to differentiate between what exactly we call a deep fake and what is a simple edit.
Leah Feiger: Sure.
Nilesh Christopher: That is number one, making that distinction. Second is equipping newsrooms and journalists with forensic testing capabilities, so that we can combat, in real time, the flood of cheap fakes or deep fakes that are coming your way.
Leah Feiger: That are coming our way, that is so ominous.
Vittoria Elliott: Woof.
Leah Feiger: And probably so true. Nilesh, thank you so much for joining us today. Listeners, again, if you are seeing instances of generative AI out in the wild this election year, anywhere around the world, you can tell us using the submission form in our show notes. We’d love to hear from you. After the break, Tori and Nilesh are going to give us their favorite conspiracies of the week. Welcome back, this is Conspiracy of the Week, the part of the show where each of our guests gives us the best theory they’ve come across this week and I pick my favorite. Tori, what did you find?
Vittoria Elliott: Last week, our colleague David and I did a story on a website called OpIndia, which is a right-wing, BJP-aligned website that spreads Islamophobic conspiracy theories. One of them being Love Jihad, the claim that Muslim men are attempting to marry or kidnap Hindu women to force demographic change. We did a big story on them. We apparently forced them into a subscription model.
Leah Feiger: Read the story in our show notes, guys, it’s a good one.
Vittoria Elliott: But I always love when I find a local flavor of an internationalized conspiracy. When we were doing research for the piece, one of the articles that came up from OpIndia that I had seen circulated was that our friend, George Soros, favorite target of the right everywhere in the world-
Leah Feiger: It’s kind of impressive, honestly, that it’s become global.
Vittoria Elliott: Right? He’s everywhere. In this version, his real agenda is not, as the right in the US might claim, being part of a global cabal. They oftentimes will focus on his identity as a Jewish man, as an immigrant. But in India, it’s that he is a white man specifically.
Leah Feiger: Sure.
Vittoria Elliott: And that he is funding anti-India, anti-Hindu initiatives.
Leah Feiger: Wow.
Vittoria Elliott: Because he is ultimately invested in the undermining of India and the Hindu identity. I, again, just always think it’s amazing to see how these global figures get that local flavor, how they factor into the conspiracy here.
Leah Feiger: To our situation.
Vittoria Elliott: Yeah.
Leah Feiger: The globalization of conspiracies is something that I could talk about forever. Poor George, who just has no idea.
Vittoria Elliott: He just wants to fund some democracy initiatives.
Leah Feiger: That’s so wild. Okay, that’s a good one. That was unexpected. Amazing. Nilesh, what do you got? What’s your conspiracy of the week?
Nilesh Christopher: I was going to talk about Love Jihad, because that has grabbed all kinds of headlines.
Leah Feiger: Sure.
Nilesh Christopher: But more specifically, one I found fascinating was a variation of it called Vote Jihad. In one of the recent political rallies, the Prime Minister, Narendra Modi himself, mentioned Vote Jihad. This has lent credence to the long-standing conspiracy that Muslims will vote in large blocs that would undermine the Hindu nature of India, and undermine Hindu rights in the long term. It’s a conspiracy that relates to India’s demographics. Currently, Muslims constitute less than 15% of India’s population. But this conspiracy, as Tori said, is global; much of the Hindu right wing constantly keeps worrying about it. It takes the form of WhatsApp forwards, where Hindus are constantly reminded that Muslim voting blocs will get together and vote out the Hindu candidates, or ensure that Hindu rights are undermined.
Leah Feiger: Again, globalized. We have the Great Replacement conspiracy theory. The virulently racist, “Immigrants are crossing the borders to vote on behalf of Democrats,” that’s being promulgated by the GOP, and you’ve got this in India. It makes complete sense. I’m so glad that countries around the world are united by their love of conspiracies here. What a beautiful way to end this. God. Guys, I’m sorry. I’m going to have to, Tori, I’m giving it to Nilesh this week.
Vittoria Elliott: That’s fair.
Leah Feiger: That’s a pretty good conspiracy.
Vittoria Elliott: He’s a guest.
Leah Feiger: And he’s our guest. See, we treat guests so well on WIRED Politics Lab.
Nilesh Christopher: Thank you, guys.
Leah Feiger: Thank you so much for joining us. This was a great conversation. Thanks for listening to WIRED Politics Lab. If you like what you heard today, make sure to follow the show and rate it on your podcast app of choice. We also have a newsletter, which Makena Kelly writes each week. The link to the newsletter and the WIRED reporting we mentioned today are in the show notes. If you’d like to get in touch with us with any questions, comments or show suggestions, please, please write to [email protected]. That’s [email protected]. We’re so excited to hear from you. WIRED Politics Lab is produced by Jake Harper. Jake Lummus is our studio engineer. Amar Lal mixed this episode. Stephanie Kariuki is our executive producer. Jordan Bell is our executive producer of development. Chris Bannon is global head of audio at Condé Nast. I’m your host, Leah Feiger. We’ll be back in your feeds with a new episode next week.