Disinformation is on the rise this election season
In the immediate aftermath of the assassination attempt on Donald Trump at a rally last month, the public was inundated with disinformation. That was no surprise; it happens whenever viral rumors outrun reporting. As Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Lab, explained to Bloomberg, “In any fast developing event, there is inevitably a high influx of false or unverified information, especially on social media.” While some of that disinformation originates with armchair detectives and internet trolls, state actors also peddle falsehoods to sow chaos or influence voters. As we learned in 2016 and 2020, the US election is a prime target, and some experts expect 2024 to be even worse.
As the AP reports, citing research from Syracuse’s ElectionGraph Project, political ads have become a “prime source for misleading information about elections—and a tantalizingly easy way for con artists to target victims.” And the same is true in Europe, where research from Dutch consultancy Trollrensics found that “coordinated networks of accounts spreading disinformation ‘flooded’ social media in France, Germany and Italy before the elections to the European parliament.” However, the financial interests of social media companies may keep those best positioned to tackle disinformation from doing so. As The New York Times points out, all this viral content is great for engagement:
Most social media platforms profit when outrage and indignation result in more engagement and, ultimately, more advertising revenue. Companies have little incentive to alter the algorithms that allow toxic content to spread, despite calls from political leaders appealing to society’s better angels.
+ Despite the wave of Twitter alternatives that sprouted following Elon Musk’s purchase of the company, X continues to be a critical source for those on both sides of the aisle who are “looking for news and live updates of major events.” But as the Washington Post notes, the site’s retreat from “policing misinformation” has allowed falsehoods to flourish, and those falsehoods aren’t just affecting Democrats.
+ More from the AP: “Russian-Linked Cybercampaigns Put a Bull’s-Eye on France. Their Focus? The Olympics and Elections.”
+ From WIRED: “How Disinformation from a Russian AI Spam Farm Ended up on Top of Google Search Results.”
AI exacerbates the problem (but could also help solve it)
Generative AI has added a new wrinkle by making it easier to create deepfakes. Research from Google has found that the use of “generative AI-based content in misinformation claims” is on the rise and that “AI-generated images appear to obtain high engagement.” (NBC News has a good summary of the paper here.) Of course, disinformation would be a problem even without GenAI. “Media manipulations have a long history,” as the paper points out. But there are promising signs that we’re getting better at recognizing them. Although deepfakes have caused more than their share of harm, they “have yet to become the huge truth catastrophe that experts warned would be coming,” say Sara Fischer and Megan Morrone in Axios, in part because “media outlets and tech platforms have gotten better at spotting and debunking AI misinformation quickly.” Stanford researchers recently found that “the 2020 election saw fewer people clicking on misinformation websites” than in 2016, which they attribute to “efforts the social media platform [Facebook] took to mitigate the issue of false news on the website.” And for the current election cycle, The Verge reports that “Google will now generate disclosures for political ads that use AI.”

AI could also be part of the solution. A few weeks ago we shared research suggesting that generative AI chatbots could help dissuade people from believing in conspiracy theories. And generative AI is already being used to detect misinformation and support fact-checking.
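To make that last point concrete, here’s a minimal sketch of one way an LLM can support fact-checking: a first-pass triage step that flags viral claims for human checkers. It assumes the OpenAI Python SDK and an API key in your environment; the model name, labels, prompt, and the triage_claim helper are illustrative choices, not a method from any of the articles above, and a real system would add retrieval of trusted sources and human review.

```python
# A minimal sketch of LLM-assisted claim triage, not a production fact-checker.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment. The prompt, labels, and model name are
# illustrative, and every output should be routed to a human checker.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are assisting a human fact checker. Classify the claim as "
    "'likely accurate', 'likely misleading', or 'unverifiable', then "
    "briefly note what evidence a checker should look for."
)

def triage_claim(claim: str) -> str:
    """Ask the model for a first-pass assessment of a viral claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        temperature=0,        # keep triage output as stable as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Claim: {claim}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_claim("Ballots in three states were printed without one candidate's name."))
```

The value here isn’t a verdict from the model; it’s speeding up the queue so human fact checkers can spend their time on the claims most likely to spread.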
The disingenuous fight against disinformation fighters
Since 2018, the Stanford Internet Observatory has produced hard-hitting research exposing a range of online harms, including disinformation. That work has drawn attacks from conservatives like Rep. Jim Jordan (R-Ohio) and Trump adviser Stephen Miller, and the resulting document requests and lawsuits are costing “Stanford millions of dollars in legal fees,” as the Washington Post reports. First, SIO founder Alex Stamos scaled back his involvement, citing “political pressure” (per the Post). Then research manager Renée DiResta’s contract wasn’t renewed, prompting worries about the project’s demise. Stanford insists that “Stanford has not shut down or dismantled SIO as a result of outside pressure.” But as DiResta argued shortly afterward in The New York Times, the recent events at the Observatory can’t be separated from the “conspiratorial thinking” that underpins a huge portion of political disinformation. As the headline of her op-ed makes clear, “What Happened to Stanford Spells Trouble for the Election.” It’s also a good reminder of the intellectual dishonesty of those who decry cancel culture while being its most successful practitioners.
+ From Bloomberg: “Fight Against Misinformation Suffers Defeat on Multiple Fronts.”
+ More from the Washington Post: “Trump Allies Crush Misinformation Research Despite Supreme Court Loss.”
Reid Hoffman on deepfakes, AI, and the American economy
The potential for deepfakes to cause harm is clear. But could deepfakes ever be used for good? Beena Ammanath, a board member at the Centre for Trustworthy Technology, argues that to realize any benefits, we need to “[get] to a place where this technology is common, familiar and trustworthy.” However, doing so depends on “how synthetic content is used and the guardrails that surround its development.” One person who believes we’ll get there is LinkedIn cofounder Reid Hoffman, and he’s been experimenting with the technology to show what’s possible—most notably by creating his own “digital twin.” (He calls the twin ReidAI, and you can see the two of them in conversation here.)
Reid will be joining me next month for an exclusive discussion of how ReidAI came to be and why he believes deepfakes will be more than just a tool for scammers and other criminals. We’ll dig into the potential of deepfakes for good as well as for ill and think through how such a powerful technology ought to be regulated, assuming it can be regulated at all.
This free hourlong virtual event takes place September 27 at 9:30am PT/12:30pm ET. If you’re an O’Reilly member, you can sign up here. And if you aren’t, you can save your seat here. It’s sure to be an interesting conversation. I hope you’ll join us.