Digital Disinformation Reaches a Fever Pitch, Social Media Execs Weigh In
Less than a week has passed since Meta and TikTok were ordered by the European Commission to provide details on the steps each company has taken to prevent the spread of misinformation during the Israel-Hamas conflict, and rather than simmering down, the debate over disinformation on social media is only getting hotter.
X—the platform formerly known as Twitter—has also entered the fray, receiving a request for information from the E.U.’s executive body. In response, Elon Musk has reportedly said he plans to pull the platform out of the European Union altogether.
Much of the latest round of debates stems from the Digital Services Act (DSA), a new E.U. regulation that aims to prevent the spread of harmful content by banning or limiting certain consumer-targeting practices.
Shortly after the Israel-Hamas conflict broke out earlier this month, E.U. regulators sent notices to TikTok and Meta asking the two companies for details on their handling of information during the crisis. The companies were initially required to provide that information by October 25th; additional information regarding their ability to protect the integrity of elections was requested by November 8th.
With those deadlines looming, more digital media executives in the U.S. are taking a closer look at the rising wave of misinformation online. Although the U.S. has not taken legal action yet, that could change at any point, says James Mawhinney, CEO and founder of Media.com, a platform dedicated to resolving misinformation.
Mawhinney believes governments at home and abroad could eventually cause social media platforms to require profile verification as a way to curb disinformation. The step would likely be hugely expensive and difficult for media companies to implement. It’s also unclear how well it would work, as verified accounts on X reportedly spread 74% of war misinformation, according to a new analysis by NewsGuard.
It’s clear something needs to be done.
Misinformation has flooded social media networks since the Middle East conflict began on October 7th, with videos and photos purporting to show the conflict making it difficult for online users to tell truth from fiction. Fake accounts impersonating journalists and government officials have also been popping up on X more frequently over the past few weeks.
Under the E.U.’s new online content rules, platforms are required to do more than simply take down harmful content. If they fail to act swiftly or adequately, they risk fines of up to 6% of their global turnover.
While the subscription model is one approach that’s been batted around by industry executives as a potential solution, Mawhinney says that too is imperfect.
“Social platforms will potentially struggle to convert to subscription models,” Mawhinney says. “It is in their DNA and is very difficult to retrofit a platform that users are used to being free.”
Additionally, Mawhinney says the subscription model alone may not be enough to curb disinformation on social media, particularly at its current scale.
“I believe government legislation will eventually cause social platforms to require profile verification to help curb misinformation,” Mawhinney says. However, “it will only help if profiles are verified. X is proposing phone number verification rather than full KYC which is an imperfect solution. Again, this involves retrofitting which will likely cause a significant loss of users and advertising revenues.”