A fake image showing an explosion at the Pentagon briefly went viral on Twitter on Monday, causing markets to dip slightly for a few minutes. The incident revives the debate around the risks associated with artificial intelligence (AI).
The false photograph, apparently created with a generative AI program (software capable of producing text and images from a simple plain-language query), compelled the US Department of Defense to respond. “We can confirm that this is false information and that the Pentagon was not attacked today,” a spokesperson said.
An account linked to the QAnon conspiracy movement was among the first to relay the false image, whose source is not known. Firefighters in the area where the building is located (in Arlington, near Washington) also intervened on Twitter to state that no explosion or incident had taken place, either at the Pentagon or nearby.
Temporary drop in the markets
The image appears to have caused markets to stall slightly for a few minutes, with the S&P 500 losing 0.29% from Friday’s close before rallying. “There was a drop related to this false information when the machines detected it,” noted Pat O’Hare of market analysis firm Briefing.com, referring to automated trading software programmed to react to social media posts.
“But the fact that the drop remained measured relative to the content of this false information suggests that others also considered it dubious,” he told AFP.
Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p
—Andy Campbell (@AndyBCampbell) May 22, 2023
The risks of generative AI
The incident comes after several false photographs produced with generative AI were widely publicized as demonstrations of the technology’s capabilities, such as images of the arrest of former US President Donald Trump or of the Pope in a down jacket.
Software like DALL-E 2, Midjourney and Stable Diffusion allows amateurs to create convincing fake images without needing to master editing software like Photoshop. But while generative AI makes it easier to create false content, the problem of its dissemination and virality – the most dangerous components of disinformation – falls to the platforms, experts regularly point out.
“Users are using these tools to generate content more efficiently than before […] but it is still spreading via social networks,” said Sam Altman, the head of OpenAI (DALL-E, ChatGPT), during a hearing before Congress in mid-May.