
Breaking news and artificial intelligence - how to spot fraud

A large number of verified Twitter accounts shared a fake photo last week showing an explosion near the Pentagon, which caused a stir on the networks and even led to a brief drop in the stock market. Local officials soon confirmed that such an incident did not happen, but not before the fake news, published by some major media, went around the world. How to spot photos created with the help of artificial intelligence?

Author: Tamara Bajčić | Demostat | Belgrade, 30 May 2023 | Education

The photo, which had all the hallmarks of an AI image, was shared by several verified Twitter accounts with blue ticks, including one that falsely claimed to be affiliated with Bloomberg.

"Large explosion near Pentagon complex in Washington - initial report," the account said, along with a photo purportedly showing black smoke rising near the large building.

Twitter then suspended the account. However, it remains to be seen who is behind it and where the photo originated.

A Bloomberg spokesman said the account in question was not affiliated with the news organization.

Photo: Twitter

Twitter, now owned by Elon Musk, allows anyone to get a verified account for a monthly fee.

As a result, Twitter verification is no longer an indicator that an account truly represents who it claims to be.

The Russian state media house RT shared a photo of an alleged explosion near the Pentagon in Washington on its Twitter account.

RT deleted the post a day later, stating that it had informed readers about reports circulating online and that, once the origin of the news and its (un)truth were determined, it took appropriate steps to correct its reporting.

In a post on the Russian social network VK, RT tried to justify its apparent mistake.

"Is the Pentagon on fire? Look, there is a photo. It's not real; it's just an AI-generated photo. However, the photo managed to deceive several large media outlets full of supposedly smart people," RT's post states.

False reports of the explosion also appeared on Indian television.

Namely, Republic TV reported a large explosion near the Pentagon, showing the fake photo and citing reports from the Russian state media outlet RT.

The channel withdrew the report when it became clear that the incident had not taken place.

"Republic TV reported news of a possible explosion near the Pentagon, citing a post and image tweeted by RT," the channel later said on its Twitter account. "RT deleted the post, and Republic TV retracted the news."

The fake photo went viral just after the US stock market opened at 9:30 a.m., sending a wave of confusion through the investment world.

The S&P 500 briefly fell a modest 0.3 percent as social media accounts and investment websites popular with day traders relayed false claims.

The Dow Jones index fell by about 80 points between 10:06 a.m. and 10:10 a.m. but fully recovered by 10:13 a.m.

Shortly after the photo began circulating online, US Department of Defense spokesman Philip Ventura told Reuters that reports of an explosion were false, while the local Arlington Fire Department tweeted: "There is no explosion or incident at or near the Pentagon, and there is no immediate danger or threat to the public."

How to spot AI photos

According to experts, the Pentagon photo contains details showing it was created by artificial intelligence.

"This image contains typical details that show it was synthesized by artificial intelligence: there are structural errors in the building and the fence that you wouldn't see if, for example, someone added smoke to an existing photo," experts say.

It was also clear from the beginning that there were no first-hand witnesses to corroborate the event, especially in a busy area like the Pentagon.

This makes it practically impossible to pass off such a fabricated event as credible.

The building looks noticeably different from the Pentagon building, which can easily be verified using tools like Google Street View.

Other details, including an unusual-looking floating lamppost and a black pole sticking out of the sidewalk, further prove that the image is not authentic.

Many tools, like Midjourney, DALL-E 2, and Stable Diffusion, can create vivid photos with minimal effort.

These tools are trained by looking at large amounts of actual images, but when data is missing, they fill in the gaps with their interpretation.

This can result in people with extra arms or legs, or objects that warp to blend into their surroundings.

When you see photos on social media that claim to show breaking news, keep the following in mind:

• News doesn't happen in a vacuum - With a major explosion or significant event, expect an influx of on-the-ground reports from different people and angles.

• Who uploads content - View the user account's posting history. Do their location and the event location match? See whom they follow and who follows them. Can you reach them or talk to them?

• Use tools like reverse image search to upload an image and determine where and when it was first used. Other tools, such as live traffic camera footage, can help confirm whether an event is actually taking place.

• Analyze the photo and surroundings – Look for clues in the image, such as nearby landmarks, road signs, and even weather conditions to help you place where or when the alleged event may have occurred.

• Hands, eyes, and posture – When looking at photos of people, pay special attention to their eyes, hands, and general posture. AI-generated videos that mimic humans, known as deep fakes, tend to have problems with blinking because most of the datasets they use don't contain faces with closed eyes. Hands that don't grip objects properly, or improperly contorted arms or legs, can help spot a fake photo.
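The reverse-image-search step above rests on a simple idea: an image can be reduced to a compact "fingerprint," and near-identical images produce near-identical fingerprints. As a minimal illustrative sketch (the function names and toy data below are our own, not taken from any specific search tool), here is a pure-Python "average hash": downscale the image, mark each pixel as brighter or darker than the mean, and compare fingerprints by counting differing bits.

```python
# Illustrative sketch of a perceptual "average hash" (aHash), the kind of
# fingerprint reverse image search tools build on. Real systems first
# downscale the image (e.g. to 8x8 grayscale); here we start from a tiny
# grid of grayscale values to stay dependency-free.

def average_hash(pixels):
    """pixels: a square grid (list of lists) of grayscale values 0-255.
    Returns a bit string: 1 where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": an original and a uniformly brightened copy.
original = [[10, 200, 30, 220],
            [15, 210, 25, 230],
            [12, 205, 28, 225],
            [11, 215, 27, 235]]
brightened = [[p + 5 for p in row] for row in original]

h_orig = average_hash(original)
h_copy = average_hash(brightened)
print(hamming_distance(h_orig, h_copy))  # 0: same fingerprint despite the edit
```

Because every pixel shifts by the same amount as the mean, the bright/dark pattern is unchanged, so simple brightness edits do not evade the match; a genuinely different scene would flip many bits.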

The author is a data protection manager.
