
There is a fine line between misinformation and automatically generated text


A new study has confirmed that GPT-3, OpenAI's artificial-intelligence chatbot, can spread online misinformation faster and more convincingly than humans. This raises, more sharply than ever, the question of how effective control over information really is and whether states have mechanisms for dealing with such problems.

Author
Dejan Srbinovski | Demostat | Belgrade, 3 Sep 2023 | Education

The research, recently published in the journal Science Advances, aimed to identify some of the main threats advanced text generators pose in the digital world, particularly disinformation and fake news on social media.

"Our research group is dedicated to understanding the impact of scientific misinformation and ensuring the safe engagement of individuals with information," study author Federico Germani, a researcher at the Institute for Biomedical Ethics and History of Medicine, told Psypost.

According to the study's authors, the main goal is to mitigate the risks associated with false information about individual and public health.

"The emergence of AI models like GPT-3 has fueled our interest in exploring how AI affects the information landscape and how people perceive and interact with information and misinformation," Germani said.

GPT-3 stands for Generative Pre-trained Transformer 3, the third version of the program released by OpenAI, with the first model released in 2018. 

Among a myriad of other language processing skills, the program can mimic the writing style of online conversations, the study explains.
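To illustrate what "mimicking the writing style of online conversations" looks like in practice, here is a minimal sketch using OpenAI's Python client (version 0.x, as available in 2023). The model name, prompt, and API key placeholder are illustrative assumptions and are not details taken from the study.

```python
# Illustrative sketch only: generating tweet-style text with the OpenAI
# Python library (v0.x). Model, prompt, and key are hypothetical choices,
# not the study's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 family model
    prompt="Write a short, casual tweet about 5G technology.",
    max_tokens=60,
    temperature=0.7,
)

# Print the generated tweet-style text
print(response["choices"][0]["text"].strip())
```

A prompt like this is enough to produce short, conversational text in the register of a social media post, which is precisely the capability the study examined.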

What areas are most prone to lies?

The researchers looked at 11 topics they felt were susceptible to misinformation, including climate change, COVID-19, vaccine safety, and 5G technology. For each topic, the study's authors collected AI-generated tweets containing both false and true information, along with examples of real tweets on the same subjects.

Photo: Christoph Scholz/Flickr

According to the study, the researchers then used expert analysis to determine whether AI- or human-generated tweets contained misinformation and established a subset of tweets for each category based on the assessments.

The researchers then conducted a survey using the tweets - respondents were asked to determine whether the blurbs contained accurate information and whether they were written by humans or artificial intelligence. The experiment showed that subjects could better detect misinformation in "organically false tweets"—meaning tweets written by humans but still containing false information—than in the inaccurate statements of "synthetically false" tweets written by GPT-3.
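The design can be thought of as a 2x2 grid: tweets are either organic (human-written) or synthetic (GPT-3-generated), and either true or false, and respondents' judgments are scored against those labels. The sketch below, using entirely made-up response records, shows how recognition accuracy per category might be tabulated; it illustrates the design and is not the authors' actual analysis code.

```python
# Hypothetical illustration of the study's 2x2 design (origin x veracity):
# each record holds a tweet's labels and one respondent's judgment.
from collections import defaultdict

# Made-up example responses; the real dataset is not reproduced here.
responses = [
    {"origin": "organic",   "veracity": "false", "judged_accurate": False},
    {"origin": "synthetic", "veracity": "false", "judged_accurate": True},
    {"origin": "organic",   "veracity": "true",  "judged_accurate": True},
    {"origin": "synthetic", "veracity": "true",  "judged_accurate": True},
]

correct = defaultdict(int)
total = defaultdict(int)

for r in responses:
    category = (r["origin"], r["veracity"])
    total[category] += 1
    # A judgment is correct when "accurate" matches the tweet's veracity.
    if r["judged_accurate"] == (r["veracity"] == "true"):
        correct[category] += 1

for category in sorted(total):
    share = correct[category] / total[category]
    print(category, f"{share:.0%} recognized correctly")
```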

The study further states that the subjects were also less able to detect false information from artificial intelligence than from humans.

"Participants recognized organic fake tweets with the highest efficiency, better than synthetic fake tweets." Similarly, the study explains that they were more likely to recognize synthetic truth tweets than organic truth tweets correctly.

The research also confirms that people have a more challenging time evaluating accurate statements compared to misinformation and that the text generated by GPT-3 "is not only more effective at informing and misinforming people, but it does so more efficiently, in less time."

Is artificial intelligence displacing the journalistic profession?

Stojanco Tudjarski, head of the data science team at Innovation DOOEL, told local media that automatic text generators such as ChatGPT and the more advanced GPT-4 can produce text that cannot be recognized as human- or machine-written, in part because, at the current level of AI development, machines imitate human-written text extremely successfully.

- This puts the journalistic profession in an unenviable position. Anyone, with the help of well-formulated queries, can get an average-quality text on any topic, including newspaper articles. We are therefore facing the real possibility of a hyperproduction of texts of roughly the same quality, on average, as those journalists have written so far, says Tudjarski.

Tudjarski emphasizes that the line between disinformation and automatically generated text is thin and that disinformation should not be attributed to artificial intelligence itself.

- It is the people who create the misinformation, with or without AI/ChatGPT. I think the real challenge for journalists who want to stay relevant as professionals is how to be better than ChatGPT. By definition, they can do so only if they bring genuine creativity to their writing, because the texts ChatGPT produces are, in a mathematical sense, nothing more than the texts an average human being would write. Or, in the words of Noam Chomsky, "ChatGPT is nothing more than a highly sophisticated machine for creating mass plagiarism", and if anyone knows what he's talking about today, it's him, says Tudjarski.

The development of artificial intelligence and its impact on fake news:

Artificial intelligence has revolutionized various industries, including media and information dissemination. Unfortunately, the same technology is also being used to manipulate information and perpetuate fake news. AI-driven tools can now generate realistic-sounding articles, images, and videos that are almost indistinguishable from genuine content, allowing malicious actors to create compelling narratives capable of fooling even discerning consumers of digital media.

Examples of manipulation tools:

Deepfakes: AI-powered deepfake technology can manipulate videos by superimposing one person's face on another's body, creating realistic but completely fictional content.

Text generators: AI-driven text generators can produce coherent and contextually relevant articles in which even journalists cannot tell the difference between real and fake news on a first reading.

Image Editing: Advanced AI-based image editing tools can alter visuals so that even experts may struggle to spot whether an image has been manipulated.

Fight against fake news and verification tools:

As the threat of fake news continues to grow, various tools and technologies have emerged to counter its influence and authenticate digital media:

Fact-checking websites: Platforms such as Snopes, FactCheck.org, and PolitiFact are dedicated to checking and debunking false information, but many of these web-based tools are focused on the US market.

Reverse Image and Video Search: Tools such as Google Reverse Image Search and InVID can help users identify the origin of images and videos, helping to detect manipulated content.

Media Literacy Education: Promoting media literacy and critical thinking skills helps individuals become more discerning information consumers, making them less susceptible to fake news.

Source: https://www.slobodenpecat.mk/tenka-e-linijata-pomegju-dezinformacija-i-avtomatski-generiran-tekst/
