
Flood of “garbage”: How AI is changing scientific publishing – Science & Technology

An infographic of a rat with an absurdly large penis. Another shows human legs with far too many bones. An introduction that begins: "Certainly, here is a possible introduction for your topic:"

These are some of the most glaring recent examples of artificial intelligence finding its way into academic journals, highlighting the wave of AI-generated text and images sweeping the academic publishing industry.

Several experts who study such problems told AFP that the rise of AI has exacerbated existing issues in the multi-billion-dollar sector.

All emphasized that AI programs such as ChatGPT can be a helpful tool for writing or translating papers, provided their output is carefully checked and their use is disclosed.

That was not the case, however, in several recent incidents that slipped through peer review.

Earlier this year, a graphic of a rat with incredibly large genitals, apparently created using artificial intelligence, was widely shared on social media.

The study was published in a journal by the scientific giant Frontiers, which later retracted it.

Another study was retracted last month because of an AI graphic that showed legs with strange, multi-jointed bones that resembled hands.

While these examples are images, ChatGPT, a chatbot launched in November 2022, is believed to have most changed the way researchers around the world present their findings.

A study published by Elsevier made headlines in March for its introduction, which was clearly a pasted ChatGPT reply beginning: "Certainly, here is a possible introduction for your topic:"

Such embarrassing examples are rare and are unlikely to survive the peer review process of the most prestigious journals, several experts told AFP.

Paper mills

It is not always easy to detect the use of AI, but one clue is that ChatGPT tends to favor certain words.

Andrew Gray, a librarian at University College London, has combed through millions of documents looking for excessive use of words like “meticulous,” “complicated,” or “praiseworthy.”

He concluded that in 2023 at least 60,000 papers involved the use of AI, more than one percent of the annual total.
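As a rough illustration of this kind of screening, one can flag texts that overuse suspected AI marker words. The word list and scoring below are illustrative assumptions based on the words quoted in this article, not Gray's actual methodology.

```python
# Minimal sketch of marker-word screening for suspected AI-generated text.
# The word list and threshold are illustrative, not a published methodology.
import re
from collections import Counter

AI_MARKER_WORDS = {"meticulous", "complicated", "praiseworthy"}

def marker_word_rate(text: str) -> float:
    """Return the fraction of words in `text` that are suspected AI marker words."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[w] for w in AI_MARKER_WORDS)
    return hits / len(words)

# Hypothetical abstract, written here only to exercise the function.
abstract = ("This meticulous study offers a praiseworthy and complicated "
            "analysis of a praiseworthy dataset.")
rate = marker_word_rate(abstract)
```

A real analysis would compare such rates against a pre-2022 baseline corpus rather than using a fixed threshold, since these words also occur naturally in human writing.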

"We will see very significant increases in the numbers for 2024," Gray told AFP.

According to the US organization Retraction Watch, more than 13,000 scientific papers were retracted last year – more than ever before.

Artificial intelligence has enabled bad actors in scientific publishing and academia to industrialise the flood of "junk" papers, Ivan Oransky, co-founder of Retraction Watch, told AFP.

These bad actors include so-called paper mills.

This illustration image shows the artificial intelligence (AI) smartphone app ChatGPT surrounded by other AI apps in Vaasa, Finland, June 6, 2023. (AFP/Olivier Morin)

These "fraudsters" sell authorship to researchers and churn out vast amounts of low-quality, plagiarised or fake papers, said Elisabeth Bik, a Dutch researcher who specialises in detecting scientific image manipulation.

An estimated two percent of all studies are published by paper mills, but the rate is "exploding" as AI opens the floodgates, Bik told AFP.

This problem became apparent when scientific publishing giant Wiley bought the struggling publisher Hindawi in 2021.

Since then, the US company has retracted more than 11,300 articles connected to Hindawi special issues, a Wiley spokesperson told AFP.

To detect AI abuse, Wiley has now introduced a “paper mill detection service” that is itself based on AI.

“Vicious circle”

Oransky stressed that the problem goes beyond paper mills to a broader academic culture that pushes researchers to "publish or perish."

“Publishers have generated 30 to 40 percent margins and billions of dollars in profit by creating these systems that require volume,” he said.

The insatiable demand for ever more papers piles pressure on academics, who are assessed by their output, creating a "vicious circle," he said.

Many have turned to ChatGPT to save time, which is not necessarily a bad thing.

Since almost all papers are published in English, AI translation tools can be invaluable to researchers for whom English is not their first language, including herself, Bik said.

However, there are also fears that AI errors, inventions and unintentional plagiarism could increasingly undermine society’s trust in science.

Another example of AI misuse came to light last week, when a researcher discovered that a version of one of his own studies, apparently rewritten with ChatGPT, had been published in an academic journal.

Samuel Payne, a professor of bioinformatics at Brigham Young University in the US, told AFP he was asked to peer review the study in March.

When he realized it was "100 percent plagiarism" of his own study, with the text apparently reworded by an AI program, he rejected the paper.

Payne said he was “shocked” to find that the plagiarized work had simply been published elsewhere, in a new Wiley journal called “Proteomics.”

The paper has not been retracted.
