Flood of “garbage”: How AI is changing scientific publishing

Science detective Elisabeth Bik fears that a flood of AI-generated images and texts in scientific papers could weaken trust in science. Photo: Amy Osborne / AFP/File

An infographic of a rat with an absurdly large penis. Another showing human legs with far too many bones. An introduction that begins: “Certainly, here is a possible introduction for your topic.”

These are some of the most glaring recent examples of artificial intelligence finding its way into academic journals, highlighting the wave of AI-generated text and images sweeping the academic publishing industry.

Several experts who track such problems told AFP that the rise of AI has exacerbated existing issues in the multi-billion-dollar sector.

All of them emphasized that AI programs such as ChatGPT can be helpful tools for writing or translating papers – provided their output is thoroughly checked and their use disclosed.

However, that did not happen in several recent cases that somehow slipped through peer review.

Earlier this year, a graphic of a rat with incredibly large genitals, apparently created using artificial intelligence, was widely shared on social media.

The study was published in a journal by the scientific giant Frontiers, which later retracted it.

Another study was retracted last month because of an AI graphic that showed legs with strange, multi-jointed bones that resembled hands.

While these examples are visual, it is the chatbot ChatGPT, launched in November 2022, that is thought to have done the most to change how researchers around the world present their findings.

A study published by Elsevier made headlines in March for its introduction, which was clearly a ChatGPT response that read: “Certainly, here is a possible introduction for your topic.”

Such embarrassing examples are rare and are unlikely to survive the peer review process of the most prestigious journals, several experts told AFP.

Tip of the iceberg?

It is not always easy to detect the use of AI, but one clue is that ChatGPT tends to favor certain words.

Andrew Gray, a librarian at University College London, has combed through millions of papers looking for excessive use of words like “meticulous,” “intricate,” or “commendable.”

He concluded that at least 60,000 papers involved the use of AI in 2023 – over one percent of the annual total.

“For 2024, we are going to see very significantly increased numbers,” Gray told AFP.
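
Gray’s approach can be illustrated with a simple word-frequency screen over a corpus of abstracts. The sketch below is a minimal, hypothetical version of such an analysis: the marker-word list and the flagging threshold are illustrative assumptions, not Gray’s actual methodology.

```python
import re
from collections import Counter

# Hypothetical marker words; studies like Gray's derive such lists
# statistically from large corpora rather than fixing them by hand.
MARKER_WORDS = {"meticulous", "meticulously", "intricate", "commendable"}

def marker_rate(text: str) -> float:
    """Return the share of word tokens that are suspected AI 'marker' words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKER_WORDS) / len(tokens)

def flag_abstracts(abstracts: list[str], threshold: float = 0.001) -> list[int]:
    """Return the indices of abstracts whose marker-word rate exceeds
    an (illustrative) baseline threshold of 0.1% of tokens."""
    return [i for i, text in enumerate(abstracts) if marker_rate(text) > threshold]

if __name__ == "__main__":
    sample = [
        "We meticulously analyse the intricate structure of the dataset.",
        "We measure the thermal conductivity of copper at 300 K.",
    ]
    print(flag_abstracts(sample))  # -> [0]
```

A real analysis would compare each year’s word frequencies against a pre-ChatGPT baseline rather than relying on a fixed cutoff.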

According to the US organization Retraction Watch, more than 13,000 scientific papers were retracted last year – more than ever before.

Artificial intelligence has allowed bad actors in scientific publishing and academia to industrialize the flood of “garbage” papers, Ivan Oransky, co-founder of Retraction Watch, told AFP.

Such bad actors include what are known as paper mills.

These “fraudsters” sell authorship to researchers and churn out vast amounts of low-quality, plagiarized or fake papers, said Elisabeth Bik, a Dutch researcher who specializes in detecting scientific image manipulation.

An estimated two percent of all published studies come from paper mills, but the rate is “exploding” as AI opens the floodgates, Bik told AFP.

This problem became apparent when scientific publishing giant Wiley bought the struggling publisher Hindawi in 2021.

Since then, the US company has retracted more than 11,300 papers linked to Hindawi special issues, a Wiley spokesperson told AFP.

Wiley has since launched a “paper mill detection” service to spot AI misuse – a service that is itself powered by AI.

“Vicious circle”

Oransky stressed that the problem lies not only with the paper mills, but in a broader academic culture that pressures researchers to “publish or perish.”

“Publishers have generated 30 to 40 percent margins and billions of dollars in profit by creating these systems that require volume,” he said.

The insatiable demand for ever more papers piles pressure on academics, who are evaluated on their output, and creates a “vicious circle”, he said.

Many have turned to ChatGPT to save time – which is not necessarily a bad thing.

Since almost all articles are published in English, Bik says AI translation tools can be invaluable for researchers – including herself – whose native language is not English.

However, there are also fears that errors, fabrications and unwitting plagiarism by AI could increasingly erode society’s trust in science.

Another example of AI misuse came to light last week, when a researcher discovered what appeared to be a ChatGPT-rewritten version of one of his own studies published in an academic journal.

Samuel Payne, a professor of bioinformatics at Brigham Young University in the US, told AFP he was asked to peer review the study in March.

When he realized that it was “100 percent plagiarism” of his own study – with the text apparently rephrased by an AI program – he rejected the paper.

Payne said he was “shocked” to find that the plagiarized work had simply been published elsewhere, in a new Wiley journal called “Proteomics.”

The paper has not been retracted.

Source: AFP
