How an AI-Generated Image of a Rat Penis Appeared in a Scientific Journal

Joanna Andreasson/DALL-E4

An illustration featuring a rat with a cross section of a giant penis set off a firestorm of criticism over the use of generative artificial intelligence based on large language models (LLMs) in scientific publishing. The bizarre illustration was decorated with nonsense labels, including one fortuitously designated as "dck." The article on rat testes stem cells had undergone peer review and editorial vetting before being published in February by Frontiers in Cell and Developmental Biology.

"Mayday," blared longtime AI researcher and critic Gary Marcus on X (formerly known as Twitter). Vexed by AI's potential to abet the "exponential enshittification of science," he added, "the sudden pollution of science with LLM-generated content, known to yield plausible-sounding but sometimes difficult to detect errors ('hallucinations') is serious, and its impact will be lasting."

A February 2024 article in Nature asked, "Is ChatGPT making scientists hyper-productive?" Well, maybe, but Imperial College London computer scientist and academic integrity expert Thomas Lancaster cautioned that some researchers laboring under "publish or perish" pressures will surreptitiously use AI tools to churn out low-value research.

A 2023 Nature survey of 1,600 scientists found that nearly 30 percent had used generative AI tools to write manuscripts. A majority cited advantages of AI tools that included faster ways to process data and do computations, and in general saving scientists time and money. More than 30 percent thought AI will help generate new hypotheses and make new discoveries. On the other hand, a majority worried that AI tools will lead to greater reliance on pattern recognition without causal understanding, entrench bias in data, make fraud easier, and lead to irreproducible research.

A September 2023 editorial in Nature warned, "The coming deluge of AI-powered information must not be allowed to fuel the flood of untrustworthy science." The editorial added, "If we lose trust in the primary scientific literature, we will have lost the basis of humanity's corpus of common shared knowledge."

Nevertheless, I suspect AI-generated articles are proliferating. Some can be easily identified by their sloppy and flagrantly unacknowledged use of LLMs. A recent article on liver surgery contained the telltale phrase: "I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model." Another, on lithium battery technology, opens with the standard helpful AI locution: "Certainly, here is a possible introduction for your topic." And yet another, on European mixed-fuel policies, includes "as of my knowledge cutoff in 2021." More canny users will scrub such AI traces before submitting their manuscripts.

Then there are the "tortured phrases" that strongly suggest a paper has been substantially written using LLMs. A recent conference paper on statistical methods for detecting hate speech on social media produced several, including "Head Component Analysis" rather than "Principal Component Analysis," "gullible Bayes" instead of "naive Bayes," and "irregular backwoods" in place of "random forest."

Researchers and scientific publishers surely recognize they must accommodate the generative AI tools that are rapidly being integrated into scientific research and academic writing. A recent article in The BMJ reported that 87 of the top 100 scientific journals now provide guidelines to authors for the use of generative AI. For example, Nature and Science require that authors explicitly acknowledge and explain the use of generative AI in their research and articles. Both forbid peer reviewers from using AI to evaluate manuscripts. In addition, writers cannot cite AI as an author, and both journals generally do not permit images generated by AI, so no rat penis illustrations.

Meanwhile, owing to concerns raised about its AI-generated illustrations, the rat penis article has been retracted on the grounds that the "article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Developmental Biology."

https://reason.com/2024/05/04/the-case-of-the-ai-generated-giant-rat-penis/
