AI: Risky business for science

May 10 update: The journal Science published an article on Tuesday about “alarmingly common” fake scientific papers.

The risk to the scientific community’s credibility is significant if tools like ChatGPT, Bard and others aren’t used with the utmost care. Writer Naomi Klein explains much of what could go wrong in this May 8 article:

AI machines aren’t ‘hallucinating’. But their makers are

The first of four “hallucinations” Klein digs into is the notion that AI technology “will somehow solve the climate crisis.”

Some leading journals have already established policies on the submission and publication of scientific papers developed with AI assistance, and it was good to see leaders at the University Corporation for Atmospheric Research* recently provide preliminary guidance for staff on the use of large language models.

Image by Abel Escobar from Pixabay

But how are other institutions addressing the risks inherent in the use of AI for scientific writing and research?

I’ve asked a few institutions for that information and look forward to hearing from them. So far, such guidance has not been easy to find. If you know of examples, I’d love to hear about them and arrange some interviews for an article. Leave a comment here or email me.

B.J.


* I worked as a writer/editor for the National Center for Atmospheric Research, which UCAR manages, from August 2011 through April 2023 and served as a writing mentor for a number of interns.

#climate #science #scientific #writing #editing