Climate scientist on AI: “We need critical thinking more than ever”

Little guidance on AI use in scientific writing

As scientists become aware of the risks associated with using artificial intelligence (AI) tools to create reports, papers and other communications, they’re finding little guidance from leading research institutions.

The risks include damage to individual reputations if AI isn’t used carefully and ethically, a loss of trust in fields like climate science, and even harm to humans’ health and safety. The New York Times and other media have reported that AI is already used to spread false and misleading information, and it has led climate change deniers in Europe to threaten meteorologists.

Climate scientist Katharine Hayhoe, a Texas Tech professor and director of the university’s Climate Science Center, replied to a request for comment on the issue by noting that “AI content is accurate much of the time, but the times it is not is when it becomes dangerous. We need to be teaching and applying our critical thinking skills today more than ever.”

AI tools, Hayhoe said, “can manufacture information that sounds correct and can be difficult to disentangle from the accurate information it provides.”

She added, “For example, if I ask ChatGPT for ‘Katharine Hayhoe’s top 10 quotes’ it will include some I have said and some … I haven’t. I have heard from colleagues that it also creates references that do not exist. Imagine what would happen if we took those at face value.”

Guidance for scientists 

Even if AI is used with good intentions, it presents new challenges for the scientific community beyond the need to counter misinformation when it’s detected. Without official guidance from leadership on the ethical use of AI in writing, scientists have only themselves and evolving journal submission policies to go by.

The National Science Foundation (NSF) has provided no guidance to the scientific community on risks associated with the use of AI in writing research papers. To date, neither has the American Meteorological Society (AMS), although both are aware of the issue. On May 5, in fact, NSF announced a $140 million investment to establish seven artificial intelligence research institutes.

An NSF spokesperson cited the speed of recent changes in the use of large language models when asked about providing guidance on AI in scientific writing: “While applications that use large language models build upon decades of research into natural language processing, the recent transition of research platforms into product platforms has been rapid. After NSF evaluates this transition and its impacts, using a deliberate and intentional process, we will develop appropriate guidance to guide informed actions for our research community.”

Gwendolyn Whittaker, AMS director of publications, said in an email that AI is on the agenda for this week’s meeting of the AMS Publications Commission, which oversees peer-reviewed AMS publications: “The topic of scientific publishing and large language models is on the agenda so the commission can consider the potential impacts of LLMs on scientific writing, peer review, and publishing, and can consider whether AMS publications guidelines should be updated in any way.”

The University Corporation for Atmospheric Research (UCAR), a consortium of North American colleges and universities that manages the National Center for Atmospheric Research* for NSF, issued some staff guidance in April but has yet to make it public.

Few universities are known to have provided any guidance of their own. After asking his colleagues at a major research center, one source reported that “none had yet even considered the issue of large language models in writing papers.”

Journal submission guidelines

When scientists do consider the issue, what little guidance they find may come from the scientific journals in which they hope to publish their research papers. Science, for example, forbids the use of AI-generated content “without explicit permission from the editors,” and it doesn’t allow identifying AI programs as authors.

Nature addresses the issue in a single paragraph in its paper submission guidelines: “Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.”

The American Geophysical Union takes a similar approach in its instructions for authors preparing to submit papers, requiring transparency about AI use while emphasizing accountability: “Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus responsible for any breach of publication ethics.”

Establishing a policy, of course, is one thing; enforcing it is another. With no reliable way to detect AI-assisted writing, the threat of a misconduct finding if violations are discovered might have to suffice.

* I worked as a writer/editor for the National Center for Atmospheric Research (NCAR) from August 2011 through April 2023. I also served as a writing mentor to a number of student interns in UCAR’s Significant Opportunities in Atmospheric Research and Science program.
