Feb 7, 2023

What ChatGPT and generative AI mean for science

Posted in categories: law, robotics/AI, science

Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”

At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of Nature) have said that scientists should disclose the use of LLMs in research papers (see also Nature 613, 612; 2023); and teachers have said they expect similar behaviour from their students. The journal Science has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper [5].

One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, and a central idea is to use LLMs themselves to detect AI-generated text, as in the sketch below.
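The article does not name a specific detection method, but one common heuristic is to score a passage with an open language model and treat unusually low perplexity (i.e. highly predictable text) as a sign of machine generation. The minimal sketch below illustrates that idea; the choice of gpt2 and the threshold value are illustrative assumptions, not details from the piece.

```python
# A minimal sketch of perplexity-based detection: machine-generated text
# tends to be more "predictable" under a language model than human prose.
# Model choice and threshold below are illustrative assumptions only.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_NAME = "gpt2"  # small enough to run locally; any causal LM would do
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    """Crude flag: perplexity below `threshold` suggests machine-like text.
    The threshold is a hypothetical value chosen for illustration."""
    return perplexity(text) < threshold

if __name__ == "__main__":
    sample = ("The rapid development of large language models has "
              "transformed how scientific text is drafted and reviewed.")
    print(f"perplexity: {perplexity(sample):.1f}, "
          f"flagged: {looks_machine_generated(sample)}")
```

In practice, detectors of this kind are easy to evade with light paraphrasing, which is partly why the debate over disclosure policies described above continues.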
