Fake scientific abstracts and research papers generated by OpenAI's highly advanced chatbot ChatGPT fooled scientists into thinking they were real reports nearly one-third of the time, according to a new study, as the eerily human-like program raises eyebrows over the future of artificial intelligence.
Researchers at Northwestern University and the University of Chicago instructed ChatGPT to generate fake research abstracts based on 10 real ones published in medical journals, and fed the fakes through two detection programs that attempted to distinguish them from real reports.
ChatGPT produced entirely original scientific abstracts built on fabricated data, and stumped human reviewers nearly one-third of the time.