
How to Avoid Risks in Using Large Language Models in Science

To protect science and education from the spread of false and biased information, clear expectations should be set around what LLMs can responsibly contribute

Large Language Models (LLMs) may pose a direct threat to science because of so-called 'hallucinations' (untruthful responses), and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute. The paper, by Professors Brent Mittelstadt, Chris Russell and Sandra Wachter, has been published in Nature Human Behaviour. It explains: 'LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact.'
