Large Language Models (LLMs) may pose a direct threat to science because of so-called 'hallucinations' (untruthful responses) and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute. The paper, by Professors Brent Mittelstadt, Chris Russell and Sandra Wachter, has been published in Nature Human Behaviour. It explains: 'LLMs are designed to produce helpful and convincing responses without any overr...