By Lucas Nolan (Breitbart)
A Michigan college student received a deeply disturbing message from Google’s Gemini AI chatbot, prompting questions about the safety and accountability of AI systems.
CBS News reports that the student, Vidhay Reddy, was seeking homework assistance when Gemini abruptly generated a chilling message directed specifically at him.
Google Gemini AI/LLM went rogue and suggested the user to go and die. The conversation looks legitimate, no priming. Any idea what could’ve made a LLM model to output such a response? Who knows, maybe this is what LLMs really intend?🤣 https://t.co/Yv0nkTt63Z pic.twitter.com/uw2vgD8nDP
— Lukasz Olejnik (@lukOlejnik) November 13, 2024