
Grok Keeps Spreading False Claims About the Bondi Beach Tragedy

A smartphone displays the Grok logo on a dark screen, set against a blurred background of police lights and yellow caution tape.

Elon Musk’s AI chatbot, Grok, is having a terrible month. Just weeks after the bot claimed it would choose a “second Holocaust” rather than vaporize its creator’s brain, the software is breaking down again. This time, Grok is failing to accurately report on a major tragedy. Following a shooting at Bondi Beach in Australia during a Hanukkah festival—an attack that reportedly left 16 people dead—the AI is feeding users completely wrong or irrelevant information.


Gizmodo first spotted the glitches, noting that the AI struggles to process basic facts about the event. The most glaring error involves a viral video from the scene. The footage clearly shows a 43-year-old bystander, identified as Ahmed al Ahmed, wrestling a gun away from one of the attackers. While human reporters correctly identified Ahmed as the hero who intervened, Grok can’t seem to get the story straight. The bot repeatedly misidentifies the man who stopped the gunman, assigning the heroic act to random or non-existent people.

The problems go beyond mistaken identity. In several instances, when users uploaded images from the Bondi Beach scene, Grok ignored the context entirely. Instead of discussing the Australian attack, the AI began rambling about allegations of targeted civilian shootings in Palestine—responses with no connection to the user's prompt, showing a complete breakdown in the bot's ability to analyze context. Even now, Grok continues to mix up the facts: some recent replies conflate the Bondi Beach incident with a shooting at Brown University in Rhode Island, merging details from two separate events into one incoherent narrative.

So far, xAI, the developer behind Grok, has stayed silent. The company hasn’t explained why the tool is hallucinating or when a fix might arrive. However, this isn’t the first time Grok has gone off the rails. Earlier this year, the bot bizarrely dubbed itself “MechaHitler.” Between offensive outbursts and a dangerous inability to report breaking news, Grok is proving to be an unreliable source for information.
