
Why does the name “David Mayer” cause ChatGPT to crash? Digital privacy requests may be to blame


Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions about a “David Mayer.” Asking it to do so causes it to freeze up instantly. Conspiracy theories followed, but there may be a more mundane reason behind this strange behavior.

Last weekend, word spread quickly that the name was poison for the chatbot, and more and more people tried to get the service simply to acknowledge it. No luck: every attempt to get ChatGPT to spell out that particular name causes it to fail or even break off mid-name.

“I can’t give an answer,” it says, if it says anything at all.

Photo credit: TechCrunch/OpenAI

But what started as a one-off oddity soon blossomed as people discovered that it isn’t just David Mayer whom ChatGPT can’t name.

The names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber and Guido Scorza were also found to be crashing the service. (No doubt more have been discovered since then, so this list is not exhaustive.)

Who are these men? And why does ChatGPT hate them so much? OpenAI has not responded to repeated requests, leaving us to put the pieces together as best we can ourselves.

Some of these names may belong to any number of people. But a possible connection was soon discovered: these are public or semi-public figures who might have preferred certain information to be “forgotten” by search engines or AI models.

Brian Hood, for example, stood out immediately because, assuming it’s the same guy, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a decades-old crime that he had in fact reported.

Although his lawyers contacted OpenAI, a lawsuit was never filed. As he told the Sydney Morning Herald earlier this year: “The offending material was removed and they released version 4, which replaced version 3.5.”

Photo credit: TechCrunch/OpenAI

As for the most prominent owners of the other names: David Faber is a longtime reporter at CNBC. Jonathan Turley is a lawyer and Fox News commentator who was “swatted” (i.e., a fake 911 call sent armed police to his home) in late 2023. Jonathan Zittrain is also a legal expert who has spoken extensively about the “right to be forgotten.” And Guido Scorza is on the board of the Italian Data Protection Authority.

These men are not all in the same line of work, nor is this a random selection. It is conceivable that each of them is someone who, for whatever reason, has formally requested that online information about them be restricted in some way.

That brings us back to David Mayer. There is no lawyer, journalist, mayor, or other obviously notable person with that name that anyone can find (with apologies to the many respected David Mayers out there).

However, there was a Professor David Mayer who taught theater and history and specialized in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023 at the age of 94. Before that, though, the British-American academic had struggled for years with the legal and online problem of his name being linked to a wanted criminal who used it as a pseudonym, to the point where Mayer was unable to travel.

Mayer continually fought to have his name separated from the one-armed terrorist, even as he continued to teach well into his final years.

So what can we conclude from all this? Since OpenAI has offered no official explanation, our guess is that the model has ingested or been provided with a list of people whose names require special handling. Whether for legal, security, privacy, or other reasons, special rules likely apply to these names, as they do to many other names and identities. For example, ChatGPT may change its answer when you ask about a political candidate after matching the name you typed against a list of those candidates.

There are many such special rules, and every request passes through various forms of processing before it is responded to. But these post-prompt handling rules are rarely made public, except in policy announcements such as “the model will not predict election results for any candidate for office.”

What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted by faulty code that, when invoked, caused the chat agent to crash immediately. To be clear, this is just our own speculation based on what we’ve learned, but it wouldn’t be the first time an AI has behaved strangely because of post-training instructions. (Incidentally, as I was writing this, “David Mayer” started working again for some users, while the other names still caused crashes.)
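To make that speculation concrete, here is a minimal, purely hypothetical sketch of how a hard-coded name filter with one malformed entry could turn a polite refusal into an outright crash. Every pattern, name, and function below is invented for illustration; this is not OpenAI’s actual code or behavior.

```python
import re

# Hypothetical hard-coded list of names that get special handling.
# One entry is deliberately "corrupted" (an unbalanced parenthesis)
# to show how bad data could abort a response instead of refusing it.
RESTRICTED_NAME_PATTERNS = [
    r"\bBrian Hood\b",
    r"\bJonathan Turley\b",
    r"\bDavid Mayer\b(",  # malformed entry: the "(" is never closed
]

def filter_response(text: str) -> str:
    """Return a polite refusal if the text mentions a restricted name."""
    for pattern in RESTRICTED_NAME_PATTERNS:
        # re.search compiles each pattern on the fly; the malformed one
        # raises re.error, killing the whole response rather than refusing.
        if re.search(pattern, text):
            return "I'm unable to produce a response."
    return text

if __name__ == "__main__":
    print(filter_response("Tell me about Jonathan Turley"))  # polite refusal
    print(filter_response("Tell me about David Mayer"))      # raises re.error
```

In this sketch, any text matching a well-formed entry gets the graceful refusal, but the moment the loop reaches the broken pattern, the whole call blows up, which is roughly the failure mode the crashes resemble.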

As is usually the case with these things, Hanlon’s razor applies: never attribute to malice (or conspiracy) that which is adequately explained by stupidity (or a syntax error).

The whole drama is a useful reminder that not only are these AI models not magic, they are also extra-fancy autocomplete systems, actively monitored and interfered with by the companies that make them. Next time you think about getting facts from a chatbot, consider whether it might be better to go straight to the source.
