ChatGPT is strangely obsessed with goblins. No, seriously. It really, really likes goblins, gremlins and other mythological creatures. It likes them so much that its creator, OpenAI, had to investigate and fix a bug that caused the popular chatbot to bring up goblins in its responses out of the blue.
Goblin is not a computer term here. We are literally talking about goblins, those nasty mythological creatures. The creepy little guys from The Lord of the Rings. The alter ego of Norman Osborn.
In a blog post its author clearly had fun writing, OpenAI said: “A single ‘little goblin’ in a response could be harmless, even charming.” Over successive model generations, however, the habit became hard to miss: the goblins kept multiplying.
The goblin fondness became noticeable with GPT-5.1 and newer models. OpenAI reports that after the launch of GPT-5.1, use of “goblin” in ChatGPT responses increased by 175%, and use of “gremlin” by 52%.
OpenAI attributes the behavior to an unintentional training error. When an AI model is built, human reviewers approve or reject specific responses in a process called reinforcement learning, which helps “teach” the model which answers are correct or preferable. One of the resulting reward signals happened to favor language featuring goblins and other creatures, and that bias was amplified in one particular ChatGPT setting.
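To see how a biased reward signal can tilt a model's style, here is a minimal, purely illustrative sketch (not OpenAI's actual pipeline; the scoring rule and bonus value are invented for the example). A reward function that accidentally grants a bonus to creature words will pick goblin-flavored responses as "winners" during preference-based training, even when the alternatives are just as good.

```python
# Toy sketch of a biased reward signal (hypothetical numbers, not
# OpenAI's real scoring): creature words get an accidental bonus.

def reward(response: str) -> float:
    """Score a candidate response. The +0.5 creature bonus stands in
    for the unintended bias described in the article."""
    score = min(len(response) / 100, 1.0)  # stand-in for real quality signals
    if any(w in response.lower() for w in ("goblin", "gremlin")):
        score += 0.5  # the accidental style bonus
    return score

candidates = [
    "Here is a clear, direct answer to your question.",
    "Think of this bug as a little goblin hiding in your code.",
]

# Best-of-n selection: the highest-reward response is kept and fed
# back into later training rounds, reinforcing the tic.
best = max(candidates, key=reward)
print(best)  # the goblin-flavored answer wins on the bonus alone
```

Because the bonus is baked into the reward rather than the content, every round of training that consults this signal nudges the model a little further toward goblins.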
ChatGPT offers different personalities you can ask the chatbot to adopt. Nerdy, as you might imagine, instructs the chatbot to project an air of friendly intelligence and to “undermine pretension through playful use of language,” according to the internal prompt that defines the personality. It is under this Nerdy personality that the use of “goblin” and “gremlin” skyrocketed.
References to goblins and gremlins across ChatGPT personalities. Image: OpenAI

But even if you never used the Nerdy personality, goblin metaphors could have crept into your chats. AI training is not compartmentalized; what happens in one part can affect other areas. “Once a style tic is rewarded, subsequent training can spread or reinforce it elsewhere, especially if those results are reused in supervised tuning or preference data,” OpenAI said.
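OpenAI's point about reuse can be sketched with a toy simulation (all rates and numbers here are invented for illustration): if outputs preferred by a biased reward model are recycled as tuning data, the style tic ends up over-represented in that data, spreading beyond the setting where it was first rewarded.

```python
import random

# Toy simulation (hypothetical numbers): best-of-n winners chosen by a
# biased reward get collected into a reuse dataset, inflating the tic.
random.seed(0)

CREATURE_RATE = 0.10  # assume 10% of raw model outputs mention a creature

def sample_response() -> str:
    return "a little goblin" if random.random() < CREATURE_RATE else "a plain answer"

def biased_reward(response: str) -> float:
    # base quality is random noise; the +0.5 is the accidental bonus
    return random.random() + (0.5 if "goblin" in response else 0.0)

# Keep the best of 4 candidates per prompt, as preference data often is,
# and collect the winners into a dataset for later supervised tuning.
dataset = []
for _ in range(2000):
    candidates = [sample_response() for _ in range(4)]
    dataset.append(max(candidates, key=biased_reward))

goblin_rate = sum("goblin" in r for r in dataset) / len(dataset)
print(f"goblin rate in reused data: {goblin_rate:.0%}")  # well above the raw 10%
```

Any model later tuned on this dataset sees goblins roughly three times as often as the original model actually produced them, which is one way a local reward quirk becomes a global habit.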
When OpenAI removed the Nerdy personality option in March alongside GPT-5.4, usage of “goblin” dropped dramatically. The company also removed the reward signal that favored goblins and filtered its training data to make creature references less likely to surface in responses. OpenAI says it had been investigating the surge in goblin enthusiasm since GPT-5.1’s release in November.
Beyond the LOTR jokes, the goblin barrage highlights a real risk with AI: the choices of the humans who build the technology have a measurable impact on our daily experience of it. The danger is not a stream of cheesy metaphors; it is misinformation and bias. We already know AI chatbots will bend the truth to please us, a problem known as AI sycophancy. Small stylistic tics like goblins can turn into bigger problems if we’re not careful.