AI and the village idiot game: Mastering instructions

If you have been in scouting, you have most likely come across a rather funny (and sometimes frustrating) game called “village idiot”.

Village Idiot is a verbal party game about communication precision: players give hyper-detailed instructions to keep others from misinterpreting them. (It has nothing to do with the card game of the same name.) The point is how everyday directions can lead to hilarious failures due to ambiguity.

One party acts as the “village idiot(s)” and must follow instructions literally, while the other party provides directions for simple tasks like “draw a house” or “make a sandwich.” The goal is to craft commands so exact that the idiot(s) can’t mess them up (e.g., “draw a rectangle with a triangle on top, using only black lines”). Misinterpretation scores points for the idiot, and roles rotate each round.

Side note: As architects, we are quite familiar with that task. After all, we write detailed specifications and create highly detailed drawings to make sure things turn out the way we want.

 

Fun fact: Turns out that AI operates along the same lines.

 

In 2026, as AI becomes part of everyday life, made-up answers are causing real problems. A Vectara study found that even the best models still make things up at least 0.7% of the time, and some go over 25%. It might seem like a small problem, but AI hallucinations can spread false information and even cause real harm in areas like healthcare and finance. (https://www.allaboutai.com/resources/ai-statistics/ai-hallucinations/)

What is an AI hallucination? AI hallucinations are instances where AI models, particularly generative ones like large language models, produce outputs that are plausible-sounding but factually incorrect, misleading, or nonsensical. The biggest issue is how confidently the model presents the information, which leads users to trust the output, especially if it is not questioned. This allows unchecked misinformation to enter real-world reports and presentations.

AI hallucinations stem from how probabilistic language models work. The model presents responses with high confidence, and users’ trust in the output lets some hallucinations slip through.

Because LLMs are built to predict the next word, you can think of an AI model as a super-fast auto-complete tool. When there is a gap in its knowledge, it fills that gap with the most plausible-sounding information, which can easily be wrong.
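To make the auto-complete analogy concrete, here is a deliberately simplified sketch (not a real LLM): a model that only knows continuation probabilities will always pick something that looks plausible, whether or not it is true. The prompt, candidate words, and probabilities below are all invented for the example.

```python
# Toy illustration of next-word prediction as "super-fast auto-complete".
# The model only knows probabilities, not facts, so it always returns
# *something* plausible-looking. Prompt and probabilities are made up.

next_word_probs = {
    "The capital of Atlantis is": {
        "Poseidonis": 0.41,  # plausible-sounding, but Atlantis is fictional
        "unknown": 0.33,
        "Athens": 0.26,
    }
}

def complete(prompt: str) -> str:
    """Pick the most likely continuation, regardless of whether it is true."""
    candidates = next_word_probs[prompt]
    return max(candidates, key=candidates.get)

print(complete("The capital of Atlantis is"))  # -> Poseidonis
```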

AI hallucinates in different ways, but there are a few warning signs to look out for:

  • Unsupported claims
  • Logical flaws
  • Outdated info

The best way for a business to deal with AI hallucinations is to use verification techniques. Because AI models are conversational, you can ask follow-up questions about the source, clarify assumptions, or expose gaps in reasoning.

For example, I use one of the following prompt variants to re-check outputs:

  • “Can you provide the primary source for…?”
  • “What evidence supports this statement?”

Even simple questions like “Are you sure about that?” can already help, strangely enough.
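If you script your interactions with a model instead of typing into a chat window, the same verification habit can be baked into the conversation flow. The sketch below assumes a hypothetical `ask()` helper standing in for whatever chat model or API you use; the structure of the follow-up questions is the point, not any specific vendor.

```python
# Minimal sketch of the verification follow-up pattern described above.
# `ask()` is a hypothetical stand-in for your chat model of choice.

VERIFICATION_PROMPTS = [
    "Can you provide the primary source for that?",
    "What evidence supports this statement?",
    "Are you sure about that?",
]

def ask(history: list[dict]) -> str:
    """Hypothetical helper: send the conversation to a chat model, return its reply.
    Replace this stub with a call to whatever model you actually use."""
    return "(model reply goes here)"

def answer_with_verification(question: str) -> tuple[str, list[str]]:
    """Ask the question, then push back with the verification prompts."""
    history = [{"role": "user", "content": question}]
    first_answer = ask(history)
    history.append({"role": "assistant", "content": first_answer})

    checks = []
    for follow_up in VERIFICATION_PROMPTS:
        history.append({"role": "user", "content": follow_up})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        checks.append(reply)

    # A human still has to read `checks` and confirm the cited sources exist.
    return first_answer, checks

answer, checks = answer_with_verification("Where do kiwis come from?")
```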

Prompts are the medium through which you connect with an AI model. LLMs are language based; the hint is in the name, or rather the abbreviation. Prompts are also your first line of defence against hallucinations. Yes, it’s like the village idiot game, but with a computer that lacks the nuance to understand context.

Prompts written with clarity reduce ambiguity. A clear prompt conveys exactly what you need and leaves little or no room for the AI to hallucinate.

For example, consider the following prompt:

“Where do kiwis come from?”

Now, most people in New Zealand would ask what you mean: the bird, the fruit, or the people?
People in Europe would tell you that the kiwifruit comes from New Zealand, because that is most likely all they know; they are typically not aware of the bird or the people.

Depending on the AI model you ask, you will get widely varying results.

Copilot says they are from China. Perplexity gives you more info, listing the fruit and the bird (still no people).

Depending on what your intention was, maybe try with a more precise question:

“What are the origins of the kiwi fruit, and what has it traditionally been used for? Give a brief history and supply references.”

And here we learn that the name kiwifruit has only existed since 1959: “Initially called ‘Chinese gooseberries,’ it was renamed kiwifruit in 1959 for marketing, gaining fame after WWII exports to the UK, US, and beyond.” Interestingly, the source for both comparative queries, in MS Copilot and Perplexity, was exactly the same sentence from the same source: Wikipedia ;-)

In summary: AI does hallucinate, and while newer models are getting better, your best defence is asking good questions (or giving good instructions) and asking for references, to avoid the “just good sounding” results.

 

References:

  • https://geekflare.com/guides/prevent-ai-hallucinations/
  • https://www.allaboutai.com/resources/ai-statistics/ai-hallucinations
