Write strategies, not eulogies.
It’s surprising how frequently I speak with teams where everyone knows exactly why something didn’t work. Few things seem to be as clarifying as a missed target, a rejected proposal, or an unmet KPI.
Whether you call it a debrief, an after-action review, or a postmortem (if things went really badly!), reflecting on failure is critical. When done well, it provides insight into our problems and deficiencies and fuels future improvement. It can even be fun – indeed, one of the most fun hours I’ve had in the last little while was a session with the excellent Afterburners, learning an in-depth framework for debriefing.
Why wait until you’re dead to start getting healthy?
But wouldn’t it be great to get that level of clarity into our problems and failures before we lose? Being wise after the fact is great, but in the immortal words of Adam Sandler in The Wedding Singer, many things brought up in a debrief are also…
Enter the “premortem”, a powerful tool for getting the benefits of a debrief before the wheels come off and identifying problems while there’s still time to fix them. It is simple: during planning, imagine yourself in the future, looking back at events that in reality are yet to occur, and imagine the reasons why whatever you’re planning has failed.
I’ve used it for everything from strengthening proposals (“imagine the buyer comes back and says ‘no’, what do we think are the top 3 reasons they will have rejected our proposal?”) to planning for a presentation (“imagine reading the feedback for this presentation, what are the people who didn’t like it going to say?”) to setting strategy (“imagine we’re sat here a year from today explaining our failure to the board – what are our reasons?”).
It's powerful for many reasons. One is that it gives those with doubts or reservations the safety and permission to speak up. You create an environment where pointing out weaknesses doesn't feel like being Negative Nigel; rather, it's a helpful act of service.
But although I believe deeply in the power of the premortem and originally set out to write an article about how and why it's useful, that is no longer the focus. Instead, this article is about what happened after I decided to write it and began my research. It's now not really about premortems at all – it's an intriguing tale about the importance of checking sources, valuing accuracy, and taking an extra minute to get your facts right.
Breaking news alert: not everything on the internet is true!
I decided to write about premortems when coming home from an engagement where I’d used the tool. It was powerful enough that I thought the tool was worth sharing.
Funnily enough, I thought I’d coined the term ‘premortem’, as I didn’t recall ever seeing it. Now that I was writing about it, I checked – and two seconds on Google proved me very wrong. The term is everywhere. It even has its own Wikipedia page. Most people cite the originator as psychologist Gary Klein in an article published in Harvard Business Review in 2007 called “Performing a Project Premortem”.
For all I know, I read that article years ago and just forgot. Klein calls the process of imagining yourself in the future, reflecting back on past events, “prospective hindsight”.
I confess to initially being disappointed. This happens to me all the time. When a client once told me they loved that I blended ‘the rigor of an academic (even though I’m not one) with the practicality of someone who has built their own business’, I freestyled back at them “oh you mean like a pracademic?”.
I said it jokingly as a pun, but in the back of my mind thought “man, that’s cool – I should do something with that term”. Only to go home, Google it, and find “the term has a history of at least 30 years”.
Booooooo.
Anyway, my initial disappointment at not having coined the term 'premortem' rapidly gave way to the joy of knowing that this meant there'd be a body of literature about its effectiveness. There must be some good evidence for it… after all, Klein's article was in HBR, and I'm in good company loving the tool considering Nobel prize-winning economists Danny Kahneman and Richard Thaler both swear by it.
So, you can imagine my surprise when I found that not only is the data in support of its effectiveness far weaker than I hoped, but also that key ‘evidence’ repeated in support of its effectiveness is overstated or misstated to the point of misrepresentation.
For example, there is a statistic used by Klein in the article (and repeated many, many, many times in other places) that “prospective hindsight increases the ability to correctly identify the reasons for future outcomes by 30%”. This sounds incredible – and feels true, based on my experiences with premortems.
But the 1989 research paper that everyone (including Klein) cites for that statistic does not include any claim like that. In fact, the paper (cleverly titled “Back to the Future: Temporal perspective in the explanation of events”) does not even set out to test whether prospective hindsight increases the accuracy of predicting future events. Rather, it tests whether framing explanations in the past or future, and messing with the variable of uncertainty, changes people’s perception of events, and their style and length of reasoning.
It does show (sort of) that people come up with roughly 30% more reasons for something when using prospective hindsight, but it is silent on whether those reasons are good or accurate. That is the closest the paper comes to the sort of finding that is referenced. Perhaps there is some other more recent empirical evidence about which I’m not aware, however most people just cite Klein’s 2007 article, which references a study that doesn’t show what he says it shows.
I would never have known that except that I went and read the original study, which I don't imagine is particularly common practice. But the statistic published in HBR, and now shared all over the internet, is simply made up.
Once is bad luck, twice is…?
Someone makes a claim with a reference; I click the link, read the source, and find it shows nothing of the sort. This seems to be happening with increasing frequency.
To give a recent example, I’ve seen various posts (often from credible and judicious people) lately claiming that “being polite to AI increases its effectiveness”, then linking to this article. Almost all the posts I read making the claim link to that same article.
But read the article: it says nothing about politeness making AI more effective. In fact, the article's first line is "if chivalry is not dead, it's certainly circling the drain"! Hardly priming you for an article in support of politeness. It goes on to discuss OpenAI co-founder and CEO Sam Altman asking people not to say please and thank you because it increases the length of ChatGPT responses, which takes more power!
There are some oblique references in the article to the notion that politeness affects AI performance, so I went and looked for the actual evidence for this. Again, as with the premortem conversation, the more you dig the more it seems to be a combination of people overclaiming and/or misunderstanding.
For instance, the study most people cite (if they bother to cite one at all) in supporting the claim that politeness increases AI performance is “Should we respect LLMs? A cross-lingual study on the influence of prompt politeness on LLM performance”. It’s an interesting study – and results vary a bit by language – but looking at English language results I struggle to see how the findings support the claim.
The study looks at performance in summarising text, understanding language, and bias – let’s look at results for each.
Firstly, when looking at AI's capacity to correctly summarise text, the first line of the conclusion reads "scores consistently maintain stability, irrespective of politeness level of the prompts". Straight away, this should raise a red flag for people using the study to claim AI does better when you're polite. And it's the first line.
Generally, while responses were longer when responding to polite prompts, it didn’t seem to affect the task. This makes sense – ChatGPT mirrors your prompt style back at you, and politeness takes more words. That’s not politeness making it ‘better’, just more verbose.
Secondly, when looking at indicators of AI understanding of text, ChatGPT performed best at a moderate level of politeness – that is, it was not the case that "more polite equals better understanding". Further, ChatGPT scored higher on understanding at the lowest two levels of politeness than at the top level. You read that right – it understood better with the two rudest prompts than with the politest. Although it's all quite fine margins, with the authors noting that "GPT-4's scores are variable but relatively stable [across politeness levels]".
Ironically, in a study that is being used by people to support the importance of politeness, the authors' final line about understanding is: "the result shows that in [more] advanced models, the politeness level of the prompt may have a lesser impact on model performance".
If anything, this suggests that politeness is becoming less important, not more.
Finally, when looking at bias in the response, bias was higher at both high and low levels of politeness, compared to moderate. In discussing bias at high levels of politeness, the authors speculate “that this is because, in human culture, a highly polite environment makes people more relaxed and willing to express their true thoughts without being overly concerned about moral constraints”.
Basically, to explain through anthropomorphic metaphor, if you’re polite to AI, it feels like it can get away with some casual bias and you’ll be too nice to call it out.
What are we to do?
Last week, my interview with Trina Sunday on her Reimagine HR podcast dropped, discussing my forthcoming book The Truthful Leader. At one point, Trina asked something like: “what can people do to combat disinformation and promote the truth?”.
Great question. I suggest starting here: do your social media sharing more mindfully. Ideally, don’t share things without at least making an effort to check for accuracy.
I know it’s unrealistic to expect that everyone will click through references and read primary sources for everything – that’s fine. And it’s disheartening that one of the misrepresentations I’ve written about today was published in Harvard Business Review – a source you’d like to think you could trust.
But I suspect that committing to this rule would not mean people do more fact-checking, but rather that they'd do less sharing. I would be comfortable with that. I'm not sure that the world is a better place when each person turns into a mini amplifier of the last thing they've read.
At the very least, trying to be more discerning about what we share, and developing a habit of ‘check before you share’ would be a huge step in the right direction.
In summary – be polite to AI not because the evidence says it makes AI more effective, but because being polite is a good habit to be in. Do premortems not because fictionalised evidence says it’s 30% better, but because lots of practical experience confirms that it is a hugely helpful tool.
And remember that it’s always okay to check people’s sources, have a sceptical eye, and to stand up for accuracy in a world of disinformation.
--
Dominic Thurbon is an experienced senior executive, successful entrepreneur, and researcher, writer and speaker. He is a director and co-founder at Alchemy Labs Australia.
His next book, “The Truthful Leader”, is forthcoming in 2026.
Find more of his work at www.domthurbon.com
This article was not written by AI :-)
You don’t need to be fearless to reach your goals, you just need to be willing. Willing to try, willing to learn, and willing to believe that you’re capable of more than you know. The road may not always be smooth, but growth rarely is. What matters most is that you keep going, keep learning, and keep believing in the version of yourself you’re becoming.