Why we can’t help jumping to conclusions

I gave a talk recently about how I’ve been using data and analytics to guide my decisions in product management. I’ve edited the transcript a little and split it into bite-size parts for your entertainment.

This bit is about why we can’t help jumping to conclusions. The last bit was about expensive and risky assumptions (and why you should check them).

Two systems of thinking #

Daniel Kahneman (Photo by Buster Benson on Flickr)

This chap is called Daniel Kahneman, and he is a recipient of the Nobel prize for economics. He tells us in his book, Thinking, Fast and Slow, that our brains actually have two systems of thinking: a fast, intuitive system that works from minimal information and data; and a much slower, analytical system that does much more of the heavy lifting.

The fast system is designed to jump to conclusions very quickly. Back in ancient history, if we thought there was a tiger hiding in the grass, we’d probably want to run first and analyse later. So that’s our fast system of thinking.

However, it’s also very prone to errors. I certainly make them – I occasionally mistake a tomato stalk for a spider and jump out of my skin. That’s our fast system jumping to conclusions.

We engage our slower system when we undertake a more complex task, like counting the number of letter ‘e’s on a page. It’s not something we can just jump to a conclusion about; it takes more effort, and so this slower, analytical system does the job properly.

So here’s the thing: because this slower system takes more effort to get going, our fast system always keeps jumping in first. It causes us to create a plausible narrative based on very little data – like my tomato stalk / spider mix-up.

Assuming without evidence #

Now, in product management terms, this makes it very easy and tempting for us to convince ourselves that we understand what our users need, even though we have very little evidence to support that. Off our fast system goes and says, oh yes, I’ve got relatively little evidence, but I can make a plausible narrative for what users need. And it’s that kind of assumption, without any evidence, that leads us astray.

And yet I’m sure we’ve all experienced those lightbulb moments of realisation when we do actually go out and talk to our real users. When we get one of those lightbulb moments, all of our assumptions are turned on their heads – all it takes is a little extra data or evidence to flip what we thought to be the case, and we suddenly realise we had it all backwards.

Driving blindfolded (Credit Volvo/YouTube)

Don’t drive blindfolded #

It’s a little bit like driving blindfolded. If you were to jump in the car and drive off somewhere with a blindfold on, there would be quite a high likelihood of coming to grief. You wouldn’t be able to see where you were going, react to things happening around you, or avoid the pedestrians and other cars as you drove along.

So why would you take the same approach when you’re plotting the course for your product? If you don’t take in the information around your product and react to it, you’re increasing your risk of failure.

So it’s a really good idea to open your eyes and use the information around you when you’re deciding what to do with your product – why would you want to do it any other way?

Reduce risk – check your assumptions #

Another way of looking at this: checking your assumptions reduces your risk. One way to do this is to eliminate as much uncertainty as you can as you go along, by learning from your users as quickly and frequently as possible. When you’re at your most uncertain – right at the very beginning – your main job should be to learn as much as possible, apply that learning to your product, and challenge all those assumptions you have.

This graph, by Roman Pichler – another great product manager, who blogs and teaches – illustrates what I’m trying to get at.

Risk and uncertainty decrease over time as knowledge and shippable software increase (Credit Roman Pichler)

Next time: how UK government digital services gather and use evidence

