Modernising your product in 2017 – webinar for Logi Analytics

A talk I gave in November 2016 for Logi Analytics about why building products should be data-driven, and how to do it. You can find the audio, slides and transcript below.

Audio

Modernising Your Product in 2017 – Jock Busuttil

Slides

Transcript

Introduction

[JOCK] Hello, I’m Jock Busuttil. I’m an author and speaker, but first and foremost, I’m a product manager. And I’m very pleased to be joining Logi Analytics, and all of you, to speak today.

So let me explain a little bit why I’m talking to you today.

Backstory

First a brief bit of backstory. How did I get into product management?

After deciding it would be best for all concerned if I didn’t join the Royal Air Force – I wasn’t a very good pilot, you see – I started off working in tech companies.

I found myself moving around different roles, everything from development to running training sessions to marketing. I realised I wanted a role that combined the variety of different roles I’d been working in, but with more of a focus on the product and its users.

At first, I didn’t even know there was such a job that would allow me to do all this. As it turned out there was, and this job was called “product manager”.

After several years at a few big companies, I realised I was good at managing products in one particular way, but I guess I’d stopped learning, because I was only doing it in that one particular way.

I was only really learning when I changed products, or moved from company to company, and of course that was only happening every few years.

So that’s the point at which I decided to leave the job and set up Product People.

About Product People

We’re a product management company. It’s me and a handpicked group of experienced senior product managers or product directors, but also user researchers, content designers and generally really good people I’ve personally worked with. And they’ve got a ton of experience in their own right.

And typically what we do at Product People is to help companies, whether they’re large corporations or small startups, in lots of different sectors, like fintech or charities or retail, and quite a lot of government recently. Essentially, we provide an extra pair of hands to help people with their product management.

The kind of situations we’re working in are perhaps when a company needs a product manager to fill in at short notice, perhaps when they’re doing some recruitment, or when a small company is just getting large enough to hire their first product manager and needs help to build a product team.

So because I’ve been working with all these new clients and different products every few months, I’ve had the benefit of many more opportunities to learn about what works and what doesn’t work depending on the context of the company, the maturity of the products and the state of the markets they’re working in.

So with that benefit of experience, I’d like to share a little bit of that with you, in particular how I’ve been using data and analytics to guide my decisions in product management.

Data-driven from the outset

So this slightly blurry memo you can see in typescript is from 1931. This is a memo that a chap called Neil McElroy wrote to his bosses. He used to work at Procter and Gamble – P&G. And he was responsible for the soap product Camay (which I think still exists today). And he was really frustrated because his product, his brand, was always playing second fiddle to another soap brand at P&G called Ivory.

And so he decided that the best way to get more of an advantage with his product was to propose the creation of this new kind of role, what he called a “brand manager”. And the reason I mention this is – because this is back in 1931 – this is really a prototype, almost, for what we call product managers today. And what he proposed was a role that would work and coordinate with other departments, and more importantly he advocated a data-driven approach.

As Neil says, the brand manager needed “to make whatever field studies are necessary to determine whether the plan [for the product or brand] has produced the expected results.”

So even at the very beginning, product management relied on data and evidence.

Product manager does not specify the product

Now there are plenty of examples of failed products that illustrate the value of having data and evidence.

We still get the situation where some companies are expecting the product manager to come up with the ‘requirements’ for their product – even though they’ve never spoken to the actual people who will be using the product.

Then there’s the other aspect that, unless you’re building a software product that will be used by product managers, the product manager isn’t representative of the users either. So really not the right person to be determining what the product should be in the absence of any evidence.

And if you need more on that, there’s plenty of really good stuff from Marty Cagan, who wrote a book called Inspired, and also from Steve Blank, who’s done loads of great videos and wrote The Four Steps to the Epiphany. And there’s plenty more detail there on why it’s so important for product managers and their teams to get out of the building to actually meet their users and gather some evidence.

Assumptions and risks

So there are plenty of examples of products that really illustrate the value of having data and evidence. One particular failure I quite like using is this one.

Back in 2001, just as the dot-com bubble was bursting, there was a brilliant engineer called Dean Kamen.

By then, he’d already had a decent string of successes under his belt, including an innovative and successful insulin pump, and a motorised wheelchair that could, amongst other things, go up and down stairs, which was amazing, but also allowed the occupant to raise themselves to standing height so they could see people eye-to-eye.

Now, unlike his previous successes, his next project was much more ambitious. This time, in his own words, he wanted to reinvent personal transportation for everyone. He said it was going to be as big a leap forward from the car as the car had been from the horse and buggy.

Now before he’d even launched this world-changing product, the patent application got leaked, and this sent the dotcom world into a frenzy of speculation about this thing called Project Ginger. And they knew it was about transport, but they didn’t know what it was. So some people were thinking it was something crazy like a jet-propelled scooter. And some other guys were thinking maybe it was a Star Trek transporter or something. But in reality it was actually much more down-to-earth:

Yes, Dean Kamen invented the Segway, which – if you’re not familiar with it – is a self-balancing, two-wheeled electric scooter. So not quite the Star Trek style of transporter people were expecting.

Now, the thing is, whilst it’s a bit of an anticlimax, as a piece of engineering it was actually pretty marvellous. If you think about what processors and computing power were like back then, it did the job of being a self-balancing electric scooter very well. So, given all of that, and given his track record, and given that the actual build of the product was great, why was it not the world-changer he’d expected it to be? Why are we all not riding our Segways today?

Of course, no product could have lived up to the amount of hype the Segway received before it was launched. So let’s look at the assumptions that Dean and his team had made before they built the thing.

They made assumptions about:

  • the product – were they the right features?
  • how the product would be used – would it be convenient?
  • the market demand and the price people would be happy to pay for it
  • how the users of the product would feel about it; and
  • most crucially, they made assumptions about the regulations that surrounded the product.

When they launched the Segway, it was illegal to ride in 32 US states and the District of Columbia. That’s quite a sizeable barrier to market.

And with over $100 million already invested in the Segway, the company had to spend even more money to lobby each state to change its laws to allow the Segway to be used.

Now the reason I mention all that is: don’t you think it would have been much better to check – particularly this last point about whether it was legal to ride – before they’d got to the expensive business of mass producing it?

So really, the first takeaway I have for you is that your assumptions – the things you think you know about your product, about your users, all these assumptions you’re making without any evidence – translate directly into risks for your product.

We can’t help but make assumptions – it’s the way our brains are wired up.

Constructing narratives from little evidence

This chap is called Daniel Kahneman, and he is a recipient of the Nobel prize for economics. And he tells us in his book, Thinking, Fast and Slow, that our brains actually have two systems of thinking: a fast, intuitive system that works from minimal information, minimal data; and a much slower, analytical system that does much more of the heavy lifting.

The fast system is designed to jump to conclusions very quickly. Back in ancient history, if we thought there was a tiger hiding in the grass, we’d probably want to run first, analyse later. So that’s our fast system of thinking.

However, it’s also very prone to errors. I certainly make them – I occasionally mistake a tomato stalk for a spider and jump out of my skin. So that’s our fast system jumping to conclusions.

We engage our slower system when we undertake a more complex task like counting the number of letter ‘e’s on a page. It’s not something we can just jump to a conclusion about, it takes more effort, and so this slower, analytical system does the job properly.

So here’s the thing: because this slower system takes more effort to get going, our fast system always keeps jumping in first. It causes us to create a plausible narrative based on very little data – like my tomato stalk / spider mix-up.

Now, in product management terms, this makes it very easy and tempting for us to convince ourselves that we understand what our users need, even though we have very little evidence to support that. Off our fast system goes and says, oh yes, I’ve got relatively little evidence, but I can make a plausible narrative for what users need. And it’s that kind of assumption, without any evidence, that leads us astray.

Driving blindfolded

And yet I’m sure we’ve all experienced those lightbulb moments of realisation when we do actually go out and talk to our real users. Suddenly, when we get one of those lightbulb moments, all of our assumptions are turned on their heads – all it takes is just a little bit of extra data or evidence to flip around what we thought to be the case, and then we suddenly realise we had it all backwards.

It’s a little bit like driving blindfolded. When you want to drive somewhere, if you were to jump in the car and drive off with a blindfold on, there would be quite a high likelihood of coming to grief. You wouldn’t be able to see where you were going, you wouldn’t be able to react to things happening around you, you wouldn’t be able to see the pedestrians or the other cars around you as you drive along.

So why would you take the same approach when you’re plotting the course for your product? Without taking in the information around you about your product and reacting to it, you’re effectively increasing your risk and likelihood of failure.

So it’s a really good idea to open your eyes and use the information around you when you’re deciding what to do with your product – why would you want to do it any other way?

Check your assumptions

Another way of looking at this is that checking your assumptions reduces your risk. One of the ways to do this is to eliminate as much uncertainty as you can as you go along, by learning from your users as quickly and frequently as possible. When you’re at your most uncertain, right at the very beginning, your main job should be to learn as much as possible, to apply that learning to your products, and to challenge all those assumptions you have.

This graph here, by a chap called Roman Pichler – another great product manager, who blogs and teaches and so on – illustrates what I’m trying to get at here.

How UK government digital services gather and use evidence

I spent about eight months recently as head of product for the UK’s Ministry of Justice – so, in government – and then a further three months more recently as the head of the product community for UK government as a whole at what’s called the Government Digital Service, or GDS.

And bit by bit since 2012, the whole of government has been moving to a very different approach for creating and managing the products – or services – that it offers people in the UK. And if you think about it, this is everything from applying for a driving licence all the way through to things like booking to visit a friend or relative in prison, or pretty much everything else, everything that government interacts with the public about.

The old way

The major revolution in thinking for them was that services exist to serve the needs of people first, not government. I know, it seems obvious, but it really wasn’t the prevailing view until relatively recently.

So they had a particular way before, the old way of doing things – and I know this happens a whole bunch in the private sector also – and it goes something like this:

A bunch of senior managers get together and say “we have a problem”. They usually decide the problem will be solved by a new CRM or ERP system – or sometimes both. Needless to say, they are usually wrong.

They then task several middle-ranking managers to spend several weeks or months collating a whole bunch of assumptions, guesses and outright lies into a massive document they call a business case. This is used to retrospectively justify the conclusion the senior managers have already reached. And they’re still wrong.

This hypothetical system needs a laundry list of specifications or requirements to flesh out what it needs to do so that the development team can get building. And again this is largely based on guesswork and results in an even larger set of documents than the business case.

Then some development happens, which takes several times longer than everyone expected, not least because sweeping changes are needed mid-way through the build. And because the allocated budget has already been exceeded, whole sets of features are cut out again.

So the resulting product ends up being less capable than the thing it replaced, and largely makes life impossible for the people who actually have to use the thing, but who only got to see it the week before launch in what is often called user acceptance testing.

And so the users point out that – guess what – the thing doesn’t solve their problem, and that they in fact had a very different set of problems to solve, that the CRM or ERP system does nothing to solve them, and that the senior managers had completely missed the point in the first place.

Now I hope that doesn’t sound familiar, but I’m sure we’ve all heard of places where that is certainly the case, and in government that was very much how these large IT projects would play out. I’ve seen it a whole bunch of times, not just in government, but in private companies as well.

A better way

There is, however, a much better way. Instead – and this is the way that government tends to work now – the process starts with user needs, not government needs. So we’re putting the user needs right at the very forefront of our thinking. Whether from direct observation of data and analytics, or user feedback, we go in thinking that users have a particular problem that we can solve.

Then what we do is we go into a process called discovery, and we do this for a few weeks. This is a combination of desk and field-based research with real users to challenge our assumptions, and really to understand the size and shape of the problem, the people who have it, whether it’s possible to solve, and indeed whether it’s worth solving. There’s no point in spending a million pounds or dollars to solve a ten thousand dollar problem.

The discovery team usually consists of a product manager, user researcher, designer and sometimes a developer or a business analyst if we need their particular skills to understand the problem.

It’s a perfectly sensible result for the discovery phase to end with the conclusion that what we thought was a problem isn’t actually a problem, or that it isn’t valuable or technically possible to solve. In the bad old way of doing things, project teams would only find this out much, much later on in the process.

So that’s the discovery phase, then the alpha phase is all about checking our understanding of the problem by running iterative tests and experiments, again with real users, that demonstrate we can solve aspects of the problem. By doing this, we’re learning more about the problem, and we start to learn about potentially how to solve that problem, about the solution.

At the end of alpha, we should have a pretty clear understanding of the users, their problems and the likely ways we’re going to solve them.

So then we move into beta. And all of these prototypes and experiments we’ve created up until now, we put to one side. Because now we start building the product for real. We want to build it to be as scalable, as robust and as secure as we need it to be – potentially, in the case of government, to be used by several million users.

The big difference here is that throughout beta, even though we’re not finished building the product yet, we’re still using the product out in the wild – we’re putting it in front of real users, and real users are using that product to solve their problems. They could be using it to apply for their driving licence or to renew their passport or things like that. And the reason why we do this is because it gives us this wealth of analytics and feedback from user testing and from people actually using the product, that helps us adjust and tweak the product to keep us on the right track.

When we’re able to demonstrate throughout this process that users are able to solve their underlying problem – whether it’s “I want to be able to drive a car, so I need a driving licence” or “I want to be able to travel internationally, so I need a passport” – then when they’re able to do that, then we stop adding new features. We can stop building things and we can shift our focus from building new stuff to more continuous improvement – small tweaks as needed to squash bugs or improve usability.

And then the majority of the team moves off onto the next major problem to solve.

The thing is that throughout the entire process, we’re running experiments, we’re gathering data, we’re doing analytics with real users, the actual people who will actually be using the product or service.

And it’s by doing this that we force ourselves to put aside our assumptions and engage our analytical part of the brain – evidence trumps opinion every single time.

Experiment template

Experiments don’t have to be daunting or scary, and they can be very quick. Here’s a simple template you can use. One example was at the Ministry of Justice, where one of my product managers and his lead developer were having a pretty heated argument about whether the users would understand what a particular feature did.

So rather than listen to them arguing for the rest of the afternoon, I packed them both off with paper prototypes to a nearby cafe and told them not to come back until they’d spoken to 20 people. And when they came back about two hours later, the product manager grudgingly reported back that 18 of the 20 had proven him wrong. And this was a great thing, because now he and his lead developer were working with evidence, not opinion.

Any experiment you’re running follows this template. You’ve got some user research or evidence that suggests something you believe to be the case – your hypothesis, your guess. So if we try running a particular experiment or test – in this case going down to a cafe and asking people if they understood what this particular feature did – and measure the number of people who did or didn’t understand it, then we should be able to see whether or not people do understand that feature. And in this case, we had the overwhelming result that 18 of the 20 didn’t understand the feature, and so the product manager was proven wrong.

So you can be doing this kind of thing all the time, it doesn’t have to be a big elaborate experiment with hundreds or thousands of people, you can do it relatively quickly for any questions you need to have answered.
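
If it helps to make the template concrete, here’s a minimal sketch of how you might capture each experiment as a structured record. It’s purely illustrative – the field names are mine, and only the 18-of-20 result comes from the anecdote above:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One lightweight product experiment, following the template above."""
    evidence: str    # what user research or data suggests
    hypothesis: str  # what we believe to be the case
    test: str        # the experiment or test we'll run
    measure: str     # what we'll count or observe
    expected: str    # the change or result we expect to see

# The cafe anecdote expressed in the template (only the 18-of-20 result
# comes from the talk; the wording of each field is illustrative):
cafe_test = Experiment(
    evidence="PM and lead developer disagree about the feature's clarity",
    hypothesis="Users will understand what the feature does",
    test="Show paper prototypes to 20 people in a nearby cafe",
    measure="How many of the 20 can explain what the feature does",
    expected="Most of the 20 understand it",
)

understood, asked = 2, 20  # observed: 18 of 20 did not understand
verdict = "supported" if understood / asked > 0.5 else "rejected"
print(f"{understood}/{asked} understood -> hypothesis {verdict}")
```

Writing it down like this forces you to commit to the measure and the expected result before you run the test, rather than rationalising afterwards.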

Put users first

The next key learning for you is: put your users first. Don’t put the needs of your organisation before those of your users. Absolutely take them [your organisation’s needs] into account, but put your users first.

Benefits of open and transparent data

The next thing is getting into the analytics we use in government. Every digital service created by the UK government now has to measure at least four key performance indicators:

  • cost per transaction
  • user satisfaction rating
  • completion rate (how many people are actually able to achieve their goal or solve their problem)
  • digital take-up, which is whether users prefer to use the online web service instead of phoning someone up or filling in a paper form, because that’s really the whole point of what we’re trying to do here.

Because every service must publish this data online, completely transparently – the dashboards are all at gov.uk/performance, I think, if you want to take a look – it keeps everyone honest, but also changes the conversation when things go wrong from blame to “what can be done to improve this?”

These four main KPIs make sense for the UK government because its broader goals are to encourage people to interact with government more online and to make things easier for people to do. Other organisations with different sets of goals would probably want to measure different things that align with their own particular goals.

The important thing here is that, in the context of the organisation we’re talking about, we’re measuring the right things.

We’re only really measuring things that would prompt us to take action if we saw the metrics going the wrong way. If we saw a low completion rate, we could do some funnel analysis to see where people were dropping out, then run some experiments or user interviews to delve a bit deeper – as in the sketch below.
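
To give a rough idea of what that funnel analysis might look like in practice, here’s a small sketch – the step names and counts are entirely made up for illustration:

```python
# Hypothetical counts of users reaching each step of an application funnel.
# The step names and numbers are invented for this example.
funnel = [
    ("Landed on start page", 10_000),
    ("Began application", 6_200),
    ("Uploaded photo", 3_900),
    ("Paid fee", 3_700),
    ("Received confirmation", 3_600),
]

# Overall completion rate: how many of those who arrived got what they came for.
completion_rate = funnel[-1][1] / funnel[0][1]
print(f"Completion rate: {completion_rate:.1%}\n")

# Step-to-step drop-off shows *where* people are leaving, which tells us
# where to aim our experiments and user interviews next.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    dropped = 1 - n / prev_n
    print(f"{prev_step} -> {step}: {n}/{prev_n} ({dropped:.1%} dropped out)")
```

The step with the biggest drop-off is where you’d dig deeper first.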

We really want to have this marriage of quantitative data and qualitative data, the quantitative tells us what is happening, the qualitative tells us why it’s happening. So when you’re seeing these patterns in your quantitative data, your web analytics or your other sources of data, and something looks a bit weird or piques your interest in some way, your first question should always be “why is that happening?” and your second question should be, “how can I test that to find out what’s going on?”.

Measuring vanity metrics like page views is basically pointless – they could go up or down for a variety of reasons completely outside of your control. Your page views might go up because an email campaign has just gone out, or perhaps because a search engine is indexing your site.

So the important thing here is that we’re measuring outcomes, not outputs.

We don’t really care how many people visited the driving licence website, or how many driving licences were issued.

We’re far more bothered about whether our visitors, our users, got what they came for. Whether they succeeded in getting their licence first time of asking, or whether they were able to update their photo on their licence, or even just find some particular information. Did they get what they came for? We’re really bothered about whether they succeeded in what they were trying to achieve and how easily they did it.

Outcomes not outputs

So your next takeaway is that you need to focus on measuring your user outcomes – what it is the users are actually trying to do, and what matters to them – not necessarily the outputs like page views or widgets created.

Roadmap driven by evidence, validated by observed data

This leads us nicely on to my last point which is about product roadmaps.

I bet that most of your roadmaps – these are the plans you have for your product, what you’re going to do in the next quarter or next few quarters – at the moment your roadmaps probably have items along the lines of “we’re going to build this feature” or “we’re going to add this capability”.

Guess what? If you’re doing that, you’re still focused on outputs not the user outcomes.

When you put anything on your product roadmap, or in your backlog of user stories, you – and product managers in particular – should always be able to say why it’s in there, what purpose it serves, how you’ll know it’s been successful, and most importantly be able to point to the evidence that drove the decision to put that feature in or build that particular product.

So when you’re thinking about roadmaps, you really need to be thinking about what the user is trying to achieve (based on your user research, of course). Treat each roadmap item like an experiment. Remember:

  • data or user research suggests that this is the case …
  • so if we try this …
  • and measure that …
  • then we should see the following change …

Every roadmap item should follow that pattern. You should know what success looks like from the user’s perspective, as well as what ‘finished’ looks like.
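
To show how that pattern might look written down, here’s a minimal sketch of a roadmap item expressed as an experiment – the item, the metric and the target numbers are all hypothetical, invented for illustration:

```python
# A hypothetical roadmap item written as an experiment, so that
# "success" and "finished" are defined up front from the user's perspective.
# Every field below is invented for illustration.
roadmap_item = {
    "evidence": "Support tickets suggest users can't find their saved application",
    "try":      "Add a 'resume application' link to the start page",
    "measure":  "Proportion of returning users who resume rather than restart",
    "expect":   "Resume rate rises from 40% to at least 70% within a month",
}

def keep_on_roadmap(item: dict) -> bool:
    """If we can't say why an item is there or how we'll know it worked,
    it shouldn't be on the roadmap."""
    return all(item.get(field) for field in ("evidence", "try", "measure", "expect"))

print(keep_on_roadmap(roadmap_item))  # True - this one can stay
```

Anything that fails that check – no evidence, no measure, no definition of success – is a candidate for removal, which is exactly the first benefit below.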

This is great for a few reasons.

If you’re the person looking after the product roadmap, it helps you remove items from your roadmap that don’t benefit anyone at all – whether that’s your external users or your internal users. If you can’t point to an item and say why it’s there, why it’s important and how it benefits people, take it off your roadmap and don’t do it.

There’s no problem with putting in something to make life easier for your product support teams, or to make it easier for you to instrument and provide analytics in your product. Those internal users are perfectly valid users as well. But the point is, whether it’s for external users, your customers, or internal users within your own organisation, everything on your roadmap needs to be there for a reason.

The second reason for running your roadmap in this more analytical way is that it sets you up for learning and iteration. Every time you run an experiment, if something comes back with a different result to the one you were expecting, the next thing you should do is find out why, and then use what you learn to do it better next time.

Alignment

And then thirdly – and this is a really useful one, particularly for product managers, but also for people up and down the organisation – if you’re trying to align your team towards a common goal, it’s very easy for teams to get sidetracked and pull in different directions if all you’re bothered about is outputs: building widgets and features.

But if the discussion is centred around helping users to achieve their goals, then it’s much easier to judge whether something is worth doing.

Ask yourself “will this roadmap item get us closer to, or further away from, what our users are trying to achieve?”

So on that basis, if you’re focusing on building things that provide a user outcome, your user stories should align with your roadmap, your roadmap should align with your team’s objectives for the quarter or year, and your team’s objectives should align with your organisation’s overall goals.

What that means is that from the very granular bit of what we’re going to build today, tomorrow, next week, it’s always aligned all the way up through to what the organisation is trying to achieve this year, next year, in five years’ time. And in particular, product managers are responsible for making sure that alignment, from those user stories all the way up to the company objectives, is happening within that team.

Measure user goals

So my last takeaway is that measuring user goals will help you to align your team much more easily, because they’ll care about what the users are trying to achieve. It’s a human thing rather than a feature or widget thing.

Recap and thanks

Okay, so just to summarise the main takeaways again:

we talked at the beginning about how assumptions translate into risks for your product, usually because you think you know something that in fact you don’t;

and we talked about the fast system of thinking that jumps to conclusions, and the slower, analytical system we need to engage;

we talked about government, the old way and new way of doing things by putting user needs first before government or before organisational needs;

and we talked about focusing on human outcomes, what it is that users are trying to achieve, not outputs like building widgets or features or that kind of thing;

and lastly we talked about how measuring these outcomes, these user goals, can actually really help to align your development teams, your product teams, and your broader teams within your organisation, all the way up from those granular bits and pieces you’re creating all the way up to your organisation’s longer term goals.

So those are really the four key things I hope you’ll take away from this presentation.

That’s it! Thank you for listening. I’d be happy to take questions in a second, over to you Josh.

Questions

[JOSH] One of the things, Jock, that you said at the end that people found interesting was about the user roadmap. Obviously you look at a lot of different areas, so it’s hard to figure out if there’s a key trend, but are you seeing something from a user perspective, from interest-level demand, that companies need to be thinking about at the aggregate level, whether you’re a software company or a Segway producer?

[JOCK] Ooh. It’s difficult to say, really. I guess the main thing that maybe I am seeing as a general trend from users is that users don’t care about your products. And I know that sounds quite harsh. But they’re bothered about solving their own problems first and foremost. So, as organisations creating products, as long as you can realise that users don’t care about your products – they care about solving their problems – then you’ll understand that your products are a means to the end of solving those problems.

So they’re always going to look at your product in terms of: does this make my life easier or not? And in many cases, particularly in government, the big challenge we had was that some of the government systems were so hard to use beforehand – even the people processes, where they have to fill out a form and go to a particular place at a particular time to get a form signed and stamped and all that sort of nonsense. For them it was easier not to bother doing that thing at all than it was to try and go through that process.

So it was really, really, really important to make sure that whatever we created was better and helped them solve their problem in a way that made it a no-brainer for them to do that. And outside of government, whether you’re in software products or physical products or whatever else, users have a need, and their need is to solve a problem. It’s not necessarily to use your product to do that, but if your product helps them to solve their problem, that’s when your product is successful.

How aligned with marketing and persona development?

[JOSH] So let me ask you a follow-on to that. If you’re concerned about buyer needs and their objectives, how closely aligned do you suggest product organisations be with marketing and with the persona development to ensure they are creating a solution that does meet those needs in the market?

[JOCK] That’s a great one. Very important distinction here: if you are a B2C (business-to-consumer) company – and let’s say for ease of comparison that government is dealing directly with end-users in the same B2C kind of way – then obviously your user personas, the people who are using your product, are the most important thing you need to align with in terms of your marketing, your communication, or anything like that.

When you are looking more in a B2B scenario, so you are selling to businesses who are potentially selling to other customers or businesses themselves, then you’ve now got two different sets of personas to deal with: you’ve got your user personas which are for the people who are actually using the product on the ground, and you need to make sure that you understand those really, really well so that you can build a product that helps them solve their problems; but separately – and this is touching on what you said, Josh – you’ve also now got buyer personas.

Because very often in the B2B world, the people paying for the product or taking the decision to buy the product or not, are very different to the people who are actually using the product. So in that situation what I create is a different set of personas, buyer personas, that help me understand what it is those buyers are trying to achieve, whether it’s a more cost-effective product for their organisation, or something that’s easier to roll out, or whatever it might be. They’re going to have subtly different needs to the people on the ground actually using the product when it’s rolled out.

And in terms of marketing and messaging, it’s really important to distinguish who you’re talking with. Are you speaking to the end-users, in which case you need to give them the information they need to solve their problem? Or if you’re talking to buyers, you need to structure your information to address their questions or their concerns or their needs from the buying process. So I’d actually have different sets of personas and different sets of messages depending on whether I’m talking to a user or talking to the buyer.

What sources of evidence to build the product?

[JOSH] Now you’re speaking my language. I’m mired in buyer personas myself at the moment, so that resonates very well with me. I have one last question for you here. One of the things you said that resonated with me was that evidence trumps opinion. I think you mentioned that shortly after the Segway segment. I was curious – if you could go back and tell them to do things differently, what kind of sources of information would you suggest they rely on? Or what sources of information do you suggest folks building their product plans today rely on, so that they can get the evidence they need to build the product that will meet the needs of end users?

[JOCK] So the answer is always: it depends – I know it’s the ultimate cop-out – but it depends on what you’re trying to demonstrate. The sources of data that you need to have will be dictated by what it is you’re trying to demonstrate or prove.

So let’s try to make this a bit more tangible. Say, for instance, we wanted to figure out whether the product was allowing people to achieve a particular goal – let’s pick the example of driving licences. If their objective was to successfully apply for and receive their licence first time of asking, first time through the process, then our measurements would include the quantitative side: web analytics that show people can move through that conversion funnel, from point of entry, through the process of applying for their driving licence, through to the successful outcome of receiving their driving licence at the end of the process.

You’d be able to use the web analytics to find out whether the majority of people – whatever threshold or percentage you were looking for – were able to do that first time around. But in combination with that, you’d also need that marriage with the qualitative data, so I’d want to supplement it with feedback from usability tests – actually sitting and watching people make this application, ideally in their own environment, at home or at work, or wherever they’re actually doing this for real – to see how easily they’re able to achieve that successful outcome, and to find out what things they’re finding easy to do, what things they’re finding difficult, or what things they don’t understand so much in the process.

I want to make sure that we’re not just getting to the desired end-result in our quant, in our analytics, but also that people are getting to that result in a way that satisfies their needs, and that they’re happy that they understood what was going on. Because you don’t want to get into that situation where you’re reaching what you think is the right outcome but purely by accident because people are misunderstanding what’s happening throughout that process.

So I guess to summarise that slightly long-winded answer, you’re always going to need a mixture of qualitative and quantitative data. That stuff could come from your analytics, whether you’re using an analytics platform like Logi’s one, or whether it’s another analytics platform, or whether it’s more verbatim feedback from users, or whether it’s stuff from your usability testing, you’re going to have this mixture of different sources, but you always need to have that balance of the what and the why, the quant and the qualitative.

