
Five dysfunctions of ‘democratised’ research. Part 5 – Stunted capability


This is the fifth and final post in a series examining some of the most common and most damaging problems we need to consider when looking to scale research in organisations. You can start with the first post in this series here.

Here are the five common dysfunctions that we are contending with:

  1. Teams are incentivised to move quickly and ship, and to care less about reliable and valid research
  2. Researching within our silos leads to false positives
  3. Research as a weapon (validate or die)
  4. Quantitative fallacies
  5. Stunted capability

In this post, we’re looking at what happens when the research practice in an organisation fails to mature.

A great first step

Testing one user is 100 percent better than testing none – Steve Krug, Don’t Make Me Think

Many organisations get started doing research with customers and users off the back of encouragement from people like Steve Krug and his classic book ‘Don’t Make Me Think’. In this and other books, Steve makes simple usability testing accessible and achievable for almost anyone.

Steve and others like him are evangelists, reaching out to companies that are afraid to engage with their customers to understand opportunities to improve. This is important work. Their message is usually that talking to customers is not hard or scary, and that we’ll be better off doing a bit of it, even imperfectly, than not doing it at all.

The first step can be scary

And they are right. Having anyone in the company talk to just one user (and hopefully some more) is a fabulous first step. But it is intended to be just that – a first step: an encouragement to realise the benefits of involving people outside our offices in the process of designing and developing products and services, a help in overcoming the fear of engaging with customers and users, and an opportunity to experience how beneficial this can be.

For those of us who work with research participants on a regular basis, it may be hard to recall exactly how terrifying those first few research sessions felt. Even trained and experienced researchers continue to experience some background fear (or exhilaration?) of all the things that could go wrong in the research study – and there are plenty!

The thing about first steps, though, is that they are usually intended to be followed by second steps. Once we break through the fear (or in some cases, just lack of awareness), the idea is that we continue to increase the maturity of our practice.

And this is where many organisations seem to hit a roadblock. More and more people in the organisation might be out eagerly involving customers in the process of shaping their products, but they often don’t invest in improving their own research skills, or in hiring people who have training and experience in doing research.

Talking to users is not research

One important realisation we need to have on the path to maturity is recognising that ‘talking to customers’ is actually not the same thing as doing research. Talking to customers or watching customers use our products and services has many benefits – in particular, it can increase our empathy for our customers and users, it can help expose us to scenarios of use that are dramatically different from our own and from what we would expect, and it can provide clues as to where the biggest problems may lie. All of these are good outcomes.

If we want to use research as evidence for decision making – either for product strategy or design decisions – then we need to be able to do more to ensure that the insights we are gleaning are sufficiently reliable and valid.

Research doesn’t need to be ‘perfect’, just valid and reliable

‘I don’t need the research to be perfect, I just need enough to help me make a decision’.

Often this is said in response to the suggestion that the research we should be doing will take longer, or be more difficult and expensive, than our speaker would like. In this situation there is often a pre-existing ‘hunch’, and they are looking to users for validation. Or perhaps they are stuck between two options and seek a tiebreaker.

Any specialist researcher has almost certainly had their recommended approach dismissed as ‘too academic’, and sometimes it is true: sometimes the research methodology is overdone for the question the business is seeking to answer. But what often follows is a race to the bottom, where considered sample design and appropriate methodology are quickly discarded in favour of whatever is fastest and easiest.

Without the right experience and training, all too often interviewers ‘cut to the chase’ and get more or less directly to the topic at hand. Somewhere in the world right now, a product manager under pressure to make a decision is asking questions like these in a customer interview:

‘here’s what we’re thinking of making, what do you think about it?’

or, perhaps worse…

‘if we made this, would you pay for it?’

It can be easy and tempting – so much faster and often quantitative – to mistake the research question for the interview question.

Even with training, it seems that the urge to be able to say that 10 out of 12 people said they would pay for it is almost irresistible. ‘Beating around the bush’ to get the question answered seems like a waste of everyone’s time in a climate where bias to action and the desire to ship at velocity are most valued.
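To see why, it helps to put numbers on it. Here is a minimal sketch (in Python, using the Wilson score interval – a standard way of quantifying the uncertainty in an observed proportion, chosen here purely for illustration) of how little ‘10 out of 12’ actually pins down:

    import math

    def wilson_interval(successes, n, z=1.96):
        # 95% Wilson score confidence interval for an observed proportion
        p_hat = successes / n
        denom = 1 + z**2 / n
        centre = (p_hat + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
        return centre - half, centre + half

    low, high = wilson_interval(10, 12)
    print(f"10 of 12 said yes: plausible true rate {low:.0%} to {high:.0%}")
    # -> roughly 55% to 95%

An interval that wide is consistent with anything from a slim majority to near-universal appeal, which is exactly the trouble: a sample of twelve can teach us a great deal qualitatively, but it cannot support the quantitative claim the number appears to make.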

(It shouldn’t really be a surprise that a lack of research capability maturity exposes us to the previous four dysfunctions.)

Matching methodology to risk

Whilst we should have plenty of sympathy for this desire for lightweight research and simplicity, it is important to ensure that the methods employed are matched to the risk involved in the decision, rather than to the most compressed timeframe available.

As our organisations grow, the decisions we take using evidence from our customers can become more and more substantial – the gains of getting it right are greater and the risks of getting it wrong get uglier.

In the same way, our research maturity needs to continue to grow so that our methods keep pace with the size of the risk of getting it wrong.

This is not to say that mature organisations only ever do serious, time consuming research. Rather, that we invest where the risk is highest.

Investment might look like hiring trained researchers who can design and recruit the right sample and conduct the research in a way that reduces bias. Or investment might look like iterative research with an ever-increasing number of increasingly diverse participants, sprint after sprint – allowing the team to continue to learn. This can work beautifully when the team is able to be responsive to that learning over time.

Investing too much

Conversely, there are situations where the investment in research is far too high for the decision being made. This often happens where the organisation’s design process has broken down, or where designers have entirely lost confidence in their ability to make relatively conventional design decisions. In these situations we design complex studies to ‘validate’ one micro design treatment over another, and the mismatch of risk to research investment can result in large quantities of what I would consider to be wasteful and often unreliable research.

Beware Dunning-Kruger

[Image: the Dunning-Kruger curve of confidence versus expertise]

User research is particularly susceptible to the Dunning-Kruger effect, wherein a relatively small amount of knowledge can result in an excess of confidence. Many people claim a ‘background in research’ when they may simply mean that they watched someone else run a bunch of usability studies in their last job, or that they did a research-based degree at university.

Many designers and product managers are entirely happy with the outcomes they get from research and how it enables their practice – and often loudly object to the suggestion that anyone could get a better result from the research than they do.

Yet, at the same time, the harsh reality is that the work being done often results in misleading outcomes that can put their product and their organisation at risk.

It also undermines the reputation of research in the organisation when a ‘researched’ product goes into the world and doesn’t succeed as expected: ‘We did research before and it didn’t work’.

In the same way that design and product management capabilities often require an engineering-led organisation to move through the stages from unconscious incompetence to conscious competence, the very same is true for the research capability.

Achieving research maturity

And so, at the end of our five dysfunctions, what can be done to provoke an organisation not only to involve users in the process of creating products and services, but to start – and keep – growing its ability to do so, revealing important insights that are both reliable and valid?

Here are some things that have worked for me.

Improve business fluency: talk less about empathy and more about the risk to the business of getting it wrong; less about customer obsession and more about the reliability and validity of the different types of evidence we can use to make decisions. And run an open research practice – get out of the black box, remove any mystery about our work, show our workings and involve others in the process.

Make use of existing momentum by bringing new shape and substance to whatever your organisation already uses to bring its attention to its customers – whether it’s an NPS survey, a customer convention, a feedback form, or a guerrilla research practice – and start by shaping those existing connections into something more insightful, more reliable and more valid.

Be brave, but be patient and we’ll get there.

