Daniela Zahn
Published: May 6, 2024
If ever there was a horse that has been beaten to death, it is the story of New Coke. It has been used as THE example of market research failure. Before launching New Coke, the market research team had run some 200,000 taste tests, surveys and focus groups. There was evidence, especially from the focus groups, that people might not be happy to swap their beloved original Coke for the new, improved one. When it came to deciding whether or not to launch New Coke, the qualitative evidence seems to have been ignored in favour of the quantitative evidence, or at least not taken as seriously as it should have been. The end result was a 30-million-dollar loss and serious damage to the company’s reputation. New Coke is not just an example of failed market research; it is also a stand-out example of what happens when you ignore or undervalue soft or qualitative data in favour of hard or quantitative data.
The core of the issue is that overreliance on numerical evidence leads to flawed decision-making: there is a significant risk of missing crucial information and, as the New Coke example shows, ultimately losing millions.
At this point you might ask: “Why should I care about flawed market research? I’m interested in the evaluation of my change programme!” The answer is easy: market research and evaluation are both used for data-driven decision-making!
If you run an evaluation for your change programme and the evidence is one-sided and does not provide the whole picture, any decisions about the programme are likely to be flawed and, more importantly, might turn out to be very costly indeed.
Good news though – there is light at the end of this particular tunnel: in this blog post you will learn about the Success Case Method (SCM) and how it combines story with numbers to create a rounded picture of what is going on in a programme, ultimately enabling well-informed decisions about that programme. So how does the SCM achieve this? To answer that question, we need to take a step back and start at the beginning, i.e. with the crucial question: what can numbers actually tell us?
In broad and simple terms, the psychology of numbers (yes, there is such a thing) tells us that we use numbers to make sense of the world around us. They help us see patterns in the chaos and make our environment somewhat understandable. This gives them an aura of objectivity. It is not by accident that much of scientific research relies on numbers, and its quality assurance indicators (validity, reliability, and generalisability) are an expression of that. On a deeper level, this perceived objectivity speaks to us emotionally. It can reduce uncertainty, make us feel safer and help us build trust in our decisions. This might go some way to explain why Coca-Cola’s senior management team decided to go with their surveys and taste tests.
However, here we get to one of the disadvantages of numerical data. You need to know and define what you want to measure before you start measuring it. Put differently, how do you find out about what you don’t know, about what you haven’t thought to test in advance? Moreover, how do we actually know what someone means when they respond “Somewhat likely” to the question “How likely are you to recommend this training programme to a work colleague?” What numbers cannot give us is detail, context, complexity – in short, the meaning and story behind a number.
We use qualitative data to give us context and complexity. It can take the form of individual interviews, diaries, focus groups, open-ended survey questions, stories and so on, but also images, video, mind maps and more. What these share, again in broad and simple terms, is that this type of information cannot be measured or easily expressed using numbers. To a large extent it depends on written or spoken words, i.e. language. Language is fundamental to human existence. It is how we express our experiences and understanding of the world around us, of others and of ourselves. And because of that, in language-based qualitative data at least, there is room for detail, context, complexity, and exploration of what is unknown.
However, there can be drawbacks to using this type of data: it can be seen as subjective, highly context-dependent, and difficult to verify. But just like quantitative methods, qualitative ones have their own quality assurance indicators. Instead of validity, reliability, and generalisability, we speak of credibility, transferability, dependability, and confirmability.
So where does all of the above leave us? The answer is kind of obvious: combine the two, of course! That combination brings with it some significant advantages (and also some challenges). What matters in the present context is that it enables a more comprehensive and detailed evaluation of an L&D programme. Now you might say: “Yeah, all this stuff about combining qualitative and quantitative data sounds great, but it’s too time-intensive and expensive. That’s not going to work for my L&D programme in my organisation.” And that is where you are wrong.
The SCM is a form of rapid impact evaluation often used for L&D programmes. Its strengths include the simultaneous use of several different methods, flexibility of design, data collection and analysis run in parallel, involvement of different stakeholders in all phases of the evaluation, and communication products that meet the needs of those various stakeholder groups.
Add to that Rob Brinkerhoff’s definition of impact. It starts at individual behaviour change and cascades forward into results and further down the line to organisational impacts. What you end up with is an evaluation method that can and does produce actionable results in a relatively short space of time. So how does the SCM do that specifically?
An SCM study follows a tried and tested process that begins with clarifying the client’s needs and planning the individual steps of the evaluation. Next, the programme under evaluation is captured in an impact map that summarises how the programme creates behaviour change that leads to successful application and, in turn, the intended results.
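To make this a little more concrete, here is a minimal sketch, in Python, of what an impact map could look like as structured data. The programme name, behaviours and results below are hypothetical examples rather than anything prescribed by the SCM; the point is simply the chain from capabilities to behaviours to results to business goals.

```python
# A minimal sketch of an impact map as structured data.
# All entries below are hypothetical examples for illustration only.
impact_map = {
    "programme": "Coaching Skills for Line Managers",  # hypothetical programme
    "capabilities": ["asking open questions", "giving structured feedback"],
    "critical_behaviours": [
        "holds monthly one-to-one coaching conversations",
        "agrees development actions with each team member",
    ],
    "intended_results": [
        "higher team engagement scores",
        "fewer escalations to senior management",
    ],
    "business_goals": ["improved retention of key staff"],
}

# Reading the map top-down shows the chain the evaluation will test:
# capabilities -> critical behaviours -> intended results -> business goals.
for level in ("capabilities", "critical_behaviours", "intended_results", "business_goals"):
    print(f"{level}: {', '.join(impact_map[level])}")
```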
A well-designed SCM questionnaire based on the impact map will produce enough hard data to build a broad picture of what the expected impact actually looks like. Importantly, and this is what makes the SCM distinct, the quantitative data is also used to identify programme participants who showcase outstanding application and results – so-called success cases.
In addition, the quantitative data helps to identify programme participants who did not succeed. The rationale is that we learn most from the somewhat extreme cases that do not represent average performance: they give us an understanding of what accelerates successful application and what hinders programme success.
And we learn about these things through interviewing success and non-success cases.
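In practice, this selection step can be as simple as ranking and filtering the questionnaire scores. Here is a minimal sketch of what that might look like; the column names, rating scale and cut-off values are illustrative assumptions, not something the SCM prescribes.

```python
# A minimal sketch of how success and non-success cases might be flagged
# from questionnaire data. Scores and thresholds are illustrative only.
import pandas as pd

# Hypothetical questionnaire responses: self-reported application of the
# learning (1 = not at all, 5 = extensively) and reported results achieved.
responses = pd.DataFrame({
    "participant": ["P01", "P02", "P03", "P04", "P05", "P06"],
    "application_score": [5, 2, 4, 1, 5, 3],
    "results_score":     [5, 1, 4, 2, 4, 3],
})

# Success cases: clear evidence of both application and results.
success_cases = responses[
    (responses["application_score"] >= 4) & (responses["results_score"] >= 4)
]

# Non-success cases: little or no application despite attending the programme.
non_success_cases = responses[responses["application_score"] <= 2]

print("Invite to success-case interviews:", list(success_cases["participant"]))
print("Invite to non-success interviews:", list(non_success_cases["participant"]))
```

The point is not the specific thresholds but that the quantitative data does the screening, so the interviews can focus on the cases that have the most to teach us.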
There are different ways in which qualitative data can be used in an SCM, such as open-ended responses in a questionnaire or interviews. The interviews are a fundamental part of the SCM because we talk to the success and non-success individuals about their programme and post-programme experiences, about applying what they learned and the results they achieved. This provides a deeper understanding of the patterns observed in the questionnaire data. The interviewees tell us the things we didn’t know and hadn’t thought about, thereby adding crucial information that would otherwise have been missed. At the end, questionnaire and interview data are combined to create that sought-after rounded picture of what is going on in a programme. This ultimately leads to reliable, data-driven conclusions and recommendations.
In the beginning we asked: Success Case Method: combining hard and soft data – what’s the value added?
The answer is: by combining the two, we create a comprehensive picture and make well-informed decisions – that’s the value added!
Do you want to learn more about the SCM? Then sign up for the next instalment of our certification program. More information can be found on our website.