The best research requires in-depth interviews that reveal how successful companies uniquely solved a complex problem.
So many studies, so few insights.
The thought leadership research arms race is in overdrive. Every week, it seems, another consulting, technology, tech services or other B2B company publicizes the findings of an online survey in the hope of dazzling the world with uncanny insights. Yet few of these reports offer any groundbreaking, this-changes-everything ideas.
Amid the explosion in survey reports, that’s a problem: it’s hard for anyone’s report to get attention. Not sure the survey space is overloaded? Just look at the acute interest in surveys about companies’ generative AI initiatives. ChatGPT showed me 13 reports published this year alone.
But the bigger problem is the over-reliance on surveys. Using survey data alone to formulate groundbreaking insights on solving complex organizational issues is a fool’s errand. Even the best-designed surveys shed only limited light on how companies actually solve those issues. They can skim the surface of how hundreds or thousands of companies are dealing with a certain problem. And, to be sure, survey statistics can be used to argue how pervasive some problem is and how much the average company spends on it.
But what a survey questionnaire cannot do – or, for that matter, any asynchronous exchange of information between researchers and their research participants – is shed incisive light on how organizations are trying to solve the problem at hand, and which solutions work far better than others. Doing that well typically requires real-time discussions between real people – in this case, the research team and representatives from dozens of companies. It also requires extensive background research on those companies: searching online for content written by and about them that, when stitched together, provides a robust picture of how they solved the issue at hand.
Qualitative case study research has been the foundation of some of the most consequential new management concepts of the last 40 years. If you hope that an online survey will produce something similar, you are dreaming.
This article draws on 38 years of conducting, and serving on teams that conducted, thought leadership research. Halfway through, I provide a framework for understanding the value and limitations of different research streams in shedding light on solutions to complex business problems. I compare three streams – surveys, expert network interviews and member-based research groups – on their ability to deliver deep insights into company practices. I then discuss how to use all three streams in a study to increase the chances that a research team develops profound new insights.
But first let’s look at surveys as a thought leadership research tool.
The Limitations of Surveys in Thought Leadership
I’m not putting down surveys. They are a staple of the consumer products sector, and they account for 70% of the $140 billion-a-year global market research industry, according to ESOMAR, an association for market researchers. Surveys are a key tool for companies to discern the needs, wants and preferences of customers and potential customers. Surveys, focus groups and other research tools ask what people think, how they act, and what they desire – data that shapes marketing campaigns, new products, existing products, customer service and other key aspects of a business. They are used for internal purposes as well.
Surveys can be important tools for thought leadership research. They can give you lots of valuable statistics that warn a target audience of a big problem or opportunity. And if designed to identify “leaders” and “laggards” on an issue, they can point to companies that you should reach out to. (More on this later in the article.) However, surveys can only explain at a superficial level the key factors behind success and failure.
Case studies are the cracks and veins that researchers must mine to find thought leadership gold. Identifying the key factors of success and failure from dozens of case stories is what can lead to groundbreaking insights.
A way to think about it is through my thought leadership research design method (what I call the Problem/Solution approach). In short, surveys used in thought leadership research can quantify the magnitude of a problem and identify companies that claim to have solved it better than most. But they generate across-company statistics, not individual case stories. You need those case stories, gathered from interviews, to reveal the most effective practices in solving an issue. Secondary research can help you gather additional evidence and identify potential case studies. (See Figure 1.)
Figure 1: The Value of Surveys, Case Studies and Secondary Research
In the early 1990s, business reengineering guru Michael Hammer was asked by a newspaper reporter how he invented the blockbuster concept. “I didn’t invent reengineering; I discovered it,” he told the reporter. His point was that the case study interviews his research team conducted with dozens of large companies in the late 1980s and early 1990s revealed the best practices in how those companies used information technology. (Asked why he didn’t survey these companies, Mike told me this: “I don’t care what executives think. I want to know what their companies are doing.”)
I’ve conducted dozens of surveys since 1988 to help clients demonstrate how many companies are dealing with a specific problem. In fact, I’m involved in a study right now (for my firm and two partners) on the state of thought leadership in the $1.7 trillion global tech services industry. A survey is one important component of our research, but not the only one. Another is case study interviews with current and former tech services executives. A third research stream is interviews with executives in other industries who buy those services. Our second and third research methods are purely qualitative.
All of these research methods are critical if you regard thought leadership, as I do, as the development, dissemination and delivery of superior expertise – i.e., uncommon capabilities in solving a specific business problem for clients. At the heart of developing that expertise is case study research – comparing multiple companies that solved the problem and others that didn’t, and determining what led to success or failure.
Surveys also suffer from the partial-knowledge problem. A structured survey questionnaire requires those designing it to understand the key factors in solving the research issue at hand. Asking close-ended questions about those factors (e.g., “Rate or rank the following on how critical they were to achieving results”) assumes they are the only factors worth gauging. Only open-ended questions that a researcher can ask live, or over a series of exchanges (phone, videoconference, emails, social media posts and comments, etc.), allow a researcher to probe for factors that didn’t occur to them at the outset of the study. It’s from those unanticipated questions, asked after an interviewee says something surprising, that groundbreaking insights emanate.
But, you might be asking, what about having a few open-ended questions in an otherwise close-ended survey? For sure, that’s better than having none. Still, the researcher doesn’t get to ask follow-up open-ended questions based on answers to their first open-ended questions.
The problem of using close-ended surveys to drill for thought leadership gold – i.e., to create novel and proven solutions to complex problems – is even more pronounced when the topic is relatively new. In writing the survey questionnaire, how can you know the key success factors for solving a problem that has scarcely been solved?
Survey vs. Case Study Questions: Comparing the Insights
I’ll hypothetically illustrate how much richer the answers to a case study interview question (a synchronous interaction between interviewer and interviewee) can be vs. a close-ended survey question (asynchronous interaction). The hypothetical study here is on the impact of generative AI. The most important question is about the key success factors.
Here’s how that question might be asked in a survey:
The core finding from the answers to this question is that two-thirds of executives say the key factor in getting ROI from generative AI is executive sponsorship of the initiatives. Of course, that isn’t a revelation.
Now let’s compare the close-ended survey answers with an exchange that might have emerged from an open-ended interview with a senior executive. (Note: This is a fictional conversation.)
Q5: Think about your company’s most successful generative AI initiative. What do you think was the most important factor in achieving the benefits that it generated?
A: Well, let me think about this a little bit. Having executive sponsorship was really important.
Q: Why was that important?
A: Because the manager in charge of the business function where we focused the generative AI project was opposed to the initiative.
Q: Which function was that?
A: Our sales function.
Q: Why was he opposed to the initiative?
A: He thought it would result in a major reduction in workforce in his department.
Q: Why did he care about that?
A: He didn’t care about layoffs per se. He cared more about running the biggest department in the company by headcount.
Q: Why did he care about that?
A: That gave him power in the organization, especially at budget time. If you have 500 people working for you and the next biggest department has 250, you have some grounds to ask for – by far – the biggest budget.
Q: So how was his objection to the initiative overcome?
A: Our CEO told him it was unclear at the outset how much, if at all, the company would reduce headcount in his area after the initiative. The CEO also told him the prototype that the initiative would develop would give a rough idea of how many labor hours would be saved in that one area.
Q: So are you saying that the chief sales officer was told that the initiative might not result in big layoffs, or any layoffs – until the prototype was developed?
A: That’s right. But, come to think of it, the chief sales officer then got on board with the initiative. The next biggest obstacle was getting his salespeople on board. When they heard that generative AI was going to be implemented in sales, some of his star salespeople said they were ready to look for jobs elsewhere.
Q: So what did the chief sales officer do about that?
A: Well, after asking the CEO whether the company would lay off its best salespeople as a result of the technology, the chief sales officer heard that would not at all be in the cards. The poorest performers might be cut.
Q: So what did the chief sales officer tell his employees?
A: He told his stars that they would not lose their jobs, and gave them written assurance.
Q: What did he tell the others?
A: They got no written assurance. They were told that their performance had to improve after the generative AI initiative – meaning they had to use the technology.
Now if numerous case study interviews echoed this finding – that the biggest hurdle to getting ROI on generative AI is assuring the biggest internal beneficiary of the technology that their standing in the company would not be diminished if the initiative dramatically reduced their workforce, and then reassuring star employees in that function that they would not lose their jobs – it would be an insight to explore in more depth. For example, how did companies assure these employees of this? Did they train these employees on using generative AI, and if so, how? Were there patterns in how they reorganized the work of these employees?
What’s more, a great case study interviewer would have continued asking questions about the success factor that she heard: that valued sales employees were told they would not lose their jobs. For example, were they told their jobs would be enhanced? If so, how? And in the prototype, did those salespeople generate more revenue than they did before? And, if so, did that result in bigger commission revenue? If so, what happened when other salespeople heard about the prototype and its impact? Did they get on board quickly, too? And did that help accelerate the return on investment?
As this hypothetical example illustrates, surveys preclude researchers from delving deeply into a topic with any individual research participant. In addition, they largely stop researchers from exploring issues they didn’t include in their survey questionnaires.
The Power of Naming Names
My own studies of the “consumers” of thought leadership content over the last 15 years, and studies I’ve done for clients, show that one big factor moves them to reach out: a unique solution that produced huge monetary benefits at companies identified by name. Executives who use thought leadership to help decide which firms to put on their short list aren’t likely to be moved by anything less. Why? Because of what’s on the line if they choose the wrong firm: their careers and often their company’s competitiveness. They need to be skeptical, especially about research from firms they haven’t heard of before.
The most monumental management concepts I know of were built on the foundation of deep case study research: blue ocean strategy (Chan Kim and Renee Mauborgne), companies that are “built to last” (Jim Collins and Jerry Porras), the “Challenger” sales approach (Matt Dixon and Brent Adamson of the Corporate Executive Board, before its acquisition by Gartner), disruptive innovation (Clay Christensen), and business reengineering (Michael Hammer and CSC Index), to name a few.
If your company’s aspirations for thought leadership include coming up with groundbreaking concepts at least occasionally, it’s important to understand which research methods are more or less likely to help. What follows is a way to think about that.
Mining for Big Ideas: Rethinking Your Thought Leadership Research
Unless you’ve done extensive case study research as part of your studies, it can be hard to understand what makes it far more valuable than online close-ended surveys for generating big insights. Consider weighing the strengths and weaknesses of primary research techniques on two dimensions, both of which are about the organizations whose practices a researcher is trying to document:
- How much the target audience knows about their company’s experience in dealing with the topic at hand. Depending on who in a company you reach and how much experience they have with your topic, your audience’s knowledge will range from limited to extensive. For example, if you are researching the use of generative AI and the person you survey was only peripherally involved in a genAI initiative or otherwise had superficial knowledge of their company’s experience, you’d be tapping someone with limited knowledge. And if they were involved in the initiative but in a lower-level job, they are likely to have far less knowledge than the managers running the project. (See the vertical axis in Figure 2.)
- The degree of access a research team has to the target audience. Consider such access, too, in a range from limited to extensive. For example, a manager filling out a 20-question online survey gives you limited access to their company’s experience on the topic – perhaps 20 minutes of their time answering questions with a “yes,” “no,” or perhaps a rating or ranking on some scale. In contrast, 6 hours of interviews with that person and three others in their company would give you far greater access to their knowledge and experience on the topic. (The horizontal axis.)
Figure 2: Comparing 3 Thought Leadership Research Models on Audience Knowledge and Access
With those two dimensions in mind, let’s now look at three primary thought leadership research models. A forewarning: I know I am oversimplifying the options here. But there’s value in understanding these three, at the very least to know when and why you should supplement them with research from the other models.
Because of its particular limitations, each model provides a research team with a very different degree of access to a target audience and to the knowledge that audience has on the topic. Let’s start with survey companies.
Survey Companies
Over the last 10 years, I’ve seen a half-dozen new survey companies emerge (or companies that largely conduct surveys, using their own panels or other firms’ panels) that specialize in thought leadership research or have created new practices in this arena to go along with their other practices.
These companies will take your research topic and develop survey questions that they’ll put on their survey platform. Some have their own panel of managers and other people who can answer the survey (and get a monetary reward for doing so); others draw on survey panel firms to attract survey participants. Some supplement their own panels with other firms’ panels.
One of the biggest questions I have about these survey companies is their ability to reach executives with deep knowledge of the initiatives the researchers want to learn about. Yes, you can and should insert survey questions that screen out respondents who don’t have deep knowledge. Every survey we’ve designed includes such screeners, for our clients (on their topics) and for ourselves (on our topic of thought leadership). However, I often wonder how many survey respondents lie about their knowledge, or click through a survey rapidly without much thought, to get the monetary reward for taking it.
In any event, whether survey participants are telling the truth about their knowledge or thinking hard about the questions, surveys can generate hundreds or even thousands of responses, depending on how many regions of the world you want to poll, the size of the companies, and the target respondents. So if you want your research to show that a large percentage of your target audience is plagued by the problem you are researching, survey firms can give you that evidence (if, in fact, it is a problem that plagues many organizations).
Another big benefit of using survey firms is that B2B researchers don’t have to rely on the people in their own firms (the consultants, the lawyers, etc.) to open doors with their clients for research interviews. From my experience, most client-facing officers will resist this, especially if a client relationship is in trouble. In theory, expecting client officers to open thought leadership research doors is a nice idea – until it faces the reality of a firm’s highly protective sales force.
That’s a big value of surveys for thought leadership: You can get around your company’s gatekeepers. But using surveys to understand exactly how companies in your target audience have been dealing with the issue at hand is more difficult. To be sure, you can have survey questions with multiple choice answers, ratings and rankings to shed light on matters you want to understand. But that’s not the same as having a live conversation (or even an email exchange) with executives in your target company audience. The value of a live one-on-one interview is that you can probe and ask new questions, precipitated by surprising responses to the questions you asked.
Structured surveys, with mostly or entirely close-ended responses, largely tell you how many companies are dealing with an issue and what they are doing about it, as case study research expert Robert K. Yin noted years ago in his classic book on research. But they fall short on determining causal relationships – that is, explaining why: the factors behind companies’ success at dealing with the issue you are studying.
Researchers who connect the dots are the ones who come up with groundbreaking insights. Those dots are the key factors differentiating the successful from the unsuccessful firms at solving the issue.
The problem with structured surveys is that they can’t provide enough dots to connect. They can provide some – especially if you insert a question that separates successful firms from unsuccessful ones on the topic. But they can’t provide the most important dots. Those only come from speaking to executives who were part of the successful or failed initiatives, and who can thus expand on what went right and wrong.
One way to get to those executives, especially those who are no longer at their firms (and thus should be able to speak more openly), is to find them through expert networks.
Expert Networks
These firms can introduce a research team (for a fee) to subject matter experts, including executives who used to work at a firm and are not under non-disclosure agreements that prevent them from talking about their knowledge of it. Consulting, investment, private equity, and law firms make heavy use of expert networks – and often NOT for the purpose of conducting thought leadership research.
One leading firm (Gerson Lehrman Group) has been in business since 1998. It has more than 1 million people in its expert network and in 2020 had $589 million in revenue and more than 2,300 employees, according to a public stock offering registration statement. (The company shelved its IPO in 2022.) Other expert networks include AlphaSights (1,500 employees), Guidepoint (1,500 employees) and Third Bridge (1,000 employees).
For thought leadership research, expert networks can fill in important details on companies you want to study. But since their experts are unlikely to still work at those companies (which is what gives them the liberty to talk about them), a big weakness of expert networks for thought leadership is that their knowledge may be dated.
But I see a much bigger weakness in using expert networks as the primary model for thought leadership research: Their experts have little career or employer motivation (beyond getting a fee) to talk about the practices of their former employer.
Why is an interviewee’s motivation important in thought leadership research? I’ve found that two of the biggest motivators in getting someone from a target audience to talk about their company’s experience on an issue are a) career development and b) helping their company gain a competitive advantage. By career development, I mean being associated with a successful initiative: talking about it and being published as someone who worked on it can be great career PR. But even if that isn’t a motivator – let’s say the initiative was a failure – providing perspectives to a research team can have rewards, as long as the company isn’t identified by name as a “bad practice” example. Such interviewees can be motivated by learning later (when the study is complete) what “best-practice” companies do differently.
That is one of the biggest motivators for a target audience in joining the third primary type of thought leadership research model: membership-based research consortia. Let’s look at this model.
Membership-Based Research Consortia
They need little introduction: Gartner, Forrester, Corporate Executive Board before its acquisition by Gartner, Hackett, and others. The model: get several dozen companies to pony up an annual fee for research on a topic of common interest, delivered in reports, webinars and meetings.
However, what needs much greater understanding are the research methods of the best of these consortia: getting members to open up about how they are dealing with the research topic on the docket, comparing best and worst practices, and determining which practices are most important to success.
Membership-based research consortia give an adequately sized research team access to the right people in the right companies: the people who are most knowledgeable about the issue or initiative under the microscope. That’s why I put them in the top right corner of the diagram: a research team has the greatest access to pursue open-ended, deep inquiries with highly knowledgeable people who’ve been personally involved in dealing with the research topic at hand.
For example, if you wanted to understand how Fortune 500 companies are using generative AI software, there would be no better way to study that than by paying for access to 100 of the 500 through a research group. The quid pro quo for research member companies would be this: The insights can only be as good as the access you provide to people in your companies who know the most about your generative AI initiatives. With 100 companies opening up internal doors to a research team, the learning would be significant, and the patterns of success and failure would be illuminating.
This is powerful research: digging to get at the truth of the matter. You can scope out only some of this when you design a research project. The late, great reengineering guru Michael Hammer was known for saying that halfway through a research project, he was only beginning to understand the most important questions to ask on a topic. Research consortia enable a research team to take deep dives into companies that have a big incentive to allow such deep-sea fishing expeditions!
It’s why case study-only thought leadership research – whether the companies were paying a membership fee or not – has led to some of the biggest management concepts of the last 30 years. Disruptive innovation (which began with studying disk drive companies), business reengineering (studying the best corporate users of IT in the late 1980s and early 1990s at a member-based research firm, PRISM) and companies that were “built to last” (Jim Collins’ case research on pairs of companies in several industries, one far more successful than the other) are just a few examples of groundbreaking, case study-based thought leadership research.
A caution: Much of the research that comes out of such consortia doesn’t produce groundbreaking ideas. I believe the blame should be placed not on the research method but on the research team’s skills. If your researchers aren’t analytical, creative, curious, and good communicators, they are not likely to be able to “connect the dots” or communicate those dot connections in a compelling, understandable way. In other words, they are not likely to generate a groundbreaking concept that is easily and memorably grasped.
Mixing the Models, and Creating a New One
In presenting only three models, I realize I am oversimplifying the options along these two dimensions. For instance, between drawing on expert networks and launching membership-based research consortia, a thought leadership research team can reach out to a few companies, a dozen, or even more, and ask for permission to interview them on the topic at hand.
Case studies derived from interviews and extensive secondary research can be a winning formula. To develop one of the blockbuster strategic planning concepts of the last 20 years (so-called blue ocean strategy), INSEAD professors W. Chan Kim and Renee Mauborgne investigated more than 150 strategic moves made between the years 1880 and 2000 by companies in over 30 industries. They compared successful moves into new markets (“blue oceans”) by new or existing companies against unsuccessful moves (into blue oceans or “red oceans” – already contested spaces) to determine the key factors in dominating the new spaces. (Their website will tell you more.)
Even far less case study research can produce important ideas that resonate in the marketplace. I have worked with several clients over the years on case study-only thought leadership research. I did this on behalf of a client (Deloitte Consulting) for a study it was doing on business model innovation in 2002. We spoke with the CEOs of several companies, including Paychex, Wellpoint Health Networks, and Dermalogica. We supplemented those case stories with extensive secondary research that explored the founding of Walmart, Enterprise Rent-A-Car, Southwest Airlines and other companies. It led to a big idea: Some of the most successful companies in the U.S. found a way to turn markets that others thought were marginally profitable or unprofitable into huge money-makers. The final result of that study – a cover article (“Bottom-Feeding for Blockbuster Businesses”) in a print edition of Harvard Business Review – attests to how big an insight it was.
We achieved a similar result for another client, Talent Dimensions, in 2018 after conducting case study interviews with current and former executives at such companies as LVMH, Ecolab and Vail Resorts. We also collected extensive secondary research that underscored the trend we highlighted: Some of the most successful companies believe their most valuable employees aren’t all in the C-suite. In fact, some of the most valuable jobs are held by external partners. That research also ended up in Talent Dimensions-authored articles in HBR and a key HR journal, HR Executive.
However, companies that are serious about competing on thought leadership should never rule out a membership-based research model, especially big companies that can afford the sizable ramp-up investments. Software companies, in particular, could easily add a membership-based research group to their annual user conferences – a place where paying members converge, before or after the conference, to discuss research results.
Using Surveys to Identify and Open Doors of Best-Practice Companies
If you continue using surveys as your main research method, you should use them not only to gather statistical data, but also to identify companies that you should interview.
Ask your survey company to secure and schedule interviews. It should use the survey data’s breakdown of “leaders” and “laggards” to identify companies in both camps and entice them into one-on-one interviews. Those interviewees will need to be promised anonymity. But for those who represent best practice, you should appeal to their personal and company PR instincts: few things promote a person and their company better than being published as a success story.
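If your own team handles the survey export, the leader/laggard split is simple to automate. Below is a minimal, hypothetical Python sketch: it assumes a CSV export with made-up column names (a 1-to-5 involvement screener and a 1-to-5 self-reported ROI score) and flags the companies at both extremes as interview candidates. It illustrates the segmentation logic only, not any particular survey platform’s format.

```python
import pandas as pd

# Hypothetical export: one row per respondent. Column names and the
# 1-5 scales are illustrative assumptions, not a real platform schema.
responses = pd.read_csv("survey_responses.csv")

# Keep only respondents who passed the knowledge screener
# (e.g., "I was directly involved in our generative AI initiative").
involved = responses[responses["involvement_score"] >= 4]

# Split on a self-reported outcome question
# (e.g., "How much measurable ROI has the initiative delivered?").
leaders = involved[involved["roi_score"] >= 4].assign(segment="leader")
laggards = involved[involved["roi_score"] <= 2].assign(segment="laggard")

# Hand both lists to whoever is scheduling the one-on-one interviews.
candidates = pd.concat([leaders, laggards])
candidates[["company", "respondent_title", "segment"]].to_csv(
    "interview_candidates.csv", index=False
)
```

However the list is produced, the point is the same: the survey’s job here is not to explain anything, but to surface the handful of companies worth a long conversation.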
Clients who can line up case study interviews provide a critical component to survey-led thought leadership research. Tata Consultancy Services did that for us over several of the dozen studies we conducted with them in the 2010s. In three of those studies, the clients that TCS brought to the table enabled us to develop big insights, which led to three TCS-authored Harvard Business Review articles.
Survey companies that also line up case study interviews can reduce their clients’ reliance on expert networks for such interviews. Smart survey companies in the future will realize that setting up these interviews is as important to their clients as getting panel participants to fill out surveys.
A survey company (Curious Insights) that worked with us on two studies for staffing and consulting firm RGP, on the optimal mix of insiders and outsiders for strategic projects, lined up more than a half-dozen interviews. RGP anonymized the case examples, but it also brought to the table one of its clients, the then-CFO of a U.S. unit of a global pharmaceutical firm (Bayer). These case study interviews, which included a discussion with a Hollywood movie production manager, helped RGP come up with big insights on the composition of successful project teams – the biggest of which was that successful teams had more “outsiders” (non-employees) than the least successful ones, and that those outsiders brought critical expertise that employees didn’t have. The research landed RGP’s CEO a 2023 article in Harvard Business Review.
There’s another research option emerging for gathering case studies, and it’s based on generative AI. I will write about it soon. I predict it will enable thought leadership researchers to learn directly from dozens or hundreds of companies about certain initiatives, and to compare best and worst practices. But that’s all I’m ready to say about it now.
Whether case study research comes from interviewing companies directly, from piecing together their practices from what is written by and about them on the Internet – or, preferably, both – it is the wellspring from which researchers can create big ideas for solving complex business problems.
