Award-winning business author Chunka Mui is keenly focused on generative AI’s impact on thought leadership and on how innovation can be used to fight climate change and address social problems. Among his influential books, he co-authored the late-1990s bestseller “Unleashing the Killer App: Digital Strategies for Market Dominance.” The Wall Street Journal named it one of the five best books on the business impact of the internet.
His latest book, “A Brief History of a Perfect Future: Inventing the World We Can Proudly Leave Our Kids by 2050,” focuses on how technologies whose costs are dropping dramatically can be harnessed to solve grave social and environmental problems.
With five books and 25 years as a keynote speaker, Chunka is regarded as a top expert on the current and future business implications of emerging technologies.
Everything Thought Leadership host Bob Buday and Chunka have known each other for over 30 years, going back to when they both worked for CSC Index. The two thought leaders reunited to discuss Chunka’s career, the biggest changes he has seen in the thought leadership profession, how business and thought leadership have been impacted by generative AI, and how organizations can (and should!) use thought leadership to address social issues.
Transcript: Chunka Mui and Bob Buday
Bob Buday: Hello, Chunka, it is great to have you on this show. We have known each other for at least 30 years, going back to the time when you joined CSC Index back in 1991-92.
You’ve been in the world of thought leadership for more than 30 years. I remember the old Index (and then CSC) Vanguard research and advisory program. It was about the potential business implications of emerging technologies. You had an all-star group of technologists in that research service.
Since then, you’ve co-authored five books, including several bestsellers, and you’ve been a frequent speaker on technology-fueled innovation. If we take a step back from all that, what do you think are the biggest changes you’ve seen in how people like you research and develop your ideas? And then the changes in how you present them?
Chunka Mui: I’ll claim to be in the business of “thinking leadership.” I’ll let other people evaluate [whether it’s thought leadership]. I think it’s been an up-and-down experience.
Looking back on the last 30 years or so, I think there are two drivers of the stuff we’ve done. One, which I talked a lot about in my last book and call the “Laws of Zero,” is like the cost side of [Gordon] Moore’s law — applied to all information technology.
The costs of doing the things we do are decreasing while the capabilities are growing exponentially. The costs of research, of publishing, of amplification, of getting that information out there – all have dropped precipitously. We can talk more about the positive and negative implications of that.
We have these powerful tools that amplify what we do. Of course, they amplify what everybody else does too, and that’s the other part of it, which I think is less positive. Gresham’s Law applies to us as well: the bad forcing out the good, the overwhelming noise that comes into the world, enabled in large part by the dropping costs of resources.
Thought leadership used to be about actual thought leaders. It was so hard to do, and there were natural mechanisms for separating the powerful ideas from everything else. Now that’s a lot harder. There’s a lot of thinking that isn’t really thoughtful but is contending in the same space.
Information Overload
Bob: The cost of becoming a thought leader has dropped precipitously. It used to be that you had to have a bestselling book, and a firm behind you to afford the development costs of that book. Now you can stand up a book on your own and self-publish it; you can create a YouTube channel or a LinkedIn channel and get hundreds or thousands of viewers. The platforms by which we disseminate the ideas are free.
Chunka: Creation [of the content] is a lot easier. Publishing is a lot cheaper. And as you just said, packaging and getting things out there … the natural filters have dropped away. Now we’re all inundated with information. Separating the good from the bad is not easy for the average consumer.
Bob: What does that mean for people who look at you and your career and say, “I want to be Chunka Mui; he’s had a great career”? What would you tell your younger self today?
Chunka: There are a lot of paths to the things we do. You came in from the writing and editorial point of view, and I just happen to write about the work that I did. I would do all sorts of versions to crystallize my thinking and help other people learn from it.
But I never really thought of myself as being in the business of thought leadership; I thought writing was a way of creating the artifact around my work. From that standpoint, I think that’s the best way to approach this work: to be good at something and then write about it.
AI’s Business Impact
Bob: You’ve proven that it works. That leads to our next topic — about generative AI, and large language models (LLMs). Let’s put aside thought leadership for a second: What do you see as generative AI’s overall impact on business? And then what do you believe will be its impact on thought leadership?
Chunka: I think the effect is going to be astronomical. I just wrote a piece a few days ago that compared the lessons from the dot-com era [of the late 1990s and early 2000s] — the boom, the bust, and the rebirth — to the current AI boom. I think we will have an ample supply of great strategic opportunities, overwhelming hype, and inherent limitations that are not necessarily understood.
Every industry and every company needs to separate the opportunity from the hype and understand the limitations in the context of their own business problems. [Generative AI] is a broad, horizontal set of technologies that doesn’t have one answer for everybody, and doesn’t have one calculus for everybody in terms of good, bad, or indifferent. You have to dive deeply into what it means for you.
Within that will be lots of opportunity. One thing I noted in my article was that if you look at the boom and the bust in the dot-com era and the kinds of companies that were created, there’s a shortlist of companies that are worth trillions of dollars today. But we also lost about a trillion dollars from the peak of the boom to the bust. So it’s not all good, and it’s not all bad. The question is, “What is it going to be for you?”
That has implications for thought leadership — not just for thought leadership as a profession or industry, but also for consumers of thought leadership.
Bob: There are more dead startups from the 1990s than there are thriving startups.
Chunka: There are always more dead startups than thriving startups at any time. That’s the whole nature of startups. There’s value creation, and there’s value loss. I think what happens in these periods of exuberance is that the opportunities for massive value creation go up, but so do the risks. That’s why we call it beta: It’s risk. And higher risk creates the opportunity for higher returns.
Understanding AI’s Risks
Bob: Your article was great. Would you then say that a lot of people are going to jump into generative AI and make a lot of money by making business process improvements? And that a whole bunch of folks are going to use it and squander billions of dollars?
Chunka: Both will be true. Even for the people who have the best understanding of it, there’s no guarantee of success. But the people who walk into it without a deep understanding are going to fail.
The other thing that’s important to remember, since you’re speaking to an audience of corporate executives, is that the risk/reward ratio and investment profile for venture capital are very different than they are for a corporation. A VC will invest in 100 companies and hope that two or three will overwhelm the losses of the others. But you don’t have the same portfolio approach in a corporate organization.
The thing that bugs me the most in these times is when somebody says, “Look at all the VC money going into this field!” You have to remember that those really smart venture capitalists know that 90% of those investments are going to fail. Do you have that opportunity as a corporate innovator? No, you don’t. You’ve got to be much more judicious.
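To put rough, purely illustrative numbers on that difference (the figures below are assumptions, not from the interview), here is a quick sketch of why VC-style odds don’t transfer to a corporate innovation budget:

```python
# Illustrative arithmetic (made-up figures) comparing a VC portfolio with a
# corporate innovation budget. The point: a VC can absorb near-total failure
# rates because a few outliers pay for everything; a corporation making a
# handful of bets cannot.

bet_size = 10          # hypothetical $10M per investment
portfolio = 100        # a VC backs 100 companies
winners = 3            # roughly 3 outliers succeed
winner_multiple = 60   # each winner returns 60x its investment

vc_invested = portfolio * bet_size                  # $1,000M deployed
vc_returned = winners * bet_size * winner_multiple  # $1,800M returned
print(f"VC: invests ${vc_invested}M, gets back ${vc_returned}M "
      f"despite {portfolio - winners} failures")

# A corporation making only a few bets at the same odds should expect
# well under one winner, i.e. most likely a total loss.
corporate_bets = 3
expected_winners = corporate_bets * (winners / portfolio)
print(f"Corporation: {corporate_bets} bets, ~{expected_winners:.2f} expected winners")
```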
Bob: One or two failures in a corporation could be the end of somebody’s career, basically.
Chunka: Oh yeah. Absolutely. And that shouldn’t be the case. And that’s not why I’m saying this. Corporate innovation only gets so many chances, and you have to expect some failures there. But we’re not talking 98 out of 100 failures. It’s a different ballgame.
A Potentially Dangerous ‘New Intermediary’
Bob: Let’s talk about the impact of generative AI on this field of thought leadership. How could it impact the way content is researched, developed, delivered to the marketplace – and the way viewers use it?
Chunka: I think it’s important in all these cases to first ask this question: “How might it change reader or consumer behavior and preferences?”
One of the things [that generative AI software represents] is a new intermediary, because of the nature of what LLMs work on and what they do. They will have an impact on thought leadership that’s greater than what Google News did to the newspaper industry. You essentially have a new intermediary that’s going to manage or adjudicate who gets attention and who doesn’t. And it will change the economics of who extracts value from that content.
It could even have a potentially more harmful effect. This new intermediary will not only control the search space; it’s going to control interpretation. It’s going to do its own creation of that interpretation. It’s going to suck all this [thought leadership content] up. The first instance will be to tell the user, “Here’s generally how to think about it.” It won’t be “Go read this, that or the other thing,” or even what this, that or the other thing said about it.
It will be “Let me give you my regurgitation of the average response to this thing.” The danger is it’s so articulate in doing that, for a lot of readers it will be good enough. So [for these readers] there’s no need to go to the equivalent of The New York Times or the Washington Post or anything else.
[Generative AI software] will have its own product for you; that’s what it does. Content creators now have this problem of something else creating content that’s almost as good or sometimes better than theirs. And it’s going to do it with no punctuation errors and no grammar flaws. It’s going to sound good.
But it might misinterpret what you said and tell the reader, “[You] said this,” when you didn’t. So you have all these problems of somebody else — an 800-pound gorilla — sitting there. No matter how big you are, what top-tier professional services firm you are, or whatever you’re the head of, there’s this massive gorilla now sitting between you and your customers.
Well-Written Hallucinations
Bob: I’m trying to imagine this. So instead of saying, “Well, on the topic of, say, digital strategy for telecommunications companies, I want to see what McKinsey or Bain or BCG has to say about this,” I’ll just type a question into ChatGPT: “What’s the best thinking on this topic of digital strategy for telecommunications companies?”
And maybe McKinsey will be mentioned. Or maybe Bain, or maybe Accenture, or whomever. But ChatGPT will come up with its own answer.
Chunka: Yeah, it comes up with its own answer. It could also tell you that “McKinsey said X,” when they said the opposite of X, and everybody else said the opposite of X. It just guessed. The first few times I used ChatGPT, I asked it about myself: “What did [Chunka Mui] write? What did that book say?” I was amazed. I found out later that my books were in its training data. I was amazed at what it got right.
But there are times where it wasn’t so right. In fact, I asked it at one point, “What’s the most-read article that Chunka Mui ever wrote?” It played back something on the order of “He wrote this article with Mohanbir Sawhney at Northwestern University, and it was about X, Y, and Z.” Mohan Sawhney is a good friend of mine. We’ve never written a paragraph together. We certainly didn’t write that article. What it told me about the article it just made up.
Some people call that a hallucination. I call that a really, really big mistake. But it was very articulate. Anybody else who asked that question would walk away saying, “Oh, wow, this guy had this really big article, and this is what it was about …” They would have walked away with a completely different impression of my views on some critical topics. Everybody has that risk.
Bob: Do you think most people will believe what ChatGPT or its competition tells them, as opposed to saying, “I don’t remember an article by Mohan and Chunka. I’m going to go on Google and see if they actually did.” You’re saying most people won’t take that step, right?
Chunka: If you build trust, you’re not going to double-check everything. You might think, “Here’s something so simple; how could that be wrong?” There’s lots of debate about how good or bad these things are. Most people come out saying, “It’s pretty amazing.” But one thing we all have to keep in mind is that this set of AI tools, unlike other kinds of AI tools, has no notion of semantics. It has no notion of right or wrong. It just [uses technology] probabilistically to determine the most important information to point to as the likely answer to the question.
One of the impacts of “The Laws of Zero” is that some startup can afford to take all of the internet and everything we know and compress it into one dataset at a cost of a billion dollars. Depending on how you count, 10 years ago it would have cost them $100 billion, so it was impossible to do. In 15 years, it’ll cost as much as your Starbucks latte. That is really powerful, but it will still have no notion of right or wrong, true or false, or semantics and common sense. Some of those problems will be solved.
But if you’re in the business of thought leadership, nuances matter, and we can talk more about that. One of the things I’ve been working on as a writer is to think about: How do I go up the stack and get involved in the interpretation of what I write, so that people can go to a large language model and get what I wanted them to get from my work? That is, unfortunately, not an easy task.
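A toy sketch of the probabilistic point Chunka is making (the training text and model here are deliberately trivial, purely for illustration): the system only learns which words tend to follow which, then samples from those observed frequencies. Nothing in it checks whether the output is true.

```python
import random
from collections import defaultdict

# Toy illustration of probabilistic text generation: the "model" only records
# which word follows which word in its training text, then samples from those
# observed frequencies. It has no notion of truth, semantics, or sources.

training_text = (
    "thought leadership is about big ideas "
    "thought leadership is about solving big problems "
    "big problems create big opportunities"
)

# Build a bigram table: word -> list of observed next words (duplicates preserve frequency).
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    output = [start]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:  # no observed continuation for this word
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("thought"))
# Possible output: "thought leadership is about solving big problems create big"
# It reads fluently, but nothing checked it for correctness.
```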
Making LLMs Work For You
Bob: What I hear you saying is: How do I use the large language model to help me and my readers understand my ideas, maybe versus others’ ideas on the same topics, rather than being at the total mercy of the large language model? And how do I show up on the searches I should be showing up on, stay off the questions I shouldn’t, and show up correctly?
Chunka: Well, I wish I could do that. But I can’t. There’s no equivalent of search engine optimization here. But there is the opportunity for me to say to a reader, and this is now in the ChatGPT app store, that I have a guide to one of my books, so that you can ask it questions about my book. I’ve structured the content in a way that it can search it, or I’ve instructed it to try to search it in the right way.
That is really powerful in two ways. One is that I can have some confidence that it will interpret my work [correctly], as opposed to “Read the article that I wrote with Mohan.” The other is that it extends my work, because readers can ask it questions like, “How does his book compare to that other book?”
I don’t know if it’s getting that other book right. But I’m more confident that it is getting my book right. You can also do things that are really interesting. As a corporate innovation adviser, you can ask, “How do I relate what this book is saying to my business problem?” Hopefully, that will be more right than the alternative, where, essentially, it tries to cheat: you ask it about me, and it doesn’t really search deeply into me. It’s the equivalent of browsing a few paragraphs and guessing at what the rest of my answer would be.
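Chunka doesn’t describe his exact setup, but one common way to do the kind of structuring he’s pointing at is retrieval: break the manuscript into labeled passages, score each passage against a reader’s question, and hand only the best matches to the language model so its answer is grounded in the author’s own words. A minimal sketch of that idea, with placeholder passages and a placeholder question rather than his actual book:

```python
# Minimal retrieval sketch: rank manuscript passages by relevance to a reader's
# question so an LLM can be prompted with the author's own words instead of
# guessing. Passages, labels, and the question are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = {
    "ch1-laws-of-zero": "Key technology costs keep falling toward zero, which "
                        "changes what problems become solvable.",
    "ch2-future-histories": "A future history is a written vision of a plausible "
                            "2050, worked backward into steps you can start today.",
    "ch3-energy": "Abundant clean energy reshapes transportation, healthcare, "
                  "and climate strategies.",
}
question = "How do I write a future history for my own industry?"

# Score every passage against the question using TF-IDF cosine similarity.
n = len(passages)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(passages.values()) + [question])
scores = cosine_similarity(matrix[n], matrix[:n]).flatten()

# The top-ranked passages would be pasted into the LLM prompt as grounding
# context, along with instructions on how to interpret them.
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for label, score in ranked[:2]:
    print(f"{score:.2f}  {label}")
```

The design point, on this reading, is that most of the value sits in the structuring work Chunka describes: how the source material is chunked, labeled, and surfaced, rather than in the model itself.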
Bob: What would that look like? Does it mean I’m reading your interactive book and trying to figure out how the ideas apply to my business? If I worked at a Fortune 500 chemical company, would I be able to determine what it means to us?
Chunka: In our book, we talk a lot about how to write what we call “future histories”: visions of particular kinds of business situations in the future, given certain technological assumptions. It can do that now because it’s an LLM. It can give you a draft of a future history in your industry, something we might never have thought about. But we did write about things like: How does computing change? How does information change? What are the possibilities of genomics in healthcare? Stuff like that.
Then it can try to generalize and apply. I would say it’s probably a good start. It’s not a final answer. But it’s a good start — something that we didn’t write a chapter on.
I’ve put a lot of attention into how high school teachers could use the content here to teach certain topics. It’s something we never even talked about when we were writing the book. But it’s a natural extension. I think of it as if you were talking to a well-read reference librarian. You walk in and say, “You haven’t read this book, but what does it have to say about XYZ?” The person is not going to try to represent themselves as me, but they can give a starting point for answering that question, which is something the base model would never have done. I think over time — the same way that when we write a book we write author guides, book club materials, audiobooks, and videos — this is a natural extension, the next stage of the work you do after you write a book.
Bob: Do you look at it as a tool in which a thought leader who’s published a book, a big research study, or a big article or white paper can get into an interactive discussion with a prospective client?
Chunka: Absolutely. People have been talking for a very long time about active books — books that are not just words on paper, but ones in which you can go into them in different modes. This is an exciting step in that direction. It’s a way of drawing out the thinking of the authors and the research team.
The way you approach something like this is asking yourself questions: What’s this book really about? It’s all the questions that a great interviewer like you might ask an author. You try to make those explicit. And you try to train the model in those larger set of point of views about this work, so it can interpret the work in about a way I think could be powerful. I think it’s a natural next step for crystallizing thinking.
Bob: How many hours would that take?
Chunka: It’s not trivial. You have to structure your book differently. The worst thing to do is to just upload a document and ask questions about it. I guarantee you, if it’s your document, you will be very disappointed by the answers to your questions and in the capabilities today. Now, the consumer may not be as disappointed, because they haven’t read it and the answer will seem very articulate. That’s the problem we have here: the ability to be very articulate but not right.
It’s just another tool in our tool bag now. You can hire a programmer; we don’t all have to do everything. And the team structuring that knowledge would include a book designer, copywriters, and marketing people.
Thought Leadership and Competitive Advantage
Bob: Let’s talk about the role of thought leadership on topics at this intersection that you love, which is competitive advantage for an organization, the purpose of organizations, why they are in business, and societal purpose.
Chunka: I’d say it’s more about the role of thought leadership, and how I think thought leadership is synonymous with the drivers of innovation in organizations and for their customers. How do you push the natural momentum toward faster, better, cheaper in what you’re doing already – and toward new opportunities?
I was head of innovation at various firms a long time ago. I never reported to marketing. Thought leadership isn’t marketing. The role of thought leadership is to explore the boundaries, not “How do I package something to sell what I’m selling today?” It’s “How do I push the organization?”
I always thought of myself as having two markets: the external market, which usually was asking “What is that big idea?” and the internal market, which sort of said … “Just go sell what I’m trying to do today.” Astro Teller, who’s head of the moonshot factory at Google’s parent company Alphabet, has this great line: “If I can solve a trillion-dollar problem, I’m pretty sure there’s a really large business opportunity there.”
So first I’m going to ask: What’s the trillion-dollar problem I can solve? That question should be the motivation for everybody who wants to be in thought leadership. What’s the really big problem in your industry, your markets, or for your customers that you can solve now, or, better yet, that you could solve in 10 years because of the Laws of Zero if you start on it today? Because if you don’t start on it today, your company won’t solve it in 10 years, or five years, or whatever it takes — whether that is addressing climate change, healthcare, transportation, or education.
There are these big looming opportunities out there. If we get ahead of the game, we’ll be ahead of the competition. That’s a great opportunity for thought leadership.
