Sundar Pichai on Google’s AI, Microsoft’s AI, OpenAI, and … Did We Mention AI?

The tech giant is 25 years old. In a chatbot war. On trial for antitrust. But its CEO says Google is good for 25 more. 
Photograph: Gabriela Hasbun

Earlier this month, Sundar Pichai was struggling to write a letter to Alphabet’s 180,000 employees. The 51-year-old CEO wanted to laud Google on its 25th birthday, which could have been easy enough. Alphabet’s stock market value was around $1.7 trillion. Its vast cloud-computing operation had turned its first profit. Its self-driving cars were ferrying people around San Francisco. And then there was the usual stuff—Google Search still dominated the field, as it had for every minute of this century, and the company was sucking up almost 40 percent of all global digital advertising revenue.

But not all was well on Alphabet’s vast Mountain View campus. The US government was about to put Google on trial for abusing its monopoly in search. And the comity that once pervaded Google’s workforce had frayed. Some high-profile employees had left, complaining that the company moved too slowly. Perhaps most troubling, Google—a long-standing world leader in artificial intelligence—had been rudely upstaged by an upstart outsider, OpenAI. Google’s longtime rival Microsoft had beaten it to the punch with a large language model built into its also-ran search engine, Bing, causing panic in Mountain View. Microsoft CEO Satya Nadella boasted, “I want people to know we made Google dance.”

Pichai’s letter, released on September 5, was buoyant, designed to inspire, and almost giddy in its discussion of the company’s astonishing journey. But behind the cheerleading, you could detect a hidden leitmotif. We matter more than ever. Despite what they say. One point pops up repeatedly: We are not going to lose in AI.

Pichai—who joined the company in April 2004, the same month Gmail launched—has been CEO for eight years. He speaks often of growing up in India, where technology provided a lifeline to better times. He’s widely recognized as a “nice guy.” But over the years he has made his share of tough decisions, including layoffs, product cancellations, and reorgs, like his recent forced merger of Google’s two semi-competing AI research centers, DeepMind and Google Brain. Now he faces even bigger decisions as the company withstands challenges inside and out—all while pursuing what Pichai calls “the biggest technological shift” of our lifetimes.

Just before releasing his blog post, Pichai spoke to WIRED about AI, fighting bureaucracy, and why he rejects the characterization that he is mainly a consensus builder. The interview is edited for length and clarity.

Steven Levy: You’ve just shared a note marking 25 years of Google. It’s upbeat and inspirational, but am I right to see a subtext here? It seems you’re rallying the troops around the idea that Google still exists to build technology for the world’s benefit, even though some people might be questioning that now.

Sundar Pichai: It’s definitely a reflective moment. Twenty-five years is a long time in the technology world. But I'm convinced that with the shift to AI, there’s a golden age of innovation ahead. As a company, we have as big an opportunity as we had 25 years ago, and a lot more responsibility. I hope to convey to the company that we should balance being bold and responsible, and meet that moment with excitement.

OK. But let me share a narrative that I’m sure you’ve heard: Google has always been a leader in AI. But in the past couple of years, despite building AI into products, it was too sclerotic or cautious to seize the moment, and other companies have taken your ball and run with it. When OpenAI and Microsoft came out with consumer large language models, Google was caught flat-footed and now is scrambling to catch up. What's your reaction?

You’re right that we’ve been thinking about AI from the very beginning. When I became CEO in 2015, it was clear that deep neural networks were going to profoundly change everything. So I pivoted the company to be AI-first, and that’s where we directed a lot of our R&D dollars. Internally, we had our own LLM, LaMDA, and obviously we were thinking about bringing it into large consumer products. But we definitely felt that the technology needed to mature a bit more before we put it in our products. People come to us with a huge sense of trust—they come to Google and type, “What Tylenol dosage for a 3-month-old?” You can imagine the responsibility that comes with getting it right. And so we were definitely a bit more cautious there.

So credit to OpenAI for the launch of ChatGPT, which showed product-market fit and that people are ready to understand and play with the technology. In some ways, it was an exciting moment for me, because we are building that underlying technology and deploying it across our products. But we are still being deliberate where we need to be. The technology arc is long, and I feel very comfortable about where we are.

You had the tools and talent to put out something like GPT earlier than OpenAI did. In retrospect, should you have done it?

You can go back and take another look at pretty much everything. It’s not fully clear to me that it would have worked out as well. The fact is, we could do more after people had seen how it works. In five to 10 years, it really won’t matter. It’s important to look at the signal and separate it from the noise. The signal is that AI is a profound platform shift, and it’s getting to a stage where you can deploy it more deeply. We are doing that to solve real problems, with a sense of excitement and optimism and responsibility. That, to me, is the signal. That is the opportunity.

After Microsoft put a version of ChatGPT into its Bing search engine, Google hastened to release its own version, Bard. Did Nadella make you dance?

In cricket, there's a saying that you let the bat do the talking. We have been innovating on AI, and also applying AI to search, every year. There’s always been competition. We've seen Alexa launch and Siri launch—this is not new. Around the end of last year, my thoughts were, how can we bring generative AI to search in a way that makes sense for our users? That’s what I'm thinking about, and that's what will matter in the long run.

I’m glad you mentioned search. The basis of Google Search—and almost your entire revenue stream—is that people query the search engine and find relevant links that they visit, and maybe spend money there. But your plan to use LLMs in search, called SGE, or Search Generative Experience, doesn’t send people to websites. You type a query into a Google Search bar, and SGE answers with a big block of text. How do you do that and not blow up your business model?

First of all, in search, people come looking for information. Over the past many years, how we present that information has dramatically evolved. But we are still trying to help people find the best information that exists online. Inherently, people are also looking for commercial information, and ads are very valuable commercial information, because they connect merchants and businesses, small and big, to users. None of that changes just because we are applying AI deeply. When we evolve search with generative AI, we’ll apply the same principles. It’s important to us to connect users with what’s out on the web, and we are working deeply to make sure that continues to work well.

But if I do a search by prompting an LLM, I’m going to get something quite different from a series of links. How will I know whether it’s sponsored or organic?

You would see the same thing. Even in a generative experience, we would give you a set of sites that support what we are saying. We want to make sure users are consuming those sites. So I don’t think the core part of the experience will change. We will have a space for ads in a way that makes sense for users, particularly on commercial queries. Our early testing shows that we’ll be able to get it right. When we shifted from desktop to mobile, people asked versions of these same questions. It’s core to the company to evolve search while applying the underlying principles. I am confident we’ll be able to get that right through this transition.

For years, DeepMind and Google Brain operated as different entities, maybe even competitive entities. This year, you ordered them to merge. Why? And are you seeing the fruits of that merger?

I always felt fortunate we had two of the best AI teams on the planet. They were focused on different problems, but there was a lot more collaboration than people knew. Google worked very hard on making sure we provided TPUs [Tensor Processing Units, optimized for machine learning] to support AlphaGo [a DeepMind program that beat the world champion of the intricate game Go]. We realized we needed to build larger-scale LLMs, so it made sense to come together so that we could be more efficient in our use of compute. [DeepMind’s LLM] Gemini actually started as a collaborative effort across these two teams. And [Google Brain leader] Jeff Dean had a desire to reclaim a deep engineering and scientific role. I’ve spent time with the teams both in the UK and in Mountain View, and I’ve been thrilled to see the Gemini teams working closely with Google Search as I walk through the halls. I felt a sense of excitement that reminded me of the early days of Google.

The large language model winner in this merger seems to be DeepMind’s Gemini, which you are positioning as a next-generation LLM. What will it do that the current generation doesn't do?

Today you have separate text models and image-generation models and so on. With Gemini, these will converge.

Meanwhile, we haven’t heard much about Google Assistant. Should we issue a missing persons alert?

Part of the reason we built the conversational LLM LaMDA was that we realized we needed to improve the underlying technology of Google Assistant. AI will make Google Assistant fundamentally better.

The US government is putting Google on trial for alleged antitrust violations regarding what it calls your search monopoly. You might not endorse that term. So how would you describe the company’s dominance in search?

The case is happening at a time of unprecedented innovation. Step back and look at the recent breakthroughs in AI, in new apps, in the options people have to access information. We make literally thousands of changes every year to improve search. We invest billions to constantly innovate and make sure the product works well for people and that it’s a product people want to use. I’m looking forward to the opportunity to make that case. It’s an important, important process.

So you’re saying we should view this in a broader sense than just market share?

Think about all the ways people today get to access information. It's a very dynamic space, it's a broad space. We have to work hard to constantly innovate, to stay ahead.

If you weren't able to make deals to become the default search engine on third-party browsers and phones—something the government is objecting to—what would be the impact on Google?

We want to make it easy for users to access our services. It’s very pro-consumer.

Earlier you mentioned your in-house AI chips. Google Cloud, the enterprise service, recently announced its first profit, and a big part of a cloud service now is supporting AI. I find it interesting that you maintain a large partnership with Nvidia, whose GPU chips seem to be a critical, if not irreplaceable, component of the AI ecosystem. How important is it for you to preserve good relations with Nvidia? Do you think it’s dangerous for one company to have so much power?

We've had a long relationship with Nvidia for well over a decade, including working deeply on Android. Obviously, with AI, they've clearly demonstrated a strong track record of innovation. Many of our cloud customers are Nvidia customers, too. So the collaboration is very, very critical. Look, the semiconductor industry is a very dynamic, competitive industry. It’s an industry that needs deep, long-term R&D and investments. I feel comfortable about our relationship with Nvidia, and that we are going to be working closely with them 10 years from now.

You—and much of the industry—profess to welcome AI regulation. What do you think the regulation should include? And what regulation would you see as stifling innovation and thwarting the benefits of the technology?

The first and foremost thing I think you need to get right is making sure that regulation is a collaborative effort among the public sector, the private sector, nonprofits, and so on. It’s important to let innovation flow and make sure anything you’re designing isn’t onerous on small companies or people doing open source. Then you can consider initial proposals: How do you test the cutting-edge models? What does safety testing look like? We should set up industry standards and benchmarks. You should also think about how systems will be deployed. They’re obviously going to be deployed in a wide range of scenarios, from recommending a nearby coffee shop to deciding what insurance people should get, or maybe making a medical care decision. So obviously, it makes sense that they’re tested for safety and don’t have bias, and it makes sense that they protect privacy. But I would balance it by asking whether existing regulations already cover it. Using AI in health care, for example, doesn’t change the fact that you must go through a regulatory process, including getting approved by the Food and Drug Administration to do a lot of things. And for me, with US regulations, we should actually get federal privacy legislation done first. In privacy, AI raises the stakes even more.

OK, so I'll put you down for strong privacy regulation in Congress.

Yeah. We've called for it, and it'll definitely be good to get.

We're talking about AI in a very nuts-and-bolts way, but a lot of the discussion centers on whether it will ultimately be a utopian boon or the end of humanity. What’s your stance on those long-term questions?

AI is one of the most profound technologies we will ever work on. There are short-term risks, midterm risks, and long-term risks. It’s important to take all those concerns seriously, but you have to balance where you put your resources depending on the stage you’re in. In the near term, state-of-the-art LLMs have hallucination problems—they can make things up. There are areas where that’s appropriate, like creatively imagining names for your dog, but not “what’s the right medicine dosage for a 3-year-old?” So right now, responsibility is about testing for safety and ensuring the technology doesn’t harm privacy or introduce bias. In the medium term, I worry about whether AI displaces workers or augments them in the labor market. There will be areas where it will be a disruptive force. And there are long-term risks around developing powerful intelligent agents. How do we make sure they are aligned to human values? How do we stay in control of them? To me, those are all valid concerns.

Have you seen the movie Oppenheimer?

I'm actually reading the book. I'm a big fan of reading the book before watching the movie.

I ask because you are one of the people with the most influence on a powerful and potentially dangerous technology. Does the Oppenheimer story touch you in that way?

All of us who are in one shape or another working on a powerful technology—not just AI, but genetics like Crispr—have to be responsible. You have to make sure you're an important part of the debate over these things. You want to learn from history where you can, obviously.

Google is an enormous company. Current and former employees complain that bureaucracy and caution have slowed them down. All eight authors of the influential transformer paper, “Attention Is All You Need,” which you cite in your letter, have left the company, with some saying Google moves too slowly. Can you mitigate that and make Google more like a startup again?

Anytime you’re scaling up a company, you have to make sure you’re working to cut down bureaucracy and stay as lean and nimble as possible. There are many, many areas where we move very fast. Our growth in Cloud wouldn’t have happened if we hadn’t scaled up fast. I look at what the YouTube Shorts team has done, I look at what the Pixel team has done, I look at how much the search team has evolved with AI. There are many, many areas where we move fast.

Yet we hear those complaints, including from people who loved the company but left.

Obviously, when you're running a big company, there are times you look around and say, in some areas, maybe you didn't move as fast—and you work hard to fix it. [Pichai raises his voice.] Do I recruit candidates who come and join us because they feel like they've been in some other large company, which is very, very bureaucratic, and they haven't been able to make change as fast? Absolutely. Are we attracting some of the best talent in the world every week? Yes. It’s equally important to remember we have an open culture—people speak a lot about the company. Yes, we lost some people. But we're also retaining people better than we have in a long, long time. Did OpenAI lose some people from the original team that worked on GPT? The answer is yes. You know, I've actually felt the company move faster in pockets than even what I remember 10 years ago.

You’ve been CEO for eight years now, and the pressure has never been greater. You’ve been known as a consensus builder, but the time seems to call for a “wartime CEO.” Does that role resonate with you?

I’ve always felt that we work in a dynamic technology space. So this notion of peacetime/wartime doesn’t fully resonate with me. In a given week, you can have both those moments. A lot of decisions I made over many, many years were not about consensus building. There’s a difference between making clear decisions and getting people to come along with them. What I’ve done this year is no different from what I’ve done over the past many years. I’ve always been focused on the long term, and I’ve never forgotten what gives Google its strengths: It’s a deep technology, computer science, and AI company, and we apply that to build great products that make a difference for people. We do this across a much more diverse set of areas now. That doesn’t change over time.

Three years ago, I asked you whether Google was still Googly, and you said yes. As the company continues to grow and age, what can you do to maintain its Googliness?

Being Googly is about staying true to our values, making sure we are working hard to innovate using deep computer science, and making products that really matter to people in their daily lives. As long as we keep that in mind, I think we'll be set.

In your 25th anniversary letter, you evoke your roots, growing up in India where technology was at a premium. You’re now the CEO of a trillion-dollar company and a very rich man. How do you maintain the connection to that person who first came to the United States?

In my personal experience, access to technology was an important driver of opportunity. I saw that in my life, and I’ve seen it in countless others. What inspired me to join and be a part of Google was the mission statement, which was about making information universally accessible and useful. With AI, it’s even more important to democratize access to what will be one of the most profound technologies we have worked on. So I’m deeply motivated to make sure we develop this technology in a way that the entire world benefits. Personally, when I was in India, every weekend I used to spend time with my parents, and my mom would make my favorite food—dosas, South Indian crepes. I still do that pretty much every Saturday morning. My mom makes them for me. I keep things simple.

