OpenAI builds apps into ChatGPT, in a bold bid to make AI the ‘universal interface’ to our digital lives

Hello and welcome to Eye on AI. In this edition: Jeff Bezos calls AI a ‘bubble’ of a kind…Anthropic’s Claude Sonnet 4.5 shows an alarming self-awareness…Microsoft demonstrates an AI-powered bioterrorism “zero day”…and is the U.S. economy now just “one big bet on AI”?

I want to tie together two big pieces of news from yesterday that, at first, might not seem related, other than that they both involve OpenAI.

The first was the announcement of a major new strategic partnership between OpenAI and chipmaker AMD. The deal will see AMD provide its Instinct MI450 graphics processing units (GPUs) to OpenAI’s data centers beginning in the second half of 2026. OpenAI will use these chips primarily for inference—that is, for running its AI products such as ChatGPT, Sora, and its API, rather than for training new models.

OpenAI has committed to eventually purchasing six gigawatts’ worth of AMD chips, and AMD has granted OpenAI warrants that could, if certain conditions are met, give OpenAI rights to up to 10% of AMD’s shares. AMD’s stock initially jumped by more than a third on the news before closing up about 26% for the day. My colleague Sharon Goldman had more on the deal here.

The second news item yesterday was OpenAI’s announcement of a new apps feature in ChatGPT. This essentially allows ChatGPT to easily call on other services, such as Spotify for music, or Expedia for travel inquiries, and deliver responses from those apps directly into the chat.
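Under the hood, the Apps SDK that powers this feature is built on the Model Context Protocol (MCP), an open standard for exposing tools and data to AI models. As a rough illustration of the pattern (not OpenAI’s actual integration; the server name, tool, and canned data below are hypothetical), here is a minimal tool server written with the official `mcp` Python SDK:

```python
# A minimal sketch of an MCP tool server, the kind of endpoint an app
# developer might expose to a chat assistant. The "travel-demo" name,
# the tool, and the canned data are hypothetical, for illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-demo")

@mcp.tool()
def search_hotels(city: str, max_price_usd: float) -> list[str]:
    """Return hotels in `city` priced at or below `max_price_usd` a night."""
    # A real app would query its own backend here.
    catalog = {
        "Paris": [("Hotel Lumiere", 180.0), ("Le Petit Reve", 95.0)],
        "Tokyo": [("Sakura Inn", 120.0), ("Shinjuku Stay", 210.0)],
    }
    return [name for name, price in catalog.get(city, [])
            if price <= max_price_usd]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

The assistant reads the tool’s schema, decides when to invoke it, and renders the structured result inline in the conversation, which is essentially what the Spotify and Expedia integrations do at much larger scale.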

At first, this might seem like OpenAI taking aim at the market for existing digital assistants, such as Amazon’s Alexa, Apple’s Siri, and the Google Assistant. All of these assistants already interact with third-party apps—Amazon calls these integrations “skills”—to perform tasks through the voice-based assistant. ChatGPT has already won over a lot of users: its 700 million weekly active users top the 600 million “Alexa-enabled” devices that Amazon claims to have. And this move could be seen as a way to prevent the upgraded, LLM-powered versions of these other digital assistants from stealing back market share.

But I think something else, something far more profound, is actually afoot. OpenAI wants ChatGPT to become nothing less than the new, universal interface to our digital lives. And while the idea of “ambient intelligence” (always-on AI that we can summon at any time, by voice or the touch of a button) has been around for a while, Siri and Alexa never lived up to that vision. They simply weren’t capable enough.

Now OpenAI seems to think that with ChatGPT—perhaps embedded in whatever piece of hardware CEO Sam Altman and former Apple designer Jony Ive are cooking up (more on that in the news section below)—it can make ambient intelligence a reality. Forget AI-enabled browsers, like Perplexity’s Comet or Gemini in Chrome. If apps in ChatGPT become a big enough ecosystem, you won’t need a browser at all. (Although, hedging its bets, OpenAI is reportedly also at work on an AI-enabled browser.)

This is the great “platform shift” people have been anticipating since ChatGPT debuted. Now the only question is how quickly and how completely consumers and businesses will move to this model. What’s clear, though, is that if ChatGPT does become the universal interface to all things digital for a substantial number of users, OpenAI is going to need a lot more computing power. Already the company complains that it can’t get enough GPUs from Nvidia, the dominant AI chipmaker, to serve existing needs, let alone future ones. Hence deals like the one it signed with AMD.

While it is true that part of the logic of the AMD deal is to ensure OpenAI is not completely beholden to Nvidia, a lot of this is simply about ensuring OpenAI has enough computing power to serve its existing user base as users consume more and more tokens thanks to features like the ability to tap apps directly from ChatGPT. Altman went out of his way on social media to explain that the AMD partnership was “incremental to our work with Nvidia” and that “the world needs much more compute.”

(Ultimately, OpenAI will probably seek to design its own AI chips for inference—as many of the hyperscalers have done. Altman seemed to be laying the groundwork for this during a recent swing through Asia, where he met with Samsung and SK Hynix to strike deals on high-bandwidth memory chips and reportedly held discussions with leading foundry operator TSMC.)

There are plenty of reasons to be skeptical about the AI infrastructure boom. It’s not clear where the energy to power the 250 gigawatts’ worth of AI compute OpenAI has talked about building by 2033 is going to come from. It’s not clear how OpenAI will be able to monetize its users in a way that will pay for all of that compute. It’s not clear if the platform shift OpenAI is trying to midwife will happen fast enough and fully enough to justify that level of compute in that timeframe. But if Altman is right, and AI is essentially the new interface to all computing—a view Nvidia CEO Jensen Huang has also endorsed—then this kind of spending begins to seem less like an insane gamble and more like a sound investment.
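On the energy point, a quick back-of-envelope calculation shows the scale of the question. The sketch below assumes, hypothetically, near-continuous full-power operation, and the U.S. generation figure is a rough approximation rather than a number from the article:

```python
# Back-of-envelope: annual energy implied by 250 GW of AI compute,
# assuming (hypothetically) near-continuous, full-power operation.
HOURS_PER_YEAR = 24 * 365                 # 8,760 hours
compute_gw = 250
energy_twh = compute_gw * HOURS_PER_YEAR / 1_000   # GWh -> TWh

US_GENERATION_TWH = 4_200                 # rough annual U.S. total, for scale
print(f"{energy_twh:,.0f} TWh/year, "
      f"about {energy_twh / US_GENERATION_TWH:.0%} of U.S. generation")
# -> 2,190 TWh/year, about 52% of U.S. generation
```

Even at more realistic utilization, the implied draw would rival a large industrial nation’s grid, which is why the energy question looms so large.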

Still, there is a bull case for AI infrastructure, and analysts at Bank of America sounded it in a note last week. They said there are historical precedents for this kind of infrastructure spending when a major platform shift is underway. They looked at the build-out of 4G wireless infrastructure in the decade between 2010 and 2020 and noted that global telecom firms spent $1.3 trillion on new equipment during that decade to support the technology. What’s more, consumers then spent $3.6 trillion upgrading their smartphones to take advantage of 4G.

Of course, not every telecom operator came out of that decade well—there were several bankruptcies, and the debt needed to build all that 4G infrastructure was a major driver of widespread consolidation in the industry. Meanwhile, much of the value of the 4G shift accrued to the social media companies and apps, not to the telecom providers, which emerged from the infrastructure boom as arguably less healthy businesses than they had been going in: The average debt-to-equity ratio in the industry more than doubled, for instance.

But most companies survived, and the infrastructure did get used. It wasn’t Tulip Mania. We’ll see what happens this time.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news, two things. First, if you want to know how AI is transforming industries and how even non-tech companies are seeing real ROI from the technology, check out the latest edition of the Fortune AIQ Playbook. In this edition, contributors John Kell and Sage Lazzaro examine the strategy Honeywell has used to deploy AI across the company and why Coca-Cola says AI is “the real thing.” That and lots more, including a look at all the women CEOs at companies in the Fortune AIQ 50, our first-of-its-kind ranking of Fortune 500 companies based on the maturity and success of their AI implementations.

And then, if you want to learn more about how AI can help your company succeed and hear from industry leaders on where this technology is heading, I hope you’ll consider joining me at Fortune Brainstorm AI San Francisco on December 8th and 9th. Confirmed speakers so far include Google Cloud chief Thomas Kurian, Intuit CEO Sasan Goodarzi, Databricks CEO Ali Ghodsi, Glean CEO Arvind Jain, Amazon’s Panos Panay, and many more. Apply now to register.

With that, here’s more AI news.
