The Future of AI Belongs to Tech Giants

Unlike traditional tech startups, AI startups like OpenAI and Anthropic need massive amounts of money because training and running Large Language Models (LLMs) is enormously expensive. And since there is an ongoing race for better and faster AI, they need even more money for each new iteration of their models. OpenAI may be the fastest-growing business in history, but its operating expenses and losses are growing just as fast[1], and Anthropic is not far behind.

Even as the world remains fascinated by AI, Artificial Intelligence is becoming more open and available for anyone to use. When ChatGPT was first released at the end of 2022, the difference between the free GPT-3.5 and the paid GPT-4 was noticeable. Today, there are cheaper or even free alternatives to GPT-4o with comparable or better capabilities. This applies both to ChatGPT-like chatbots and to the AI models you can rent from most cloud providers.

Tech giants like Meta and Google, because they don't need to profit from AI chatbots directly, can offer these wildly popular chatbots for free to anyone. They incorporate AI into their own products and benefit from companies renting their models in the cloud. Take, for example, Meta's recent announcement of the public and open release of the latest version of Llama, which puts OpenAI's closed models in a very bad light:

Until today, open-source large language models have mostly trailed behind their closed counterparts when it comes to capabilities and performance. Now, we’re ushering in a new era with open source leading the way. We’re publicly releasing Meta Llama 3.1 405B, which we believe is the world’s largest and most capable openly available foundation model. With more than 300 million total downloads of all Llama versions to date, we’re just getting started.

Meanwhile, Apple and Google control iOS and Android, the operating systems running on virtually all of the world's mobile devices. They are in a privileged position to offer a unique, tight integration of AI features on iPhones and Android smartphones while enlarging their already huge moats. AI requires lots of memory and processing power, which gives customers a powerful incentive to upgrade to more powerful smartphones to take advantage of the new features.

It's also worth noting that in Apple's announcement at WWDC in June of AI finally arriving on the iPhone, OpenAI plays only a marginal and optional part in the offering. Sam Altman was not even present on stage. The AI features coming to the iPhone, which Apple calls Apple Intelligence, run on AI models developed and trained by Apple, on-device and on Apple's own servers. And of course, Android's AI features are based on Gemini, Google's own AI, and run on Google's cloud.

Even Microsoft, which a year ago relied on OpenAI as its sole AI provider, has hedged its bets by establishing partnerships with many other important players in the AI arena. It has also all but acquired Inflection AI by hiring most of its team and licensing its technology. (And let's not forget that Microsoft has exclusive access to OpenAI's intellectual property and that OpenAI's LLMs run on Microsoft Azure.)

AI per se is not a product. My take is that in the coming years, massive consumer AI adoption will come from established players like Apple, Google, Meta, Microsoft, and Amazon. They are in an excellent position to offer an integrated, unique AI experience within their products and services.

We are in an AI bubble, but when the bubble bursts, these companies will emerge as the winners. Meanwhile, companies like OpenAI and Anthropic need to reinvent themselves to find a sustainable business model or lose relevance and disappear.

[1] According to the quoted article by Gary Marcus, OpenAI's estimated operating expenses last year were 7 billion dollars, with losses of 5 billion dollars.