Are the Tech Titans Mighty Enough to Break the Threatened NVIDIA-OpenAI Fetters?
Amazon, Facebook, Microsoft, even Google, & many others are paying hundreds of billions to NVIDIA, and hoping to spend tens of billions to make this a short-term dependency. This is the story of an industry trying to win its autonomy as more than a tax collector for Jensen Huang before all the bills come due…
In the ever-shifting landscape of tech, one constant remains: the drive to avoid paying high “taxes” to key chokepoint holders, of the kind IBM and Microsoft-Intel once were, that Google, Facebook, Amazon, and Apple now are, that NVIDIA has become, and that many fear OpenAI will be. For everyone, even the biggest market-capitalization behemoths, a Modern Advanced Machine-Learning Model—MAMLM—age in which NVIDIA-OpenAI rhymes with WIntel is something to spend a true fortune to, somehow, avoid. And M.G. Siegler has wise words to tell us on this…
Once upon a time nearly everyone in tech paid the IBM mainframe tax. Then nearly everyone in tech paid the Microsoft-Intel DOS-Windows86 tax. Now nearly everybody pays the Google-Facebook-Amazon advertising tax, or the Apple iPhone AppStore tax, or both. And so everyone in tech today, perhaps especially the collectors of earlier taxes, desperately wants to avoid having to pay any future NVIDIA-OpenAI taxes for long. But they do so while everybody except Apple, including Google, is frantically paying an immense NVIDIA tax now.
The very sharp M.G. Siegler watches Donnybrook Fair:
M.G. Siegler: The Race for AI Independence: ‘Today's: Amazon steps up effort to build AI chips that can rival NVIDIA…. A few days ago[:]… Amazon… dangling billions… [if] Anthropic… would commit to using these new chips…. Google was early in building out their own chips. Microsoft… working on their own chips…. OpenAI… also…. Apple is already using their own chips and building more…. Meta seems perhaps most wedded to NVIDIA given the love-fest between Mark Zuckerberg and Jensen Huang on various stages around the world. But of course, they're also working on their own AI chips…. [Right now] everyone (aside from perhaps Apple) is buying as many NVIDIA chips as they can get their hands on…. And the spend. Oh, the spend. Amazon recently said it's going to spend $75B on capex this year, with much of it going towards the AI build up and out. The rest of Big Tech is in similar boats…. Wall Street is not going to like that too much. Probably sometime soon. And so, Operation: Independence is on.
But these chips are just one element of independence…. Microsoft… decoupling from OpenAI…. OpenAI… trying to break their own dependence on Azure (and NVIDIA chips)…. Apple… to think they're going to rely on third-parties for AI indefinitely is to think that AI is not going to be a big deal…. This cycle… everyone is quickly joining Apple…. The costs… ramping so fast with NVIDIA cornering the market so fast… Big Tech quickly realized they needed… to try to gain "AI independence"… <https://spyglass.org/the-race-for-ai-independence/>
What do I think of this rapid (and very expensive!) process?
It is a sudden shift in the tech rules of engagement. Things have started to revolve around the rapidly evolving hub of Modern Advanced Machine-Learning Models (MAMLMs)—“AI”—now driving technological progress, and that is having lots of consequences.
First, I think that Siegler is 100% right in noting the principal reasons decisions are now being made. They are not being made to push the technology out into the world; there is a truly immense amount of duplication of effort here. Instead, decisions are being made to guard the possibility of future profit from MAMLMs. Even more, decisions are being made as firms frantically attempt to avoid the feared fate of having to become a mere frontend tax collector for someone who will, in the future, amass the kind of market power that IBM, Microsoft, Intel, Google, Facebook, and Amazon have or had in their heyday.
Thus I think M.G. Siegler’s “The Race for AI Independence” captures the prevailing pattern of tech firms’ scramble to secure their positions amidst disruptive innovation. It gets it on the nose.
Second, and obvious, today MAMLMs and their AI-driven solutions—overwhelmingly language interfaces to databases, and large-scale classification—are seen as the next big wave. Anyone who acquires an edge large enough to give them market power by providing a better voice interface or a better classification scheme will squeeze, and will clean up. In the meanwhile, companies that want to compete to become the possessor of that edge, or even to defend their current value propositions, need to spend money on NVIDIA chips and perhaps on MAMLMs like water. Pausing to build an alternative to NVIDIA and its CUDA, or waiting for a cheaper good-enough alternative to emerge, seems overwhelmingly risky in a tech world in which, too often, first movers have become the last effective movers. This is the result of firms looking back at the lessons of history. Technology companies offering insufficiently differentiated products have found themselves, over and over again, facing immense price pressure because of their forced reliance on those holding monopolistic chokepoints in the tech stack. Depend too heavily on a single external supplier or customer, and you find yourself in a very vulnerable position. Others become fabulously rich, and then famous. You do a useful and essential but not wildly profitable job.
Thus, third, right now everyone fears Clayton Christensen’s disruption. That fear is the reason established firms are deeply, deeply concerned about falling behind in developing and deploying their own MAMLMs. If you don’t have the best language-interface and classification-scheme model, you have given your competitors a means to bypass your existing business more-or-less completely: they do what you do, or close enough, and they also provide MAMLM special sauce that you cannot match. Either your business dies away, or you have to pay them all your profits to bolt their technology onto yours as a front-end interface or a back-end data-store access tool.
Therefore, fourth, anxieties are greatly heightened.
Tech firms that have gotten very accustomed to printing money in unbelievable amounts regard this as a truly existential risk, and they are all trying to figure out how to deal with it. And the obvious first step in dealing is to buy as many NVIDIA chips as necessary so that you can build your own models, run your models for others, or at least run good-enough models on your own chips, so that you do not have to pay through the nose for language interfaces and classification schemes.
Fifth: And yet, and yet, riddle me this: One of the most notable characteristics of the current MAMLM landscape is the absence of any unique technological “sauce” that can be used to build a defensible market edge. Yes, MAMLMs are transformative. Yet they seem highly replicable in form and function. It seems that in the end a trained neural network is simply a very flexible function on a very large vector space filled with data. Training a network is a not terribly efficient way of calculating such a function. And we know how to build a function to maximize an objective. Data. Objective. Computational power. Where is the space for attaining a market power edge?
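To make the “Data. Objective. Computational power.” point concrete, here is a minimal sketch, assuming PyTorch, with a toy network, toy data, and a toy objective standing in for their industrial-scale counterparts. Nothing in it is proprietary; the scarce input is the compute that runs the loop, not the loop itself:

```python
# Minimal sketch (PyTorch assumed): fitting a very flexible function is just
# data + an objective + compute, applied through an optimization loop.
import torch
import torch.nn as nn

# Data: toy inputs and targets standing in for "a very large vector space filled with data".
X = torch.randn(1024, 64)
y = torch.randn(1024, 1)

# A very flexible function: a small multilayer perceptron.
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))

# Objective: mean squared error.  Compute: repeated gradient-descent steps.
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1_000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")
```

Scale the data to trillions of tokens, swap in a transformer and a next-token-prediction objective, and the conceptual recipe is unchanged, which is exactly why it is hard to see where a durable market-power edge comes from.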
NVIDIA emerged as the pivotal player in MAMLM infrastructure—and as the company with the highest market capitalization in the world—because of its dominance in the market for AI-grade chip design. It imposes what can fairly be considered an “NVIDIA tax” on companies faced with the choice of paying a very high premium for NVIDIA’s cutting-edge chips or risking delays in model development. The fear is that falling behind in deployment could lead to substantial dropoffs in user engagement, and a consequent rapid erosion of the profits from their current businesses. Everyone fears this.
Sixth: Even Apple fears this. In the past year, Apple has shifted to advertising and selling vaporware because it knows it is behind, and fears losing iPhone customers to Android. And yet Apple, confronted with the choice between abandoning its privacy-security commitments while paying enormous NVIDIA and model-renting taxes on the one hand and falling further behind on the other, chose to fall further behind and, as it tries to catch up, to advertise and sell vaporware. “Apple Intelligence” looks to be a year and a half behind the state of the art, and likely to remain so for quite a while. I read this as a combination of (a) Apple’s confidence that it can eventually do the job well enough to protect its current immensely profitable franchises without paying out a fortune in the NVIDIA tax, and (b) an Apple backup expectation that all of the others are spending so much money acquiring MAMLM capabilities that they will then be easy to play off against each other should Apple discover that it cannot do the job well enough, and has to buy a MAMLM backend from somebody.
Seventh, this is due to the absence of a secret sauce. And I think Apple is right to judge that, if it has to—if it cannot manage without paying the NVIDIA tax—it will be able to buy cheaply enough from someone: take Facebook’s open-sourced Llama, hire programming firms for individual use cases, and run it on Google’s cloud. This makes me wonder about the potential for a new type of tech boom that we have not seen since the salad days of the internet. Might we see, finally and miraculously, a tech boom in which monopolistic chokepoints are reduced, allowing for broader, decentralized innovation, and for a much more advantageous division of surplus between consumer surplus for us and oligopoly profits for behemoths?
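To make the buy-cheaply-from-someone option concrete, here is a minimal sketch of what serving an open-weights model with off-the-shelf tooling looks like. It assumes the Hugging Face transformers library, and the model identifier is illustrative (and license-gated); any open-weights checkpoint on any rented cloud GPU would do:

```python
# Minimal sketch: running an open-weights model with commodity tooling.
# Assumes the Hugging Face `transformers` library; the model identifier is
# illustrative and license-gated -- swap in any open-weights checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative open-weights model
    device_map="auto",  # spread the model across whatever GPUs the cloud box has
)

prompt = "Classify this support ticket as billing, shipping, or technical: ..."
print(generator(prompt, max_new_tokens=32)[0]["generated_text"])
```

The brevity is the point: none of this code is proprietary to any single vendor, which is why the chokepoints, if they exist, sit in chips and fabs rather than in the model-serving layer.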
Eighth: And yet, I wonder. Chips can be designed. Models can be built. Training can be optimized. But can anyone actually make leading-generation chips save for TSMC? This trend may reduce some bottlenecks, but it deepens the dependency on semiconductor manufacturers, above all TSMC. There is one single point in the MAMLM supply chain that is already a monopoly. That TSMC underbid when it made its past contracts with NVIDIA does not mean it is not a monopoly.
Thus one of the most perplexing phenomena in the tech industry today, to me at least, is NVIDIA’s valuation compared to TSMC’s. NVIDIA designs chips. Lots of very smart people can design chips. NVIDIA writes CUDA software. Lots of people can write software. TSMC makes leading-node chips. Nobody else does. Nobody else can.
NVIDIA’s current valuation looks to me to be the product of profits generated by short-term, panic-driven investments in hardware. I think NVIDIA’s next round of negotiations with its single-source supplier TSMC will be very interesting.
But even then, ninth: if Intel and Samsung succeed in closing the fabrication gap vis-à-vis TSMC, then, I think, for the first time since the late 1990s we will see a tech boom without oligopolists waiting to levy extremely heavy taxes that slow deployment and the achievement of true scale. It might be glorious! If, that is, there actually do turn out to be truly, massively valuable use cases for language-interfaces and classification-schemes.
References:
Christensen, Clayton M. 1997. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press. <https://archive.org/details/innovatorsdilem000chri>.
Siegler, M.G. 2024. "The Race for AI Independence." Spyglass, November 12. <https://spyglass.org/the-race-for-ai-independence/>.
"It might be glorious! If, that is, there actually do turn out to be truly, massively valuable use cases for language-interfaces and classification-schemes."
And so far, there does not seem to be any "killer app" implemented with AI. No one is making any profits, just huge losses. The paying customers are limited. If it turns out that the profitable way forward is making smaller systems based on curated data, then the huge investments in hardware and scraped data were a fruitless dead end. IOW, an AI platform is not like a potentially dominant OS, but rather more like a custom software application that can be built with a suitable language. NO moats other than basic brand marketing, like O'Reilly.
Those expensive NVIDIA chips might become quite inexpensive once a bust sets in, like Aeron chairs after the dotcom implosion.
I might add that Nvidia's moat is as much CUDA as its hardware. One lesson from The Mythical Man-Month: software is harder than hardware.