MILKEN INSTITUTE REVIEW: Behind the Hype: What "AI" Is & Isn't
In the summer 2025 issue of the Milken Institute Review, silicon dreams & nightmares & the real stakes of the AI frenzy: machine minds, human motives, & a lot of tangled hype…
DeLong, J. Bradford. 2025. “Behind the Hype: What ‘AI’ Is & Isn’t”. Milken Institute Review. 58 (Summer). <https://www.milkenreview.org/articles/behind-the-hype?IssueID=58>.
The arrival of modern advanced machine-learning models threatens – or promises – to disrupt time-honored patterns of white-collar work, but it has already upended the entire calculus of Silicon Valley’s platform power plays. Platform behemoths are spending tens of billions to defend their turf. Investors chase dreams of AI riches that may never materialize. This is not your grandfather’s automation. The battle is for control of the mammoth profits thrown off by the “info” side of the Attention Info-Bio Tech Economy we are now building. The rules are being rewritten in real time.
Those betting on the next AI unicorn need first to ask themselves whether they are joining a real revolution or dreaming feverishly of a rerun of crypto speculation. With an understanding of what AI really is – and isn’t – they may save themselves from becoming the next casualties of tech’s latest gold rush: the economics, illusions and realities behind the most overhyped hype cycle thus far in this millennium.
Behind the Hype
by j. bradford delong
illustrations by thomas kuhlenbeck
brad delong is an economist at the University of California, Berkeley, and creator of the blog Grasping Reality. He was deputy assistant secretary of the Treasury in the Clinton administration.
Published July 24, 2025
Less than three years ago, OpenAI, the nonprofit research organization focused on artificial intelligence, released what it viewed as a preview of incremental advances in the field along with a public demonstration of the technology. That’s not how the world received it, though. ChatGPT’s arrival was a cultural-financial-entrepreneurial-technological sensation, overnight conjuring our current AI-soaked (haunted?) world.
Stop a moment here to separate the signal from the noise. Recognize that the people who speak of AI as the keys to the universe are not trying to make you smarter. They are, for the most part, people trying to protect their own interests, which may or may not mesh with the public’s.
Reality Bytes
A half century ago, Drew McDermott, the pioneering computer scientist at Yale, attacked the term “artificial intelligence” as a misrepresentation – a “wishful mnemonic” that named a hope rather than an achievement. His critique, of course, never caught on. A more neutral moniker back at the dawn of the digital age would have been “complex information processing.” Today, it would be more accurate to call the phenomena “modern advanced machine-learning models,” or MAMLMs.
But whatever you call it, the label does not change the fact that AI has revolutionary economic and social potential even if it never translates into an artificial general intelligence that surpasses human capacity and sends white-collar workers shuffling toward the soup kitchen.
To get a more nuanced sense of what AI is and where it is heading, it’s important to understand the basics of two aspects of the evolving technology:
Natural-language interfaces.
Big-data, high-dimension, flexible-function classification analysis. (Yikes; but stick with me here.)
No less important, it makes sense to trace through the less-than-obvious motives of the internet platform companies that are throwing tens of billions at AI.
Natural-Language Interfaces
Natural-language interfaces are, in themselves, general-purpose technologies that promise a material boost to human productivity in accomplishing white-collar work.
Until now, engaging with computers has required fluency in an alien tongue. Beyond the raw on-off digital switching of machine language itself, it might have been assembly code, or a high-level software language like Fortran or Python, or a WIMP (windows, icons, menus, pointer) interface. But all of these were effectively grammars foreign to human speech. Most everyone – nonprogrammers, casual computer users, even many professionals – remained dependent on translation layers that were brittle and opaque.
Modern advanced machine-learning models (the aforementioned MAMLMs) have changed this. Now, one can query, instruct, or collaborate with machines in English or any of dozens of human languages. This democratizes access to computation, lowering the barrier for millions – perhaps billions – of people to use, customize and leverage digital tools. Indeed, this ability to “converse” with our machines in human languages represents a rupture as profound as the mouse and graphical user interface in the 1980s or the punch card to keyboard transition before it.
This is not because the machine understands, but because it can mimic the shape of understanding well enough. The desk lamp does not understand photonics, yet it gives light. So, too, do these large language models give us something like dialog.
We must, of course, guard against illusions of intelligence. As clever as ChatGPT or Claude or Grok may seem, they are fundamentally just very sophisticated pattern-matchers – machines that reflect back a smoothed average of the internet’s many conflicting voices.
Danger thus lies in the human tendency to anthropomorphize and trust.
Remember: what seems like human thoughts and intentions behind the words coming in response to your questions is a summarized and averaged amalgam of the thoughts and intentions of the humans who replied on the internet to questions similar in some way to the one you have just asked.
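A caricature of that mechanism can be run in a few lines: a “model” that answers by looking up which word most often followed the previous one in its training text. The toy corpus below is invented for illustration; real LLMs replace this lookup table with billions of learned parameters, but their answers remain, in the same sense, smoothed averages of the training data.

```python
from collections import Counter, defaultdict

# Toy training "corpus": the model's answers can only ever be a
# statistical blend of what these texts said.
corpus = (
    "the market will rise . the market will fall . "
    "the market will rise . analysts say the market will rise ."
).split()

# Count which word follows which: a bigram table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # The "answer" is just the most common continuation in the data.
    return follows[word].most_common(1)[0][0]

print(predict("market"))  # → will
print(predict("will"))    # → rise ("rise" followed "will" 3 times, "fall" once)
```

The predictor does not know what a market is; it reflects back whatever the corpus said most often, which is precisely the sense in which an LLM's apparent opinions are averages of its training data.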
Nevertheless, the consequences are substantial – no, monumental. When people first began using ChatGPT in late 2022 and 2023, the most striking thing was not the novelty of the technology itself, but the breadth of its reach. Suddenly students, lawyers, HR assistants, marketers, novelists and even software engineers found themselves offloading tasks. As in, “write me a brief memo”… “translate this email into Mandarin”… “draft code that analyzes this dataset.”
Accomplishing these tasks with help from computers had required formal training or specialized software, along with much painful and exacting fitting of human commands to the formal requirements of instructions that could be understood by the machine. Now they are accomplished by asking the LLM to do something in plain English – provided you understand how carefully the interaction needs to be managed and how rapidly and completely it can go off the rails if not checked, line by line, against reality.
This interface shift introduces a kind of fluid augmentation. We can now draft and iterate faster because the interface has become frictionless. But it is not all wine and silicon roses. The same tools that democratize access also introduce new dependencies. And to the extent that you allow ChatGPT to write even your first draft, it likely biases your conclusion.
Consider, too, that much of the use of natural-language interfaces becomes an exercise like the circus act of “Clever Hans” – the horse that appeared to solve arithmetic problems by tapping a hoof but was actually reading his trainer’s nonverbal cues. In the case of Clever Hans, the human was still doing the work. With LLMs, the human is often still doing the real work, too – in prompt engineering and results checking.
Calling these technologies MAMLMs rather than AIs is thus vital to gaining a clear-eyed view of them. It strips away the anthropocentric fantasy and reveals the technical and economic understructure. We are not building minds; we are refining tools. And tools reshape societies not by becoming human, but by redefining what it means to be skilled, to be productive, to be competent.
The arrival of natural interfaces is likely to be the most consequential short-run effect of MAMLMs. They are already altering classrooms, courtrooms, boardrooms and bedrooms. What is certain is that we can no longer pretend these tools are marginal. And this brings us to the question of the effect on labor markets.
Automation via natural-language interfaces will not be implemented with factory-floor robots replacing workers. Instead, it will be manifested in offloading mostly white-collar tasks like entry-level writing, first-pass coding and routine organizational planning. The middle layers of intellectual labor, already at some risk of replacement in a digital economy, may in time become superfluous. This will happen not because “the AI” is capable of outthinking humans, but in large part because MAMLMs are incredibly efficient (and very fast) at guessing. Which brings us to the second set of technologies transforming AI.
Big Data, High-Performance Classification Analysis
The coming of the MAMLMs is also bringing a quantum leap in classification capabilities. Trained on petabytes (yup, a million-billion bytes) of data, their ability to parse nuance in language, recognize latent patterns and forecast likely outcomes vastly exceeds that of previous analytic tools. This has implications across fields ranging from drug discovery to internet marketing to weather prediction to legal analysis. In other words, the capacity to undertake what used to be called “statistical learning” has exploded.
Very big-data, very high-dimension, very flexible-form classification analysis is likely to prove even more consequential to AI than the arrival of natural-language interfaces. Putting scenarios of various sorts into metaphoric boxes, then taking routinized action depending on what’s in the box and the relationships among the boxes is at the heart of all complex human society. And much of society’s inefficiencies and wasted energy arises from the boxes being too large and the classifications too crude.
Stripped to essentials, MAMLMs are classifiers, taking an input, mapping it to an internal representation in multiple dimensions, constructing a model of similarity and closeness over the items in that space, and then using that model to assign probabilities or labels to consequences. What makes MAMLMs distinct is their high dimensionality, their enormous data capacity and their extraordinary flexibility.
Consider this: MAMLMs ingest millions or billions of variables. They do not have a single “model” but a stack of algorithms for optimization across unimaginable numbers of parameters. “Very big data” means what it says – all the texts ever digitized, all the clicks ever logged. “Very high dimensional,” as in, 175 billion parameters.
In the past, we could classify and search by keywords, but only with substantial difficulty. We could classify things yes or no: spam or not spam, cat or dog, fraud or fair play. Today MAMLMs sort and classify by placing the vectors they use to represent individual pieces of data in 3,000-dimension virtual spaces. And they swallow enough data to be able to nail down their classification relationships with great precision. Plus, they do so with enormous flexibility, able to fit data to models by determining in what regions of the classification spaces differences along some particular dimensions matter and in what regions they do not. MAMLMs are thus hyperdimensional classification machines – engines that consume a billion words or images or transactions and spit out a function that maps inputs to plausible outputs.
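The core move – place items in a shared vector space, measure closeness, classify by nearest neighbors – can be sketched in miniature. The three-dimensional “embeddings” and labels below are invented for illustration; production systems use thousands of dimensions and learned, not hand-written, vectors.

```python
import math

# Toy "embedding" vectors: each item is a point in a (here tiny)
# classification space. Real MAMLMs use thousands of dimensions.
examples = {
    "invoice": ([0.9, 0.1, 0.0], "business"),
    "sonnet":  ([0.1, 0.9, 0.1], "literature"),
    "memo":    ([0.8, 0.2, 0.1], "business"),
    "haiku":   ([0.0, 0.8, 0.2], "literature"),
}

def cosine(u, v):
    # Similarity = cosine of the angle between the two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def classify(vector):
    # 1-nearest-neighbor: the label of the closest stored example wins.
    best = max(examples.values(), key=lambda ex: cosine(vector, ex[0]))
    return best[1]

print(classify([0.85, 0.15, 0.05]))  # → business
print(classify([0.05, 0.90, 0.10]))  # → literature
```

Everything distinctive about MAMLMs – the billions of parameters, the petabytes of training data – goes into learning where to place the points and how to warp the notion of “closeness,” but the act at the end is still this: locate the input in the space, and read off the label of its neighbors.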
What can these machines do? Quite a lot. First, of course, they can classify. This is the bread and butter of supervised learning, the backbone of online commerce and security. Second, they can predict, as in what ad you will click, what disease a patient will develop, what word should come next in a text. Third, they can summarize, as in turn documents into digests, compress images into tags, reduce complexity into signal. And fourth – most intriguingly – they can generate (sort of) original content in the form of poetry, architectural plans for buildings or even simulated economies.
This new mode of inference is now being used not just to recognize cats or complete sentences, but to make consequential decisions. Who gets a mortgage… who is flagged for possible credit card misuse… which job applicant gets shortlisted… where police patrols are dispatched. The underlying process is a form of classification based on massive corpora of data, much of it unscrubbed and unverified.
The implications are threefold:
First, measurement comes to the fore. There is value, for making decisions and taking action, in recording situations, assessing identities, and scoring and classifying them along as many dimensions as practical. The ability to classify circumstances finely creates an enormous edge: uncertainty and risk shrink when a situation can be located within a very fine-grained system of categories. We see this in daily life mainly in the ability of LLMs to generate, say, an almost-adequate reply to a letter rather than a mere form letter with a few blanks left for customization. But we will see it more and more as our new capabilities find their fit in the analysis of situations.
Second, classification capacities of this magnitude bring power and wealth. Given the benefits of fine classification for individuals, digital traces (voluntarily given and systematically collected) will become the raw material for new modes of social organization.
As Marion Fourcade and Kieran Healy, the authors of The Ordinal Society, observe, this brings a “data imperative,” structured around three commands: “thou shalt count,” “thou shalt gather” and “thou shalt learn.” These imperatives drive the proliferation of behavioral tracking and data analyses, which, in turn, generate classification regimes enabling matching and ranking across all life domains on an almost case-by-case basis. A retailer that can anticipate customer churn, a hedge fund that can extract predictive signals from obscure datasets, a government that can efficiently prioritize resource allocation – all are tapping into a new layer of data utilization to become more effective at their tasks and to gain advantage vis-à-vis other organizations.

Third, there is peril. When classification becomes automated and uninterpretable, accountability erodes. And so there is a trade-off: as we gain power to predict and classify, we lose power to explain. The risk is that we will substitute mimicry based on statistical analysis for understanding. We will become Clever Hanses ourselves, trusting that four metaphoric hoof stamps mean the number four without knowing the invisible cues that created the impression.
Yet, in spite of the flaws and dangers, the promise of these techniques is very large and very real. These models have already shown remarkable utility in scientific discovery, in medical diagnostics, in environmental monitoring – in any domain in which large volumes of unstructured data must be sifted for meaning. For example, predicting the way huge protein molecules fold in on themselves – once an almost intractable problem in pharmaceutical innovation – has been dramatically sped up. Satellite imagery, once just pictures, is now mined for agricultural health, urban development and climate change signals.
The challenge is to frame these tools as instruments – not oracles. They are instruments that require calibration, scrutiny and often external validation.
Disrupting Platform Monopolies?
Context matters. For example, MAMLMs can serve as battering rams that aggressive business challengers can use to level the barriers protecting the market power of incumbent tech platforms. Indeed, their arrival has already upended the stability of the markets ruled by the big digital platforms. The Microsofts, Googles, Amazons and Baidus of the world are pouring billions into MAMLM infrastructure along with acquiring the upstream talent and data required to train the models. Their goal is to shore up their pricing power by entrenching themselves as indispensable infrastructure providers for the MAMLM-based economy.
Look closely at this leading edge of early 21st-century high-tech capitalism and what do you see? Extraordinary panic. The panic is not among the masses of consumers or even the millions of workers living with financial insecurity and worried that machines will replace them. It is among the princes of Silicon Valley.
MAMLMs, the platform behemoths fear, are the one force potent enough to disrupt their business models and rob them of profits built on decades of accumulated market power. Whether it is Facebook (Meta), Google (Alphabet), Apple, Amazon or Microsoft, they all worry some startup will build a natural-language interface that people will flock to because it is easier to use than their own. Social media loyalty has faded. Why bother with Instagram or TikTok if some scrappy upstart offers seamless social connection? Why use Google search, the rock on which Alphabet’s advertising empire is built, when Claude is slicker? For that matter, why be loyal to the iPhone, or Amazon or Office when MAMLM-enhanced alternatives await?
Hence the platform giants are all spending tens of billions building natural-language interfaces to ease access. And they are spending comparable sums to build classification engines that improve their core services. As I understand it, the tech-platform incumbents do not realistically expect to make truly serious money from AI. Their primary objective is to protect themselves from the erosion of profits in the businesses they already dominate.
This obsession with MAMLMs among the platform giants is not a sideshow. It is central to understanding their strategies for keeping the good times rolling in the face of disruptive technological change. Note that they are investing ginormous sums even though they see no clear path to direct profit.
The biggest winners are upstream. Think Nvidia, the company with a near-lock on the design of AI-ready processing chips; TSMC, the dominant contract manufacturer of the most advanced logic chips; and ASML, the Dutch company that manufactures the incredibly complex lithography machines for making those chips.
This rapid, radical shift in platform dynamics is not only the consequence of powerful new technologies that humans could benefit from. It is also driving a multitrillion-dollar transfer of wealth from platform-service suppliers to platform users, who are getting very valuable MAMLM services at little or no cost as the platforms pursue their survival agendas.
As best I can see, then, the behemoths have no road to collectively earning a return on their AI expenditures: what one gains, another loses. Even worse from the platforms’ perspective, they are all paying an enormous Nvidia “tax” on their investments, with Nvidia’s profits coming nearly one-for-one out of reductions in the profits of the platform-oligopolists. Only Google and Apple seem to have a prayer of being able to design good-enough chips of their own. And the fact that nobody has a secret sauce – that many companies willing to spend the money can build, train and utilize state-of-the-art MAMLMs – means that they will never directly recoup their investments. That, in turn, almost certainly means that few among those building, training and utilizing MAMLMs in competition with the Big Boys will get rich either.
If I can see this near-inevitability, it stands to reason that the AI-bullish tech bros launching their own AI startups see it, too. But they seem to have convinced themselves that they will be the ones to beat the odds, either finding a profitable niche in which the platform-oligopolists will tolerate their existence or being bought out by them. Or (in their wildest dreams) becoming the AI-boom equivalent of what Facebook was for the social-media boom and Google was for the internet boom.
Their investors would have to be an order of magnitude more credulous to believe these startup dreams. But they are, and they do. Thus the stock market’s AI boom appears (to me, anyway) to be as large an episode of irrational exuberance as was the dot-com boom of the late 1990s. And the other AI startups? The platform-oligopolists’ focus on preventing others from harvesting the eyeballs they regard as their own property greatly limits the startups’ ability to realize any substantial revenues from their own investments.
Huge investments, with no direct return to the investors on the horizon. And yet AI stocks go up and up? Nvidia profits from all the competitive churn, since all the other players must pay the Nvidia tax. But is Nvidia’s astonishing market cap (close to $3 trillion as I write this) realistic? Nvidia’s bread and butter is sales to platforms that probably don’t expect to make money with the chips they are now buying by the 747-load. That hardly seems sustainable. As Herb Stein, Richard Nixon’s economic advisor, put it: “If a thing can’t go on forever, it will eventually stop.”
Where Value? For Whom?
By this point, I hope I have convinced you that the value you see as an investor in AI depends on where you stand:
You may be a plunger, the potential victim of enthusiasts who are deceiving you because they are ethically challenged or because they have first deceived themselves.
If this rings a bell, flee as fast as you can because the pyramid is inherently unstable.
Are you hoping to profit from investing in the startups trying to ride the MAMLM boom like pilot fish on sharks?
The technology they are creating is very real and without doubt very valuable, but that doesn’t mean there’s much low-hanging fruit left to pick. You should cautiously ask whether your rate-of-return projections are realistic, given that the enormous investments being made by the platform-oligopolists are primarily intended to strangle AI startups in the crib.
Are you trying to figure out how to utilize the capabilities that the platform giants are developing, offered to you cheap or free as an inducement to stay close?
Then you should remember the defensive motives of the platform developers, whose primary goal is to insure against disruption that makes them vulnerable to competition. You should thus not expect much revenue to flow to you from these massive investments.
Make no mistake: AI technology is the Next Big Thing, likely to disrupt business (and society) in ways predictable and not.
It is also, on balance, almost surely more of a Good Thing than not, offering the prospect of rapid productivity change and a path toward dramatic improvements in services ranging from medicine to education.
But the journey from here to there will be littered with roadkill.
And in light of the peculiar development dynamic unfolding, the perils for unwary investors seem to be large and growing.
Déjà Crypto All Over Again?
Back in the day, financial pundits in the UK offered a wry piece of advice with the acronym FILTH: “Failed in London? Try Hong Kong!” Hong Kong was regarded as the place where fast talk, an assured manner and an upper-class British accent could win a seat at the table for those who did not bring much of real value.
Looking at the froth on Silicon Valley’s current machinations over MAMLMs triggers the analogy module of my brain. How about “Failed in Crypto? Try AI!” FICTA does not offer the double-entendre charm of FILTH. But you get the idea.
Certainly someone needs to. For looking back even semi-objectively, the careening evolution of “crypto” is an example of market capitalism at its most problematic. Yes, the blockchain technology under the hood is truly ingenious. And there ought to be powerful and important uses for it: exploiting the anonymity built into the technology to create social trust where there is precious little to spare.
But it is now 16 years since Satoshi Nakamoto – whoever they may be – mined the genesis block of Bitcoin. Originally, investors wanted to own Bitcoin because there were going to be valuable uses for it. And since the supply was limited by design, it would pay to be on the ground floor.
After a while, that case for Bitcoin morphed into a claim that there would be valuable uses for the underlying technology. For this to matter, investors had to make the leap of faith that the future entrepreneurs who developed those uses would share some of their economic gravy with legacy Bitcoin holders.
But why would they do that? It stood to reason that the developers of the technology would be highly motivated to foreclose competition from copycats. Indeed, if they didn’t think they could, why would they make the investment in the first place?
Today, you don’t hear much about digital anonymity or the blockchain. Bitcoin simply is digital gold, a safe-haven asset for no reason other than that it is expected to be a safe haven, a social-consensus store of value. When pressed, Bitcoin bulls say it has value because it has generated a positive expected return. And it will continue on the same path because the number of people who believe in Bitcoin will grow. Stands to reason…
Changing gears from crypto to AI, we can see why people who profited mightily from the former in spite of its circular-reasoning justification are trying to run the same playbook with the latter. And we can see why people who did not profit from crypto (but envy those who did) are rushing to pile in. The consequences, alas, will probably be the same.

For all the breathless hype, I don't see MAMLMs making scientific or technological discoveries that require outside-the-box approaches.
Take AlphaFold, a brilliant application of a MAMLM to predicting the tertiary structure of a protein from its amino acid sequence. But that is all it can do: it is a protein-folding tool. It cannot even conceive of how to fold RNA from its four bases, U, A, C and G. It cannot tell you where a protein's domains lie, or whether their positions relative to one another matter. Each of those questions requires a human to answer, or an expensive new model to be trained.
Take mathematics. The big AI companies benchmark their MAMLMs against various math questions. Granted, these are hard, and a MAMLM may correctly solve a problem using existing math techniques. But they cannot invent new mathematical techniques, just apply existing ones. Certainly, MAMLMs may be able to solve (or help to solve) outstanding conjectures using existing techniques, but just as likely, they will not. Human mathematicians will be needed, perhaps to invent new math to solve the problem.
So I can see MAMLMs doing grunt work, much as we use power tools instead of hand tools for construction.
Given the incentives, if MAMLMs are that powerful, there should already be some making bucketloads of money by fairly accurately predicting securities prices. Perhaps they exist but are hidden from view. If so, though, they will become obvious when their owners try to spend the money rather than warehouse it, and the exchanges will take note of these fabulous winners. Do any such exist?
Like power tools, MAMLMs will be very useful as long as we control them, block their fabrications and BS, and generally make sure they do the task they are asked to do. Of course, there is also the danger that their human masters won't know when they lie or fabricate responses, and will be forced to trust them. But until MAMLMs achieve human-level creativity, I don't see them magically solving problems that are conceptually out of our grasp.
As for solving climate change: my guess is that a MAMLM will simply say, "The solutions are well-known; you just have to act on them. No new magical technological solutions are needed. The list of needed actions is..."
Might be worth a read.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf