Notes on the Berkeley "The AI Con" Book Launch Event
So I went to see:
Mazzotti, Massimo, Morgan Ames, Alex Hanna, Khari Johnson, Tamara Kneese, & Timnit Gebru. 2025. “The AI Con Book Roundtable.” Center for Science, Technology, Medicine & Society (CSTMS); University of California, Berkeley, October 2, 4:00–5:30 pm, 470 Stephens Hall. <https://cstms.berkeley.edu/events/the-ai-con-book-roundtable/>.
Lots of interesting and true things were said. And I love the podcast of the book’s authors Emily Bender and Alex Hanna, “Mystery AI Hype Theater 3000” <https://www.dair-institute.org/maiht3k/>. And I have the book, and am going to read it straight through REAL SOON NOW. And we all owe a great cognitive debt to Emily Bender & al. (2021) <https://doi.org/10.1145/3442188.3445922> for coining the viral meme “stochastic parrots” as a metaphorical description of GPT LLM MAMLMs—General-Purpose Transformer Large-Language Modern Advanced Machine-Learning Models.
But I came away very frustrated.
Why? Because there was a huge hole at the center of the panel discussion.
The hole? The hole was the near-complete absence of a description or a view of what “AI” is. There was great agreement on what it was not:
not a set of Turing-Class software entities,
not a research program that would lead to the construction of Turing-Class software entities,
and especially not something that would lead in short order to the creation of the DIGITAL GOD that is “ASI”—Artificial Super-Intelligence.
But very little on what it is. There was, if I recall correctly, one reference to “stochastic parrots”. There were two to “synthetic text extrusion machines”. There was one call I wholeheartedly endorse—there is a reason I write “MAMLM” rather than the wishful mnemonic “AI”—to, roughly (this isn’t a quote): break the spell, break the myth… [by] replac[ing] cases where they say ‘AI’ with ‘matrix manipulations’ instead, and how does that change how you think about it…
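Lest “matrix manipulations” sound like a mere slogan, it is worth cashing out. Below is a toy sketch—my own, in NumPy, with invented shapes and random weights, not anything resembling any production model’s code—of a single attention-and-readout step of a language model. The point is only that the whole step is, literally, a handful of matrix multiplications plus a normalization:

```python
# A toy "next-token" step: all names, shapes, and weights here are
# invented for illustration. One attention step plus a vocabulary
# readout is nothing but matrix manipulations and a softmax.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, vocab = 8, 16, 100

X = rng.normal(size=(seq_len, d_model))            # token embeddings so far
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, vocab))          # projection to vocabulary

Q, K, V = X @ W_q, X @ W_k, X @ W_v                # matrix manipulations
scores = Q @ K.T / np.sqrt(d_model)                # another one
scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)     # softmax normalization
context = weights @ V                              # and another
logits = context[-1] @ W_out                       # "next-token" scores
print("most likely next-token id:", int(logits.argmax()))
```

Stack a few dozen such steps, trained rather than random, and you have the object under discussion.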
But that—one parrot, two extrusion machines, one reframing—was pretty much it.
That, to me, does not cut it.
Yes, the AI hype boom is ludicrous and delusional. Yes, the castles in the air are insubstantial things made up of clouds in accidental arrangement.
But, still, there are signs and wonders on this earth.
There are real technologies being developed and deployed. There are things happening that have triggered and reinforced the very strange beliefs we see around us, beliefs that end with:
a remarkably large number of people whose judgment in daily life navigating our society seems quite good, but who now fervently believe that they are going to build DIGITAL GOD within the decade; and
a smaller but still remarkably large number of people who have societal power to commit hundreds of billions of dollars of real resources to enterprises, and have decided to commit them to the AI hype bubble build-out.
Thus it seems to me that there is one most important task for an AI-skeptic today: to explain what “AI” is—since we, or at least I and my karass, think we know what it is not—to help us see better why this is happening. What are those signs and wonders on this earth, really? What are the possible true ways of characterizing the technologies being developed and deployed? To lament that people are falling for the hype and are wrong to do so is not, to my mind, sufficiently enlightening.
So I tried to provoke. I asked a question. What follows is not the question I actually asked, but rather the expanded version that I wish I had said:
Perhaps the most useful thing I can do now is to attempt to channel my friend Cosma Shalizi of Carnegie Mellon. He is one of the crankiest advocates I know of the “Gopnikist” line that these are distributed socio-cultural information technologies, and that the entire hype cycle is indeed extremely delusional. He views someone like an Eliezer Yudkowsky saying “shut this down now or it will kill us all!” as the equivalent of someone raving about how library card catalogues must be placed inside steel cages RIGHT NOW lest they go feral. And he is equally dismissive of the other side of the doomer/boomer coin—that pairing was one of the great things about the book The AI Con, by the way. Consider Travis Kalanick boasting that his conversations with Claude have led him to the point of being about to make fundamental discoveries in physics.
Nevertheless, it is astonishing that this roiling boil of linear algebra can come as close as it can to genuinely passing the Turing Test. I mean, “It’s Just Kernel Smoothing” and “It’s Just a Markov Model” vs. “You Can Do That with Just Kernel Smoothing and a Markov Model!?!” The results are very impressive! And they were very unexpected!
Kernel smoothing in general is really old in ML dog-years. So are Markov models for language.
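Both deserve to be seen in miniature, because the miniatures are so humble. Here is a toy sketch—mine, with a made-up ten-word corpus and random toy data, not anyone’s production code—of each: a bigram Markov model of text, and Nadaraya-Watson kernel smoothing, which (this is Shalizi’s point in his notebook on attention and transformers) has exactly the functional shape of transformer “attention”:

```python
# Two "really old in ML dog-years" techniques, in miniature.
# Corpus and data are invented; illustrative only.
import random
from collections import Counter, defaultdict
import numpy as np

# 1. A bigram Markov model of language: next-word prediction is
#    counting and normalizing.
corpus = "the cat sat on the mat the cat ate the rat".split()
vocab = sorted(set(corpus))
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, alpha=0.1):
    # add-alpha smoothing spreads a little probability mass over the
    # whole vocabulary, so unseen continuations are never impossible
    weights = [counts[prev][w] + alpha for w in vocab]
    return random.choices(vocab, weights=weights, k=1)[0]

random.seed(42)
words = ["the"]
for _ in range(8):
    words.append(next_word(words[-1]))
print(" ".join(words))

# 2. Nadaraya-Watson kernel smoothing: a weighted average of values,
#    with weights from a kernel on query-key distances. Transformer
#    "attention" has exactly this form.
def kernel_smooth(query, keys, values, bandwidth=1.0):
    w = np.exp(-np.sum((keys - query) ** 2, axis=1) / (2 * bandwidth**2))
    return (w / w.sum()) @ values

rng = np.random.default_rng(0)
keys, values = rng.normal(size=(5, 3)), rng.normal(size=(5, 2))
print(kernel_smooth(rng.normal(size=3), keys, values))
```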
But this!
What is it? It’s sorta like Google PageRank, producing the next word a typical internet s***poster would say in response to a prompt rather than the link a typical internet s***poster would create in response to the keywords. It is definitely not a software entity carrying out human-level human-like thought. It is definitely not an embryonic DIGITAL GOD that just needs four more Moore’s Law-like doubling-cycles before it is too smart for us, and gets us all to do its bidding. It is not something that is going to get us all to do its bidding by flattering us. It is not something that is going to get us all to do its bidding by threatening us. It is not something that is going to get us all to do its bidding by hypnotizing us via its Waifu Ani avatar.
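The PageRank half of that analogy is worth a sketch too. Both systems push a probability distribution through a big matrix of observed or learned weights and read off the top candidate. The toy below—an invented four-page web graph, not Google’s actual algorithm—does it for links rather than words:

```python
# Power-iteration PageRank on an invented four-page web graph.
# Column j of L gives the click-through probabilities out of page j.
import numpy as np

L = np.array([[0.0, 0.5, 0.5, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 0.5, 0.0]])
d, n = 0.85, 4                      # damping factor, number of pages
r = np.ones(n) / n                  # start from the uniform distribution
for _ in range(50):                 # power iteration to the fixed point
    r = (1 - d) / n + d * (L @ r)
print("top-ranked page:", int(r.argmax()), "scores:", r.round(3))
```

The LLM analogue replaces “which page does the typical surfer click next?” with “which word does the typical poster type next?”—at vastly greater scale, with the matrix learned rather than crawled.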
But it is definitely a something!
So what is it?
I need to know what it is before I can start to make sense of the hype cycle and the hype cycle’s remarkable persistence. I need an explanation of why someone like Mark Zuckerberg has suddenly switched and become one of the chief boomer hypesters. Zuckerberg had been pursuing a more cautious, open-source road. He had been having his organization build a natural-language interface to Facebook and Instagram that was good enough. He had been open-sourcing the core of LLaMA. Why? So that others would be unable to greatly monetize their own foundation models to fund the construction of competing social networks that might destroy his platform-monopoly profit flow. But then, in two months, he became the boomerest of boomer hypesters, the one who is going to spend more money than anyone else on ASI, on DIGITAL GOD.
What are he and others seeing in these GPT LLM MAMLMs that leads to these courses of action?
And the responses I got did not seem to me to hit the nail on the head.
There were, if I remember things aright, three responses:
One response was, again roughly (this is not a quote from anyone): Meta just made explicit what Silicon Valley has long implicitly thought. Really and truly, AI is for ads. It is adding generative AI into apps and learning from user queries to target ads. The model is simple: learn about me by having me talk to their “God,” then monetize it. Generative AI amplifies signals, predicts intent, and shortens the path from query to ad. That prints cash. The ad funnel—attract, engage, profile, target—now has a new interface. Meta’s move confirms the trajectory: AI as attention refinery; ad-tech as processor. The web’s “original sin” tied civic communication to ads rather than subscriptions, public funding, or commons. Generative AI steepens that slope, and so spins the flywheel faster.
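To make the mechanism in that response concrete—my gloss, not the panelist’s words, with every term, ad, and query invented—here is the funnel in toy form: chat turns become profile signals, and profile signals become ad-targeting scores:

```python
# A hypothetical chat-to-ads funnel. All terms, ads, and queries are
# invented; this is not any platform's actual pipeline.
from collections import Counter

def update_profile(profile, chat_turn, interest_terms):
    """Fold intent signals from one chat message into a user profile."""
    for token in chat_turn.lower().split():
        if token in interest_terms:
            profile[token] += 1
    return profile

def score_ads(profile, ads):
    """Score each ad by overlap between its keywords and the profile."""
    scores = {ad: sum(profile[t] for t in terms) for ad, terms in ads.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

interests = {"hiking", "boots", "tent", "camping", "mortgage"}
ads = {"outdoor-gear": {"hiking", "boots", "tent", "camping"},
       "home-loans": {"mortgage"}}

profile = Counter()
for turn in ["what boots are best for hiking in rain",
             "recommend a light tent for solo camping"]:
    update_profile(profile, turn, interests)
print(score_ads(profile, ads))  # outdoor-gear ranks first
```

The chatbot’s contribution is that people volunteer this intent in sentences, at length, unprompted—a much richer signal than clicks.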
My take: Smart, and very possibly true for Facebook—that they are finding chatbots very useful in data-harvesting and MAMLMs very useful in subsequent ad-targeting, have recognized that promising “ASI” is necessary right now to keep cutting-edge MAMLM software engineers, and so Zuckerberg is saying what he thinks necessary and may or may not believe it. But that is only a local explanation. And, yes, I endorse 100% “the web’s ‘original sin’ tied civic communication to ads rather than subscriptions, public funding, or commons.” But I still am not satisfied.
A second response was, roughly (again, this is not a quote from anyone): Back when Meta started its AI lab, Mark Zuckerberg came to a machine‑learning conference, interviewed Yann LeCun, and said how proud he was to launch the lab. But people were leaving Meta AI because Zuckerberg’s lack of commitment meant that it, relatively, lacked GPUs. Leaders elsewhere boasted: OpenAI has 10,000. Elon Musk says he will have 20,000. So Mark Zuckerberg decided to say that he will have the most GPUs. These people are not just motivated by profit; they are also engaged in myth‑making and in status competition: I have the most, the biggest. We should not assume these people are super-smart—that if Zuckerberg is buying in, there must be something there. We do not have to believe that. Remember the Metaverse!
My take: Dominance-game bragging rights for TechBros who have won the wealth game and now want to win a within-their-circle status game—see Sergey Brin’s “I would rather go bankrupt than lose [the AI race]…”—do play a role. But surely these people are seeing something out there in the world that I am not? Again, I am not satisfied.
A third response was, roughly (well, closer to a quote): It’s on the tin. It’s the title of the book: The AI Con. There’s a reason that’s the title. We keep reaching for the right metaphor for “AI,” and we often miss. Suresh Sunderrajan—name likely mispronounced—made that clear this year by listing the stock frames: snake oil, normal technology, con game. The real question is: at what level should we think about it? Social and interactive? A collaborator? A big ball of computation? We chose “con” because we were looking at the ideological and political‑economic work done by the choice and use of the term “AI”. That’s the key metaphor for our purposes.
(With respect to this last: I could not, using Google, DuckDuckGo, or Dia Browser-AI, pull up anything like Sunderrajan’s list.)
My take: I agree that the term “AI” clouds the minds of men, and is, to put it politely, a “wishful mnemonic” that makes the world a worse place. That is why, as I said, I prefer to use MAMLMs—Modern Advanced Machine-Learning Models. Others prefer CIP—Complex Information Processing. And I agree that starting from the term “AI” is going to lead you to very wrong metaphors. But I do not think that to say “confidence game” and then to sit back cuts it. With crypto, yes, it clearly was (and is) a confidence game. If you want to be convinced of that, just listen to Matt Levine and Joe Weisenthal’s responses to Sam Bankman-Fried on Tracy Alloway and Joe Weisenthal’s “Odd Lots” podcast <https://www.bloomberg.com/news/articles/2022-04-25/odd-lots-full-transcript-sam-bankman-fried-and-matt-levine-on-crypto>:
Matt: (27:13): I think of myself as like a fairly cynical person. And that was so much more cynical than how I would have described [yield] farming. You’re just like, well, I’m in the Ponzi business and it’s pretty good.
Joe: (27:27): At no point did any of this require any sort of like economic case, it’s just like other people put money in the box. And so I’m going to too, and then it’s more valuable. So they’re gonna put more money in, and at no point in the cycle, did it seem to like, describe any sort of like economic purpose?
SBF: (27:42): So on the one hand, I think that’s a pretty reasonable response, but let me play around with this a little bit. Because that’s one framing of this. And I think there’s like a sort of depressing amount of validity [to it]…
But in a con game the con artists know what they are doing. And in a con game money reliably flows from the marks to the con artists. That is how you tell who the con artists are: who set the thing in motion, who winds up with the money, and who is, contrariwise, left as the bagholder. But the AI-hype scene—setting aside the tranche of hypesters (and there are many, though they are far from the majority) who are con artists migrated en masse over from crypto—is not well described as a simple con.
Perhaps the reason that Bender and Hanna are happy to accept their “confidence game” framework as sufficient and then sit back is that they are not economists. Indeed, a secondary flaw of the event was that there were no economists on the panel.
Zero.
On the podium and at the table we had, sequentially: A Ph.D. in Science Studies; a Ph.D. in Communications; a Ph.D. in Sociology; a B.A. in Journalism; a Ph.D. in Media, Culture, and Communication; and a Ph.D. in EECS.
Even though most of the conversation was about the economics of the AI hype boom.
There were occasional references to Marx, but of the “this is our totem” type—of the kind that have raised the hackles of every single economist since at least the day back in 1953 when Joan Robinson lost it and said that you have Marx on your lips, while we have him in our bones.
There were frequent references to how the AI hype boom was somehow functional for “capitalism”.
There was an underlying consensus that bad actors were, consciously, using propaganda to run the confidence game that drives the AI hype boom so that they could make oodles and oodles of money.
Now it is true that there are some bad actors running confidence games. I think of venture capitalists who have found that their ability to get gullible investors to give them money for crypto startups was tapped out. I think of how they have moved on to AI. It is true that they are making money.
And NVIDIA is also making money. Oodles and oodles and oodles and oodles and superoodles of money. And loudly boasting about how great its chips are for MAMLM workloads, and getting greater. NVIDIA has benefitted by hitting the absolute goldest gold mine of all time with the coming of the AI hype boom. Its GPUs first had an important but limited niche in video and gaming. But competition was heating up. Its GPUs then had an important but limited niche in crypto-mining, but that threatened to turn into a total bust and take NVIDIA into a GPU winter with it. And then its GPUs turned out to be the Super-Golconda of the AI-hype age. And Jensen Huang is covering himself with glue and standing, arms outstretched, in the money wind. And he is hyping for all he is worth. He is promising that:
AI is the greatest technology equalizer of all time…. Everybody’s a programmer now…. Just go up to the AI and say, ‘How do I program an AI?’ And the AI explains to you exactly how to program the AI. Even when you’re not sure exactly how to ask a question, you say, ‘What’s the best way to ask the question?’ And it’ll actually write the question for you. It’s incredible…. Everybody’s an artist now. Everybody’s an author now. Everybody’s a programmer now. That is all true…
But who else is making money?
Some OpenAI employees have sold some of their shares to newcomer investors, and so gotten some money out of the system. NVIDIA has made and is making fortunes. Further upstream, TSMC and ASML are cursing themselves for not having made revenue-sharing deals for the machines and services with which they actually build the chips NVIDIA designs and sells. The VCs who have come over from crypto, found gullible investors, and collected their money up front have gotten some money out of the system.
And that is it.
Every other organization—bad actor, neutral actor, good actor—involved is not making money.
Every other organization involved is taking unbelievably huge piles of money AND LIGHTING THEM ON FIRE.
They then, as people gather around and watch, spend their time assuring themselves, each other, and passers-by that this sacrifice will, somehow, lead to them making all the money back and much more in the future. And enough investors and the stock market are, so far, believing them.
The entire panel—I think because it had precisely zero economists—missed this entirely.
The consensus belief among the members of the panel was that the AI hypesters are doing something that is (a) functional for capitalism and (b) highly profitable for their enterprises.
This strikes me as not right. And very obviously not right.
Mind you, it is far less delusional than an Eliezer Yudkowsky’s saying “shut this down now or it will kill us all!” It is not nearly as delusional as a Travis Kalanick’s boasting that in his nightly conversations with his favorite ChatBot he and it have reached the point of being about to make breakthroughs in fundamental physics.
But, still, it is not right, in ways that seem very obvious to this economist.
An economist on the panel would have been likely to at least raise some questions about these issues, rather than allow the entire panel to skate over them.
And so I left, frustrated:
Myself when young did eagerly frequent
Doctor and Saint, and heard great Argument
About it and about: but evermore
Came out by the same Door as in I went…
Appendix: Event Description (Edited):
Mazzotti, Massimo, Morgan Ames, Alex Hanna, Khari Johnson, Tamara Kneese, & Timnit Gebru. 2025. “The AI Con Book Roundtable.” Center for Science, Technology, Medicine & Society (CSTMS); University of California, Berkeley, October 2, 4:00–5:30 pm, 470 Stephens Hall. <https://cstms.berkeley.edu/events/the-ai-con-book-roundtable/>.
Massimo Mazzotti is Professor of History at UC Berkeley. His work probes how mathematics and technology help “order” the modern world, from Enlightenment debates to algorithmic life. He’s written The World of Maria Gaetana Agnesi and Reactionary Mathematics: A Genealogy of Purity, the latter tracing a nineteenth‑century turn toward “pure” mathematics as both a technical and political project. He directs Berkeley’s Center for Science, Technology, Medicine & Society, and has broad public‑facing writing on algorithms, design, and mechanization.
Morgan G. Ames is Associate Director of Research at Berkeley’s Center for Science, Technology, Medicine & Society and Assistant (Adjunct/Practice) Professor in the School of Information. She studies the ideological roots of inequality in tech—especially utopianism, youth, and learning—and how computing shapes identity. Her award‑winning book The Charisma Machine (MIT Press, 2019) uses ethnography of One Laptop per Child to dissect promises vs. outcomes, and her current work spans Minecraft cultures, AI narratives, and programming origin stories. She leads the Designated Emphasis in STS and is active in algorithmic fairness.
Alex Hanna is a sociologist and Director of Research at the Distributed AI Research Institute (DAIR), where she studies how datasets and computational systems reproduce inequalities across race, gender, and class. Formerly on Google’s Ethical AI team, she co‑authors The AI Con and co‑hosts “Mystery AI Hype Theater 3000”, bridging social science and computer science to interrogate algorithmic bias and the political economy of AI. Her work spans social movements, data labor, and fairness in machine learning.
Timnit Gebru is a computer scientist and founder of DAIR, known for landmark work on algorithmic bias and the ethics of AI. A co‑founder of Black in AI, she helped establish the field’s critique of large language models (e.g., “Stochastic Parrots”) and co‑authored Gender Shades, revealing racial and gender disparities in commercial facial analysis. Her public scholarship and institutional leadership foreground accountability, data provenance, and harms to marginalized communities.
Khari Johnson is a technology journalist focused on AI’s social impacts, labor, and governance, bringing sharp reporting and accessible analysis to how automation reshapes institutions and everyday life. His work traces the gap between corporate narratives and lived outcomes, documenting the politics of deployment, the incentives behind hype, and the consequences for workers, public services, and democracy.
Tamara Kneese is a media scholar and Senior Researcher at Data & Society’s AIMLab, examining platform labor, responsible AI, climate tech, and the cultural politics of death and care; her book Death Glitch (Yale, 2023) dissects techno‑solutionism’s limits. She studies how infrastructures and secondhand economies shape social life, advocating feminist STS approaches to algorithmic systems and sustainable computing.
Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything?
The answers to these questions, linguist Emily M. Bender and sociologist Alex Hanna make clear, are “no,” “they wish,” “LOL,” and “definitely not.” This kind of thinking is a symptom of a phenomenon known as “AI hype”. Hype looks and smells fishy: It twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines. In The AI Con, Bender and Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms. For more information please visit the AI Con website.
References:
Alessandrini, Giulio, Brad Klee, & Stephen Wolfram. 2023. “What Is ChatGPT Doing ... and Why Does It Work?” Stephen Wolfram Writings, February 14. <https://www.wolfram.com/language/what-is-chatgpt-doing-and-why-does-it-work/>.
All-In Podcast. 2025. “Winning the AI Race Part 3: Jensen Huang, Lisa Su, James Litinsky, Chase Lochmiller.” Video, 1:04:38, July 23. <https://www.youtube.com/watch?v=9WkGNe27r_Q>.
Alloway, Tracy, Sam Bankman-Fried, Matt Levine, & Joe Weisenthal. 2022. “Sam Bankman-Fried & Matt Levine on How to Make Money in Crypto”. Odd Lots, April 22. <https://www.bloomberg.com/news/articles/2022-04-25/odd-lots-full-transcript-sam-bankman-fried-and-matt-levine-on-crypto>.
Bender, Emily M., Timnit Gebru, Angelina McMillan‑Major, & Margaret Mitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, & Transparency, 610–623. New York: Association for Computing Machinery. <https://doi.org/10.1145/3442188.3445922>.
Bender, Emily M., & Alex Hanna. 2025. The AI Con. New York: Verso. <https://thecon.ai/>.
DeLong, J. Bradford. 2022-2025. “SubTuringBradBot”. Grasping Reality. <https://braddelong.substack.com/s/subturingbradbot>.
Farrell, Henry. 2025. “AI as Governance”. Annual Review of Political Science 28:375–392. <https://www.annualreviews.org/content/journals/10.1146/annurev-polisci-040723-013245>.
Farrell, Henry. 2025. “Understanding AI as a Social Technology.” Programmable Mutter, September 12. <https://www.programmablemutter.com/p/understanding-ai-as-a-social-technology>.
Farrell, Henry. 2025. “Large Language Models Are Cultural Technologies. What Might That Mean?” Programmable Mutter, August 18. <https://www.programmablemutter.com/p/large-language-models-are-cultural>.
Farrell, Henry. 2025. “The Political Economy of AI: A Syllabus.” Programmable Mutter, July 12. <https://www.programmablemutter.com/p/the-political-economy-of-ai-a-syllabus>.
Farrell, Henry. 2025. “Markets, Bureaucracy, Democracy, … AI?” Programmable Mutter, June 30. <https://www.programmablemutter.com/p/markets-bureaucracy-democracy-ai>.
Farrell, Henry. 2024. “Vico’s Singularity.” Programmable Mutter, May 1. <https://www.programmablemutter.com/p/vicos-singularity>.
Gebru, Timnit, & Joy Buolamwini. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Proceedings of the 1st Conference on Fairness, Accountability, & Transparency, PMLR 81:77–91.
Gopnik, Alison, Henry Farrell, Cosma Rohilla Shalizi, & James Evans. 2025. “Large AI Models Are Cultural & Social Technologies”. Science 387:1153–1156. <https://www.science.org/doi/10.1126/science.adt9819>.
Khayyam, Omar. 1859 [1120]. The Rubáiyát of Omar Khayyám, trans. Edward FitzGerald, 1st ed. London: Bernard Quaritch. <https://www.gutenberg.org/ebooks/246>.
Robinson, Joan. 1953. “An Open Letter from a Keynesian to a Marxist”. In On Re‑reading Marx, Cambridge (UK): Department of Applied Economics, University of Cambridge. <https://jacobin.com/2011/07/joan-robinsons-open->.
Shalizi, Cosma Rohilla. 2023–2025. “‘Attention’, ‘Transformers’, in Neural Network ‘Large Language Models’”. Bactra.org, last update Aug. 23. <http://bactra.org/notebooks/nn-attention-and-transformers.html>.
Shalizi, Cosma Rohilla. 2025. “On Feral Library Card Catalogs, or, Aware of All Internet Traditions.” Bactra.org, last update Aug. 23. <http://bactra.org/weblog/feral-library-card-catalogs.html>.
Brilliant, but that hardly needs saying. But the current reaction to LLMs reminds me of the one course in AI that I took at Berkeley, in the then-new Computer Science department (inside the Department of Electrical Engineering—their first affirmation that Computer Science was a thing). One of the subjects covered was a new idea which was then being abandoned: Artificial Neural Networks (a term I had to look up, praise be to the Internet, having forgotten it after all these years), which had briefly looked like a huge advance when implemented in software. Hadn’t worked, of course. Seeing this, I thought of Moore’s Law, an unscientific thing that no one believes any more, which is why it is being cited all the time with due apologies. So I applied it to the time from 1966 to the present.
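The back-of-the-envelope arithmetic that exercise implies, under the common two-year-doubling statement of Moore’s Law (an assumption for illustration, not a measurement), runs roughly as follows:

```python
# Moore's-Law arithmetic, 1966 to the present, assuming one
# doubling every two years (the common loose statement of the law).
years = 2025 - 1966        # 59 years
doublings = years / 2      # ~29.5 doubling cycles
factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> x{factor:,.0f}")
# ~29.5 doublings -> roughly a 760-million-fold improvement over the
# hardware on which 1960s neural networks "hadn't worked, of course"
```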
Most interesting. Makes a software guy really appreciate hardware.
I have a few comments.
1. "AI" as a term has been in common parlance since the 1950s, at least. It is an overarching term for the many techniques that have been used, including ML.
2. Intelligence is not well defined. Usually it is characterized operationally and descriptively, but not carefully defined. Therefore it is bandied about by everyone from academics to the “Man on the Clapham Omnibus.”
3. LLMs have reached the stage where modern humans can anthropomorphise the LLMs much as the ancients did about natural phenomena. We are wired that way. In a fairy tale, we have substituted talking animals for a talking box.
4. The pouring of capital into infrastructure is the same as any other "mania", tulips, South Sea, etc. Even "smart people" convince themselves that there will be a return. We have had the phenomenon in railways, the internet, and fiber buildout. If anything, I would encourage this, as long as we can ring-fence the economic fallout when the bubble bursts. As in the Gilded Age, super-wealthy people are running the show. Maybe the fallout will persuade us to limit the wealth disparity that can lead to such hubris.
5. NVIDIA is selling shovels to gold mining companies. Cynically, the company may be selling the dream of “Gold in them thar hills” by salting the rivers with a few nuggets. More likely NVIDIA believes the hype and will support it as long as it can keep selling high-end GPUs. After all, it just did a revenue-sharing deal with POTUS in exchange for permission to sell to China. Maybe that sucked in POTUS too?
6. Egos. At this point, it must be clear that this is a bubble that will burst. Huge egos cannot allow themselves to exit and admit they were mistaken. So the merry-go-round continues, until there is a "winner," perhaps selling expensive services to governments to generate some sort of hoped-for ROI.
My hope is that there will be enough value in the work to democratize MAMLM use in edge devices. [See Pete Warden’s blog <https://petewarden.com/2025/10/02/how-to-try-chromes-hidden-ai-model/> on accessing Google’s NanoLLM in the Chrome browser. At a minimum, there will be talking stuffed animals, like a super Teddy Ruxpin, as in Aldiss’s “Supertoys Last All Summer Long” (filmed as A.I.).]
If A.I. is a con, it is some people conning themselves, in an infomedia system that is largely just cheerleading. [IIRC, Hitler was duped by von Braun into diverting huge resources into building vengeance rockets that did nothing to win the war. In some respects, von Braun did it again with the “Space Race” that briefly put flags and footprints on the Moon, but did increase US global standing. Maybe a sort of AGI will do the same?]