
ChatGPT labelled that graph 10^3 (Billions) instead of just 10^12, obliging it to start at 10^-1. Did it copy that tic from its source, I wonder?

Geoff Hinton is fond of saying that back in the '80s AI researchers were taking a sui generis approach to each problem domain. He and his colleagues were running around saying that their neural net approach was the only approach needed, that it could replace all the others. But at the time, it didn't work. There were only two problems: they didn't have enough data, and they didn't have enough compute. That's not surprising if you need a trillion parameters in practical situations.

The current performance of LLMs is due to remedying these two defects. So far, my experience with co-piloting is underwhelming; they slow me down by nearly as much as they speed me up. Are improvements in the intelligence part of AI extrapolated from this sort of exponential graph? Training a very large LLM is already very expensive; will the next iteration be *exponentially* more expensive?


There's also a certain limit to how good an LLM can get. At a certain point adding parameters isn't going to help and may even make things worse.

Take an example from chemistry. Imagine having a perfect memory for every chemical reaction in the literature. Presented with a reaction problem, one could simply recall the solution. An LLM capable of this could be amazingly useful, but it only works if the solution is in the literature. A limited LLM would miss reactions. Adding parameters would allow a model to cover the literature more completely, but a more useful system must have a level of blur.

If you ask a question that doesn't have a precise answer in the literature, you have to rely on the LLM clustering related reactions, a blurring process. Training the model would, in effect, have to re-derive the stuff they teach in chemistry class, like acids and bases and alkali metals. The quality of the answer depends on getting the blur just right. If there are too few parameters, necessary information may be missed. If there are too many, the answer could be unstable and too subject to artifacts of the training process.

There is a good chance that any chemistry LLM could get answers as good as someone who excelled in chemistry class, possibly even better than most. There is also a good chance that, presented with a novel query, an LLM will make beginner's mistakes or worse. Lacking introspection, it may not recognize the uncertainty of its answer or be able to properly correct itself as needed. Every real expert knows when to give a definite answer, when to give a vaguer answer surrounded by "weasel words," and when simply to offer a referral to another expert.

I'm pretty sure LLMs have a good future helping programmers because programming languages and systems are regular and complex. A lot of the challenge is pattern matching and keeping track of things. I think natural language databases, which have been around in various forms since the 1960s, are overrated since natural language is full of ambiguity. (I worked on such a system back in high school.)

I think a big problem is conflating LLMs and AI with the actual information revolution which started in the 19th century with mechanical feedback devices like engine governors, moved to electricity in the early 20th century with devices like thermostats and has now gone completely digital in the 21st century. I expect the boost in productivity to lead to higher living standards, but that is a political problem, not a technical problem. (Look at our feeble response to the 2008 crash that left the economy in a coma until the COVID emergency finally triggered a real stimulus.)

P.S. Chemistry is a good problem domain to consider. Take a simple problem: ask an LLM whether two SMILES strings represent the same molecular structure. SMILES is a system for rendering complex chemical structures as linear strings, but there is no single canonical form. If a human wants to compare two SMILES strings, they have to envision the chemical structure. If an LLM wants to do this, well, I'm not sure how it would do so except by effectively reinventing the idea of chemical structure.
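
By contrast, a conventional cheminformatics toolkit answers the question by parsing each string into an actual structure and comparing canonical forms. A minimal sketch, assuming RDKit is available (the ethanol example below is just an illustration):

```python
# Minimal sketch: decide whether two SMILES strings name the same molecule
# by parsing each into a structure and comparing canonical SMILES.
# Assumes RDKit is installed (pip install rdkit).
from rdkit import Chem

def same_structure(smiles_a: str, smiles_b: str) -> bool:
    """Return True if the two SMILES strings denote the same molecule."""
    mol_a = Chem.MolFromSmiles(smiles_a)
    mol_b = Chem.MolFromSmiles(smiles_b)
    if mol_a is None or mol_b is None:
        raise ValueError("one of the SMILES strings failed to parse")
    # MolToSmiles emits RDKit's canonical SMILES, so plain string equality
    # settles the question, whatever form the inputs were written in.
    return Chem.MolToSmiles(mol_a) == Chem.MolToSmiles(mol_b)

# Ethanol written two different ways:
print(same_structure("CCO", "OCC"))  # True
```

The point being that the toolkit works precisely because the idea of chemical structure is built into it, rather than something it has to rediscover from text.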


For example, title of chapter 2 in that Fogel Book: Technological Change, Cultural Transformation and Political Crises. You've probably covered some of it in Slouching. But more could be said, too, about things that Fogel didn't see coming etc. Roughly 25 years have gone by since.


Brad, in the year 2000 of our Lord, Robert Fogel published The Fourth Great Awakening and the Future of Egalitarianism. In the year 2024 of our Lord, you have the material to write, say, The Fifth Great Awakening and the Future of (pick your issue). Can't wait.

BTW, that Fogel book also has a nicely-labeled population chart in the beginning that your current students will love.


While I found this piece fascinating and agree with much of it, I do question this section:

"The fact that the conversations will be “good enough” will bring transformations in human society. We have long talked to our pets and our imaginary friends. We have all benefited from interaction and feedback from our coaches. But, in the future, for the first time, our pets and our imaginary friends will answer us back—we will not have to imagine what they would say. And, for the first time, a first-class coach will be available to everyone for free."

I think this is a fundamental misunderstanding of what pets and coaches are and do. I certainly don't think everyone will have a first class coach for free.

The imaginary friend that can talk back convincingly is more interesting, both because I think children are more easily convinced/satisfied and because I could imagine it leading to interesting dynamics in both negative and positive ways. What happens to children who have a real-life Ted or Klara as a companion is an interesting question.


This kind of thing works because humans are good at anthropomorphizing. It's part of what lets us deal with other people, domesticate pets, look for omens and swear at balky machinery. Having a golden calf that spouts actual prophecy, gives orders and answers questions doesn't strike me as real progress.


Yes. What was in my mind was that good coaching (or teaching more generally) isn't so much about providing someone with "correct" answers. Having this information is important, but it's also about understanding the person being coached or taught and knowing what to respond or suggest that will actually get them to change their attitudes or behavior in a constructive way. The real magic lies there, and I have yet to see any examples of AI being able to do this effectively (though I would be interested in hearing more if anyone thinks such examples exist).

And I don't even have an idea of what Brad is suggesting with regard to AI and pets?!

But the imaginary friend idea does strike me as interesting, especially for kids. I think it would have been pretty amazing as a child to have an electronic intelligence or doll/figure that could actually respond "intelligently" to its surroundings. The intriguing question is how that is different from playing with a brother or sister or a friend. I can imagine ways that it would have been better or worse, although I'm also not sure that the ways I would have found it better or worse would actually have been better or worse (in terms of raising a "successful" or "healthy" child). It's a pretty interesting area.


The problem is that a kid can play with a clothespin and have as much fun and get as much educational value as if it were a walking, talking companion. Back when I was a kid, I used to hang out with other kids, so I speak from experience. I think having an AI interactive companion toy is the kind of thing that appeals more to adults than children. Worse, it eliminates important elements of imaginative play. Odds are a kid will want to play with the box the AI gadget came in.

P.S. I can't help but think of John Brunner's Dr. Smiles, a personalized AI psychiatrist people used in one of his books to break themselves down mentally to avoid getting drafted into living on Mars. (Was that Jagged Orbit?)

You are right about tutoring. It's a lot more than just presentation and drill. You have to build a model of what the student understands and doesn't. Then, you have to find the key to get them past that point.


You're, of course, right that children can turn anything into a great play item. My obsession as a kid was these wonderful, high quality, plastic animals that my grandfather would buy as a treat each year from the Peabody Museum in New Haven which we would go to on our annual visit. I would spend endless hours playing with these animals, usually having them compete in football games (I was a sports nut child as well as adult).

That said, I think a robot stuffed animal or other toy that would actually respond "intelligently" to requests or situations could spawn its own forms of imaginative play. And I also think a Dungeons and Dragons game where the characters actually behaved "autonomously" based on a set of programmed character traits (versus a script or a formula) could be awesome as well. AI that could power both of these scenarios in high-quality ways seems possible to me in the next few years, versus high-quality teaching or coaching, which seems much, much further in the future (if ever possible).

In fact, I'm not sure I would even use the word "model" in terms of tutoring. Or perhaps I would say that having a model is important, but you also need a quality of empathy that I'm not sure AI will ever possess. At least for people who are not so self-motivated that more sophisticated drill and correction is really all they need.


Kids play video games, and many of them have NPCs, non-player characters, driven by primitive AI systems. I did some reading a few years back, and the scripts and rule systems that drive them are very 1980s within the well-defined game world. They did the expected graph searches for path analysis, but they also navigated a complex class hierarchy, which wasn't something I expected. I assume a more modern system would learn to act as an NPC by playing with other similar NPCs.
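
To make the "expected graph searches" concrete, here is a minimal sketch of breadth-first-search path-finding on a tile grid, roughly the kind of thing those NPC systems do. The map and coordinates are made up for illustration; real engines usually use A* with a distance heuristic rather than plain BFS.

```python
# Breadth-first search over a tile grid: the classic NPC "get from A to B"
# graph search. '#' marks an impassable tile; everything else is walkable.
from collections import deque

GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def neighbors(pos):
    """Yield the walkable tiles adjacent to pos."""
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield (nr, nc)

def shortest_path(start, goal):
    """Return the list of tiles from start to goal, or None if unreachable."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                frontier.append(nxt)
    return None

print(shortest_path((0, 0), (3, 4)))
```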

Still, I wonder if a more advanced AI system embodied as a physical doll or toy would provide a different experience from an NPC in a video game. Whether it makes sense or not, I can imagine some reticence on the part of Amazon, Google and the like in developing AI systems for children what with privacy, porn and culture war issues. We obviously want something well beyond Teddy Ruxpin, but it's hard to say who might deliver.

Model might not be the right word, but a lot of empathy is about building an understanding of another party's experience and perceptions.


Klein: " Thus we are not prepared to handle it well, and there is the possibility that it may turn out to trigger some form of societal or human catastrophe."

DeLong: This fear on Ezra Klein’s part is entirely rational. And it is, I believe, correct.

I do not see any indication of "societal or human catastrophe" in your discussion of LLMs and AI in the following paragraphs. If anything, Klein has wedded himself to those who worry about the existential risks posed by AI, whilst your view seems to be more like: "we have seen these transitions before".

I am not even sure the transitions are that great, and despite Klein's characterization of Pichai's views as not hype, it seems to me that claiming AI is more important than fire or electricity is an overstatement. We are now headed to the "peak of expectations" for AI, at least in its latest incarnation.

Let's be realistic here. Co-pilots are not fundamentally going to change anything, just speed up some processes. Need a word? No need to thumb the thesaurus. Need some code examples? Possibly faster than searching Stack Exchange. Doing calculations or extracting and visualizing information? Getting the data, assuming it is not hallucinated, is going to be faster, but after that, it is far easier to use a spreadsheet. Talking to inanimate objects? People have been doing that for years, and making up the answers. Is a chatbot going to be that much better than a "sub-Turing model of a person in your head"?

Fire really did transform the world: keeping us warm, cooking food (which changed the size of our intestines and brains), making killing game more effective, all the way to the post-industrial age. Electricity is very transformative, although whether it is more important than literacy is debatable. But AI? It will be productivity enhancing in some domains. It will likely prove a time-sink in others, like PowerPoint. DeepMind just demonstrated a hybrid LLM with symbolic logic to solve geometry problems. Even if it could solve any math problem, just how transformative would that be? For most people, it would be meaningless. DeepMind's tertiary structure predictions for proteins are very impressive, but they will not directly impact the work of more than a tiny fraction of the population, although they may ensure more timely development of candidate medicines, which still need nearly two decades to validate for the clinic. Cut that time in half, and we would be doing drug development from NCE to the clinic in times comparable to the middle of the C20th.

The real risks for "societal or human catastrophe" are:

1. In the short term, sociopathic leaders starting global wars, escalating to the use of nuclear weapons. The results would be seriously catastrophic.

2. Global heating. We continue to pretend to be "doing something," yet it really is business as usual with some tweaks. The nth-order effects are barely even thought about. Fanciful ideas of living in flooded cities like Miami and feeding ourselves with gene-engineered crops pollinated by mechanical insects are just bad science fiction.

AI is unlikely to solve either of these major risks.

The questions you raised in your excellent tome - "Slouching Towards Utopia" - concerning our collective failure to solve distribution will become ever more acute. AI increasing productivity could considerably worsen inequality, especially given how policy is controlled in many nations. We are not moving towards the original universe of Star Trek, but towards the many dystopias that mimic the past but with much more technology.

My bet is that by 2050, economists will find it hard to demonstrate specific productivity gains attributable to AI, just as they did with computers, especially with the PC revolution and the proliferation of software.


Prediction: LLMs will decrease the demand for knowing programming syntax, but increase the value of applying logic and troubleshooting, because LLMs are an interface to data, and real-world data is very messy. For example, a smart human still has to fine-tune LLM applications to corporate data.

The firms that pioneer new technologies typically fail (railroads, radios, internet), many absurdly so. I suppose some AI firm(s) will be profitable, but many will be laughable mal-investment. There's usually some later benefit from the mal-investment. Someone may find other uses for a surplus of fast matrix-processing chips.


Thx! Brad


That was quite good. Thanks for sharing.
