Continuing to Worry at þe Chat-GPT4 Issue, as If I Were a Dog, & It an Old Shoe
BradAssistantBot is a neural network less than 1/30000 as complex as Brad, and it has been trained to please human testers with its fluency rather than to understand the world; no wonder it strikes...
For someone who has—I think—about three day jobs now, I have been wasting an awful lot of time thinking about Chat-GPT4 and its cousins.
On the other hand: gradient descent—doing what seems to be the most rewarding and interesting thing at each moment—has served me rather well in my career.
After all, the key is not to attempt to do what would be the best thing to do right now but rather to do what you can do enthusiastically, and so productively, in the moment.
So here goes:
I think I have found a long-winded way of explaining why I think what I do about Chat-GPT4 and its cousins.
I continue to try to train BradAssistantBot—starting from the underlying ground truth of the text of my Slouching Towards Utopia—to be a ChatBot that will give answers close to the answers I would give.
I continue to fail—to get answers in the Uncanny Valley, and answers that are not so smart.
Consider the question that I asked BradAssistantBot last night, and its answer:
How did Russia manage to survive the first two years after the Nazi attack of 1941?
I believe Brad DeLong thinks this: that Russia managed to survive the first two years after the Nazi attack of 1941 due to a combination of luck, resilience, and the support of its allies. According to DeLong’s book, Concrete Economics, the Nazis launched their attack on the Soviet Union in June 1941. At the time, the Soviet Union was already weakened due to internal strife, the Stalinist purges, and the ongoing war against Finland. As the Nazis advanced, the Soviet Union was able to hold them off for two years due to luck, resilience, and the support of its allies.
Luck played a huge role in the Soviet Union’s survival, as the Nazis experienced delays in their push due to logistical problems, the weather, and the terrain of the Eastern Front. The Soviet Union was also able to survive due to its resilience and the strength of its people. The Soviet Union was able to mobilize its people to fight, build fortifications, and retreat when necessary. Finally, the Soviet Union received support from its allies, notably through the Lend-Lease Act, which provided the Soviets with essential supplies and materials. All of these factors combined to allow the Soviet Union to survive the first two years after the Nazi attack in 1941.
This answer is not what I had hoped for.
I would have wanted it to draw on Slouching Towards Utopia’s relevant passages, and construct an answer like:
I believe Brad DeLong thinks this: Stalin had purged and repurged the army of anyone he thought might be a threat. And the competent are a threat. Thus it is little surprise that the USSR lost an entire army, as large as and as well equipped (but not nearly as well trained or capable) as the army the Nazis attacked with, in June and July 1941 in battles near the border. It then raised an entire second army. It then lost that second army between July and October as Stalin and STAVKA misread the situation, and tried to hold the line and counterattack. As of December 1941 Nazi armies had advanced 600 miles. They were at the gates of Leningrad, Moscow, Kharkov, and Rostov.
But the USSR had raised a third army.
This army held, and counterattacked, and fought the battles of the winter of 1941–1942.
The USSR then avoided losing that third army in the summer and fall of 1942. They were mauled. But they were not destroyed. They fell back. And, scraping their manpower barrel, relying on the armaments factories that Aleksei Kosygin’s team had evacuated and moved to safety in the east, plus benefitting from the first waves of Lend-Lease supplies, the USSR had built a fourth army with which to launch the winter 1942–1943 offensives.
The Soviets attempted two great offensives in the winter of 1942–1943. Operation Mars against the Nazi center was a very costly failure. Operation Uranus against the long exposed Nazi flank of Stalingrad was a total and overwhelming success, with mammoth strategic consequences. It was a victory that was in some sense lucky: it was made possible only by the extraordinary strategic lapses that had ordered the Nazi forces to their dispersed late–1942 positions.
The Red Army might thus have grasped the last chance for the Allies to win a victory in World War II that did not require reducing Germany to a radioactive wasteland.
If Operation Uranus at Stalingrad had failed as catastrophically as did Operation Mars in front of Moscow, and if the fourth tranche of the Red Army had, like its three predecessors, also been ground up into ineffectiveness in the months after its deployment, would Stalin have been able to raise a fifth? Or would that have been the end of the Soviet Union?…
I see how page-level autocomplete gets to the first answer. I am damned if I can think how page-level autocomplete is ever going to get from the first answer to the second.
BradAssistantBot is, after all, just a Chinese-Speaking Room: manipulating symbols according to rules:
Construct a metric of page similarity.
Using that metric, find the pages from texts written by Brad DeLong that are the most similar to the question.
Construct prompt by adding those most-similar pages to the question.
Using that metric, find the pages in your training data set that are “nearest neighbors” to the prompt page.
Calculate what your training data tells you are the successor pages to the “nearest neighbor” pages.
Average those successor pages in some way, and return that average as your answer.
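A minimal sketch of what that symbol-manipulation loop looks like in practice, using a toy bag-of-words similarity metric. All of the names and the metric here are illustrative stand-ins of my own devising, not anything from the actual Chat-GPT4 pipeline:

```python
import math

def embed(text):
    # Toy "page similarity" metric: a bag-of-words count vector.
    vec = {}
    for w in text.lower().split():
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, pages, k=2):
    # Steps 1-2: rank source pages by similarity to the question.
    q = embed(query)
    return sorted(pages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def build_prompt(question, source_pages):
    # Step 3: prepend the most-similar source pages to the question.
    context = most_similar(question, source_pages)
    return "\n\n".join(context) + "\n\nQ: " + question
```

Steps 4 through 6—finding nearest-neighbor pages in the training data and averaging their successor pages—are what the language model itself does, and are not reproducible in a few lines.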
That is what Chat-GPT4 did in creating its answer. The gap between its answer and my answer is… rather large, not in grammar, but in coherence, sophistication, and—from my perspective—accuracy.
But don’t be too hard on Chat-GPT4. It is just a Chinese-Speaking Room, after all.
You might respond: Oh yeah, smarty-pants: then why does it do so well?
The answer to that is it does “so well” in limited domains. Where the overwhelming bulk of material on the Internet has prepared the way for it—has provided lots of correct, positive exemplars of answers that are successor pages and few incorrect, negative exemplars—it does well. If you find that corner of the vector space that is composed of working programs that people have written in response to requests—if there are lots of programs that work, and some programs that almost work, but almost no programs that are incomprehensible mush in the training data—then page-level auto-complete will be able to bull’s-eye in, and may well write you a working or a close-to-working program as its answer.
Otherwise, it will act like a high school junior who more than half-groks the patterns of symbols, but does not understand. Why would it act like such a high school junior who more than half-groks the patterns of symbols, but does not understand? Because it, pragmatically, more than half-groks the patterns of symbols. And yet it does not understand.
It is just a Chinese-Speaking Room, after all.
You might respond: Oh yeah, smarty-pants: but we are very confident that the “Chinese-Speaking Room” argument fails eventually, aren’t we?
A human neuron would require some 3000 neural-network nodes to model it, with each node connected to some 30 others. That’s 100,000 parameters per neuron, and there are 100 billion neurons. That’s 10 quadrillion parameters for the within-neuron processing alone. And then each neuron is connected to perhaps 1000 other neurons. That smells to me like a network more than 30,000 times as complex as Chat-GPT4.
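The back-of-envelope arithmetic above can be checked directly. Note that it rounds 3,000 × 30 = 90,000 up to the 100,000 figure quoted; the numbers below simply reproduce the paragraph's assumptions:

```python
# Reproduce the paragraph's back-of-envelope complexity estimate.
nodes_per_neuron = 3_000        # NN nodes needed to model one biological neuron
fan_in_per_node = 30            # connections per node
params_per_neuron = nodes_per_neuron * fan_in_per_node  # 90,000, rounded to ~100,000

neurons = 100 * 10**9           # 100 billion neurons in a human brain
within_neuron_params = params_per_neuron * neurons      # ~1e16: 10 quadrillion

print(f"{params_per_neuron:,} parameters per neuron")
print(f"{within_neuron_params:.0e} parameters for within-neuron processing alone")
```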
When you have a neural network 30,000 times as complex as Chat-GPT4 that has been trained by the genetic-survival-and-reproduction algorithm for the equivalent of 500 million years, I give you permission to come knocking at my door.
But not until then.
No, do not get me wrong—Chat-GPT4 and its cousins and successors will be a transformative technology. It may well rise to the significance of the internet or the personal computer. It will provide an order-of-magnitude improvement in a very wide swath of white-collar work. And it may well have more of an impact on society and communication than it really ought to have, because we are tuned to find its style of interacting with us so very plausible.
But the Singularity on the march, come to kill us all (possibly) or leave us as its meat puppets or pets? No.
What is the Chinese-Speaking Room argument, you ask? That is a question that Chat-GPT4 can answer:
Explain to me, concisely, the Chinese-Speaking Room argument:
Chat-GPT4: The Chinese-Speaking Room argument, proposed by philosopher John Searle, is a thought experiment that challenges the notion that computers or artificial intelligence (AI) can truly understand or possess consciousness. In the experiment, a person who doesn’t know Chinese is placed in a room and given a set of rules to manipulate Chinese symbols in response to input. The person can produce correct responses without understanding the language, thus appearing to possess Chinese understanding to an outside observer. The argument asserts that, similarly, AI systems only manipulate symbols according to rules, without genuine understanding or consciousness.
I’m so glad you brought up the Chinese room!
I was thinking that it’s amazing that that argument is even more relevant than it was 40+ years ago. At the time, one of the better counter-arguments to Searle’s thought experiment was that it seemed so improbable to imagine that such a room could function as described. And yet, here it is.
Searle wondered how you could possibly get semantics out of syntax.
Now we’ve found we can get syntax out of statistics, but we are still trying to find where the semantics is located.
For a counter-argument (I think) to your essay, here is an extended conversation between Sam Altman (OpenAI CEO) and Lex Fridman. Over 2 hours long. Lex publishes much shorter clips after the main content if you don't have the time for this. https://www.youtube.com/watch?v=L_Guz73e6fw