15 Comments

I’m so glad you brought up the Chinese room!

I was thinking that it’s amazing that the argument is even more relevant now than it was 40+ years ago. At the time, one of the better counterarguments to Searle’s thought experiment was that it seemed so improbable that such a room could ever function as described, and yet here it is.

Searle wondered how you could possibly get semantics out of syntax.

Now we’ve found we can get syntax out of statistics, but we’re still trying to find where the semantics is located.

Brad,

For a counterargument (I think) to your essay, here is an extended conversation between Sam Altman (OpenAI's CEO) and Lex Fridman. It's over two hours long; Lex publishes much shorter clips after the main content if you don't have the time for the full thing. https://www.youtube.com/watch?v=L_Guz73e6fw

author

thx much... Brad

I recall hearing about Searle's Chinese Room in the '80s. It always seemed a bit suspect as an argument then, and it still does. But now I can flesh out my suspicions a bit:

Consider algebra and calculus. I did really well at these subjects, and I recall thinking that was because I was able to suspend my "what does it mean?" reflex for a time and just manipulate the symbols. If I could faithfully carry out the rules, I could solve for x, or take a derivative, or integrate a function. So I did, and I got answers. People did that even more and got things like Maxwell's equations and Schrodinger's equation. This led us to understand things that we didn't understand before - from simple symbolic manipulation.
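
To make that concrete, here is a minimal sketch using the sympy library (the specific expressions are arbitrary illustrations, not anything from the discussion): the rules get carried out faithfully and answers come out, with no appeal to what the symbols mean.

```python
# Pure rule-following: solve for x, differentiate, integrate,
# all by symbol manipulation alone (assumes sympy is installed).
import sympy as sp

x = sp.symbols("x")

# Solve 2*x + 3 = 7 for x
solution = sp.solve(sp.Eq(2 * x + 3, 7), x)          # [2]

# Take a derivative of x**3 + sin(x)
derivative = sp.diff(x**3 + sp.sin(x), x)            # 3*x**2 + cos(x)

# Integrate exp(x) * cos(x)
antiderivative = sp.integrate(sp.exp(x) * sp.cos(x), x)

print(solution, derivative, antiderivative)
```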

I grant you that understanding is required to both set up the problem and interpret the results. But the Chinese Room addresses itself to the middle - which is where we just apply a bunch of rules.

Not that I think ChatGPT is all of it, or that the robots are soon to take over. I just thought the Chinese Room idea to be very suspect.

author

But there does—for me—come a point where the near-rote manipulation of symbols transforms into _**understanding**_...

I do, however, find getting semantics from statistics somewhat more plausible than semantics from syntax. Thinking about how synonyms must be encoded in LLMs: it’s clearly not procedurally building a thesaurus and doing lookup. It must be that if you replace a word with its synonym, it generates a similar output. Semantics by clustering? ...
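
The "semantics by clustering" hunch can be probed with sentence embeddings: if synonyms really are encoded by proximity, swapping one in should barely move the representation. A minimal sketch, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model are available; the example sentences and the expected ordering of similarities are assumptions, not measured results.

```python
# Probe whether a synonym swap leaves the embedding nearly unchanged,
# while a genuine change of meaning moves it farther away.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The movie was great.",     # original
    "The film was great.",      # synonym swap
    "The movie was terrible.",  # change of meaning
]
a, b, c = model.encode(sentences)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print("synonym swap:  ", cosine(a, b))  # expected to be close to 1
print("meaning change:", cosine(a, c))  # expected to be noticeably lower
```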

Right, and you flesh out some approximation of what that scale might be in your post. I think that for a machine to be "human" it also needs an amygdala. A thing sending signals at a fairly primitive level. Signals like "want food", "Looks dangerous", and "What dat?"

And wow, do I not want these sorts of things built into robots.

For this line of questioning, it would be nice to understand GPT’s logic algorithm. Deductive logic draws very solid conclusions, whereas inductive logic is fraught with potential pitfalls but is necessary, when the database is incomplete, in order to avoid paralysis.

author

It would be nice. But not even the designers understand it...

Ethan Mollick wrote an essay comparing Google’s AI program to Bing’s.

https://open.substack.com/pub/oneusefulthing/p/acceleration?r=yvpor&utm_medium=ios&utm_campaign=post

I find it hard to believe that the dramatic gap between Google’s and OpenAI’s results is a freak accident.

author

Tell me more...

All I meant by this is that for most skills, such as lab, surgical, or programming techniques, there is an academically or industry-taught basic skill. Then there are the secret sauces that distinguish the standouts. Given that there were probably incremental improvements from GPT-1 to GPT-4, the devs probably know what those sauces were.

Regarding Google Bard, I’m not certain whether it’s a case of not answering the question by jumping ahead of it (answering the anticipated next question, thereby killing two birds with one stone) or of its conclusions simply being wrong. I can’t imagine that they are too far behind. Perhaps its inductive leaps need to be shorter?

author

Yes: the "secret sauce" is always very important...

Among other things, BradAssistantBot can't smell or taste, and it has surely never eaten anything. It can tell you only what people have smelled, tasted, eaten, and written about the sandwich. If most people have written that a bologna sandwich tastes like crap, it will also likely say that it tastes like crap -- and that would be a good thing (because it does!). The Bot tastes and smells nothing with which to feel that way about the sandwich, or to be able to say why people have felt that way, reasonably or unreasonably. I guess I'm dinging the Artificial part of the AI, and you are being more reasonable by dinging the Intelligence part so far.

Wow, just wow. But ... Hofstadter, I think, would argue that our brains are just ever more complex layers and that at some point "intelligence" emerges --- a network/graph phenomenon.

And what if our evolved human algorithms are as simple as those in many neural nets? Then the emergent properties become a function of how much data is fed to them, not of the complexity of the algorithms. You can do that slowly with biological-scale evolution, or quickly with computers.
