15 Comments

I’m so glad you brought up the Chinese room!

I was thinking that it’s amazing that that argument is even more relevant than it was 40+ years ago. At the time, one of the better counterarguments to Searle’s thought experiment was that a room like the one he described seemed too improbable to ever actually exist, and yet, here it is.

Searle wondered how you could possibly get semantics out of syntax.

Now we’ve found we can get syntax out of statistics, but we’re still trying to find where the semantics is located.

Brad,

For a counter-argument (I think) to your essay, here is an extended conversation between Sam Altman (OpenAI CEO) and Lex Fridman. It's over two hours long; Lex publishes much shorter clips after the main content if you don't have time for the full interview. https://www.youtube.com/watch?v=L_Guz73e6fw

I recall hearing about Searle's Chinese Room in the '80s. It always seemed a bit suspect as an argument then, and it still does. But now I can flesh out my suspicions a bit:

Consider algebra and calculus. I did really well in those subjects, and I recall thinking that was because I was able to suspend my "what does it mean?" reflex for a time and just manipulate the symbols. If I could faithfully carry out the rules, I could solve for x, or take a derivative, or integrate a function. So I did, and I got answers. People did that even further and got things like Maxwell's equations and Schrodinger's equation. This led us to understand things that we didn't understand before, through nothing more than symbolic manipulation.
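To make that "faithfully carry out the rules" point concrete, here is a minimal sketch using Python's sympy library (my own illustration, not something from the original comment): the library mechanically applies rewrite rules and arrives at correct answers without anything we would call understanding.

```python
# Symbol manipulation without "understanding": sympy applies rewrite rules
# to expressions and produces correct answers purely mechanically.
from sympy import symbols, solve, diff, integrate, sin

x = symbols('x')

# Solve for x in 2*x + 3 = 11 by rule-driven rearrangement
print(solve(2*x + 3 - 11, x))   # [4]

# Take a derivative: d/dx (x**3 + sin(x))
print(diff(x**3 + sin(x), x))   # 3*x**2 + cos(x)

# Integrate a function: integral of x**2 dx
print(integrate(x**2, x))       # x**3/3
```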

I grant you that understanding is required both to set up the problem and to interpret the results. But the Chinese Room addresses itself to the middle, which is where we just apply a bunch of rules.

Not that I think ChatGPT is all of it, or that the robots are about to take over. I just always found the Chinese Room idea very suspect.

For this line of questioning, it would be nice to understand GPT’s logic algorithm. Deductive logic draws very solid conclusions, whereas inductive logic is fraught with potential pitfalls but is necessary when the database is incomplete, in order to avoid paralysis.
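For readers less familiar with the distinction, here is a toy contrast in Python (purely my own illustration; nothing here describes how GPT actually reasons):

```python
# Deductive inference: if the rule and the fact hold, the conclusion must hold.
mortals = {"human", "dog", "cat"}
def is_mortal(animal):
    return animal in mortals

print(is_mortal("human"))  # True, and it follows with certainty from the rule

# Inductive inference: generalize from an incomplete sample; it may be wrong.
observed_swans = ["white", "white", "white"]
all_swans_are_white = all(color == "white" for color in observed_swans)
print(all_swans_are_white)  # True from this sample, yet black swans exist
```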

Among other things, BradAssistantBot can't smell or taste, and it has surely never eaten anything. It can tell you only what people have smelled, tasted, eaten, and written about the sandwich. If most people have written that a bologna sandwich tastes like crap, it will also likely say that it tastes like crap, and that would be a good thing (because it does!). The Bot tastes and smells nothing, yet it can "feel" that way about the sandwich and tell you why people have felt that way, reasonably or unreasonably. I guess I'm dinging the Artificial part of the AI, and you are being more reasonable by dinging the Intelligence part so far.

Wow, just wow. But ... Hofstadter, I think, would argue that our brains are just ever more complex layers, and that at some point "intelligence" emerges as a network/graph phenomenon.

And what if our evolved human algorithms are as simple as those in many neural nets? Then the emergent properties become a function of how much data is fed to them, not of the complexity of the algorithms. You can do that slowly with biological-scale evolution, or quickly with computers.
