6 Comments

This conversation is one of the best I've seen showing that we are not dealing with an artificial intelligence; we are dealing with a simulated intelligence. It's fascinating. It's cool. But if traders, for example, mistake AI for SI, then there is going to be room for some really big money to be made by the old wetware in the skulls of Charlie Munger types. I recall so clearly the 2000 ASSA conference, with packed ballrooms of Dow 36,000 acolytes, the new dot-com technical analysis, claims that today's market was different and that the financial theory of the past was out... and I recall a sparsely attended talk given by a quite elderly Modigliani, who said, "Seen it before. It's a bubble."

"The market capitalization of Tesla that I provided in my answer “13 of 30” above was based on the data as of July 22, 2023, from various sources. "

Everyone should have a problem with "from various sources."

I wonder if the results of this conversation have to do with the finite number of tokens allowed per discussion.

What I love about the conversation is the frantic backpedaling to come up with an explanation for a wrong answer. It reminds me of some conversations I've had with conspiracy lovers or very high people. (I've also observed it in dialogs with right-wingers.)

Hmmm, Porsche and Volkswagen are pretty much the same company. Gee, perhaps AIs are not very, ummm, expert.

That was a very amusing exchange with Bing. In a human this would be called BS, and I might even diagnose the Bing GPT-LLM (were it mistaken for a human) as having Dunning-Kruger. It has misplaced confidence in its "expertise".

However, I think you draw the wrong conclusions based on current raw LLM technology. I have little doubt that when an LLM is integrated as a front-end to a hive mind of expert-system AIs, adding context and memory will provide better results, as the data and analysis will be better. [Memory being what it has access to, rather than short-term memory of the conversation.] As for the Tesla errors, I would expect a good AI to caveat its answer with a reference source and the date of its information, noting the extreme volatility, and to include hyperlinks to the latest data. As long as we don't have to turn the planet into Computronium to do this, I expect that LLMs integrated with other computational algorithms will, within 5 years, provide very good answers. OTOH, that may be as optimistic as thinking we should have excellent self-driving cars by now. ;-( [Does Rodney Brooks have good, dated predictions about the performance quality of LLM AIs?]
