"The pandemic might have caused a one-time decline in the economy’s capacity to supply goods and services, and supply is now catching up…" Isn't that what "temporary" as in temporarily-above-target means? Now it does not explain the details, whether some of the above target inflation might have been a mistake.

Yeselson: Yes, but how much of that is our (US) main problem? Does it explain why environmentalists who should know better do not support taxation of net CO2? Does it explain why we are not recruiting more "model minorities"? Reform of land use and building codes raises property values, mainly benefiting older Americans. Resistance to tax increases to reduce deficits, on the other hand, could be age-driven: it is the lower consumption of older Americans that would most likely supply the greater investment that smaller deficits yield, while future generations would enjoy the higher incomes.

Re: Humphrey Appleby

And this has become exponentially worse:

https://m.xkcd.com/386/

Neal Stephenson's "Anathem" imagines an alternative world with an internet where information is rated by "trust". Given what we have since experienced, that was a naive suggestion, as the changes at Xitter have made very explicit.

We live in a world that is a supercharged version of this well-known expression: “A lie can travel around the world and back again while the truth is lacing up its boots.”

Re: GPT-LLM-ML: Alfonso Reyes

I think he is over-egging his claim. We had what is now called GOFAI, which used various techniques to hand-craft expertise into automated systems. This was expensive and brittle. Then we had techniques to automate the development of rule-based systems, like decision trees that extracted rules from labeled tabular data (a toy sketch of this is below). The various forms of large-scale data manipulation to find patterns in the data are a result of the greater computational power available today. But to denigrate these ML techniques as not AI is stupid. Intelligence is about extracting information from the environment to provide superior responses, which in animals improved survival and reproduction. If ML is embodied, it would provide the basis for updating models of how to respond to the world - that is intelligent behavior.
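
To make the decision-tree point concrete, here is a minimal sketch, assuming scikit-learn; the toy loan-approval table, its feature names, and the labels are my own inventions for illustration:

```python
# Minimal sketch: extracting rules from labeled tabular data with a decision tree.
# Assumes scikit-learn; the toy loan-approval data and feature names are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Labeled tabular data: [income in $k, debt ratio], label = approve (1) or deny (0)
X = [[30, 0.6], [80, 0.2], [55, 0.4], [95, 0.1], [40, 0.7], [70, 0.3]]
y = [0, 1, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned tree is an explicit, human-readable rule set, extracted from the
# data rather than hand-crafted by a knowledge engineer as in GOFAI systems.
print(export_text(tree, feature_names=["income_k", "debt_ratio"]))
```

The printed rules (e.g. a split on debt_ratio) are exactly the kind of thing a GOFAI knowledge engineer would once have written by hand.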

AI doesn't have to be human intelligence, or even cat intelligence, and if we get to AGI, it will not be human intelligence.

So why not call it GPT-LLM-ML? The word "intelligence" is in AI only to lead people to confuse it with human or human-like intelligence...

I disagree with your characterization of the meaning of AI. "Intelligence" of varying degree is part of all animals, especially those with brains. It may be very weak in earlier-evolved phyla, but it is quite clear in primates and other mammals (including marsupials), in birds, and in cephalopods. It is true that much of AI is aimed at the sort of "intelligent" things humans do, but we also talk about "intelligent agents" and "smart devices", both of which show limited "intelligence" as we would recognize it in animals.

Clearly the "artificial" is there to distinguish machine intelligence, with its algorithmic nature, from the "natural intelligence" of wetware brains.

However, to use a [poor] analogy, saying "GPT-LLM-ML" is like saying "that's using logic" or "your answer is just rhetoric" - naming a method of thinking [Kahneman's System 2]. If that is what you intend, then I agree with you. We do label our reasoning in certain ways - description, story-telling, maths, statistics, economic models, etc. - and I see no reason why we cannot do the same for the different methods grouped under the general banner of AI.

But AI is a general term, just as IQ or intelligence is a general term for human intelligence, even though we know it is composed of different skills. [I thought Chuck Lorre's "Young Sheldon" brought this out beautifully in the episode where Sheldon and his sister, Missy, were subjected to intelligence tests measuring different aspects of intelligence.]

Mea culpa. Because my background is in biology rather than the humanities, I think of terms like intelligence across species, if only for comparison with humans. For example, while people often talk of "fairness" as a human trait and of its impact on human affairs such as the economy and politics, we know some other animals share this trait. Douglas Hofstadter made it quite explicit in his book "I Am a Strange Loop" that intelligence is a continuum across species. IIRC, he even gave up eating fish because of it. Dennett has made similar arguments in several of his books on intelligence.

In summary: you may be correct that most people think of AI as some manifestation of, and comparison to, human intelligence. I think of it in more general terms. However, as I noted above, if you want to be more explicit about how a particular "intelligent skill" is manifested, then I see every reason to use "GPT-LLM-ML", although I still think the more descriptive term "stochastic parrot" is catchier, if somewhat derogatory.

I hope my answer clarifies my thoughts on this.

Re: GPT-LLM-ML

Over the weekend I tried Google NotebookLM, which uses Bard. It allows one to import documents and then query them. I thought: "At last, a quick way to extract useful information from journal papers."

No such luck. I loaded five papers on astrobiology. Two test questions were met with the reply that none of the sources had any information regarding the request - clearly false. A third question simply hallucinated results about sample numbers and machine-learning accuracy from one of the papers. The three "helpfully provided" prompts gave, at best, partially correct answers.

IOW, absolute garbage. Google might as well shut down Bard in this implementation.

In other news, I note that Stability.AI no longer offers a free version of its Stable Diffusion "art" AI. It just demands that one upgrade to the paid version, despite the FAQ stating that a free version is still available. Huh?

I read that Shane Legg of DeepMind has now categorized the stages toward AGI, and says that state-of-the-art LLMs are still only at the very first stage - "emergent": https://www.technologyreview.com/2023/11/16/1083498/google-deepmind-what-is-artificial-general-intelligence-agi/

Are we past the "peak of inflated expectations" on the Gartner hype cycle and entering the "trough of disillusionment"?

My poor results may have been due to the 50,000-word limit. I tried again with a single paper and got better results, although some hallucination was still evident. However, the "stochastic parrot" nature of Bard showed when I prompted it to explain why certain terms were used. All it could do was repeat phrases, like a person who does not understand what the terms mean. Since the algorithm can only fake understanding, empathy, or any number of other human traits, I expected that.

As a tool, I suppose one just uses it in appropriate ways, rather like a specialist. My problem with it remains the hallucinations, which require checking the provided sources to verify the accuracy of the output; a crude automated version of that check is sketched below. This is a little like checking my writing for autocorrect errors before sending, but it somewhat defeats the purpose of a tool meant to extract information from a source. ;-|
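
For what it's worth, that source-checking pass can be scripted crudely. This is a minimal sketch under my own assumptions: plain-text source and output, naive sentence splitting, and an arbitrary 0.6 fuzzy-match threshold; none of this is anything NotebookLM or Bard actually exposes:

```python
# Crude hallucination check: flag output sentences with no close match in the source.
# The naive sentence splitting and the 0.6 threshold are arbitrary assumptions.
import re
from difflib import SequenceMatcher

def split_sentences(text: str) -> list[str]:
    """Very rough sentence splitter on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def flag_unsupported(output_text: str, source_text: str, threshold: float = 0.6) -> list[str]:
    """Return output sentences whose best fuzzy match in the source is below threshold."""
    source_sentences = split_sentences(source_text)
    flagged = []
    for sentence in split_sentences(output_text):
        best = max(
            (SequenceMatcher(None, sentence.lower(), s.lower()).ratio()
             for s in source_sentences),
            default=0.0,
        )
        if best < threshold:
            flagged.append(sentence)
    return flagged

# Toy example: the second output claim has no support in the source, so it is flagged.
source = "The survey analyzed 120 samples. Classifier accuracy reached 87%."
output = "The survey analyzed 120 samples. The model was trained on 10,000 images."
print(flag_unsupported(output, source))
```

Of course, this only catches claims that diverge textually from the source; a fluent paraphrase that is wrong in substance would sail straight through, which is rather the parrot's whole problem.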
