
I think the choice of the word "hallucinate" to describe an AI system producing nonsense was rather unfortunate. The systems are not hallucinating in the sense that they are misinterpreting memories or sense data. They are working exactly as expected, interpreting the data they are presented with, and the parameters they were trained on, perfectly correctly. The problem is that the algorithm is solving a problem that has nothing to do with the meanings of the tokens as they would be interpreted by a human being.

Maybe fifty years ago, whenever someone said an organism evolved something, someone would correct them and point out that organisms don't evolve. Populations evolve. No one points this out anymore. If you look at what computers can do, technically, from a computer science point of view, they cannot add numbers, not even simple integers. What a computer does, as an electronic device, is manipulate currents and voltages in a manner such that the manipulation can be interpreted as an addition by a human being. I don't think anyone ever corrected any layman saying computers add numbers, but the idea is the same. Don't confuse what an AI program, especially a statistically based large language model, does with reasoning about the real world in all but the most limited sense.
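To make that concrete, here is a minimal sketch of a ripple-carry adder in Python, with bits standing in for voltage levels. The function names and the little-endian bit convention are my own illustrative choices: the point is that nothing in the gate logic "knows" about numbers; the addition exists only in how a human reads the bit patterns.

```python
# A ripple-carry adder built only from Boolean operations (stand-ins for
# logic gates, which are themselves just arrangements of currents and
# voltages). The "addition" exists only in how we read the bit patterns.

def full_adder(a: int, b: int, carry: int) -> tuple[int, int]:
    """One full adder: two input bits plus a carry bit -> (sum bit, carry out)."""
    s = a ^ b ^ carry                        # XOR gives the sum bit
    carry_out = (a & b) | (carry & (a ^ b))  # majority logic gives the carry
    return s, carry_out

def add_bits(x: list[int], y: list[int]) -> list[int]:
    """Add two little-endian bit lists by rippling the carry through."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# We *interpret* [1, 1, 0] as 3 and [1, 0, 1] as 5; the gates do not.
print(add_bits([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1] -- i.e., 8 to a human
```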

We can get away with saying an organism evolved something or a computer added numbers, but we are still far from being able to comfortably say that an AI program answered a question or figured something out. It may have produced a useful set of tokens, or it may have produced nonsense of varying degrees. We may call that nonsense a hallucination, but that's just convention.


Re: Human Capital

All I can say is that humanities-educated people had better become highly aware of how to deal with polluted information if their education is to be useful. That is not to say the STEM-educated will not be misled, but at least there are good methods for eliminating the garbage.

Unlike those who hype "big data" and now MAMLM, I do not see real creativity that will create a new era in solving complex issues or generate cutting-edge science. I see it reducing the menial gruntwork. Like Star Trek computers: just tell the AI what you want and have it work out how to achieve the goal, execute it, and deliver the results. But asking the right, and interesting, questions will remain a human skill for some time to come. If these AIs cannot effectively curate the good from the bad to build quality models, then I don't see much future for them, except in areas where accuracy is not important.

Gary Marcus on information pollution and LLM training.

https://garymarcus.substack.com/p/information-pollution-reaches-new


Roubini: Hopefully Chinese attempts to go overboard with encouragement of manufactured exports (that is, more in a "state capitalism" model than a market-capitalism one) will be met by exchange rate adjustments rather than import restrictions by other countries. [The US needs to move toward a weaker dollar/less negative goods trade balance/less capital importing stance, anyway.] But either way, state capitalism has its risks.


Carroll or one of three.

Brooks chart: Why does the European 10% gap look larger than the US 10% gap on the same scales?


Tracy Alloway & Joe Weisenthal: Curious. They seem to be redefining "landing" as arriving at some (unstated) real growth and (real?) interest rate combination. I thought the idea of "landing" meant achievement of 2% PCE inflation at whatever combination of real growth and EFFR was consistent with that. Do they mean that perhaps a 2% target is not consistent with the most rapid possible real growth? Why not? The 2% target did not result from a real growth rate optimization search. Is this just another version of speculating that r* is a lot higher than in the past? But there could be an r* for any inflation target, income maximizing or not.
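A worked version of that last point, under the usual Fisher-equation assumption (my gloss, not the commenter's, and the numbers are illustrative): every choice of inflation target comes with its own neutral nominal policy rate, without requiring a different r*.

```latex
% Fisher relation: the neutral nominal policy rate i^* consistent with
% an inflation target \pi^* and a real neutral rate r^*:
%
%   i^* = r^* + \pi^*
%
% e.g. r^* = 1\% with \pi^* = 2\% gives i^* = 3\%; the same r^* with
% \pi^* = 3\% gives i^* = 4\%. There is an i^* for any target --
% a higher target does not require a higher r^*.
i^* = r^* + \pi^*
```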


Brad, the Sean Carroll video was excellent. Thanks! Among other things, Sean discussed physicist/complexity theorist Geoffrey West's ideas on scale. Here is one of West's videos on that topic: https://www.youtube.com/watch?v=XyCY6mjWOPc I used that video in a course I taught on complexity, about the problems with the ever-growing speed of innovation and the growth of cities. You (and, especially, Noah) may find it interesting.

In a comment on one of your other posts, someone suggested you have a conversation with Bob Wright, who wrote the classic book "Nonzero." In his recent blog, he criticized Marc Andreessen and wrote about Geoff West's issue of the problems with the increasing speed of innovation:

"Some of us believe that artificial intelligence is about to bring a burst of change so dramatic as to have unprecedentedly high potential for social destabilization. And we think its disruptive effect on employment, dramatic though that will be, isn’t the half of it. We’re not against the full unfolding of AI, and we see the great benefits that may bring, but we’re against it happening too fast, with too little reflection and guidance. And we think the only way to keep it from happening too fast involves government intervention—and, ultimately, coordinated intervention by governments around the world." https://nonzero.substack.com/p/marc-andreessens-mindless-techno


Gerben Wierda: "(2) creativity and 'hallucinations' are technically the same thing. They are the same. We just stick two different labels on them based on if we like/want the result or not…"

The proposition was gamely defended in the comments section. However, while the two may be the same in terms of the LLM technology, they are definitely not the same when put in context. Back in the dinosaur days of symbolic AI, Roger Schank put forward the idea of scripts (a close cousin of Minsky's frames). For example, when visiting a restaurant, there is a "script" to follow. This is a different script from attending a theater, and so on. Imagine if an Econ student claimed that Keynes was a monetarist. You say that is incorrect. The student says that it is "creative thinking." That is unacceptable in an academic setting [frame]. If you want an LLM to review a journal paper and add the paper reference and other supporting references for the review, inaccurate information is unacceptable. The creativity in the content is not allowed to leak over into the citations, nor into content that is factually incorrect. Conversely, writing a fictional review of a fictional book can be very creative, and acceptably so, as Stanislaw Lem showed. Context matters.
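On the "technically the same thing" point, a minimal sketch of next-token sampling may help; the vocabulary, logit values, and temperature here are invented for illustration. The same sampling mechanism produces both the output we call "creative" and the output we call a "hallucination"; only the surrounding frame assigns the label.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from a softmax over logits at a given temperature.
    Higher temperature flattens the distribution, making low-probability
    (surprising) tokens more likely -- the single knob behind both
    'creative' and 'hallucinated' continuations."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}  # stable softmax
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical next-token scores after "Keynes was a ..."
logits = {"Keynesian": 3.0, "economist": 2.5, "monetarist": 0.5}
print(sample_next_token(logits, temperature=1.5))
# Whether drawing "monetarist" here counts as creativity or hallucination
# depends on the context (frame), not on the sampling code.
```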

There is no universal "creativity" control that can be applied to LLMs, even if building one were possible. The creativity allowed is very granular and context-dependent. Perhaps specific prompts might be able to substitute for context clues, but clearly LLMs are not using any sort of referent frames, which we might call behavior norms. (Sit down, and talk with your table companions, but don't applaud when in a restaurant; sit down, but stay silent and applaud in a theater.)
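One way to picture those referent frames is as a lookup from context to behavior norms. This tiny sketch (names and norms invented for illustration) is just the restaurant/theater example above in data-structure form, in the spirit of Schank-style scripts, not anything an LLM actually implements.

```python
# Behavior-norm "frames" keyed by context: an action acceptable
# in one frame is a violation in another.
FRAMES: dict[str, dict[str, bool]] = {
    "restaurant": {"sit_down": True, "talk_to_companions": True, "applaud": False},
    "theater":    {"sit_down": True, "talk_to_companions": False, "applaud": True},
}

def acceptable(context: str, action: str) -> bool:
    """Is this action within the norms of this context's frame?"""
    return FRAMES[context].get(action, False)

print(acceptable("restaurant", "applaud"))  # False
print(acceptable("theater", "applaud"))     # True
```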

As we know, humor is a difficult concept (Lt. Saavik, Star Trek II) even for humans, and so far impossible to moderate correctly on social media. I think we will find this extends to LLM-style AIs, and indicates that the context and behavior norms humans learn as children and adults will not magically emerge in AIs. Inappropriate responses ("hallucinations") will bedevil LLMs until they can be taught the appropriate behavioral responses without experiencing them as humans do, through mimicry and training during development.


Sandbu: The rap on minimum wages is not that they are a drag on _productivity_, but that they promote inefficiently high per-hour productivity. The correct response is a) with elasticity of demand for labor < 1, it may be worth some reduction in hours worked to transfer income to low-wage workers, or b) "OK, get me the equivalent in a wage subsidy."
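A worked version of point (a), with an invented elasticity to make the arithmetic concrete: when the elasticity of demand for labor is below one, the wage bill going to low-wage workers rises even as hours fall.

```latex
% With labor-demand elasticity \varepsilon < 1, a wage increase of
% fraction g cuts hours by roughly \varepsilon g, so the wage bill
% changes by a factor of (1 + g)(1 - \varepsilon g).
%
% Example (illustrative numbers): g = 0.10, \varepsilon = 0.5 gives
%   (1.10)(0.95) = 1.045,
% i.e. hours fall 5\% but total income to low-wage workers rises 4.5\%.
(1+g)(1-\varepsilon g) = (1.10)(0.95) = 1.045
```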


Real estate in general, as exemplified by the SF real estate reported here, demonstrates that wages are not the only kind of price that is downwardly sticky and therefore requires a FAIT target greater than zero.
