Discussion about this post

Alan Goldhammer

Isn't ChatGPT just another way to waste time on the Internet? I may be one of the few Americans who has neither a Twitter nor a Facebook account, and my only guilty pleasures are a handful of Substack writers. If the Substackers start to slack off in terms of content, it's easy to say adios and move on to something else.

Kaleberg

There was a contest at Carnegie Mellon back in the 1980s to come up with the best meaningful use of “Colorless green ideas sleep furiously”. The winner was along the lines of:

"Through the icy winter, the seeds lie dormant. Before the frost sets in, eager gardeners peruse colorful catalogs promising the fertile glories of the spring. They plant with hope, and through the long winter those colorless green ideas sleep furiously."

It isn't about intention. It's about having a useful model. You can ask what country is south of Burundi and get a good answer without a model, but can you then ask what country is south of that or to its west? How about two countries south, or three? With a model, for example a mental or physical map of Africa, those answers are easy. Without a model, getting a good answer requires someone to have asked the question before and enough conversational ability to track the conversation and supply an antecedent for "that".
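The point about a model supporting chained queries can be made concrete with a toy sketch. The directional table below is hypothetical and heavily simplified (real borders are messier), but it shows how an explicit model makes "two countries south" trivial, whereas a memorized question-answer list would not:

```python
# A toy "world model": a directional adjacency table for a few African
# countries. Simplified for illustration; not a complete or exact map.
SOUTH_OF = {
    "Burundi": "Tanzania",
    "Tanzania": "Mozambique",
}
WEST_OF = {
    "Tanzania": "DR Congo",
}

def south(country, steps=1):
    """Follow the 'south of' relation `steps` times."""
    for _ in range(steps):
        country = SOUTH_OF[country]
    return country

print(south("Burundi"))           # one step south of Burundi
print(south("Burundi", steps=2))  # two steps south
print(WEST_OF[south("Burundi")])  # west of the country south of Burundi
```

With the model in hand, compositional questions reduce to repeated lookups; without it, each composed question must have been seen and answered before.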

The lack of a world model is a major problem. One glaring issue with image generators, for example, is that they have no model of number. If you ask for three eagles, you will likely get three eagles, but if you ask for four or five, you have run into one of those primitive intelligences that cannot count. There's a similar problem with protein folding or reaction prediction. If the space has been densely explored and the question involves simple proximity, the system can give fairly good answers. It falls apart completely in parts unknown, or when it needs to make an actual inference.

8 more comments...
