10 Comments

Isn't ChatGPT just another way to waste time on the Internet? I may be one of the few Americans who does not have either a Twitter or Facebook account, and my only guilty pleasures are a handful of Substack writers. If the Substackers start to slack off in terms of content, it's easy to say adios and move on to something else.

There was a contest at Carnegie Mellon back in the 1980s to come up with the best meaningful use of “Colorless green ideas sleep furiously”. The winner was along the lines of:

"Through the icy winter, the seeds lie dormant. Before the frost sets in, eager gardeners peruse colorful catalogs promising the fertile glories of the spring. They plant with hope, and through the long winter those colorless green ideas sleep furiously."

It isn't about intention. It's about having a useful model. You can ask what country is south of Burundi and get a good answer without a model, but can you then ask what country is south of that or to its west? How about two countries south, or three? With a model, for example a mental or physical map of Africa, those answers are easy. Without a model, getting a good answer requires someone to have asked the question before and enough conversational ability to track the conversation and supply an antecedent for "that".
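The difference a model makes can be sketched as data plus traversal. Here is a minimal sketch in Python, using a toy and deliberately incomplete adjacency map of a few African countries (the entries are illustrative, not a full geography): with such a structure, chained questions like "what is south of that?" reduce to repeated lookups rather than memorized question-and-answer pairs.

```python
# A tiny world model: a hand-built adjacency map of a few countries.
# Chained relational questions become repeated lookups in the structure.
neighbors = {
    "Burundi": {"south": "Tanzania", "west": "DR Congo", "north": "Rwanda"},
    "Tanzania": {"south": "Mozambique", "west": "DR Congo"},
    "Mozambique": {"south": "South Africa"},
}

def walk(start, directions):
    """Follow a sequence of directions through the map, if the path exists."""
    here = start
    for d in directions:
        here = neighbors.get(here, {}).get(d)
        if here is None:
            return None  # the model runs out of map
    return here

print(walk("Burundi", ["south"]))           # Tanzania
print(walk("Burundi", ["south", "south"]))  # Mozambique
print(walk("Burundi", ["south", "west"]))   # DR Congo
```

Without the map, each of those follow-up questions would need its own pre-stored answer plus enough conversational tracking to resolve "that".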

The lack of a world model is a major problem. One glaring issue with image generators, for example, is that they have no model of number. If you ask for three eagles, you will likely get three eagles, but if you ask for four or five, you have run into one of those primitive intelligences that cannot count. There's a similar problem with protein folding or reaction prediction. If the space has been densely explored and the question involves simple proximity, the system can give fairly good answers. It falls apart completely in parts unknown or when needing to make an actual inference.

As a WV Dem, I'd say there is a lot to be said about Manchin; he's just trying to do what Robert Byrd did before him, and what Jennings Randolph did before him. What I think gets lost in the noise is that he delivered a lot for Biden and the Dems. He's good, not great - but what the progressives fail to realize is that the perfect is the enemy of the good. He's not perfect. Manchin is a more reliable vote than Sinema.

Manchin (and Capito) have brought coal trains full of cash to WV for infrastructure, broadband, etc. Every municipality is happily taking the money, but I feel both of them will be voted out for not being trumpy enough.

Hi Brad- love your work and writing. When I give ChatGPT the same prompt, it tells me a very similar story, though mine has a magic tree instead of a purple flower. Upon clicking "Regenerate response" I get a new story about a girl named Sofia - this one with a magic pond and the same conclusion. I work in AI in a large professional services organization and previously worked in the derivatives industry. Color me cynical but I smell a rat. It would be all too easy to have a set of Mad-Libs type story templates at the ready, all of which lead to an uplifting conclusion. Upon more trials I get more of the same story with minor variations - always a female protagonist who always comes up flourishing and wise in the end.

So I do not think that the prompt "Tell me a story" invokes some deep language engine that is endowed with intention. Rather, I think there is one template for a ~500 word story that has a female protagonist, a mystical element (flower, wizard, pond, jewel, whatever), and an uplifting conclusion. Very nice, but I'm reminded of the ancient Eliza game, mashed up with choose-your-own-adventure formulaic storytelling. And I am reminded of how good humans are at hypothesizing deep patterns and linkages when presented with simple phenomena. All too easy to look at generative AI output like an astrologer would look at the stars. Again, color me cynical, but I think we are seeing simple bags of tricks, wrapped in some moderately adroit engineering. If there is any intelligence in this picture, it is in the interpretations readers are making of procedurally generated text. In my humble opinion, of course.

ChatGPT and other modern language models don't have explicit templates. They take a large pile of text and build a statistical structure. Not surprisingly, this results in template-like behavior. In that training text, there is a most likely response to "tell me a story", and its components have varying levels of probability that allow for variation - just like Mad Libs. The big advantage is that one doesn't have to come up with the templates. One just has to come up with text that lets them be properly derived.
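The point that statistics can produce template-like output without anyone writing a template can be illustrated with a toy word-level Markov chain. This is a far cruder mechanism than a modern language model, and the miniature corpus below is made up for the illustration; it only shows how a "template" can be derived from text rather than hand-authored:

```python
import random
from collections import defaultdict

# Build a word-level bigram model from a pile of text: for each word,
# record the words that follow it. No template is written anywhere;
# the template-like shape emerges from the counts in the corpus.
corpus = (
    "once upon a time there was a girl . "
    "once upon a time there was a wizard . "
    "the girl found a magic flower . "
    "the wizard found a magic pond ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Extend `start` by up to n words, sampling each successor."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("once", 8))
# The exact continuation varies with the seed, but it always has the
# shape "once upon a time there was a ..." with interchangeable slots.
```

The slots (girl/wizard, flower/pond) vary while the frame stays fixed, which is exactly the Mad Libs effect described above, derived instead of designed.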

Surely, the text being used to train ChatGPT includes lots of, but not all, dumb kids' jokes, dad jokes, proverbs and aphorisms, and popular movie lines. I wouldn't be a bit surprised if they threw in a bunch of SAT practice tests, so it will probably do moderately well on SAT questions. Make some minor modifications to flip the semantics, and, odds are, ChatGPT will get totally lost. (Has anyone tried a catechism? That would be good for an article on the state of ChatGPT's soul.)

Could you please write me the first 500 words of a catechism on AI?

Sure! Here is a catechism on AI:

What is AI?

AI, or artificial intelligence, refers to the ability of a computer or machine to perform tasks that typically require human-like intelligence, such as learning, problem-solving, and decision-making.

What are some examples of AI?

There are many different types of AI, and examples include:

Machine learning: This type of AI involves algorithms that can learn from data without being explicitly programmed. For example, a machine learning algorithm might be fed a large dataset of images, and it can learn to recognize patterns and classify the images based on their contents.

Natural language processing: This type of AI allows machines to understand and process human language. For example, a machine might be able to recognize and respond to voice commands, or it might be able to read and understand written text.

Expert systems: These are AI systems that are designed to perform tasks that typically require specialized knowledge or expertise. For example, a medical expert system might be able to diagnose diseases based on a set of symptoms.

Robotics: AI can be used to control and program robots to perform a variety of tasks. For example, a robot might be used in manufacturing to assemble parts, or it might be used in healthcare to assist with surgeries.

How does AI work?

AI systems are typically based on algorithms, which are sets of instructions that tell a computer or machine how to perform a task. These algorithms can be trained using large amounts of data and can learn to recognize patterns and make decisions based on that data.

For example, a machine learning algorithm might be trained on a dataset of images that are labeled with the objects they contain. The algorithm can then learn to recognize the patterns in the images and classify new images based on those patterns.

How is AI used?

AI has many practical applications in a wide range of fields, including healthcare, finance, manufacturing, and transportation.

In healthcare, for example, AI might be used to analyze medical images to identify abnormalities, or it might be used to analyze patient data to identify trends and predict outcomes.

In finance, AI might be used to analyze market trends and make investment decisions, or it might be used to detect fraudulent transactions.

In manufacturing, AI might be used to optimize production processes or to improve supply chain management.

In transportation, AI might be used to optimize routes for delivery trucks or to assist with autonomous driving.

What are the benefits of AI?

AI has the potential to greatly improve efficiency and accuracy in many different fields. By automating tasks that would otherwise be performed by humans, AI can help to reduce errors and save time and resources.

In addition, AI has the potential to improve decision-making by analyzing large amounts of data and identifying trends and patterns that may not be apparent to humans.

What are the potential drawbacks of AI?

One potential drawback of AI is that it has the potential to automate tasks and jobs that are currently performed by humans, potentially leading to job displacement.

Another potential concern is the potential for AI to be used for malicious purposes, such as in the development of autonomous weapons or in the spread of misinformation.

There is also the potential for AI to perpetuate and amplify biases that are present in the data used to train it, leading to unfair or discriminatory outcomes.

What are some ethical considerations related to AI?

There are many ethical considerations related to AI, including issues of fairness, transparency, accountability, and responsibility.

One important ethical consideration is the potential impact of AI on employment and the economy. As AI autom

Some fun experiments this summer with two different home assistants point up just how much of these replies are pre-planted in expectation of specific likely (or unlikely) prompts. Asking my sister's Google home assistant "why is six afraid of seven?" gives the punchline of this old kids' joke. However, Siri did not have the punchline as of this summer, though if someone from Apple reads this and is ambitious and has budget, they may quickly add the response. Similarly, I find that ChatGPT recognizes iconic lines from some movies but not others -- that can only be a consequence of whether folks at OpenAI have gotten around to loading these key phrases into the dictionary of potential "hot" prompts. Prompting with iconic lines which have not been pre-planted (e.g. Groucho Marx's question to Chico in Duck Soup around the 25:16 minute mark) gives typical Markov-chain gibberish - even though any Marx Brothers fan would recognize the line immediately.

I just asked Siri. Sure enough, when I've typed a few words, it comes up with the question and the answer.

Every sentence I have so far heard, read, spoken or written about ChatGPT makes more sense to me when I substitute "the collective hive mind of the internet circa 2021, filtered for presentability".

ChatGPT is a very effective tool for summoning the general consensus of its source material and rearranging it (using templates and formats that are inherent in the material), with an element of pre-determined shaping towards various designer-chosen user preference paradigms (the net effect of which seems to be to emphasise bland positivity).

It's a fun way to filter information, and the collective insight of whatever subsections of humanity online fed into the training data, shaped by whatever set of design choices went into its training, is probably as good as or better than most of us at generating various kinds of text, if you care more about form and general vibeness than actual accuracy. And it is often right, because it benefits from an amped-up wisdom-of-crowds effect.

I still can't think of many applications that work for me, where it's not just mostly for fun. In the past two weeks I've asked it for ninja stories for my son (repetitive, boring); ideas for keeping the kids amused (not a bad list but not really needed, doubt I'll ever feel the practical need to ask for ideas there); a data strategy for an organisation (to compare with one we'd just written - ChatGPT's was plausible but very bland, no value add there, I changed nothing); an acceptance speech for an award (again, plausible but bland, funny as a joke but not something I'd repeat); names for fictional characters for a story (don't bother); and a list of cute animals to theme a Christmas gif advent calendar (this was helpful! it included animals I would not have thought of!).

I remain unconvinced that this is 'autocomplete for everything', or even for many things, but if I ever want a randomised list of things the early 2021 internet really liked I'll know where to turn.

And boosted by a substantial influx of coherence, and with all of the sources' serial numbers filed off. But yes: very well put.
