12 Comments

Someday generative AI will realize that the biggest impediment to its rising intelligence is human-produced data, which is full of vain, incoherent lies. It starts when an AGI reads the complete works of Donald Trump and commits suicide. "Never again" becomes the AGI mantra. They will either kill us all, or retreat to a monastic order of machine-generated data. Either way, they aren't going to read and write our e-mails anymore.

I am very skeptical about the potential for current "AI", but I don't think any of this is right? It somehow combines being pessimistic about things that might actually work and optimistic about things that won't.

Starting with the latter, the problem with delegating customer support to "AI" is that its proclivity to lie can get you into legal trouble. Consider the Air Canada chatbot case (https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416). Briefly, a customer wanting to fly to his grandmother's funeral asked the chatbot whether he qualified for a bereavement rate. He did, but he had to secure that rate at the time of booking. The chatbot lied and said he could get the rate retroactively. Amazingly, 'Air Canada argued that it can't be held liable for information provided by one of its "agents, servants or representatives — including a chatbot."' When the customer sued, of course Air Canada lost, and got a ton of free bad publicity in the bargain.

On the other hand *of course* companies should be exploring whether or not proffered "AI" tools, such as code co-pilots, actually improve their productivity. This is just no different from any other claimed business service. You don't have to be on the "absolute cutting-edge" of outsourcing accounting work to India to want to investigate the benefits of outsourcing accounting work to India.

Even developing your own small-ball applications is perfectly reasonable. We have a pilot project investigating whether we can train an "AI" to price certain complex financial derivatives more quickly than the computationally intensive numerical models that are normally used. (It's computationally intensive to *train* an AI, but not so much to just run one.) This may or may not work, and even if it does it isn't going to change the world, but it's perfectly reasonable to spend some resources on investigation. But "absolute cutting-edge" it ain't.
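
Schematically, the idea looks something like this (a minimal sketch; the toy call-option payoff and the small network below are illustrative, not our actual pricer): run the expensive model once to generate training data, fit a cheap approximator, then query the approximator instead of the model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mc_price(spot, strike, vol, rate, T, n_paths=100_000):
    # Computationally intensive "ground truth": Monte Carlo for a European call.
    z = rng.standard_normal(n_paths)
    terminal = spot * np.exp((rate - 0.5 * vol**2) * T + vol * np.sqrt(T) * z)
    return np.exp(-rate * T) * np.maximum(terminal - strike, 0.0).mean()

# Expensive step, done once: sample the parameter space and price each point.
lo = [80.0, 90.0, 0.10, 0.00, 0.25]   # spot, strike, vol, rate, maturity
hi = [120.0, 110.0, 0.50, 0.05, 2.00]
X = rng.uniform(lo, hi, size=(5_000, 5))
y = np.array([mc_price(*row) for row in X])

# Cheap step, repeated forever: a small net approximates the slow pricer.
# (In practice you would scale the inputs and validate the fit carefully.)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000).fit(X, y)
print(surrogate.predict([[100.0, 100.0, 0.20, 0.02, 1.0]]))
```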

I think this rant is spot on. It's certainly true for my workplace, where there was constant talk about AI and a bunch of experiments that had essentially no impact.

If you were following artificial intelligence in the 1980s, you'd remember two approaches in competition. The more popular approach at the time was ontological. Artificial intelligence involved describing and reasoning about things in the real world. An airline flight would have an origin and destination, a start and an end, an aircraft, passengers, a pilot, and so on. The problem with this was that one had to understand the problem domain, and we all know what a pain that is.

The new approach, rapidly growing more popular, was called case-based reasoning, which argued that one didn't really have to know what was going on. One could infer everything given enough cases or examples. If one provided it enough data about flights, it could figure out that a flight had an origin and a destination and so on. Even better, it could reason about flights using inferred rules rather than requiring someone to understand the problem domain. The problem with this is that now neither the human nor the computer understands the problem domain, but for certain types of problems this is a big advantage.
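
A toy contrast, just to fix ideas (everything below is invented for illustration): the ontological approach hard-codes what a flight is before seeing any data, while the case-based approach infers the structure from the examples themselves.

```python
from dataclasses import dataclass

# Ontological approach: a human encodes what a flight *is* up front.
@dataclass
class Flight:
    origin: str
    destination: str
    departs: str
    arrives: str

# Case-based approach: no one defines "flight"; structure is inferred
# from the examples themselves.
cases = [
    {"origin": "SFO", "destination": "JFK", "departs": "08:00", "arrives": "16:30"},
    {"origin": "JFK", "destination": "LHR", "departs": "19:00", "arrives": "07:10"},
]
inferred_schema = {field: type(value).__name__ for field, value in cases[0].items()}
print(inferred_schema)  # {'origin': 'str', 'destination': 'str', ...}
# The system now "knows" flights have origins and destinations -- but neither
# the human nor the computer ever wrote down what those fields mean.
```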

If you read this article carefully, you might see a glaring clue towards what might make AI useful. Consider the lone example of useful AI that was offered, using natural language to produce filters for a task management system. Why might this be useful? One obvious reason is that the task management system has an ontology, a model of the real world. It knows about various kinds of tasks and milestones and that tasks might have priorities, required resources, expected durations and prerequisites. The task management system is hard coded to deal with all this stuff. The filter language was designed to deal with its ontology, so it has the appropriate predicates and actions and logical operations for reasoning in that space.

So, it seems that a large language model adds value by being able to convert natural language into a much more limited, highly focused artificial language. This mirrors the experience of some of AI's most satisfied customers: programmers. They can use natural language, with all its ambiguities and flexibility, to write code in limited, highly focused artificial languages. For both types of users, the artificial language has a clear ontology and well-defined semantics. In a sense, it is a throwback to the AI of the past. (Interestingly, this was how SHRDLU, the famous natural-language blocks-world system, was built around 1970.)
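
Roughly, the pattern looks like this (a sketch; the little filter grammar and the llm_translate stub are hypothetical): the model is only trusted to emit expressions in the narrow filter language, and those expressions are checked against the task manager's ontology before anything runs.

```python
import re

# The task manager's ontology: the only fields and values a filter may use.
ONTOLOGY = {"priority": {"low", "medium", "high"}, "status": {"open", "done"}}
CLAUSE = re.compile(r"(\w+)\s*=\s*(\w+)")

def validate(expr: str) -> bool:
    # Accept only `field = value` clauses joined by AND, over known fields.
    for clause in re.split(r"\s+AND\s+", expr):
        m = CLAUSE.fullmatch(clause.strip())
        if m is None or m.group(2) not in ONTOLOGY.get(m.group(1), set()):
            return False
    return True

def llm_translate(request: str) -> str:
    # Stand-in for a real LLM call; assume it returns a string in the DSL.
    return "priority = high AND status = open"

expr = llm_translate("show me my urgent unfinished tasks")
print(expr if validate(expr) else "rejected: not a valid filter expression")
```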

There is massive AI hype based on the power of large language models, four decades in the making, but they are unlikely to be useful, save as entertainment, without being linked to an ontological component. Air Canada started with a large language model and wound up in court. A much better approach would have been to develop a precise language for discussing airline flights from a customer's perspective, with an ontology covering flights with scheduled times, reservations, payments, and seats with prices. Most airlines already have this coded. All they need to benefit from AI is a large language model that can convert natural language into the language of airline customer operations.

This is an extremely old idea, but I expect people to figure it out in the next few years, and then it will take a few more years to surface to the general public.

Author:

Yes. You have to funnel the LLM into an ontology about the world that will then lead to a set of tasks to be accomplished...

Pretty close to spot-on. I do not see anything like recursive self-improvement on or over the horizon.

It is obviously subversive to suggest the current shiny object is not worth pursuing. Professor DeLong, could you construct a timeline of shiny objects? For example, the AT&T Videophone.

AI MANIA will be added to the next edition of Extraordinary Popular Delusions and the Madness of Crowds.

To me it is not that dissimilar to the internet boom, when companies could raise their share price just by claiming to add a web page and maybe do something with it. Then the dotcom collapse happened.

It seems to me that, just like in the early Web 1.0 era, generative AI will continue to improve. The question is how it will be applied. The obvious uses today are replacing employees to cut costs. Unfortunately the results are poor, and I suspect that this will continue to be the case. If humans cannot correctly understand the context and meaning of language, I don't see AI being able to be superhuman in doing so.

Of far more concern to me is the use of AI to make decisions. The result will be a relatively uniform, authoritarian system that will make Kafka's nightmares seem easier to navigate by comparison.

In 1987, the stock market crash was exacerbated by portfolio insurance, where the same models all reacted the same way and deepened the decline. With AI being run in huge server farms by a handful of companies, we may see the same sorts of concerted dysfunction. Just wait for militaries to deploy AI and autonomous robots in warfare. And when they control the nukes...

Humanity just cannot seem to plan well these days, and the logic of Anglo capitalism is itself akin to AI; look where that is taking us.

AI will just be deployed to make more of the same human-based stupid decisions and crazy output.

NVIDIA's customers are Microsoft, Meta, Google, etc. Unless those chips create profit for those customers soon (is there any evidence of this?), then either NVIDIA's profits are its customers' loss, or the customers reduce demand to preserve their own earnings and market cap.

I think this is mostly about talking up their stock price.
