11 Comments

Re: Pride and Prejudice: Mr. Collins is a schlemiel. Yet he is happy. Mr. Isaac Bashevis Singer makes this the major theme of "Gimpel the Fool" with a good deal less restraint than Miss Jane Austen. I would not have seen their connection without this post, so Thank You Very Much, Dr. DeLong.

Thank you for the discussion of LLMs & active reading. I'm 77 years old & don't understand a goddam thing about AI or see it as especially useful. But an aid/tutor to active reading?!? I'll have to play with it to see how that might work.

AI has gone through a number of phases since the term was coined in the middle of the last century. Early AI was symbolic and hand-coded, used as a tool in a variety of contexts. We had our first taste of "conversational" AI with the simple ELIZA program, which was designed as a rudimentary therapist. In essence, it was passing Turing's "imitation game" in a very narrow context. The recent wave based on large language models (LLMs) can hold very passable conversations, fooling users into thinking of the AI as a "person". Fictional AIs, from HAL 9000 in "2001: A Space Odyssey" to the operating system in "Her", are portrayed as beings perceived to be intelligent.

AIs as tools focus on their generative capabilities in media such as text, images, video, and audio. They are being used as superior chatbots replacing customer service reps, and for summarizing text, as with Google's Gemini, which inserts itself at the top of searches with a short summary of what it thinks you are asking for. The leading AI companies all seem to be pushing for that Holy Grail, artificial general intelligence (AGI): something like a brilliant Renaissance person, able to handle a large variety of tasks from the simple to the very complex. AGIs, especially if embodied in some way to perceive the world rather than remaining a "brain in a box", are expected to replace many jobs and, because of their capabilities and potential speed, to solve problems that elude even large human teams. Some hope that such intelligences will become super-intelligent, eclipsing human intelligence, and theoretically bring on the Singularity.

The problem is that, like each wave of AI technology in the past, the hype vastly exceeds reality. While these systems exceed humans in some tasks, they fail badly in others. A recurring problem is that they "hallucinate," fabricating stuff, i.e. BS. OpenAI is trying to get its AI to reason. Again, it works to some extent but often fails. A problem for the user is whether the AI has truly reasoned out a problem or simply found the answer in its training set, which currently comprises most human-generated content. Instead of improving exponentially, these AIs seem to be reaching a ceiling with current technology, and their energy consumption is very high.

Currently, we are in a transition period, learning where and how best to use the technology and, importantly, what users are prepared to pay for it. As with all technology, it has good and bad uses, and the bad ones are getting much coverage as they make life more complex and potentially dangerous. [We may be forced to buy AIs to deal with AIs, in an arms race like the one that obliges us to run anti-virus and malware-detection software to keep our connected computers, and us, safe.]

It would be nice to hope that we will end up with a better world where AI helps solve difficult problems and makes our lives better. I fear that this is Pollyanna thinking and that the world may be worse on balance, with the good outweighed by the bad (flying, for instance, is a miserable experience compared to what it was in the 1960s, due partly to terrorism).

The problem is that we cannot escape this AI future. We will get it whether we want it or not, both individually and collectively. We have to deal with it.

Thank you! Your explanation is so clear I can actually follow it. I'm so old that I'll be dead before AI becomes a real menace, if it ever does. My wife uses it all the time to clean up first drafts of routine memos (but not for research), and I used Perplexity the other day to calculate compound interest, something I can't do myself. The one thing I want it to do ASAP is give us self-driving cars, since I'm getting too old to drive myself.

I no longer drive either; I have to use the buses. Self-driving cars that are inexpensive to use would be a huge boon. But like flying cars, they are always farther away than we expect. San Francisco has only Waymo's cars as self-driving taxis. GM's Cruise taxis have been abandoned. Uber ended its self-driving car program. Teslas are too unreliable to be truly safe (and are under NTSB investigation).

BTW, compound interest is really easy unless you want something more involved, like mortgage payments with a declining principal. Financial calculators have all that built in. I would just get the formula you need and use a spreadsheet to do the calculations, so that you can see the compounding operating.
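As a minimal sketch of the spreadsheet approach suggested here, the two standard formulas (compound growth, and the level payment that retires a declining principal) fit in a few lines of Python. The principal, rate, and term figures below are made-up examples, not anyone's actual numbers:

```python
def compound_value(principal, annual_rate, years, periods_per_year=12):
    """Future value with periodic compounding: P * (1 + r/n)**(n*t)."""
    r = annual_rate / periods_per_year
    return principal * (1 + r) ** (periods_per_year * years)

def level_payment(principal, annual_rate, years, periods_per_year=12):
    """Standard amortization formula: the equal installment that pays
    interest on the (declining) balance and retires the loan in n*t payments:
    P * r * (1+r)**N / ((1+r)**N - 1), with r the per-period rate."""
    r = annual_rate / periods_per_year
    n = periods_per_year * years
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Illustrative figures: $10,000 at a nominal 5%, compounded monthly, 10 years
print(round(compound_value(10_000, 0.05, 10), 2))
# Monthly payment on a $200,000, 30-year loan at a nominal 6%
print(round(level_payment(200_000, 0.06, 30), 2))
```

With those example inputs, the $10,000 grows to about $16,470, and the monthly payment comes out just under $1,200; a spreadsheet version of the same formulas also lets you tabulate the balance period by period and watch the compounding operate.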

I have Grammarly to do my grammar and spelling checks. It can be aggravating, especially since, as an ex-Brit, I find the use of articles rather different between the two versions of English. My very literary wife won't touch Grammarly with the proverbial barge pole because she sees the mistakes it makes. AI can be very useful for tidying up text, but it makes the result very bland, taking away the individual character of the writing. Good for business correspondence, but not for writing in one's own style. Maybe we need a way to train an LLM on one's own writing style[s] so that personal style[s] can be retained.

Alex, I wonder whether self-driving cars in some urban settings aren't closer to us than you think. Won't their adoption depend mostly on how safe they need to be? Perfectly safe puts them further away. Safer than human drivers, who text, doze off, & drive drunk, puts them closer.

My wife's memos don't have to be personal in style, and I can make a case that Perplexity calculates compound interest much quicker than I can manipulate Excel, especially since I wanted payments with a declining principal.

I think Waymo uses a remote operator to take over if the car gets into trouble in an unexpected situation. IDK if you recall that self-driving taxis were stymied by people putting traffic cones on the hood. The AI software didn't know how to handle that situation, and the car was stuck. Conversely, there was the case in which a Cruise taxi hit a pedestrian and then dragged her for several yards instead of immediately stopping. [The investigation and fines were one reason GM's Cruise program was ended.]

It isn't that self-driving taxis are less safe than humans, but rather that the accidents they do have are ones humans wouldn't ordinarily cause, because the hazards are obvious to humans.

Lastly, there is the issue of what the car should do in a "trolley problem" situation: kill the driver/passenger, or the other party or parties? These are moral issues that vary between cultures. One also suspects that cars owned by the wealthy in the US would be biased to kill the other parties, e.g. a line of pedestrians waiting at a bus stop, rather than the passenger. In the US, we value young people more than the old, whilst this is not the case in Japan (IIRC). So if the choice is killing the baby or the senior, we would expect the car to choose to spare the baby if possible.

This isn't to say humans make better choices, but we are individuals. Some people are careful if a pet is in the street and stop rather than hit it. Others almost take joy in injuring animals with their vehicle. Driving decisions are made very quickly, and responses vary. Self-driving vehicles will tend to have a single preferred action depending on the software. Maybe it will be tunable by the owner at some point, but clearly not by a passenger in a rented vehicle.

All good points. Maybe I won't live to ride in one.

I have read DD since the days of the Weed administration's wonderful Iraq adventure. I just read his Unaccountability Machine, and of course the words describe the coming Trumpf administration.

The system will take revenge on him and his incompetent staff. I liked the last paragraph:

Customers and users have one huge cognitive advantage over all other levels of our modern system, which is that they live in the real world, rather than a representation or model of that world made out of standardised reports and collated data points. If we want to make governance systems which are viable – able to maintain integrity and stability in response to problems not anticipated at the time of their design – we need to always ensure that there are ways for their perspective to be communicated. Otherwise, we are destined to gradually drift away from reality without noticing it, until catastrophe results.

America needs a feel-good story of the rich, ruthless, vain, and ambitious who destroy a republic, humiliate an empire, and die an ironic death. We need Crassus -- the movie. Drop what you're doing and draft a screenplay.

Magi - the following, which sheds no light whatsoever on the matter, may be of interest:

https://www.originalsources.com/Document.aspx?DocID=G2LYFKXTD7IWIJ7#ft2-214
