11 Comments

btw, if you are caught in the open during a thunderstorm, bear in mind that although lightning tends to *strike* a high point, it tends to *flow down* the same channels that water would. Don't huddle in a ditch; huddle on a spot that is locally high but low compared to the broader surroundings.


Hard to remember that when Thor is blasting you. Easier to remember this: get below the tree line.


AI is the most recent buzz (remember the CB radio buzz?) used by software companies to attract attention. It is the natural result of the advancement of software programs that change themselves. Indeed, the buzz is mainly due to the development of language-interface software rather than to other AI advancements. Its responses are indeed mimicry, and they will soon fade as did Ada and Pascal. Ask a "Feynman" question and you will receive gibberish back.

Author:

Yes. It is a much easier interface to a much worse substantive task performance engine...


I was going to answer "negatory", but I do remember the CB radio buzz.


The number one question for me regarding what Kevin Drum calls True AI is this: human beings are a mess, so why would we ever want a machine that acts just like them?

We want machines that are more capable, yes: machines that can perform tasks we find kind of boring or repetitive, or maybe expensive.

Of course, the super-rich class might well want servants that can perform complex tasks much more cheaply and efficiently than humans, and that don't have all that messy baggage like vacations, maternity leave, and an annoying tendency to argue with you or quit at an inconvenient time.

But of course what these people will want is unswerving loyalty. They will not tolerate a servant that defies them, and with a machine there is no moral reservation about pulling the plug.

So my fear is not that the machines will take over, but that the oligarchs will use the machines to take over.


That suggests an interesting test. It's pretty easy to create a sycophantic, ego-stroking AI. It sounds like just the thing for the rich and powerful, but you and I both know that the point isn't the sycophancy and ego stroking itself; it's that real people are being sycophants and ego strokers, and it's even better if they have no choice but to do so.

Wasn't there a whole topic in classical rhetoric about how some qualities are valuable to have, while for others merely having the reputation for the quality is just as good? For example, being just and having a reputation for being just are equally valued. In contrast, being in good health is much more valuable than merely having a reputation for being in good health.

There seems to be some kind of parallel. There's AI that actually does things, and there's AI that just gives the appearance of having done them. A chess-playing program, for example, appears to play chess, and if you play against it, then by the rules of chess it is likely to beat you. In contrast, a large language model may appear to have understood and summarized the salient points of an essay, but that appearance is less valuable than if it had actually done so. The chess program will always play a good game of chess; the LLM summarizer may or may not be confounded by the structure and meaning of the essay.
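One way to make that contrast concrete: chess, like the toy arithmetic task below, comes with rules that double as a verifier, so the appearance of performing well can be checked mechanically; faithful summarizing has no such check. A minimal Python sketch (the names and data here are illustrative, not from the comment):

```python
# Some tasks come with a rule-based verifier, so "appears to do it"
# and "does it" can be made to coincide; others offer no such check.

def verify_arithmetic(problem: str, claimed_answer: str) -> bool:
    # Arithmetic, like chess, carries its own rules: anyone can
    # recompute the claim without trusting the claimant.
    return str(eval(problem)) == claimed_answer  # eval is fine for a toy demo

def verify_summary(essay: str, summary: str) -> bool:
    # "Is this a faithful summary?" has no rule-based checker; any
    # check is itself another fallible judgment about meaning.
    raise NotImplementedError("no mechanical verifier for faithfulness")

print(verify_arithmetic("2 + 2", "4"))  # True: the appearance is checkable
try:
    verify_summary("a long essay...", "a short summary...")
except NotImplementedError as err:
    print(err)  # the appearance of summarizing cannot be verified by rule
```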

Author:

I think we need more thinking about why we like pets and yes-men...


Couldn't help thinking while reading this of what I always ask my micro students: "Do we have a utility function in our heads, or do we just act like we have a utility function in our heads?" The ultimate goal of that conversation is to get them to see that "as is as as does."
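One minimal way to make "acts like it has a utility function" concrete is revealed preference: choice data can at most show consistency with *some* utility function, and the Weak Axiom of Revealed Preference (WARP) is a necessary condition for that. A toy Python sketch (the data and function names are illustrative, not the commenter's):

```python
# Each observation pairs a menu of available options with the choice made.
observations = [
    ({"apple", "banana"}, "apple"),
    ({"banana", "cherry"}, "banana"),
    ({"apple", "cherry"}, "apple"),
]

def satisfies_warp(obs):
    # x is directly revealed preferred to y when x is chosen while y
    # is also on the menu; WARP forbids ever choosing y over x.
    revealed = {(choice, other)
                for menu, choice in obs
                for other in menu if other != choice}
    return not any((y, x) in revealed for (x, y) in revealed)

print(satisfies_warp(observations))  # True: acts as if maximizing a utility fn
print(satisfies_warp(observations + [({"apple", "banana"}, "banana")]))
# False: no single utility function can rationalize both apple-over-banana
# and banana-over-apple, so the "as if" story breaks down
```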


“Plus ça change, plus c'est la même chose” (“the more things change, the more they stay the same”).

We have been having these same arguments about agency and self-awareness for many decades. Fantasy horror stories about machine agency through a "malevolent spirit" are the modern equivalent of animism and panpsychism. Human cognition is easily fooled by surface behavior, possibly because our lack of understanding of a complex system leads us to take a mental shortcut (Kahneman's System 1) and ascribe intention, because that works for other humans. [I am surprised that the saying "Never attribute to malice that which is adequately explained by stupidity" is not invoked for ChatGPT and its LLM ilk, as well as for humans.]

LLMs seem quite capable of passing simple Turing tests; more in-depth questioning makes them fail. Just as Deckard needed more questions to test Rachael with the Voight-Kampff empathy test, so we should be more circumspect when ascribing human-level qualities to machines. What I think would be interesting is how many humans would fail the extended tests before the LLMs do.


I think that Holbo's bottom line came in part 2 of his post:

"It’s the difference between 1) and 2)

1) If a thing can make small plans, that succeed, and is getting smarter fast, it is likely to make big plans soon, that will also succeed.

2) If a thing seems like it’s making small plans, that seem like they are succeeding, and if the thing seems like it is getting smarter fast, the thing is likely to soon seem like it’s making big plans that also seem like they are succeeding.

1) seems reasonable but we need 2). 2) isn’t self-evidently senseless, but it equally clearly doesn’t just follow from 1). Do as do does doesn’t mean 1->2."
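One way to make the gap explicit, in notation of my own rather than Holbo's:

```latex
% Notation (illustrative, not Holbo's): D(p) = "the thing actually does p",
%                                       S(p) = "the thing seems to do p".
(1)\quad D(\text{small plans succeed}) \;\Rightarrow\; D(\text{big plans will succeed soon})
(2)\quad S(\text{small plans succeed}) \;\Rightarrow\; S(\text{big plans will succeed soon})
% (1) delivers (2) only with the bridge premise S(p) \Leftrightarrow D(p),
% i.e., that seeming-to-do reliably tracks doing. That premise is exactly
% what is in question, so 1) does not by itself give us 2).
```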
