10 Comments

LLM and AI: I remember way back when calculators first came out and teachers had a fit, particularly when it came to math homework. Eventually they adjusted and accepted the use of calculators. Teachers adapted by making the questions harder: work to arrive at the answer had to be shown, and answers were no longer round numbers but had to be expressed as fractions, etc. I suspect that with AI now doing summary work, future education will focus on working knowledge of the details or on using overarching themes across different disciplines. Essentially the work gets harder and the expectations, higher.

I don't think Mollick reads books the way the rest of us do. Most of us want the narrative, with the author controlling what is presented to us and when. No one picks up a mystery or thriller and reads just to answer questions like "What was the vital clue?" or "How do they stop the mad bomber?" That defeats the whole purpose of reading the book. You don't go to the theater and watch a production of Henry IV, Part 2 to find out the year Henry IV died.

Mollick doesn't want to read books. He wants to search books and find very particular things, for example, in what year Henry IV died. His examples are all about search, search for linguistic tics, search for metaphors, search for major points and supporting arguments. That's all very well and good if you work in certain fields and do a lot of searching, but it isn't reading save in a very limited sense. I've been rather unimpressed with LLMs so far, but I'm willing to believe that they could work reasonably well as search engines. People are used to sorting out the wheat from the tares in search engine results, so even an imperfect search would be useful.

As for understanding, I'm with the mathematicians who say that when an android proves a theorem, nothing happens. The development of artificial intelligence is historically entwined with our modern understanding of what it means to prove a theorem. Mathematicians have been addressing these issues for nearly a century now, and, perhaps strangely, they've placed human understanding at the center of all things. You can only learn so much by searching a proof. To understand it, you have to follow the author's narrative. (For a good essay on this, check out Harris's "Do Androids Prove Theorems in Their Sleep?")

Abigail Nussbaum has what she claims is not a review of Across the Spiderverse up on her blog: http://wrongquestions.blogspot.com/2023/06/five-things-i-loved-in-spider-man.html

This is probably too late to be relevant (and I can't read the whole piece), but when Matthew Klein writes:

"However, even though this mechanism might explain what happened with specific industries or particular categories of goods and services at points in time over the past few years, “greedflation” alone cannot explain overall price trends since the start of the pandemic."

This seems to be saying: "yes, there might be 'greedflation', but that doesn't explain ALL inflation." But who is arguing against this point? So far as I am aware, no one claims that there have been no other drivers of inflation, or that 'greedflation' is the sole source of recent inflation.

I distrusted Dawn of Everything when he started drawing conclusions from new insights that were not new.

Left neoliberalism: is this a social-democratic wolf in sheep's clothing?

Guilty as charged. But where did the idea come from that Left neoliberals are unfriendly to science policy? Ditto industrial policy based on overcoming some real distortion, market failure, or externality, not on "wouldn't it be nice ...". Net-CO2-reducing technologies are something that we may (not must) choose to support because we do not yet have a tax on net emissions or cost-benefit-based regulations.

author

I have no idea where it came from...

Brian Albrecht: Read the early Sowell, before he became "Thomas Sowell."

> To the extent that a book is a catechism of questions-and-answers, yes, you can ask an LLM questions and get back the right answers. To the extent that a book is structured to contain its own chapter and overall summaries, yes, you can ask an LLM to summarize and get back the right answers. But anything else is a crapshoot.

That opens up the possibility of two new activities: [a specialized AI focused on?] rewriting books as catechisms and summaries, and _directly writing texts *primarily* to be used as input documents for LLMs_. For things like reference books and test prep, "programming AIs via LLM-first texts" might be a non-trivially useful activity, an extension of the general writer -> academic writer -> technical writer spectrum.
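
To make the "catechism" idea concrete, here is a minimal sketch (all names and data are invented for illustration): a book reduced to explicit question-answer pairs, with naive keyword-overlap retrieval standing in for an LLM. A real system would embed the pairs and search semantically, but the structural point is the same: once a text is rewritten as a catechism, answering becomes lookup rather than interpretation.

```python
# Hypothetical sketch of a "catechism" -- a book rewritten as explicit
# question/answer pairs so that retrieval (or an LLM) answers reliably.
# The data and the lookup() helper are invented for illustration.

CATECHISM = [
    {"q": "In what year did Henry IV die?", "a": "1413"},
    {"q": "Who succeeded Henry IV?", "a": "Henry V"},
]

def lookup(question: str):
    """Naive retrieval: return the answer whose stored question shares
    the most words with the query. A real system would use embeddings."""
    def words(s: str) -> set:
        return set(s.lower().strip("?").split())

    best, best_overlap = None, 0
    for pair in CATECHISM:
        overlap = len(words(pair["q"]) & words(question))
        if overlap > best_overlap:
            best, best_overlap = pair["a"], overlap
    return best

print(lookup("What year did Henry IV die?"))  # -> 1413
```

The design choice worth noting: the author of the catechism, not the reader, decides which questions the text can answer, which is exactly the "programming AIs via LLM-first texts" activity described above.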

Well worth what, at first glance, looks expensive. 😈
