13 Comments

Re: Now is the time for grimoires

Haven't Charlie Stross's Laundry novels occupied that space for quite some time now? Bob Howard is a master at this, although others seem ever more powerful with the techniques.

As an MIT-trained CS major myself, I find that it helps to remember that any magic sufficiently advanced is indistinguishable from technology.

This version is better than the original, which is a law only of the writing and reading of imaginative fiction. How so? Because in the real world there is a very simple distinction: If it's real and it works, it's technology. But your version makes it useful by swapping contexts. IMHO.

I really should have given credit to the writers at Dr. Who, the long-running BBC science fiction series.

> What other directions should I also try to have my team explore this fall?

1. Any output structured enough that it can be automatically checked is at least worth exploring. E.g. if you have a data set, asking an LLM for a mathematical model of it in some executable format might at least pass some sanity checks before you see it ("LLM, please give me the specification of a generalized linear model relating these and these variables; express it in R code using the brms package") or whatever.

2. Grimoires are a bad alternative to Turing architectures; not what I'd use a computer for, and not what I'd like an economist's cognitive architecture to approximate. What about using this adversarially? A form of active reading (or second-pass reading) built on auto-generated questions and answers about the text, where the reader is asked to assume each answer is wrong and to edit it until it is right, digging back into the text. It's not automatic either in the doing or in the grading, but it looks like a nice exercise.

3. Alright, so assume we want to figure out a way to write a text that will maximize the quality of answers to questions asked through an LLM. One: that's SEO elevated to the category of intellectually desirable writing style, and ugh. Two: I suspect there are, or will soon be, tools that might help with that through (1) automated ways to make texts more stable [either via automated rewriting at the text level or by modifying the encoding], and (2) automated ways to help (ideally stable) texts be more truthful through QA loops with domain experts. So it's not likely to be entirely a process of learning a new writing style. That cripples the mind, to paraphrase Dijkstra.
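The first suggestion above — only trusting LLM output that is structured enough to be machine-checked — can be sketched concretely. Here is a minimal, hypothetical example in Python (the response format, field names, and checks are all invented for illustration): the LLM is asked to return a model specification as JSON, and we validate its structure before anything is executed.

```python
import json

# Hypothetical: the LLM was asked to return a GLM specification as JSON
# with these fields. We sanity-check the structure before using it.
REQUIRED_FIELDS = {"family", "formula", "predictors"}
KNOWN_FAMILIES = {"gaussian", "binomial", "poisson"}

def check_model_spec(raw: str) -> dict:
    """Parse and sanity-check an LLM-produced model spec."""
    spec = json.loads(raw)  # must at least be valid JSON
    missing = REQUIRED_FIELDS - spec.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if spec["family"] not in KNOWN_FAMILIES:
        raise ValueError(f"unknown family: {spec['family']}")
    # the formula should mention every declared predictor
    for predictor in spec["predictors"]:
        if predictor not in spec["formula"]:
            raise ValueError(f"predictor {predictor!r} absent from formula")
    return spec

llm_output = ('{"family": "poisson", "formula": "y ~ age + income", '
              '"predictors": ["age", "income"]}')
spec = check_model_spec(llm_output)
print(spec["family"])  # poisson
```

None of this guarantees the model is *right*, of course — it only catches outputs that are malformed before a human ever looks at them, which is the point of demanding checkable structure.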

As an orthogonal concept of grimoire: what if part of the end goal of a course is for the student to have built from scratch a Jupyter notebook (or whatever) with code, database access, references to texts, etc., making what they have learned usable and scalable? A. Having to program something is a good additional push toward having to think about it. B. It's certainly going to help with the reapplication of skills later on. Maybe a cumulative one over a career? (I know that the equivalent for me has been very useful, and I wish I had started earlier.)

I fear LLMs' first major business implementation will be to replace humans in customer-service troubleshooting. LLMs will be trained on the PDFs already on the company's website, which aren't comprehensive and are sometimes incorrect. I foresee anger at the chatbots ... and at the companies that deploy them.

We've gone through this cycle repeatedly. There is supposedly a new technology that eliminates the need for programmers and opens the glories of idiosyncratic computer assistance to everyone. We can ignore the 1950s and 1960s with their procedural algorithmic languages and jump to the 1980s with the introduction of spreadsheets and graphical programming languages that would make everything simple and transparent. Now we have certified spreadsheet experts and does anyone else remember programming by example or the various flow languages?

Now we have AI which is supposedly the answer, but look at it. Already we have a class of prompting experts who know how to coax useful results from LLM systems. Next there will be courses and, soon enough, certification. Meanwhile, we have a lot more people who can tailor computers to increase their productivity than we did in the 1950s and 1960s or even the 1980s. All these software tools have helped. Presumably, AI will get us another jump as it outputs text and graphics with greater facility than earlier tools.

Is this really a new era or do we just have a bunch of new applications that will open a certain class of solutions to some new subset of users? From the 19th century and well into the 20th century, there were guidebooks with letter templates for just about everything: thank-you notes, condolences, job acceptances and so on. LLMs are perfect for that kind of thing even if social note writing is in sad decline. I was just talking to my electrician this morning, and he casually mentioned ladder programming, a specialized programming approach used in appliances, so I'm guessing that AI will get us a new group of users, classes, certifications and a modest boost in productivity. It may make some lives easier and likely increase corporate profits.

Spreadsheets did democratize numeric work and were, and remain, a productivity booster. Those graphical programming approaches never made much impact as I recall; beyond toy examples, they proved unwieldy. I agree that LLMs are already resulting in short courses in prompt "engineering," and this will no doubt expand if using LLMs really democratizes some areas of work. I haven't had much luck with images, but coding with ChatGPT-3 has proven a timesaver in some cases.

I will admit to losing interest in LLMs after my explorations proved disappointing, especially with the problem of "hallucinations." I am still interested in creating a "sub-Turing Arthur C. Clarke" using his works and transcripts of interviews with him. I expect such intelligent bots will be created for many people to hold conversations with. My guess is that this will only work well once AI has become more sophisticated. I think Meta wanted to do something like this with a user's social media posts, creating a bot after their death. If this bot creation becomes easy and inexpensive, I can see such bots being acquired as a faster way to learn about key ideas and to chat with, like a tutor or a friend.

Most of the positive remarks I've heard about LLMs have been from programmers. Interestingly, the languages involved aren't natural languages but languages designed for a certain type of relatively unambiguous expression. I'd be curious how they might take a form letter and change number and gender, for example, as appropriate. That would be something that could make them very useful.
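To see why the form-letter task above is less trivial than it looks, here is a hedged sketch in Python (the names, pronoun table, and letter text are all invented): even a single sentence needs coordinated noun, verb, and pronoun agreement, which is exactly the kind of brittle hand-coded rule set an LLM could replace.

```python
# Hypothetical pronoun table; "n" here stands for singular "they".
PRONOUNS = {
    "f": {"subj": "she",  "poss": "her"},
    "m": {"subj": "he",   "poss": "his"},
    "n": {"subj": "they", "poss": "their"},
}

def fill_letter(name: str, gender: str, n_items: int) -> str:
    """Fill a form letter with number and gender agreement."""
    p = PRONOUNS[gender]
    noun = "item" if n_items == 1 else "items"    # noun number
    be = "is" if n_items == 1 else "are"          # verb agrees with noun
    verb = "prefer" if gender == "n" else "prefers"  # verb agrees with pronoun
    return (f"Dear {name}, your {n_items} {noun} {be} on the way. "
            f"Our records say {p['subj']} {verb} delivery to "
            f"{p['poss']} home address.")

print(fill_letter("Ada", "f", 2))
print(fill_letter("Sam", "n", 1))
```

Every agreement rule here had to be written by hand, and the table covers only English third-person pronouns; languages with grammatical gender on nouns and adjectives multiply the cases quickly, which is where a language model's flexibility would pay off.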

Back in the 1960s, they did a whole series of videotaped interviews with the sculptor Jacques Lipchitz. There was an exhibit called An Evening With Jacques Lipchitz. People in the audience were invited to ask a question. The folks running the exhibit would find the right videotape and play the section of the interview that answered it. I wonder if those interview tapes are still around. Perhaps an LLM might make this an even more engaging experience.

P.S. It doesn't work anymore, but https://museum.imj.org.il/jacques-lipchitz/en/questions

When it comes to Abelson & Sussman (with Sussman), I think just an image of the cover would suffice? At least it would for my 1985 edition. There are two figures; on the left, a turbaned and bearded man holds an Eval / Apply orb, obviously a wizard. On the right, a smooth-faced androgynous figure, evidently the apprentice, points at a table whose leg is a single clawed monster foot, and on which several books rest. Between them a glowing lambda hangs in the air. Coincidentally, it was resting next to an old-fashioned neural networks book on my shelf.

I remember Zeynep Tufekci complaining on Twitter (as it was) that when people were calling machine learning "algorithms", that was the opposite of the truth: you kind of knew what they did, but you had no idea how they did it. I replied to her that it struck me as closer to a homunculus than an algorithm, and she LOL'd and answered that she had deployed a similar analogy.

If LLMs are magical, then it is the magic of Jack Vance's sandestins, who were argumentative and refractory servants.

I really like the magic metaphor. I live in a retirement community, and showing my neighbors simple usage, like dragging a frequently used URL to the desktop, seems like magic to them.

And especially with visual AIs, the prompts seem more like a spell.

And then I remembered Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic."

We are living in interesting times.

Re: SubTuringBradBot.

While I believe these bots will become better over time, although at some huge computational cost, I do wonder if the problem is more a mismatch between how thoughts are written and how an ML algorithm can extract and even "understand" them. Just as we have dumbed down business communication to emphasize simple structures and short sentences, stripping out metaphors and analogies, so might human writing be changed to match the needs of machines.

An extreme example is programming. We now have decades of experience with computer languages that express exactly the instructions a digital computer needs to follow. This requires humans to think very differently and code accordingly.

Now that various ML approaches are starting to be able to write (cut and paste?) code fragments in response to prompts, e.g. MS Co-Pilot, we have a somewhat halfway approach to having machines sufficiently understand human prompts to create pieces of code in a desired language.

As coding languages become increasingly interoperable, it seems possible that machine generation of code from prompts may pick the best languages to handle specific prompts and integrate them through interfaces.

That a future library of prompts might occupy the shelves for "programming computers" is a bit worrying though. Just as making incantations to conjure up spirits abstracts the conjuror from what is happening "under the hood" to invoke the spirits, so might prompts separate the "programmer" from what the prompt is creating as code to carry out the instructions.

But when it comes to prompts to extract meaning from texts, that is a different problem. It strikes me that prompt length and specificity may well be an indicator of how difficult a text is at conveying information, even to an AGI already shown to perform at a high human IQ level. An analogy is the ratio of a compressed file's length to its content.
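The compressed-file analogy can be made concrete with a few lines of Python, using `zlib` output length as a crude proxy for how much irreducible content a text carries (the sample texts are invented for illustration; this is a rough sketch of the analogy, not a serious complexity measure):

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size over raw size: lower means more redundancy."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

repetitive = "the cat sat on the mat. " * 40
varied = ("Economic historians dispute whether railroads were "
          "indispensable to nineteenth-century American growth.")

# Highly repetitive text compresses far better than information-dense text.
print(compression_ratio(repetitive) < compression_ratio(varied))  # True
```

By the commenter's analogy, a text whose meaning survives aggressive "compression" into a short prompt is an easy text; one that demands a long, highly specified prompt is carrying more irreducible information.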

If Orwell was alive today, I wonder what he would make of this communication issue between humans and machines. Would a novel emerge, with new terms instead of "doublespeak"?
