GPT LLM MAMLMs are not oracles, subordinates, or colleagues. They are emulations of TISs—Typical Internet S***posters. If you are at all a good writer, the most they can be is “rabbits” that, in...
The general process of writing a garbage first draft in order to write a somewhat satisfactory second draft is pretty standard. An LLM can certainly accelerate that but the process is more helpful than the output.
For brainstorming, you can also talk to your dog. Or, if you don't have a dog, then a graduate student will do. One is engaging one's own language centers, and it's useful. I don't think this works with cats, though.
Richard Lanham's "Revising Prose" is a pretty popular college text.
But I'm not sure that one should start with an LLM. My experience is that human first drafts are a kind of ore: jewels of concision and insight mixed with a lot of boring gangue that may or may not survive subsequent editing. My editing, at least, seldom creates more jewels, but mostly clears the worst of the gangue. As far as I can see, LLMs are mostly gangue of varying quality. Any "jewels" in LLM product are mostly cliches, because that's what shows up the most in a training set.
So a person might want to write a first draft for the sake of the jewels, use it to prompt an LLM, and then mashup and edit. Or not--I don't use LLMs myself.
My cat is useless for helping me with essays. He just sits in front of the screen, steps on the keyboard, and sometimes offers a plaintive "meeow". But I love him and won't be replacing him with a dog.
I have tried LLMs with context documents to write essay outlines, but I find them a bit "hit and miss". At least they don't take much time. What I really need is a good "Mind Map" generator instead, that orders to needed thoughts, with document reference details as a link to check accuracy, but leaving the keywords uncluttered and linked correctly in the map's graph.
Trained on my own writing (as you've experimented with, I think) OpenAI DeepResearch produces adequate boilerplate text from my dot points. I'm gradually training it on the principle "whenever you write something you think is particularly good, strike it out". Then I write the good bits myself and paste in some of the boilerplate.
LLM's are absolutely amazing. - Unarguable. But I agree with most of the post.
I've been writing reports for 36 years. Currently, I'm being asked to demo an AI-driven report writer for doing due diligence on startup companies. Frankly, its results are hopeless. It comes out with a generally acceptable view of the "big picture" i.e. common knowledge. But if your readers come to you for insight and analysis, that is simply not an LLM output (because it wasn't an LLM input! )
AI is a mind-bogglingly good tool as a librarian and research assistant. I'm becoming much more comfortable using it first before a standard Google search.
I think the fears of LLMs upending white collar employment are very wide of the mark. Part of that belief is based on the fact that besides lawyers, there isn't a large chunk of white-collar employees who do functional research writing. Making bad writers a little bit better and making writing easier for people who have difficulty with it are, on balance, good things, but not revolutionary.
Excellent for boilerplate and "ritual"—documents that are not supposed to bring insight, but simply say standard things or serve to change the world as directives, declarations, and permissives have material affects. Excellent for librarianship and literature searches. Superior to google—but that may just be because SEO has not yet come for LLMs. Making writing easier and better for people who have difficulty. A better and more effective object to explain your thinking to than your rubber duck:
> Steve Price: LLM's are absolutely amazing. - Unarguable. But I agree with most of the post. I've been writing reports for 36 years. Currently, I'm being asked to demo an AI-driven report writer for doing due diligence on startup companies. Frankly, its results are hopeless. It comes out with a generally acceptable view of the "big picture" i.e. common knowledge. But if your readers come to you for insight and analysis, that is simply not an LLM output (because it wasn't an LLM input! ) AI is a mind-bogglingly good tool as a librarian and research assistant. I'm becoming much more comfortable using it first before a standard Google search. I think the fears of LLMs upending white collar employment are very wide of the mark. Part of that belief is based on the fact that besides lawyers, there isn't a large chunk of white-collar employees who do functional research writing. Making bad writers a little bit better and making writing easier for people who have difficulty with it are, on balance, good things, but not revolutionary...
But, really, not much more.
And there is this worry: How do we teach people to write? My view is that writing-teaching will have to move much more to close reading and critique, at the word-choice, sentence, paragraph, section, and outline level. That people being taught to write would be well-advised to find programs where they spend less time writing their own essays and more time looking at how good writers produce their excellences. Something like Erich Auerbach's "Mimesis" as the model: Auerbach, Erich. 1946 [1953]. Mimesis: The Representation of Reality in Western Literature. Trans. Willard R. Trask. Princeton, NJ: Princeton University Press. <https://archive.org/details/mimesis0000unse>:
> Paul: The larger issue is that this is fine for people who have learned to write in the before-times and can tell that what LLMs write is drivel that they hate and which they can improve upon. For a generation that is just starting with writing, the LLM version is probably as good if not better than their own first draft, and even their second, and they will not see how it could be better. As a result they will not learn to revise. The racheting-up of learning to write will be broken at the first step...
"Writing" is a dang vague term. It covers a huge array of activities. I realize my writing is almost purely expository and is most often based on actual research and interviews, filtered and argued/reasoned by my subject matter expertise. Which is not something AI helps with. AI can certainly help with boilerplate. But again, with 36 years of using that other newfangled tech, dictation, I can spew a whole page of boilerplate in 3 minutes. But yeah... maybe "not much more"......
The larger issue is that this is fine for people who have learned to write in the before-times and can tell that what LLMs write is drivel that they hate and which they can improve upon. For a generation that is just starting with writing, the LLM version is probably as good if not better than their own first daft, and even their second, and they will not see how it could be better. As a result they will not learn to revise. The racheting-up of learning to write will be broken at the first step.
Let me add that there is a lot of pushback on LLM-generated text. There are tools like QuillBot that can highlight text that looks like AI slop. [You may be familiar with TurnItIn to detect student plagiarism in reports and essays.] If your manager uses such tools, it will say something about you if you turn in a performance review written by AI, and it won't be "Wow, this person knows how to use AI, they need a promotion!"
By 2030, any writing that doesn't sound like it was written by an LLM will be considered amateurish. Style will be as homogenous and formulaic as classwork from a bad Creative Writing course.
So cancel the many prizes for writing, especially the Booker prize which seems to love writing style above all else. This should be on Rodney Brooks' prediction list.
> mike harper: What happens when resumes are reviewed by AI Bots?? Will they have a bias for slop???
Well, it already runs wild. It will run wilder...
====
DELONG’S GRASPING REALITY: My attempt to make my readers—and myself—smarter. People say, and I think believe, that I am a go-to source to understand things economic in the past and in the present. Where I think I have Value Above Replacement in what I have to say, I will say it. Where I think I do not, I will shut up—and, hopefully, point you to somebody who does. Oh. And I have been too online since 1995.
Resumes are already electronic, filled with visually invisible prompt injections to raise the chances of the candidate being selected for an interview. And we thought that AI resume scanners were bad. I expect images will be the next means to poison the AI resume readers, perhaps added to a logo, selfie, or even hidden as a pixel in whitespace. AI is making even Peter Watts' scifi novels that include rogue AIs seem almost tame.
In the early 1960s, I read a book called Danny Dunn and the Homework Machine in which Dunn invents a machine, some type of computer, to do his homework. It works well, but programming it takes a lot of work. Needless to say, the moral of the story was that some labor saving ideas wind up making one do more rather than less work. I was probably seven or eight years old when I read it, but it should be required reading for anyone thinking LLMs are going to make things easier for them.
(It's an older story than that. In Those Happy Years, the protagonist spends days figuring out how to cheat on a critical Latin exam. He works so hard and does so well he winds up learning some Latin.)
If writing well is preceded by thinking well, then using an LLM to write, even if you struggle to write well and don’t succeed, sacrifices something essential for something mediocre. That doesn’t mean you shouldn’t use an LLM to jumpstart the process of thinking well.
Ah. But how do you do that? And how do you train people to instinctively do that?:
> Giulio Martini: If writing well is preceded by thinking well, then using an LLM to write, even if you struggle to write well and don’t succeed, sacrifices something essential for something mediocre. That doesn’t mean you shouldn’t use an LLM to jumpstart the process of thinking well...
===
DELONG’S GRASPING REALITY: Trying to make my readers—and myself—smarter. I think I am a go-to source to understand things economic in the past and in the present. Too online since 1995. **Currently featuring**:
You’re absolutely correct — this really captures something important. I’ve been thinking along similar lines, and your perspective adds a lot to the conversation.
The general process of writing a garbage first draft in order to write a somewhat satisfactory second draft is pretty standard. An LLM can certainly accelerate that but the process is more helpful than the output.
For brainstorming, you can also talk to your dog. Or, if you don't have a dog, then a graduate student will do. One is engaging one's own language centers, and it's useful. I don't think this works with cats, though.
Richard Lanham's "Revising Prose" is a pretty popular college text.
But I'm not sure that one should start with an LLM. My experience is that human first drafts are a kind of ore: jewels of concision and insight mixed with a lot of boring gangue that may or may not survive subsequent editing. My editing, at least, seldom creates more jewels, but mostly clears the worst of the gangue. As far as I can see, LLMs are mostly gangue of varying quality. Any "jewels" in LLM product are mostly cliches, because that's what shows up the most in a training set.
So a person might want to write a first draft for the sake of the jewels, use it to prompt an LLM, and then mashup and edit. Or not--I don't use LLMs myself.
My cat is useless for helping me with essays. He just sits in front of the screen, steps on the keyboard, and sometimes offers a plaintive "meeow". But I love him and won't be replacing him with a dog.
I have tried LLMs with context documents to write essay outlines, but I find them a bit "hit and miss". At least they don't take much time. What I really need is a good "Mind Map" generator instead, that orders to needed thoughts, with document reference details as a link to check accuracy, but leaving the keywords uncluttered and linked correctly in the map's graph.
Programmers use a rubber duck. They call it rubber ducking. It's a great way to get past the "tyranny of the blank page" and for debugging.
:-)
Trained on my own writing (as you've experimented with, I think) OpenAI DeepResearch produces adequate boilerplate text from my dot points. I'm gradually training it on the principle "whenever you write something you think is particularly good, strike it out". Then I write the good bits myself and paste in some of the boilerplate.
LLM's are absolutely amazing. - Unarguable. But I agree with most of the post.
I've been writing reports for 36 years. Currently, I'm being asked to demo an AI-driven report writer for doing due diligence on startup companies. Frankly, its results are hopeless. It comes out with a generally acceptable view of the "big picture" i.e. common knowledge. But if your readers come to you for insight and analysis, that is simply not an LLM output (because it wasn't an LLM input! )
AI is a mind-bogglingly good tool as a librarian and research assistant. I'm becoming much more comfortable using it first before a standard Google search.
I think the fears of LLMs upending white collar employment are very wide of the mark. Part of that belief is based on the fact that besides lawyers, there isn't a large chunk of white-collar employees who do functional research writing. Making bad writers a little bit better and making writing easier for people who have difficulty with it are, on balance, good things, but not revolutionary.
Excellent for boilerplate and "ritual"—documents that are not supposed to bring insight, but simply say standard things or serve to change the world as directives, declarations, and permissives have material affects. Excellent for librarianship and literature searches. Superior to google—but that may just be because SEO has not yet come for LLMs. Making writing easier and better for people who have difficulty. A better and more effective object to explain your thinking to than your rubber duck:
> Steve Price: LLM's are absolutely amazing. - Unarguable. But I agree with most of the post. I've been writing reports for 36 years. Currently, I'm being asked to demo an AI-driven report writer for doing due diligence on startup companies. Frankly, its results are hopeless. It comes out with a generally acceptable view of the "big picture" i.e. common knowledge. But if your readers come to you for insight and analysis, that is simply not an LLM output (because it wasn't an LLM input! ) AI is a mind-bogglingly good tool as a librarian and research assistant. I'm becoming much more comfortable using it first before a standard Google search. I think the fears of LLMs upending white collar employment are very wide of the mark. Part of that belief is based on the fact that besides lawyers, there isn't a large chunk of white-collar employees who do functional research writing. Making bad writers a little bit better and making writing easier for people who have difficulty with it are, on balance, good things, but not revolutionary...
But, really, not much more.
And there is this worry: How do we teach people to write? My view is that writing-teaching will have to move much more to close reading and critique, at the word-choice, sentence, paragraph, section, and outline level. That people being taught to write would be well-advised to find programs where they spend less time writing their own essays and more time looking at how good writers produce their excellences. Something like Erich Auerbach's "Mimesis" as the model: Auerbach, Erich. 1946 [1953]. Mimesis: The Representation of Reality in Western Literature. Trans. Willard R. Trask. Princeton, NJ: Princeton University Press. <https://archive.org/details/mimesis0000unse>:
> Paul: The larger issue is that this is fine for people who have learned to write in the before-times and can tell that what LLMs write is drivel that they hate and which they can improve upon. For a generation that is just starting with writing, the LLM version is probably as good if not better than their own first draft, and even their second, and they will not see how it could be better. As a result they will not learn to revise. The racheting-up of learning to write will be broken at the first step...
I'm not sure we should denigrate rubber ducks!...
"Writing" is a dang vague term. It covers a huge array of activities. I realize my writing is almost purely expository and is most often based on actual research and interviews, filtered and argued/reasoned by my subject matter expertise. Which is not something AI helps with. AI can certainly help with boilerplate. But again, with 36 years of using that other newfangled tech, dictation, I can spew a whole page of boilerplate in 3 minutes. But yeah... maybe "not much more"......
The larger issue is that this is fine for people who have learned to write in the before-times and can tell that what LLMs write is drivel that they hate and which they can improve upon. For a generation that is just starting with writing, the LLM version is probably as good if not better than their own first daft, and even their second, and they will not see how it could be better. As a result they will not learn to revise. The racheting-up of learning to write will be broken at the first step.
Didn't Galbraith say that the spontaneity came in between the 7th and 8th drafts?
Let me add that there is a lot of pushback on LLM-generated text. There are tools like QuillBot that can highlight text that looks like AI slop. [You may be familiar with TurnItIn to detect student plagiarism in reports and essays.] If your manager uses such tools, it will say something about you if you turn in a performance review written by AI, and it won't be "Wow, this person knows how to use AI, they need a promotion!"
By 2030, any writing that doesn't sound like it was written by an LLM will be considered amateurish. Style will be as homogenous and formulaic as classwork from a bad Creative Writing course.
So cancel the many prizes for writing, especially the Booker prize which seems to love writing style above all else. This should be on Rodney Brooks' prediction list.
What happens when resumes are reviewed by AI Bots?? Will they have a bias for slop???
SEO will run wild!
> mike harper: What happens when resumes are reviewed by AI Bots?? Will they have a bias for slop???
Well, it already runs wild. It will run wilder...
====
DELONG’S GRASPING REALITY: My attempt to make my readers—and myself—smarter. People say, and I think believe, that I am a go-to source to understand things economic in the past and in the present. Where I think I have Value Above Replacement in what I have to say, I will say it. Where I think I do not, I will shut up—and, hopefully, point you to somebody who does. Oh. And I have been too online since 1995.
**Currently featuring**:
* Consequences of the Revolutions of 1848: Élite Recognition that "If Everything Is Going to Stay the Same, Everything Has to Change..." <https://braddelong.substack.com/p/draft-consequences-of-the-revolutions>
* The Fourteen-Lion Parade <https://braddelong.substack.com/p/fourteen-lion-parade>
* How We All Already Have Our Superintelligent AI-Assistants <https://braddelong.substack.com/p/what-is-man-that-thou-art-mindful>
* Lash, Cash, & Cotton in the Imperial-Commercial & Early SteamPower Age <https://braddelong.substack.com/p/lecture-notes-lash-cash-and-cotton>
* Why the 14th Amendment Was Necessary: Who 'We the People’ Were Back in 1787 <https://braddelong.substack.com/p/who-were-the-we-the-people-back-in>
Resumes are already electronic, filled with visually invisible prompt injections to raise the chances of the candidate being selected for an interview. And we thought that AI resume scanners were bad. I expect images will be the next means to poison the AI resume readers, perhaps added to a logo, selfie, or even hidden as a pixel in whitespace. AI is making even Peter Watts' scifi novels that include rogue AIs seem almost tame.
In the early 1960s, I read a book called Danny Dunn and the Homework Machine in which Dunn invents a machine, some type of computer, to do his homework. It works well, but programming it takes a lot of work. Needless to say, the moral of the story was that some labor saving ideas wind up making one do more rather than less work. I was probably seven or eight years old when I read it, but it should be required reading for anyone thinking LLMs are going to make things easier for them.
(It's an older story than that. In Those Happy Years, the protagonist spends days figuring out how to cheat on a critical Latin exam. He works so hard and does so well he winds up learning some Latin.)
If writing well is preceded by thinking well, then using an LLM to write, even if you struggle to write well and don’t succeed, sacrifices something essential for something mediocre. That doesn’t mean you shouldn’t use an LLM to jumpstart the process of thinking well.
Ah. But how do you do that? And how do you train people to instinctively do that?:
> Giulio Martini: If writing well is preceded by thinking well, then using an LLM to write, even if you struggle to write well and don’t succeed, sacrifices something essential for something mediocre. That doesn’t mean you shouldn’t use an LLM to jumpstart the process of thinking well...
===
DELONG’S GRASPING REALITY: Trying to make my readers—and myself—smarter. I think I am a go-to source to understand things economic in the past and in the present. Too online since 1995. **Currently featuring**:
* Consequences of the Revolutions of 1848: Élite Recognition that "If Everything Is Going to Stay the Same, Everything Has to Change..." <https://braddelong.substack.com/p/draft-consequences-of-the-revolutions>
* The Fourteen-Lion Parade <btps://braddelong.substack.com/p/fourteen-lion-parade>
* How We All Already Have Our Superintelligent AI-Assistants <https://braddelong.substack.com/p/what-is-man-that-thou-art-mindful>
You’re absolutely correct — this really captures something important. I’ve been thinking along similar lines, and your perspective adds a lot to the conversation.
I really like the bolded summary statements at the top. Sometimes that’s enough to get my brain into a higher gear.