Using GPT LLM MAMLMs as Your Rabbit—Your Pacer
GPT LLM MAMLMs are not oracles, subordinates, or colleagues. They are emulations of TISs—Typical Internet S***posters. If you are at all a good writer, the most they can be is “rabbits” that, in races and in training, mark and keep the pace that you want to meet and then better. Yes, with a lot of work you can nudge the TIS-emulators into desirable configurations. But that is a lot of work, done at the wrong abstraction layer, and a very unreliable one at that. Better to say “I can do better than that drivel…” and write it yourself…
And looking at this yet again: still so excellent on how to use ChatGPT and its cousins:
David MacIver: How to say what you’re good at <https://drmaciver.substack.com/p/how-to-say-what-youre-good-at>: ‘Another form of autocompletion… [I find effective is this] very simple process:
Dump your notes into Claude or ChatGPT and tell it what you want to write.
Copy its output to a separate document.
Stare at what it wrote for a bit.
Let the hate flow through you.
Delete whatever drivel it wrote and write something better, because you can definitely improve on that!
It’s certainly possible that some of what it wrote will be OK, and you don’t actually have to delete all of it, but I really strongly encourage you not to edit what it wrote into something usable, but instead to write your own thing, borrowing as little or as much from it as you feel you need/want to.
At the end of this process, you should have something….
As a workflow: dump your notes into the AI, and then look at the slop in disgust. Then delete and rewrite—perhaps paragraph by paragraph, perhaps all at once—using the friction to surface your own authentic articulation. If you are at all a good writer, you are then highly likely to find that you have something.
After all, ChatGPT or Claude will, at bottom, give you only AI slop. They can give you nothing other than what a TIS—a Typical Internet S***poster—would write. This is how these models work: they learn from the mass of online text and spit out an interpolation driven by a compressed internal model of typical word patterns. There are ideas buried in the word-patterns they produce, yes. But they are buried.
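That “interpolation of typical word patterns” can be illustrated with a toy sketch—a bigram model that counts which word tends to follow which, and then generates text by sampling those typical continuations. This is a drastic simplification of a transformer, offered only to show the statistical-pattern-matching principle; the tiny corpus and the function names here are invented for illustration:

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count word-to-word transitions: a crude stand-in for the
    compressed model of 'typical word patterns' an LLM learns."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def continue_text(counts, start, length=5, rng=random):
    """Generate a continuation by repeatedly sampling the next word
    in proportion to how often it followed the current word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = [
    "the model predicts the next word",
    "the model learns typical word patterns",
]
counts = train_bigrams(corpus)
print(continue_text(counts, "the"))
```

The output is always some blend of whatever was most common in the training text—which is the point: the ideas, if any, are buried inside frequency statistics.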
Now GPT LLM boosters, at this point, begin talking frantically about RAG, RLHF, prompt engineering, data curation, feedback loops, careful data chunking and indexing, embedding optimization, hybrid search and reranking, supervised fine-tuning on high-quality and human-curated examples, human reward modeling, proximal policy optimization, continuous feedback loops, custom base prompts, context injection, dynamic prompt templates, semantic chunking, metadata tagging, regular content updates, fine-tuning models on domain-specific data, and so on. Yes, these are ways to try to make the TIS that is the GPT LLM MAMLM behave better. As Andrej Karpathy once wrote:
Andrej Karpathy: <https://twitter.com/karpathy/status/1937902205765607626>: ‘+1 for “context engineering” over “prompt engineering”. People associate prompts with short task descriptions you’d give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step. Science because doing this right involves task descriptions and explanations, few shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn’t have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits. On top of context engineering itself, an LLM app has to: - break up problems just right into control flows - pack the context windows just right - dispatch calls to LLMs of the right kind and capability - handle generation-verification UIUX flows - a lot more - guardrails, security, evals, parallelism, prefetching, ... So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term “ChatGPT wrapper” is tired and really, really wrong…
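To make concrete what “filling the context window with just the right information” means in practice, here is a minimal context-assembly sketch: it packs a system instruction, retrieved snippets, and chat history into one prompt, dropping material when a token budget is exceeded. Everything here—the function name, the word-count stand-in for a tokenizer, the budget—is a hypothetical illustration, not any particular framework’s API:

```python
def assemble_context(system, snippets, history, question, budget=200):
    """Pack a system instruction, retrieved snippets, and chat history
    into one prompt, skipping material that would exceed the budget.
    Word count stands in for a real tokenizer (a deliberate simplification)."""
    def size(text):
        return len(text.split())

    # The system instruction and the user's question are always included.
    remaining = budget - size(system) - size(question)

    kept = []
    # Prefer retrieved snippets first, then the most recent history.
    for part in snippets + history[::-1]:
        if size(part) <= remaining:
            kept.append(part)
            remaining -= size(part)
    return "\n\n".join([system] + kept + [question])

prompt = assemble_context(
    system="Answer using only the supplied notes.",
    snippets=["Note: rabbits pace runners in races."],
    history=["Earlier turn: user asked about pacing."],
    question="What is a rabbit in running?",
)
print(prompt)
```

Even this toy version shows why the work is “highly non-trivial”: every choice—what to retrieve, what to drop, in what order—is a judgment call made on the model’s behalf, at one remove from the actual writing.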
These are patches, not solutions. Relying on software that mimics a TIS, nudged into producing something less sloppy, is a trick that almost never works. If you write at all well, the AI’s output will be no better than a very rough draft.
And step back. “Context engineering” is a lot of work. Unless you have a large number of parallel documents or tasks, where you need to produce prose quickly at scale and can nudge the TIS into a not-embarrassing configuration, you are doing the work at the wrong, unreliable abstraction layer.
Better to avoid that “context engineering” work, for it is massive, and focus on saying what you need to say to persuade your ultimate readers.
And so the best course then is really to “let the hate flow through you. Delete whatever drivel it wrote and write something better, because you can definitely improve on that…”
Of course, if your writing is currently worse than that of the TIS—or if you have to write in a language in which you are not fluent—AI can really help. ChatGPT and Claude and company can then produce text that can be clearer and less embarrassing. For weak writers, AI does powerfully lift the floor. And I see this democratization of the power to write serviceable prose as a very good thing.
And if the words you write are not to persuade but to ticket-punch, to box-check, matters of boilerplate or ritual—things not to be seriously read but rather to fit into bureaucratic routine—the AI-slop generated from your notes may well be fine.
References:
DeLong, J. Bradford. “On Figuring Out What Your Comparative Advantage Is”. DeLong’s Grasping Reality. September 22. <https://braddelong.substack.com/p/on-figuring-out-what-your-comparative?utm_source=publication-search>.
Karpathy, Andrej. 2025. “+1 for ‘context engineering’ over ‘prompt engineering’…” Twitter. June 25. <https://twitter.com/karpathy/status/1937902205765607626>.
MacIver, David. 2025. “How to say what you’re good at”. Overthinking Everything. September 12. <https://drmaciver.substack.com/p/how-to-say-what-youre-good-at>.




The general process of writing a garbage first draft in order to write a somewhat satisfactory second draft is pretty standard. An LLM can certainly accelerate that, but the process is more helpful than the output.
For brainstorming, you can also talk to your dog. Or, if you don't have a dog, then a graduate student will do. One is engaging one's own language centers, and it's useful. I don't think this works with cats, though.
Trained on my own writing (as you've experimented with, I think) OpenAI DeepResearch produces adequate boilerplate text from my dot points. I'm gradually training it on the principle "whenever you write something you think is particularly good, strike it out". Then I write the good bits myself and paste in some of the boilerplate.