Stochastic Parrots Do Not Understand Þt Hexapodia Is, in Fact, þe Key Insight
How Chat-GPT boosters are deluding þemselves in thinking þt it is much more þan it is—but þt does not keep it from being, potentially, very useful indeed. A stochastic parrot is a useful assistant because so much of our use of language is in fact (also[?]) stochastic parrotage…
As I have been periodically saying for pretty much the past year, my big first-order takeaway from Chat-GPT in particular and recent work in GPT-LLM-ML in general is not that these models are in any sense a road to AGI, but rather that their usefulness stems from how much of our use of language is actually not AGI either—that we are, in our daily activities, largely stochastic parrots too.
And so a stochastic parrot can easily be a very helpful virtual assistant. But—only as a stochastic parrot.
And, this morning, something comes across my desk that makes me want to say this again:
The smart Ben Levinstein writes:
Ben Levinstein: How to Think about Large Language Models: ’If you hang out on social media at all, you sometimes see (non-ML) professors worried about the future of teaching say things like, “Well, ChatGPT can write semi-coherent prose, but it can’t do [x],” where [x] means something like write creatively, solve mildly complex math problems, generate interesting new ideas, pass some college level exam, or whatever. These professors are right, and ChatGPT does not have those capabilities. But it’s generally not clear that we need any kind of conceptual innovation to get the next chatbot to get to that level. We may just add some more layers and parameters and training and just generally throw a bit more compute at our model next time, and voilà…
What can Chat-GPT actually do?
Here is my latest attempt to find out:
Some backstory: Noah Smith proposed and I agreed that we take a particular phrase—“hexapodia is the key insight”—and make it the title of our podcast. Its source is Vernor Vinge’s A Fire Upon the Deep <https://archive.org/details/fireupondeep0000ving>. And the meaning we ascribe to it in the context of our podcast is, as we have now said 52 times, once in every podcast episode: That there is often some key nugget of fact that—if you understand it correctly and place it in its proper context—will transform your view of the situation, and allow you to grok it correctly.
Google comes up with only 53 hits total for the search ‘“Brad DeLong” AND “hexapodia”’, all of which are links to or references to Noah’s and my podcast <https://braddelong.substack.com/s/hexapodia-is-the-key-insight-by-noah>, in every single one of which we provide this explanation of why we use the phrase: “hexapodia is the key insight”.
An internet search-enabled version of Chat-GPT—which is what Bing Chat is—should be able to pick this up if asked any question involving “Brad DeLong” AND “hexapodia”, clothe it in appropriate prose, and return it as an answer, no? Something like the sketch below:
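A minimal sketch, not Bing's actual pipeline, of the retrieve-then-generate loop a search-enabled chatbot runs. The two stub functions are hypothetical stand-ins for a real web-search API and a real language-model call:

```python
def web_search(query: str) -> list[str]:
    # Stand-in for a real search API. Here it returns the kind of snippet
    # that all ~53 real hits for "Brad DeLong" AND "hexapodia" contain.
    return [
        '"Hexapodia is the key insight": there is often some key nugget of '
        'fact that, placed in its proper context, transforms your view of '
        'the situation.'
    ]

def llm_complete(prompt: str) -> str:
    # Stand-in for a call to the language model.
    return "(model-generated prose grounded in the retrieved snippets)"

def answer_with_search(question: str) -> str:
    snippets = web_search("Brad DeLong hexapodia")    # step 1: retrieve
    prompt = (
        "Answer the question using only the sources below.\n\n"
        "Sources:\n" + "\n".join(snippets) +
        "\n\nQuestion: " + question
    )
    return llm_complete(prompt)                       # step 2: clothe in prose

print(answer_with_search("Does Brad DeLong believe that hexapodia is the key insight?"))
```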
But it does not.
This is what happens when we ask Chat-GPT4 in its Bing Chat guise the question: “Does Brad DeLong believe that hexapodia is the key insight?”
Searching for:
Brad DeLong hexapodia
Generating answers for you…
Hello, this is Bing. I can help you find some information about Brad DeLong and hexapodia. 😊…