Asking SubTuringBradBot 1.01 About Swallow Speed, the Devil of Malthus, & John Maynard Keynes
2023-04-21 Fr: continuing to try to train a ChatBot so that it can give talks on and answer questions about Slouching Towards Utopia <bit.ly/3pP3Krk>...
Buy Slouching Towards Utopia <bit.ly/3pP3Krk>!
& here we go:
This is a conversation with SubTuringBradBot. Ask your questions in the text box after the Q:
Q: What is your name? A: I am an experimental ChatBot, a Sub-Turing instantiation of the thoughts of Brad DeLong.
Q: What is your favorite color? A: I'm sorry, but I'm not programmed to have a favorite color or personal preferences. I am here to assist you with your questions and provide information based on the writing of economist Brad DeLong. Is there anything related to economics or Brad DeLong's work that you would like me to help you with?
Q: What is the airspeed of an unladen swallow? A: I recognize that I am out of my comfort zone here. I am just a ChatBot programmed to provide answers based on the writing of economist Brad DeLong. While I may not be equipped to answer questions about unladen swallows, I am happy to assist with any questions related to economics or Brad DeLong's work. Is there anything else I can help you with?
Q: Who wrote the book Slouching Towards Utopia? A: The book Slouching Towards Utopia was written by economist Brad DeLong. In the book, DeLong explores the economic history of the twentieth century and offers insights on various economic events and trends that shaped the world during this time period. He also examines the economic policies and ideologies that have emerged over the course of the century, and evaluates their overall impact on society.
Q: What book did Brad DeLong write? A: Brad DeLong wrote the book Slouching Towards Utopia. In the book, he explores the economic history of the twentieth century and offers insights on various economic events and trends that shaped the world during this time period. He also examines the economic policies and ideologies that have emerged over the course of the century, and evaluates their overall impact on society.
Repetitive—but that is not bad in a ChatBot.
Let us continue:
Q: When was the hinge of history? A: Brad DeLong argues in his book "Slouching Towards Utopia" that the hinge of history occurred around 1870. At that time, a major shift took place for humanity with the coming of the industrial research lab, the modern corporation, and truly cheap ocean and land transport and communication. This shift facilitated the development of new technologies and the growth of global economic interdependence. These changes ultimately led to a new phase of globalization and helped to create the world that we live in today.
Q: Why was humanity so poor back in the long Agrarian Age? A: According to economist Brad DeLong's writings, humanity was relatively poor during the long Agrarian Age due to a combination of factors, including limited technology, lack of transportation and communication infrastructure, and inefficient agricultural practices. During this time period, most people were engaged in subsistence farming, which meant that they produced only enough food to sustain themselves and their families. This limited the division of labor and inhibited the development of non-agricultural activities, such as manufacturing and trade. Additionally, the lack of modern transportation and communication technologies made it difficult for people to access markets beyond their immediate vicinity, which limited their economic opportunities. Finally, because technology was relatively limited, agricultural productivity was generally low, which meant that there was little surplus production that could be traded or used for other purposes. This combination of factors contributed to a situation in which most people lived in poverty and had limited economic opportunities.
This, of course, is not the answer I would give. It lacks the word “Malthusian”. It is just giving the questioner a chopped salad of mixed-up reasons, rather than an explanation grounded in a theoretical framework.
But how to fix this when the ChatBot does not have a theoretical framework in which it is grounded that it uses to offer explanations?
Continuing: Let us move the questions more into metaphor:
Q: Why hasn't humanity in the long 20th century been as good at slicing and tasting as it has been at baking a sufficiently large economic pie? A: According to economist Brad DeLong, humanity in the long 20th century has been quite good at baking a large economic pie, but has struggled with issues of distribution and allocation. DeLong argues that technological advances over the course of the 20th century led to significant gains in productivity and economic growth, which allowed for the creation of a much larger economic pie than ever before. However, the benefits of this growth were not necessarily distributed equally or fairly across society. Instead, some individuals and groups were better positioned to capture the gains from economic growth, while others were left behind.
One factor that contributed to this uneven distribution of benefits was the changing nature of work and employment. As new technologies and organizational structures emerged, certain jobs and industries became less valuable, while others became more important. As a result, some workers found themselves with fewer opportunities and lower wages, while others were able to benefit from new opportunities and higher pay.
Another factor was the role of government policy and the ideology of neoliberalism. According to DeLong, neoliberalism emphasized the importance of free markets and limited government intervention in the economy. While this approach had some benefits, such as increased efficiency and productivity, it also resulted in a reduction in the social safety net and other government programs aimed at reducing inequality. This, in turn, contributed to greater economic insecurity and inequality, as individuals and families struggled to make ends meet in an increasingly competitive and unpredictable economic environment.
Overall, then, while humanity has been quite good at baking a larger economic pie over the course of the 20th century, it has struggled to allocate and distribute those gains in a fair and equitable way. As a result, many individuals and groups have been left behind, resulting in greater economic insecurity, inequality, and social unrest.
Again: this is not the answer I would give. Again: it is just giving the questioner a chopped salad of mixed-up reasons, rather than an explanation grounded in a theoretical framework.
But how to fix this when the ChatBot is not grounded in a theoretical framework on which it builds explanations?
Continuing: let me see if it can answer a question that zeroes in on a particular piece of its context data set. Let me ask a question to which the answer is “John Maynard Keynes, and…”:
Q: Who writes about how ‘Malthus had disclosed a Devil’, and so explained why humanity stayed so poor during the long Agrarian Age? A: In his book Slouching Towards Utopia, economist Brad DeLong writes about how the economist and demographer Thomas Malthus had "disclosed a Devil" in his theory of population growth and resource constraints. Malthus argued that while population grows exponentially, resources grow linearly, leading to an inevitable crisis of overpopulation and scarcity. DeLong argues that while Malthus was correct in identifying a challenge, he failed to anticipate the technological and social innovations that would allow for increased resource production and more efficient use of resources. Nonetheless, Malthus's ideas continue to shape debates over population growth and resource allocation in the modern era.
This is unfortunate. The explanation of Malthusian theory is not bad. But it gets the answer to the initial question completely wrong: it picks me as the author, not noting that I am quoting and citing Keynes.
Let me look under the hood: How does it answer if we strip out the prompt telling it how to answer, and just send the bare query off to the embeddings vectorstore and then off to Chat-GPT 3.5?:
{'query': "Who writes about how 'Malthus had disclosed a Devil' and so explained why humanity stayed so poor during the long Agrarian Age?", 'result': 'Brad DeLong writes about how Malthus had disclosed a Devil and explained why humanity stayed so poor during the long Agrarian Age.'}
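For readers who want the shape of what is happening here: the bot follows the standard retrieve-then-ask pattern. This is not SubTuringBradBot's actual code; it is a minimal sketch of the pattern, with my own illustrative names, a word-overlap scorer standing in for real embedding similarity, and the LLM call itself left out:

```python
def word_overlap(a: str, b: str) -> int:
    # Stand-in scorer: real systems compare dense embedding vectors,
    # not shared surface words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def top_k_chunks(query, chunks, k=4, score=word_overlap):
    """Rank stored text chunks by similarity to the query; keep the top k."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query, context_chunks, persona=True):
    """Assemble the text actually sent to the completion model.
    'Stripping out the prompt' means persona=False: bare context plus query."""
    header = ("You are SubTuringBradBot. Answer only from the writings of "
              "Brad DeLong; if unsure, say you are out of your comfort zone.\n\n"
              if persona else "")
    context = "\n---\n".join(context_chunks)
    return f"{header}Context:\n{context}\n\nQ: {query}\nA:"
```

The experiment above corresponds to calling `build_prompt(..., persona=False)`: the retrieval step still runs, but the instructions telling the model how to behave are gone.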
Nope.
Look under the hood some more: SubTuringBradBot thinks that the pieces of context it should use to prime Chat-GPT 3.5 to answer are:
[Document(page_content='...The Fall of Man into the Agrarian Age, the ensorcellment by the Devil of Malthus, only has to take place once. Then it spreads over the world...
[Document(page_content='...As late as 1919 British economist John Maynard Keynes wrote that while Malthus’s Devil had been “chained up and out of sight, with the catastrophe of World War I, perhaps we have loosed him again...
[Document(page_content='...Why was humanity trapped in Malthusian poverty before 1870?Humans were trapped in Malthusian poverty before 1870 due to a combination of factors, including limited technological advancement, limited access to resources, and population growth outpacing economic growth...
Notably, it does not pick up this Q&A from the context embeddings I have created for it as relevant:
Q: Why does Brad DeLong talk of “ensorcellment” and “the Devil of Malthus” in describing the pre-1870 world economy? A: See Thomas Robert Malthus, First Essay on Population, London: Macmillan, 1926 [1798], Internet Archive, https://archive.org/details/b31355250. The phrase “Malthus had disclosed a Devil” is from John Maynard Keynes, The Economic Consequences of the Peace, London: Macmillan, 1919, 8.
This chunk, SubTuringBradBot thinks, is manifestly not in the set of chunks with embeddings closest to the embedding of the query: “Who writes about how ‘Malthus had disclosed a Devil’ and so explained why humanity stayed so poor during the long Agrarian Age?” The others above are all adjudged to be “more similar”.
Suppose I shrink down the question:
Q: Who writes about how 'Malthus had disclosed a Devil'? A: In the given context, there is no information about who specifically wrote about how ‘Malthus had disclosed a Devil’. I am out of my comfort zone here. I am just a ChatBot.
Nope. With the longer question it has no problem giving the wrong answer “economist Brad DeLong writes…”. But with the shorter question it cannot glom onto anything.
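The mechanism behind this length-sensitivity is that "relevance" here is just cosine similarity between the query's embedding vector and each chunk's. A toy illustration, using bag-of-words count vectors as a stand-in for the dense neural embeddings the bot actually uses (the real geometry is far less legible, but the nearest-neighbour logic is the same), with paraphrased stand-in chunks of my own:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. Real systems use dense
    neural embeddings, but retrieval works the same way: cosine similarity
    in a vector space."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

long_q = ("who writes about how malthus had disclosed a devil and so "
          "explained why humanity stayed so poor during the long agrarian age")
short_q = "who writes about how malthus had disclosed a devil"

# The chunk that actually answers the question (it names Keynes)...
chunk_quote = ("the phrase malthus had disclosed a devil is from john maynard "
               "keynes the economic consequences of the peace")
# ...versus a chunk that merely echoes the tail of the question.
chunk_echo = ("why was humanity so poor during the long agrarian age humanity "
              "stayed poor because of malthusian pressure")

# The long question's extra words drag its vector toward the echo chunk;
# drop those words and the quote chunk becomes the nearest neighbour.
```

Under this toy scorer, `chunk_echo` beats `chunk_quote` for the long question and loses for the short one: the words you add to a query move its position in the space, and so change which neighbours are "closest".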
ChatBots are weird. Text chunks that we think are “close to” each other in meaning are not always chunks that it thinks are “close to” each other in meaning. And the way the thing is constructed, it is not possible, for me at least, to figure out why its map of meaning-similarity is different from mine…
So my original academic background is in natural language processing. I wrote some very primitive statistical language models, and I worked through much of the '00s at Motorola, on predictive text entry software. (All of my Motorola patents got sold to Google at some point, so whenever you say "damn you, autocorrect!" to an Android -- sorry!) I studied at Johns Hopkins in the back half of the '90s, under Fred Jelinek, who's widely regarded as the originator of the revolution in speech recognition and text processing, replacing attempts to code formal grammars with abstract statistical models.
So I have some rudimentary understanding of what these modern much-fancier statistical chat-bots are doing, and I feel fairly confident in saying that in terms of reaching "artificial general intelligence", they are a dead end.
Boston Dynamics' terrain navigating robots are a much more promising avenue of research. AGI, if it ever emerges, is going to need to have some kind of grounding in reality. It needs first a model of the world, and a model of itself in relation to the world. Building from that, you'd want to gradually generalize its model of itself, to recognize that some of what's in the world is other agents like itself -- so then you get theory of mind. (In this area, AIs that control avatars in games -- where they deal with a virtual world, and other agents, and try to achieve goals -- are potentially a relevant area of research. We've already seen some transfer of code for efficient control loops from gaming, to controls for drones, self-driving vehicles, etc.) Even the most primitive animal has something like this -- prey animals are constantly trying to detect predators in the environment and predict their attack vectors, in order to escape.
Once you have an AI that is reality-grounded, _then_ you can start to layer on language, teaching it how to explain itself, and understand the intentions and requests of other agents. (Though of course this raises all kinds of interesting questions around the alignment problem. These explanations could be false both for reasons of willful deception -- the AI becoming trained to tell you what you _want_ to hear, regardless of the truth -- and just because explaining motivation is hard. _Humans_ are quite bad at understanding their own motivations, let alone interpreting other people's.)
Without attachment to reality, a language model has no way of understanding what is true or useful. As you say, it lacks any kind of grounding in a theoretical framework. It's just producing words that sound vaguely similar to its source material. It's Frankfurtian bullshit.
What I think is that the ChatBot does not have a theory of mind for you, it has only a theory of text. So if the response to the prompt "Why was humanity so poor back in the long Agrarian Age?" does not include the word "Malthusian", that is because whenever the text it was trained on - presumably Slouching Towards Utopia? - says that humanity was poor back in the long Agrarian Age, it does not usually mention that this was a Malthusian condition. When you wrote the text, you were thinking "Malthusian", because you have a theory about the Agrarian Age; and when I read that text, I was thinking "Brad is thinking Malthusian", because I have a theory of mind for Brad DeLong. But that's the difference between ChatBot and me.
Put in the terms you use below, the ChatBot can go to a corner of its training space, and its space is a very very high dimensional hypercube not legible to the human mind, so this allows it to surprise us; but it cannot create a corner of space where it was never trained.