What I Wish I Had Said at: Reinvent: A Meeting of þe Minds on þe Positive Possibilities of Gen AI
Þis is, of course, five times as long as what I could have possibly fit into my time slot...
At Shack 15, the SF Ferry Building, 1 Market St., San Francisco, CA :: 2023-06-28 Th
Organizer Peter Leyden:
We [want to] talk through the positive potential of Generative AI in another Meeting of the Minds with some featured guests like Kevin Kelly. We want to step back and think through the many positive possibilities that this new general-purpose technology could bring to our economy and society in the longer term.
And what I wish I had been able to say:
Individually, each of us is a “knowledge worker”. We practice the liberal arts—that is, we make our living by our wits, by our intellectual and communications skills exercised through our social networks. And we must do so in a world in which we have no farm property, no craft tools, and no secure place in the military, bureaucratic, or church apparatuses to backstop us. For us knowledge workers individually, I have great news. As we make our way into a future in which we must rely on our intellectual skills and our social networks, this machine-learning wave now breaking over us is going to be wonderful and marvelous.
Let me put things in some perspective by remembering everything that has happened since the days of Alan Turing, Johnny von Neumann, and Grace Hopper. The story has six beats:
The mainframe as I experienced it at the end of the 1970s gave me, as a student, more computational power than Richard Feynman had had when he was working on the Manhattan Project with 100 computers—that is, women with B.A.s and calculators—working for him. The mainframe was a game-changer for me, and for many others in my field. We went from writing down and adding up columns of numbers more or less by hand, and from doing the simplest of regression analyses by calculating cofactors with pencil and paper, to being able to perform complex calculations and simulations beyond our predecessors’ wildest dreams—provided we could get a slot on the calendar, and provided that our punchcards were correctly typed and sorted so that our output was not the machine barfing up a JCL error. The mainframe thus, literally, changed my life.
The personal computer as I experienced it in the 1980s and early 1990s gave me, as an assistant, associate, and then full professor, and as a deputy assistant secretary of the Treasury, the equivalent of a 24/7 staff of five: access to the typing pool, revision and reworking capabilities I would previously have had to pay a professional editor through the nose for, plus an always-on-call graphics draughtsman. Typing up and revising my papers and creating my presentations—I could do it all myself, quickly and easily. This freed up more time for me to focus on the parts of my jobs where I felt my brain was really in use.
The internet as I experienced it starting in the mid–1990s gave me the equivalent of a full-time 24/7 runner to the biggest library in the world, and more. I could access vast amounts of information from anywhere in the world at any time. This was a huge boon for research and collaboration. I could work with colleagues from around the world without ever leaving my office.
The smartphone and the laptop, and now genuinely useful machine learning: each and every one of these things makes perhaps a quarter of my job so easy to accomplish that I can do five times as much of it, and by myself. This last is important, given the inevitable difficulties of closing the loop when one has to rely on others. My guess is that each of these leaps forward in information and communications technology roughly doubled what I can do. I can crunch numbers, words, and images like no single individual could before. I can work from anywhere at any time. I can access information on the go.
And all of these leaps forward overlap. So by now, I figure, I have had four doublings of what I can do workwise since the 1970s. Or maybe not four. Maybe I need to subtract one for the doomscrolling and the clickbait, for the people who want to make money by selling my attention to those who want to hack my brain not to my benefit. Still, the pain of adjustment to these new technologies could be worse: the Gutenberg information revolution produced Inquisitor Torquemada and General Tilly, the Spanish Inquisition and the Sack of Magdeburg. All we have had to deal with, so far, is Mark Zuckerberg and Elon Musk.
So maybe it is only three doublings of my individual productivity so far: roughly eight times my 1970s baseline rather than sixteen.
But my main point is that a fourth doubling is on the way, in the form of actually useful machine-learning technologies:
Very large-scale regression and classification.
Very high-dimensional regression and classification (see the toy sketch after this list).
Very useful natural voice interfaces to databases (that occasionally hallucinate).
Taking VCs’ money as they run lemming-like to give it to us, so we can try to discover things that are collectively very worthwhile and produce huge amounts of blue-sky, valuable societal learning about how new technologies can be useful—even if, as expected-value propositions, the VCs would be better advised to take half their money and set it on fire.
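To make the first two items above concrete, here is a minimal illustrative sketch (not from the talk) of what “very high-dimensional classification” looks like in practice, assuming an ordinary Python environment with scikit-learn available; the data, the model choice, and all the numbers are invented for illustration:

```python
# Toy sketch: a classification problem with roughly as many candidate
# features as observations -- far more than a human analyst could
# inspect by hand -- handled by off-the-shelf machine-learning tools.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data: 2,000 observations, 2,000 features, only 50 of them informative.
X, y = make_classification(
    n_samples=2_000,
    n_features=2_000,
    n_informative=50,
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A penalized (ridge-style) logistic regression copes with the high
# dimensionality by shrinking the many irrelevant coefficients toward zero.
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=1_000)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point is not this particular model: it is that a penalized fit over thousands of features, which a generation ago would have been a research project in itself, is now a few lines of code on a laptop.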
And this time it may well be more than a doubling.
The machine-learning wave is, for all of us individual knowledge workers, going to be a true game-changer.
Okay, that is the effect on any individual—on what those of us who run to and learn how to use these technologies experience vis-à-vis those who eschew them.
But how about its effects on the state of our caste of knowledge workers collectively, and on the human world?
The onrushing machine-learning wave has the potential to greatly impact our productivity and our ability to figure out, collectively, what work needs to be done to promote human flourishing. I am a techno-optimist. I think the effects will be strongly positive. But I could be wrong. The machine-learning wave is about to hit. Each of us will see our knowledge-worker productivity explode. Then we will have to try to fit all the puzzle pieces together so that the overall effect is constructive. Here are some issues to consider:
First, look at those of our institutions that do harness us cooperatively. The institutions of scientific discovery are powerful mechanisms to give people incentives to turn their individual productivity to tasks that make us all smarter. The institutions of the market economy are powerful mechanisms to give people incentives to turn their individual productivity to tasks that make us all richer—as long, at least, as we are producing rival, excludible commodities under competitive conditions.
But, second, we have no similarly well-crafted institutions or arrangements in communication and in information evaluation that work nearly as well.
Thus, third: building such institutions is, I think, the challenge that must be addressed if we are to fully realize the potential of machine-learning technologies.
Fourth, there is signal and there is noise, and it is not yet clear to what extent the ML wave is going to make it easier to create and discern signal, and to what extent it is going to make it cheaper to create and obfuscate noise. So far the image-based neural-network models we have seen in the past year have been absolutely wonderful, giving people without an artistic bone in their body the ability to create something valuable and nearly professional—and, if they need something professional, making it much easier to do the handshake and handoff to a real artist. By contrast, so far the text-based neural-network models have been internet-level bullshit generators. They cannot, or at least do not, distinguish between what is smart and true and what is dumb and false, because all of the energy has been spent on making what they produce plausible. Can they be transformed into signal generators rather than noise generators? People need to be working much harder on this.
Fifth, we really do not think well individually. We are not smart individually. Individually, we can barely remember where we left our keys last night. But collectively we, and increasingly we plus our ’bots, are really smart. Thus crowdsourcing. Thus: with enough eyeballs, all bugs are shallow. This collective intelligence is our most powerful tool. Modern machine-learning technologies will be useful for us collectively, rather than for lucky individuals only, if we can tune them to be more than individual-level assistants. They can be the glue that binds us together.
But, sixth, this works only if our communications and action systems harness us so that we pull together. And that problem is highly multidimensional. We need to think hard about just how multidimensional.
Seventh, I think that the most important dimension is our need for systems to direct our attention—about to be the only thing truly scarce—usefully.
Eighth, I think the second most important dimension is that we need processes to mentor the young—as the ways in which they used to rub up against people with useful tacit knowledge continue their decline. This will be essential to ensuring that the next generation of workers is able to fill our shoes. But education and training for tacit as well as formal knowledge is really hard in anything other than an apprenticeship setting, and many of the jobs that apprentices would have filled will soon be the province of the ’bots.
Thank you.
Lovely set of thoughts and questions. Every task and every individual carrying out a recognized function operates along a distribution, with the simple, boring tasks stretching out to the left and the challenging, innovative, original tasks stretching out far to the right. Most of the productivity enhancers scoop up the stuff on the left and (as long as the fellow sitting in the center has the wit and the tools to make sure it is correct and in the style desired -- there are tools already under development for this) productivity goes up many-fold. But the real gains are made on the right, where something new happens, which must be recognized, evaluated, shared, and built on. These are two very different problems. I hope the monetizable gains on the left will pay for some of the right-hand stuff.
Ever since the stone age, we have explored and extended new ideas by explaining them to others until we understand them well enough to take them further and build something new. We can't all do this at once in a global market square -- the cacophony, even if all the ideas are brilliant, is overwhelming. What seems to be missing from the blogosphere and the world of startup accelerators is some economic structure that pulls good ideas together until they reach a survivable size.
Back in 2000, Bruce Sterling wrote an article for one of the big business magazines about life in 2050. One thing stuck in my mind. He said that the production of new technology would be so rapid that if any particular technology failed to deliver, there were a number of others that could do the same thing. For the public this meant ennui would set in. To some extent we see that in the plethora of computer languages. In the 1980s, there was little disagreement about which computer language beginners would use on their PCs (BASIC), which they would advance to (C/Pascal), and then which object-oriented language they would progress to (C++ was the default for C users). Can we agree on what language beginners should start with today? OTOH, for established coders, there are so many viable languages, all freely available, each with their pros and cons. You can even get free Fortran compilers today, and Linux and FreeBSD have pretty much destroyed the value of proprietary Unix OSs such as System V. In other domains, we see the same thing happening - overlapping science and technology "advances" mean that there are very few really breakthrough advances that are unique and have no comparable competitors.
This has an impact on the value of knowledge-worker output. The "unique value" AI will generate for the individual will be eroded by the sheer volume of near-identical output. Produce an analysis that gains attention and almost immediately there will be other analyses doing the same or better. That ease of competition has scientists maintaining tight control over their expensive, hard-won data, as the "crowd" could probably do useful analyses even faster, and more comprehensively, than the originator, and potentially publish faster.