25 Comments

> (1) Very large-scale very high-dimension regression and classification analysis is going to be, if we can manage to tame and subdue it, truly game-changing: the transformation from the world of the bureaucracy to the world of the algorithm, with not just Peter Drucker’s mass production, not just Bob Reich’s flexible customization, but rather bespoke creation for nearly everything. This is the heart of modern MAMLM-GPTM-LLM technologies.

Maybe I'm too deep in too narrow a trench to see the overall picture, but as a point of technical fact I don't see GPTM-LLM technologies being particularly good at any of that except maybe in a small subset of domains (so: protein folding - yes, antibody optimization - no; most science and engineering problems do *not* look like language processing). MAMLM, yes (although the term might be too capacious for clarity), but little of the bubble money and engineering is going to non-GPTM-LLM MAMLM. Hopefully, after the bubble bursts and we have the leftover infrastructure to play with, that'll help.

To be clear, I enthusiastically agree that (1) is an epoch-shifting change and I'm bullish on current and near-term cutting-edge advanced machine learning and optimization algorithms[1] getting us there; I'm just skeptical of GPTM-LLM being part of the path there, except insofar as it makes it easier to get money from people who want to be into "AI" without looking too deeply into which sort.

[1] In terms of scientific and engineering advances, classification and regression aren't the whole story: optimization and active learning are higher multipliers long-term. Oversimplifying, the former gets you better-than-ever factories (really good!) and the latter gets you better-than-ever science (really really good!). In that sense I suspect there's a general over-estimation of what GPTM can do and a general under-estimation of what MAMLM minus GPTM can do.
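To make the distinction concrete, here is a minimal sketch (assuming scikit-learn; the "experiment" function is an invented stand-in for a costly measurement) of a toy active-learning loop, where the model picks its own next data point rather than just fitting whatever it is handed:

```python
# Illustrative sketch only: passive regression vs. a toy active-learning loop
# that picks the next "experiment" where the model is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_experiment(x):
    """Stand-in for a costly lab measurement."""
    return np.sin(3 * x) + 0.1 * rng.normal()

# Candidate experimental conditions we could try.
pool = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

# Start with a handful of random measurements (the "passive" data set).
X = pool[rng.choice(len(pool), size=5, replace=False)]
y = np.array([run_experiment(x[0]) for x in X])

model = GaussianProcessRegressor()
for _ in range(10):  # active-learning budget: 10 more experiments
    model.fit(X, y)
    _, std = model.predict(pool, return_std=True)
    x_next = pool[np.argmax(std)]          # query where uncertainty is highest
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next[0]))

print("Chosen experiments:", X.ravel().round(2))
```

The passive version would stop after the first fit; the multiplier the footnote points at comes from the loop.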

Expand full comment
author

But there will be a huge number of GPUs around to run other models on!

Expand full comment

Forcing me to update my posterior, Terence Tao is significantly more optimistic than I am/was on the potential usefulness of advanced versions of existing AI tools for mathematical research [ https://terrytao.files.wordpress.com/2024/03/machine-assisted-proof-notices.pdf ]. It's far from a prediction of the obsolescence of mathematicians, but given the importance of mathematical research over the long term, sustainable tooling improvements have nontrivial impact. (FWIW, he's not seeing a different set of capabilities than I do; but he finds them more useful in reality and potentiality than I thought they were, and that's enough to change my mind about that.)
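For concreteness, this is the flavor of machine-checkable statement that proof assistants handle; a trivial Lean 4 example (illustrative only, not drawn from Tao's notes):

```lean
-- A toy machine-checked proof in Lean 4 (illustrative; not from Tao's notes).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```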

Expand full comment

That's true, yes. It changes the timeline/technology/players a bit from the current expectations, but maybe makes the long-term potential larger.

Expand full comment

The reason protein folding was successful is that there is a very good, well-annotated dataset, and evolution works by descent with modification, which places limits on the design rules. If you are looking for a small molecule, however, good luck. Organic chemists have been at it for over a century, and there are still sugars that haven't been synthesized and characterized.

Any useful AI system for higher-dimensional correlation is going to have to understand its training set and its limitations.

Expand full comment

Hard agree on the importance of data sets and the impact of evolutionary constraints, but I'd add that protein folding _as a physical process_ is already a relatively good match for LLM-like models: after all, you're working with complex linear sequences of tokens from a finite set, and so on; you are still working with quantum chemistry here and there, but a lot of the large-scale patterns depend on token sequences. They are large beasts, but from a comparatively very constrained space.
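A minimal sketch of that "finite token alphabet" point, with an invented encoder purely for illustration:

```python
# Minimal sketch: a protein is a linear sequence drawn from ~20 amino-acid
# letters, which maps naturally onto the integer token ids a transformer-style
# model consumes. Illustrative only.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard residues
token_id = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode(sequence: str) -> list[int]:
    """Turn a protein sequence into integer tokens (unknown residues -> -1)."""
    return [token_id.get(aa, -1) for aa in sequence.upper()]

print(encode("MKTAYIAK"))  # [10, 8, 16, 0, 19, 7, 0, 8]
```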

As you say, small molecules are a whole different beast. Not because they are smaller, of course, but because the space is so much wilder. For what it's worth, it does seem that specialized models for particular bits of the puzzle can be helpful (if nothing else to get order-of-magnitude guesses before you have to synthesize and try), but that's a far cry from the AI hype.

(And we're mostly talking of _in vitro_ or nearly _in vitro_ biochemistry; a lot of the popular or business coverage of AI in medical research glosses over the absurd complexity of _in vivo_ biochemistry and how partial our knowledge of it is.)

Expand full comment

I think I underestimated the value of getting good guesses. Sometimes, that's the best you can hope for. Machine learning can be a powerful flashlight.

Expand full comment

If you include genetic algorithms as part of the AI field, then the huge capabilities of GAs for evolving optimal solutions could be a game changer. Just imagine the LLM taking a prompt (text or spoken) and the engine creating the appropriate model, which is then evolved to meet the desired goal. Whole areas of engineering, not to mention a host of other problems, would become solvable.
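A minimal sketch of the "evolve toward a goal" engine, with a toy fitness function standing in for a real engineering objective (everything here is illustrative, not a production GA):

```python
# Toy genetic algorithm: evolve a bit string toward a simple objective
# (maximize the number of 1-bits). A real use would plug in a domain-specific
# fitness function derived from the prompt.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 60, 100, 0.02

def fitness(genome):
    return sum(genome)                      # toy objective: count of 1-bits

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]   # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

In the imagined pipeline, the toy fitness function would be replaced by whatever objective the LLM front end derives from the user's prompt.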

Expand full comment

Technically they're not even "models" in the conventional sense of the term, but that's already a much-abused term. Horse gone, barn door, etc. At some point toasters will be considered models of bread, I suppose.

Expand full comment

1) I love MAMLM, which I pronounce "mammal M". (M for Mechanical?)

2) I'm working on a Utopian Hard Sci-Fi trilogy which starts in 2030, when 90% of all white-collar jobs have been eliminated by AIs, but in response COOP-based firms have sprung up which don't have executives, middle managers, HR, or legal departments, all of which are handled by AIs. (The protagonist's COOP has named her legal and business AI Learned Digit, LD for short. LD has a sense of humor.)

3) Spreadsheets are a good example of your thesis. Pre-spreadsheet, a boss would ask a (reasonable) question and a team would labor for a few days, weeks, or months and come back with an answer. The boss would have other questions but not want to spend the effort. With spreadsheets, they'd come back in a few days. He'd ask variations on the original and get an answer in the meeting or within hours. So he'd ask a lot of questions.
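A minimal sketch of that what-if dynamic, with an invented toy profit model and made-up numbers purely for illustration:

```python
# Once the model is parameterized, each variation of the boss's question is
# one more loop iteration rather than another week of work. All numbers are
# illustrative.
def annual_profit(units, price, unit_cost, fixed_cost):
    return units * (price - unit_cost) - fixed_cost

scenarios = {
    "base case":        dict(units=10_000, price=25.0, unit_cost=17.0, fixed_cost=50_000),
    "price +10%":       dict(units=10_000, price=27.5, unit_cost=17.0, fixed_cost=50_000),
    "volume -20%":      dict(units=8_000,  price=25.0, unit_cost=17.0, fixed_cost=50_000),
    "cheaper supplier": dict(units=10_000, price=25.0, unit_cost=15.5, fixed_cost=50_000),
}

for name, params in scenarios.items():
    print(f"{name:16s} -> profit: {annual_profit(**params):>10,.0f}")
```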

Expand full comment

"starts in 2030 when 90% of all white collar jobs have been eliminated by AIs".

IMO, you are committing the "lump of labor" fallacy.

Farming = lump of labor. Why? Because crop growing and animal husbandry are conceptually rather simple. Automation with machinery does reduce the human labor required, and so economics drives farms to be larger, and so we see the huge decline in farmworkers needed.

For anything else that requires some intellectual input, labor demand could be infinite. Why? Because in any creative industry, output is time- and labor-constrained. Ideas have to be filtered quickly given time constraints of output. Hiring more labor to increase idea production is costly. Automation of intellectual I/O just expands the intellectual output and does not reduce employment.

An example from history: when clothes were expensive, woolen mills automated away jobs in the basic commodity of cloth production. But then intellectual inputs increased with fashion, which exploited the reduced production costs. Today we have "fast fashion". There is an infinite space of clothing designs. AIs will just expand the possibilities of the design space. I don't see that reducing the number of designers, just changing their skillset.

Some jobs will disappear. The movie-industry strike was a pushback on the employment of extras in particular. Computers have replaced the era of "thousands of extras" in Cecil B. DeMille historical movies; Peter Jackson didn't need costumed extras to create the Orc armies in his LoTR movies. Computers build virtual sets, reducing the labor needed to construct them. Screenwriters fear they will be out of work as AIs write scripts. So set builders will be replaced by those using AIs to help construct those virtual sets. Because of the intellectual input, rather than reducing their numbers, the designers will be able to experiment much more, with AIs automating much of the boring, labor-intensive work. Screenwriters will need to use their creativity to use AIs to try out more script ideas, just as we all edit our text output more aggressively now, something we couldn't do with typewriters.

OTOH, if movie and radio production costs fall with AI, expect an ever larger production of output. There is already far more content produced today than mass attention can consume. Consumption becomes ever more niche, as novelists' incomes demonstrate. [We can consume clothes quickly, but we cannot create more individual attention, as reading/listening/viewing must be consumed in real time. I used to play VHS recordings of news at 2x speed back in the day, but not movies or other shows of interest.] Using AIs blindly to rejig eBooks is now a thing on AMZN, which isn't producing good content, just polluting existing content. AIs will have to become as good writers as humans before humans are displaced.

What I would hope for is that AIs improve production - increasing output and reducing costs [and resource use, and pollution] at various scales. This would help raise global living standards. One use of specialty AIs would be to make 3D printing far easier - from design to output artifact. It will need guidance, especially for design, but it should make new designs faster and reduce print failures. The ultimate would eventually be the Star Trek Replicator.

Bottom line, I don't see mass unemployment for white-collar workers. I see work becoming more creative, with AIs the next step up in reducing routine and labor-intensive work while expanding the possibilities for creating new ideas and, importantly, democratizing new ideas. In the sciences, AIs will enable more cross- and interdisciplinary work. Wouldn't it be great if they could mend the "Two Cultures" divide so that each side could use the other in its work? Because creation is near infinite, I see a very beneficial future for good AIs if we can make them. If not, the current crop of AI technology will reach a plateau of usefulness, where the labor needed to make it economically useful will balance out any labor-saving benefit it has.

Expand full comment

Consider the trend of building curated data. As an example of what might be a common use, take Brad's sub-Turing BradBot, trained on Brad's book "Slouching..." and possibly other works. But Brad is much more than his works. As James Burke reminded us, everything is "Connected". So the BradBot needs to be trained on every reference work Brad's books cite, and in turn, perhaps, on the works that cite those references too. But as we know from this blog, Brad has wider interests and also includes the posts and writings of others, as with the current Dune critique. So those must be added in. And on and on. If we are to get near-Turing-complete versions of Brad, with the fidelity of the dead wife in the movie "Marjorie Prime", an awful lot of knowledge about Brad needs to be collected. This applies to organizations building curated organizational knowledge.

But here is the rub: bad actors will be purposely trying to infiltrate and render that data untrustworthy, despite the best efforts of curators. Neal Stephenson's "Anathem" introduced the idea of trust in data, but it is going to be hard to accomplish. Scientific papers were once considered very trustworthy, but now we know that they are polluted with fake data and even fake papers. How are we going to handle this? It is analogous to spam filters, anti-virus software, etc., but at a greater level of difficulty. Which means more work and more software tools to try to weed out corrupt data and prevent malware from corrupting data, not to mention internal human actions.
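One partial, concrete tool for the tampering side of that trust problem is to keep cryptographic digests of the curated corpus, so later corruption is at least detectable (it does nothing about data poisoned before curation). A minimal sketch, with illustrative file paths:

```python
# Keep a manifest of SHA-256 digests for a curated corpus so later tampering
# is detectable. Paths and file layout are illustrative assumptions.
import hashlib, json
from pathlib import Path

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(corpus_dir: str, manifest_file: str = "manifest.json") -> None:
    manifest = {str(p): digest(p) for p in sorted(Path(corpus_dir).rglob("*.txt"))}
    Path(manifest_file).write_text(json.dumps(manifest, indent=2))

def verify(manifest_file: str = "manifest.json") -> list[str]:
    """Return the paths that are missing or whose contents have changed."""
    manifest = json.loads(Path(manifest_file).read_text())
    return [p for p, h in manifest.items()
            if not Path(p).exists() or digest(Path(p)) != h]

# build_manifest("curated_corpus")       # run once, when the corpus is curated
# print(verify() or "corpus unchanged")  # run later, before training/answering
```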

A world of good, sub-Turing instances of people would be wonderful. Porn is already entering this arena with "AI girlfriends". These instances of people will be very useful...but only if they are not corrupted. A public "J Bradford DeLong Prime" would be very nice as long as it didn't stop to insert an advert every few minutes, or exhibit Tourette's syndrome due to malware. I think P. K. Dick already explored this sort of world.

Expand full comment

Two very different views:

1. https://nautil.us/how-quickly-do-large-language-models-learn-unexpected-skills-528047/

The idea is that increasing size does lead to new skills and supposedly more accuracy on certain skills. I am not convinced. Why does it take a huge LLM to be able to do simple addition when parsing and extracting the elements and feeding them to a simple function works perfectly (see the sketch at the end of this comment)? Surely it is not beyond the wit of the companies to incorporate tools that RELIABLY WORK?

But overall the message is very optimistic.

2. https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification

Cory Doctorow has a piece that argues that the whole LLM/ChatGPT model is not likely to keep improving as the training data is getting polluted.

This definitely argues for curated data sets. I see a potential demand for expert curators - jobs, jobs, jobs in every organization, and even for individual users, to build quality information for interrogation.

What I find all too plausible, given our experience of bad actors in the digital world, is that they will effectively derail much of the value of this form of AI. We already have deepfakes (text, audio, and even video) used for various purposes. Prompt injection (from websites and malicious code) is going to result in poor-quality/biased/disinformation being returned for queries. If you thought spellcheck/grammar check/autocorrect were bad, you haven't seen anything yet. This whole edifice could make online information access a minefield. Technology is always a two-edged sword, and the best we can hope for is a small net benefit, at the cost of complexity and controls to tamp down its use as a weapon against the user.
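Returning to the arithmetic point under (1): a minimal sketch of the parse-and-compute approach, with an invented extraction heuristic purely for illustration:

```python
# Instead of asking a language model to "do" arithmetic, extract the expression
# and hand it to a deterministic evaluator that reliably works. Illustrative only.
import ast, operator, re

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Safely evaluate +, -, *, / over numeric literals."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    raise ValueError("unsupported expression")

def answer_arithmetic(question: str):
    # Crude extraction heuristic: grab the span that starts and ends with a digit.
    match = re.search(r"\d[\d\.\s\+\-\*/()]*\d|\d", question)
    return evaluate(ast.parse(match.group().strip(), mode="eval")) if match else None

print(answer_arithmetic("What is 1234 + 5678?"))   # 6912
```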

Expand full comment

"1. Very large-data very high-dimension regression and classification analysis" is going to be fun in economic history too! e.g. Julius Koschnick has a nice working paper on pre-industrial British academia which uses transformer models to classify the topic and measure the innovativeness of research.

Expand full comment
author

Yes!

Expand full comment

I actually have found the most recent updates to Apple's iOS auto-complete functionality intensely annoying.

I type regularly in two different languages. It used to be that if I flipped to the Spanish keyboard mode, auto-complete would pretty reliably complete Spanish words. But in the last few months, it's suddenly, and fairly consistently, trying to "correct" my spelling into English, even for the most basic words. So "Es" at the start of a sentence will get turned into "Ed", or I'll be trying to write "Es probable que", and it will want to auto-complete that second word as "probably". These examples are on my mind because they _just_ happened, within the last hour or so. My experience of their newly AI-influenced product is that it has set back the usefulness of auto-complete by about 4-5 years, because they think they are now too smart to have to pay any attention to the user's declaration of what language they're trying to write.

Expand full comment

AI is like MOOCs or self driving cars. It offers investors and managers a labor-free fantasy. Since investors and managers rarely have a clue as to how their business works, they won't notice if AI is doing a terrible job. The real AI threat is to workers who will be replaced by AI and customers who will have to put up with a lower quality product. The investors will get bailed out by the government and the managers will have golden parachutes, so the consequences of failure are minimal.

Expand full comment

Not just switchboard operators. Copy typists have disappeared. Personal secretaries became Executive Assistants, but the typing pool is no more. Similarly, Data Punch Operators, though this was a category created by ITC to replace lots of clerks.

But the obvious response to improved tech productivity, standard until the rise of neoliberalism, is shorter working hours.

Expand full comment

"But the obvious response to improved tech productivity, standard until the rise of neoliberalism. is shorter working hours."

I consider this [Keynes?] idea a fallacy. Do farmers work shorter hours with all the automation they have? No. Farms just became larger and farm employment dwindled to about 2% today. Former farmworkers found other work to do. Automation of work that does not proliferate will go the same way as farming and typing. Other work that can proliferate with tools, such as running what-if scenarios on spreadsheets, will just increase, as the time will be filled by doing more "work". AI tools will no doubt require lots of support staff as well as users, as work becomes adapted to using these tools where appropriate. An AI might write lots of plots and draft scripts, but that will require seasoned writers to select the good plots and rework the scripts to be usable, and that may take as much work, though with different skills than before the tools existed. It may require curation of scripts from prior work to ensure they are not reused without modification, and someone to ensure that draft scripts do not cause copyright issues, and so forth.

I really doubt anyone is going to get shorter work weeks for the same pay. Far better to improve working conditions than to reduce hours.

Expand full comment

Farm workers, like all workers, have much shorter hours than they did in 1900. It's only in the last 40 years or so that working hours have stopped the steady decline that began around 1850.

Expand full comment

That may be so, but it doesn't change the fact that automation primarily changed the numbers in farm employment. Had modern farm machinery been cheap enough (or shared to reduce per-farm costs), farmers on small holdings would have needed very little working time. That did not happen. Economies of scale proved the better route to profits, a driver that continues to this day. Typing pools are an example of where the work was decentralized. In the 1980s, I hired typists to convert my handwritten content to type. Computers and software have been cheap enough for 30+ years to drive that typing to me. With accurate speech-to-text, I have even given my thumbs a rest and used it to create text messages on occasion.

With the appearance of cheap computers, especially since 1980, the number of software programmers exploded. I became one of them. Ten years earlier I had failed miserably trying to program in Fortran on the university mainframe, writing code for the computer department to convert to punch cards and run (with the inevitable return of a fanfold of stack-trace errors).

Who didn't spend hours futzing with PowerPoint, to the point of it becoming a meme? Andreessen said "software is eating the world." It has also consumed attention and multiplied the work dedicated to producing and using it. But how much of this has been translated into measurable productivity gains that add to GDP? How much has improved products rather than just consuming time as busywork?

My guess is that AI will be of benefit, but it will prove far less productivity-enhancing than attention-consuming. [And with that realization, the value of AI companies will decline.]

Expand full comment

"6. And more…"

I suspect this may be the interesting area. What surprising uses will emerge from various AI techniques with bucketloads of computational brute force behind them? I am interested in how AI algorithms and deployments can be pushed to the "edge" rather than centralized. What will that mean for such devices, and how will that change things?

One observation I have noted over my lifetime, spanning mainframes to smartphones and smaller devices, is that information technologies don't just replace existing work, but hugely expand it - a sort of Jevons paradox. Spreadsheets were just becoming available when I did my MBA in England. What happened was that playing with spreadsheet models replaced carefully thought-through simple examples. Spreadsheet-ITIS. Recall how making acetate slides changed with PowerPoint? Aerospace companies used to just use wind tunnels - now aerodynamics are investigated with simulations testing many more cases and variations. And so it goes. The simple becomes more complex. Yes, the solutions can be better, but computers have created a huge demand to use them more to explore the solution space. Even in math, once the approach was to solve differential and integral equations with calculus. Now you can use a computer algorithm to brute-force the answer (and better still, it works with equations that are not amenable to calculus).
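A minimal sketch of that brute-force point: stepping a differential equation numerically (explicit Euler for dy/dt = -y) and comparing with the analytic answer; illustrative only:

```python
# Step a differential equation numerically instead of solving it analytically,
# then compare against the known solution y(t) = exp(-t).
import math

def euler(f, y0, t_end, steps):
    dt, y, t = t_end / steps, y0, 0.0
    for _ in range(steps):
        y += dt * f(t, y)                  # one explicit Euler step
        t += dt
    return y

approx = euler(lambda t, y: -y, y0=1.0, t_end=2.0, steps=10_000)
print(approx, math.exp(-2.0))              # ~0.1353 (numeric) vs 0.13534 (exact)
```

The same stepping idea keeps working when no closed-form solution exists, which is the point of the brute force.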

Bottom line: despite the concerns in the early 1980s that computers would create unemployment, the reverse happened. We are seeing the same concern today with the current crop of new AI algorithms. I suspect that, as then, we will generate more work, not less.

Expand full comment

Hexapodia - change in topic. Perhaps it is time for a Hexapodia on social media. Or on AI. Or on something.

Expand full comment

With regard to AI safety, I think you are more or less making the argument that "AI doesn't kill people, people kill people". In a very broad sense, this is correct. We do need to figure out how to regulate it, though, and we need to figure that out pretty quickly. I am pretty sure the constitution does not guarantee the right to bear LLMs.

Expand full comment

No. What nearly brought down the world economy in 2008 was the Fed* just not telling everybody [not that they should have needed telling!] that it would NOT allow inflation to fall (very much or for very long) below target or employment to fall below full, and going mano a mano with anyone who did not believe them.

*Yes, the ECB was worse, but without a US recession it would not have had the "occasion for sin."

Expand full comment