
As a longtime software architect, I think folks overestimate the effect of CoPilot. Tools for "code completion" have existed in IDEs for a long time, and CoPilot is really just the follow-on in that area.

That said, a lot of "software engineers" (i.e., programmers who aren't really engineers) have rarely used code completion and other IDE capabilities very well, so maybe that class of developers will improve (a good thing).

But most of the real issues with software development productivity have far more to do with bad engineering, changing requirements, mismatches with customer needs, etc., than with the ability to get better code completion. Programming is the "easy part" of being a software engineer.


As a clinical psychologist for 25 years and a software developer for the next 25, I agree that programming is the "easy part." Learning to listen to what the customer/future user isn't saying and leading them to articulate it is a "soft skill" that underlies successful software development.


I think you are underestimating the potential power of ML/AI to act as a significant multiplier on the way we do things, from the consumer to industry. One reason is that you are rather fixated on the latest shiny ML: LLMs and their boosters promising the moon. Since you linked to an excellent article by Dave Karpf, who was more cautious in his assessment of LLMs (rightly, I believe), you are perhaps seeing just the trees, not the forest. Admittedly, I am more of a techno-optimist, but I see more of the range of AI tools and how they might come together.

The early history of AI was much more about solving various logical and planning problems using symbolic logic. We progressed to hand-crafted, rule-based tools that solved complex problems reasonably well but were expensive and time-consuming to produce. Then we saw the birth of machine learning, which allowed any table of data to be quickly translated into various ML models, like decision trees. But all of that got largely derailed when ANNs, particularly large ones, got a huge boost from Geoff Hinton; pattern recognition on data, particularly images, improved dramatically, and we were off to the races. LLMs seem to have largely solved many pattern problems, including blowing Turing's "Imitation Game" test out of the water, even fooling smart people into believing that a computer running an LLM was sentient.

Some demos of the capabilities of LLMs were quite impressive, but not so impressive when investigated by academics. The public has now tested various iterations of LLMs, and it is clear that they have serious limitations. Hence Karpf et al. are probably right that LLM tools will deliver productivity gains more like those of spreadsheets and word processors, especially for knowledge workers. Interrogating the FRED database verbally and getting exactly the charts you want, with post-processing done on the data to combine series with equations and output the appropriate charts, is going to really help anyone who uses such databases, especially if the data needs to be acquired from different sources and combined in creative ways.
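
To make that concrete, here is a minimal sketch of the kind of FRED post-processing such a tool might automate from a verbal request. It assumes Python with pandas_datareader and matplotlib installed; GDP and GDPDEF are real FRED series codes, but the "combine series with an equation" step is just an illustrative deflation, not anything from the article.

```python
# Minimal sketch (not any commenter's actual tool) of the FRED workflow
# described above: fetch two series, combine them with an equation, and
# chart the result. Requires pandas_datareader and matplotlib.
import pandas_datareader.data as web
import matplotlib.pyplot as plt

start, end = "2000-01-01", "2023-12-31"
gdp = web.DataReader("GDP", "fred", start, end)          # nominal GDP, quarterly
deflator = web.DataReader("GDPDEF", "fred", start, end)  # GDP price deflator

combined = gdp.join(deflator, how="inner")
# Example "equation" combining the series: deflate nominal GDP to real terms.
combined["real_gdp"] = combined["GDP"] / combined["GDPDEF"] * 100

combined["real_gdp"].plot(title="Real GDP derived from GDP and GDPDEF")
plt.ylabel("Billions of chained dollars (approximate)")
plt.tight_layout()
plt.show()
```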

But when we go back to earlier AI tools, we get to see how combining them with LLMs might well produce huge productivity gains. All the tedious, artisanal care that so many chores now demand might no longer be needed, allowing greater democratization of those tasks.

Take as an example 3D printing, which not so long ago was the toy du jour for hobbyists and industry, promising to let everyone make stuff on demand using available models. Well, that hasn't quite worked out, as any forum about 3D printing indicates. It is temperamental just to print an available model, and those models need to be built in various ways. Complex organic-shaped models like humanoids need to be scanned and then worked on to create good model files. But imagine if the models could be built much more simply, and the printers were imbued with AI to overcome errors and correct the printing parameters. Now the 3D printer works almost as easily as an ordinary printer, and the model files, created verbally or tweaked from a vendor's, are produced or obtained quickly.

For example: take a picture of a broken part and ask the AI to locate the part as an image, file, or model and then print it. Or make a model of person X, attired in clothes from Y, color it appropriately, and then print the model to be Z cm tall. Or design a part to solve this problem and print it with the appropriate filament/resin/other. Such a step change in usability would make 3D printers as ubiquitous as microwave ovens, or at least as air fryers. Can't do it at home? Then local shops can produce the print more quickly and at much lower cost than a machine shop. Or perhaps a large print shop doing 2D and 3D printing can design and output an object in hours, or while you wait.

Additive printing of rocket engines, while much faster and cheaper than traditional methods, still needs considerable careful design work. What if the AI takes your basic specs and does the design work for you, modeling all the performance based on the materials and design, and iteratively produces a new engine within a week, allowing even a relative novice to have a rocket engine or other technological artifact on demand? Same with computers and electronic parts. No need to outsource your idea to a production shop in China; just have the computer do all the work and have it made locally, perhaps even on your desktop.

While perhaps not productivity-enhancing for the whole economy, just for medicine, what about AI designing replacement parts for sick or injured people? Life-enhancing, alongside the [potentially dangerous] applications of designer-organism technology. All these ideas, a fraction of what might be possible, use AI tools as a glue to combine technologies, creating a far greater range of artifacts, machines, living things, or works of art, and placing this power in the hands of the individual or the locally trained artisan.

Bruce Sterling once wrote an article about technology and the economy in 2050. He suggested that every new idea would have many others waiting in the wings if the popular one failed for some reason. There would no longer be excitement about the new because there would be an abundance of similar ideas to choose from. This is not unlike the ennui I feel with each new computer language announced these days. Yawn.

In a sense, these are tools that do seem like spreadsheets on steroids. But I see them as much more, invading the physical world, not just the virtual one. AIs could be the agents creating new ideas, rather than humans. Perhaps it would be a "Midas Plague," but I think the potential for uplifting humanity with "technology indistinguishable from magic" might be a huge game changer if done right. A person in that future would look back at our period with horror, thankful that the dreadful burdens and limitations of our time had been nearly banished. [Realistically, society would face different problems, and maybe the AI assistants and tools would not eliminate society's complexity, just replace it with new problems. But at least it would be a richer world.]


If the 5 percent or so in the most info-driven professions double their productivity, won't the biggest impact be the acceleration of the rate of technological change? My post-retirement profession has become secondary-school public education. Many times I've gone into a class with a botched or poor lesson plan, and after a five-minute session with Bard saved the lesson even though I came in knowing little of the subject matter. Exploited properly, the rate and quality of learning, both school-establishment-driven and autodidactic, could be greatly increased. That could have a big impact once these new superstudents enter the workforce.

And remember, we have biology/gene technology that is rapidly gaining steam, exploiting the incredible control being opened up by CRISPR. It's quite possible that the rate of "progress" might increase well beyond historic rates.

author

Touché...


I'm curious whether anyone is stopping to think how ridiculous they are going to look in three years when this latest "AI" bubble, if not outright grift, collapses. We're already seeing this with LLMs tested against experienced clinicians in medical diagnosis showing 85% error rates; a bit of a problem if you are the one in the examining room, eh?

The interesting thing to me is that many commenters on the socio-politics of technology who were (rightly!) skeptical of Bitcoin and its successors from the first days seem to have fallen hardest for this newest shiniest object.


It seems to me the percentage of medical diagnosis error rates is not as significant as the seriousness of the errors. If the software misdiagnoses a muscle strain as a muscle sprain, no harm is done. If the software misdiagnoses a heart attack as heartburn, that's a different story. I've always been impressed by the Isaac Asimov book "The Relativity of Wrong." AI software is far from perfect but it learns from its mistakes and gets better over time.


It's a very good comment about "relativity," but I'm far less convinced that AI software "learns from its mistakes." Feeding AI-generated output back into the training has made it worse, and putting in safety checks (good) has made it worse...

LLM AI isn't based on checking its output against reality, so it's hard for it to "learn" in the same sense as DeepMind's systems. Will that get better by meshing LLMs with other parts of AI? Maybe. By coupling them with checks against data (as some business software already does)? Maybe.

Yes, I do think there is a real role for AI in medicine (see much of what Dr. Eric Topol writes), but the notion that AI will just get better is a bit dubious, at least for LLMs, and at least if "better" means fewer hallucinations and meaningful mistakes.


LLMs are just one ML technology; they have caught the attention of many because they pass the Turing Test and are easily used for some tasks they do well. However, there are many other methods, most now based on deep-learning ANNs, and they are becoming very good at some forms of diagnosis, so good that the danger is that clinicians could become over-trusting of them because they make so few mistakes.

IBM's "Watson" based on symbolic logic did not perform well at cancer diagnosis and treatment when tested by the nation's top cancer spacialists at Sloan Kettering, but apparently was far more accepted by regional physicians with the expertise level of the the clinicians at SK.

Current ML is arguably already better than radiologists at interpreting imaging, and if used without some QC it would quickly solve the radiology-interpretation bottleneck. In practice, these systems will just flag possible cancers for the clinician to look at more carefully, eliminating the tedium of reviewing images of normal, non-cancerous cells or tissues.
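
As a toy illustration of that flag-for-review workflow (not any real diagnostic system), here is a sketch in Python; the Study type, the malignancy_score field, and the threshold are all invented assumptions for the example.

```python
# Toy sketch of the triage workflow described above: an upstream model has
# already scored each imaging study, and only studies above a flag threshold
# are queued for the radiologist. Everything here (Study, malignancy_score,
# the 0.2 threshold) is an invented placeholder, not a real system.
from dataclasses import dataclass

@dataclass
class Study:
    patient_id: str
    malignancy_score: float  # assumed output of an upstream ML model, in [0, 1]

def triage(studies, flag_threshold=0.2):
    """Split studies into those flagged for clinician review and routine negatives."""
    flagged = [s for s in studies if s.malignancy_score >= flag_threshold]
    routine = [s for s in studies if s.malignancy_score < flag_threshold]
    return flagged, routine

if __name__ == "__main__":
    batch = [Study("A-101", 0.03), Study("A-102", 0.47), Study("A-103", 0.11)]
    flagged, routine = triage(batch)
    print(f"{len(flagged)} flagged for review, {len(routine)} routine")
```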

author

Yes: High-dimensional big-data flexible regression-and-classification analysis is **very** important, and LLMs may well not be the best (or even a good) use case for the computing power we now have available...
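
For a sense of what "flexible regression-and-classification" can look like in practice, here is a minimal sketch using scikit-learn's gradient-boosted trees on a synthetic tabular dataset; the dataset, model choice, and numbers are all illustrative assumptions, not anything from the post.

```python
# Minimal sketch of flexible classification on tabular data, the kind of
# workload contrasted with LLMs above. Synthetic data and default model
# settings are used purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5_000, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```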


That goes a little bit against the principles of TQM and the drive for measurable levels of quality and perfection, I would say.


I write optimistic hard science fiction. This article could be a "bible" for worldbuilding a utopian universe (or a dystopian one).

Well done.


Maybe a lesson from history is that the next transformative technological change is not going to be the one we are now predicting (AI), but one less consonant with the interests and world-view of those who think they are now creating the future. The atmosphere of awe and fascinated fear generated around AI (which you relate to California spiritualism) might be part of the need by those building it to convince others, and themselves, that their vision of the future is all-encompassing. But really, the ideology behind AI is fundamentally the same vision of "more convenience" that has been in place since the 1950s. Great article!


I've been fantasizing about winning a million-dollar grant to travel the earth taking 3D scans of cuneiform tablets.

They would serve to populate a magnificent database for natural-language queries (and other kinds of massive multivariate analysis). I think this is a perfect example of the possibilities of 2 plus four, above.


Postscript: there are no million-dollar grants to travel the earth 3D-scanning cuneiform tablets.
