8 Comments

As a software engineer I may be a little biased, but I think the idea that software costs will fall to nothing is beyond laughable. Most of software engineering is figuring out what you want, whether it's possible, and whether the thing you made actually does what you want. The making of the thing is already pretty easy. So 10xing the making part, which I can imagine happening, only speeds up a small fraction of the total effort. And it's just not the case that there are a bunch of people who can do the requirements, feasibility, and fit-for-purpose assessment parts and are only missing the ability to code. (Or rather, there actually are some of those people, but they already have high-paid jobs, often as engineering or product managers, because those skills are the ones in demand.)

The ability to code is not sufficient. It’s like saying Stable Diffusion will make architects obsolete. It’s a productivity tool, not a replacement.


Yes. No doubt I lack imagination, but I would like it spelled out more clearly how using an LLM is supposed to make significant improvements to software productivity. The examples I have seen so far are basically "what if you had a faster way to ask a question on Stack Overflow?" That's useful at the margin, but it might help me once every year or two.

I want to stress your point that most of the job is *figuring out what to do*; actually doing it is comparatively easy. Is the LLM supposed to be better at persuading the client to reveal what they want the software to do? Because that would be really helpful!

I am reminded of the enthusiasm for proofs of software correctness in my youth. Verification amounts to a mapping between a formal software specification and an implementation. But a formal specification *is* an implementation! That's why the verification is possible in the first place. It did help at the margin by making assumptions manifest - and that's exactly the trace of this idea we've retained in current development systems.
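To make that concrete, here is a toy sketch in Python (my own illustration; the names spec_sorted and my_sort are invented for the example). The "formal specification" of sorting is itself executable code, and the assert is exactly the kind of manifested assumption that survives in today's practice as types, contracts, and runtime checks.

```python
from collections import Counter

def spec_sorted(xs: list[int], ys: list[int]) -> bool:
    """Specification of sorting: ys is a permutation of xs,
    arranged in non-decreasing order. Note that the spec is
    itself a small program, which is the point above."""
    same_multiset = Counter(xs) == Counter(ys)
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    return same_multiset and in_order

def my_sort(xs: list[int]) -> list[int]:
    """The implementation being verified."""
    ys = sorted(xs)
    # A manifested assumption: the postcondition, checked at runtime.
    assert spec_sorted(xs, ys), "postcondition violated"
    return ys
```

Checking my_sort against spec_sorted is possible only because both are precise, executable descriptions of the same behavior.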


Maybe I know the wrong kind of lawyers, but trial lawyers are about producing a product that appeals to a certain set of consumers: juries. They have to satisfy judges and cow the opposition, but their real job is to sell the case to a jury. There is an entire literature on juries, and the more I've read of it, the less I think there is a place for AI in assessing them. The current state of practice is to set up focus groups, mock juries, and mock trials, and then to figure out what worked and what didn't. These are always full of surprises, so one four-day job in Little Rock, for example, turns into a series of three-day gigs testing out alternatives. There's good money in running those mock trials, which have the added advantage that payment is not contingent.

The rest of their practice seems to be responding to documents with appropriate counter-documents, ideally in minimal time, and AI might help a bit with the mechanics, but the challenge is aligning the strategy and the tactics. When someone slams the judge with a plea or a response at 4:45 PM on Friday, the reply has to address both the surface attack and the structural one, and it has to go out by 4:55 PM. There's also the matter of telling people you will be late to their dinner party, something you really shouldn't outsource to an AI.


On Singapore - I've often wondered if part of their rapid growth is their distrust of fancy banking. AFAIK, the Singaporean banking system is mostly bog-standard intermediation of the 19th-century variety, mixed with some asset management of flight capital. I never could understand the American system, which appears to rely mostly on legal arbitrage services whose only justification seems to be "liquidity."


Color me skeptical. I'm even skeptical that anyone can measure the productivity of programmers, though lots of people think they can.

One canard is to count lines of code produced. You will get what you pay for when you do that, which is volumes of code across many files, much of it repetitive and poorly thought out. The system might be very difficult to understand as a whole.

So that's my first problem with this.

Second, there's the historical observation. We keep finding tools that make programmers more productive, and we keep investing that gain in more complex software. Software does not do everything we want it to. It never does. So the addition of function-level autocomplete will be welcome and will give a real boost. It will not make programmers obsolete or crash the cost of programming.

If you've only dabbled in programming - written programs that are 100 lines or so - you might well think it's easy, and that ChatGPT could do it and make you much happier. That might be true.

But the difficulty of programming is not linear in the number of lines. It's a much higher-order polynomial, or maybe an exponential, because every line you write can interact with every other line, and one must 1) understand the requirements, as X. says, 2) develop a strategy to deliver those requirements, and 3) constantly revise things so that you and your teammates can keep working on it. Managing the complexity of a big task is most of that big task. It is not easy. When one reads ChatGPT's code, one thinks "ChatGPT doesn't understand this problem, it's just using a magic incantation". This is, by the way, a way we sometimes describe code written by humans. Sometimes I would describe stuff *I've* written that way, and such code is always a liability for the codebase.

Code is a living thing; it is always changing. To change it effectively, one must understand it. Can the robots do that? I don't think anybody is even trying.

author

As I said, I am of the view that the cost of programming has already crashed—programmers are producing vastly more complicated and reliable systems today than they could ever have thought about producing half a century ago...


My emotionality was not really aimed at you. I apologize if it seemed that way. I recall hearing Ed Feigenbaum, at the time Chairman of the Stanford CS department, claim that programmers would all be put out of work by Expert Systems by Y2000, or some such puffery.

Needless to say, that didn't happen. But we are still getting the AI hype. Mind you, ChatGPT is a lot better than any Expert System of the form envisioned in 1980...

author

As somebody or other once said: ChatGPT is an internet simulator. If you are programming with everything already in your head, it's not much help. If you need to look things up, it can be a great help...
