Discussion about this post

John Quiggin:

As with the hypothetical Chinese room, the math involved is conceptually simple in outline. Most of what was classed as machine learning/neural nets until recently was just classification by discriminant analysis (worked out by Fisher nearly 100 years ago), but with scads of data and, as you say, flexible functional forms. LLMs extend that by making the right-hand side of the model (the predicted text) flexible as well.
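For readers unfamiliar with the baseline the comment invokes: Fisher's linear discriminant reduces to solving one linear system for a projection direction and thresholding at the midpoint of the class means. A minimal sketch, using synthetic two-class data with a shared covariance (the names and parameters here are illustrative, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes with equal spherical covariance (LDA's core assumption).
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(100, 2))

# Fisher's direction: w solves Sw @ w = (mu1 - mu0),
# where Sw is the pooled within-class scatter.
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the midpoint of the means.
threshold = w @ (mu0 + mu1) / 2.0
pred0 = X0 @ w > threshold   # False = classified as class 0
pred1 = X1 @ w > threshold   # True  = classified as class 1
accuracy = (np.sum(~pred0) + np.sum(pred1)) / 200.0
print(f"training accuracy: {accuracy:.2f}")
```

The "scads of data and flexible functional forms" point is that modern nets keep this discriminative framing but replace the fixed linear projection with a learned nonlinear one.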

Leaving aside the high-level questions about whether this is intelligence, the crucial economic issue is whether there is anything here that can't be replicated easily once you know it can be done at all. The proliferation of LLMs, most notably cheap LLMs, suggests not, especially since we know that the data set has been exhausted.

What that means is that the tens (maybe hundreds) of billions invested in AI so far are really public-good research, with no real private payoff. That point is independent of whether the public benefit is huge, modest, or negative.

Robert N Athay:

As I remember from a course I took long ago, early AI researchers fell into two distinct groups (with some overlap): those who wanted to develop falsifiable theories of intelligence and those who wanted to program computers to emulate human reasoning. Both approaches hit some hard limits, and a lot of the early work seems to have been forgotten by the 1980s. With the emergence of neural nets came the *hope* that all you needed was a big enough computer and enough data. I think that's what we're seeing now with MAMLMs.

Still, it isn't clear what the commercial value of these systems will be. So for companies like Google, Apple, Microsoft, etc., it makes sense to invest *enough* Internal Research & Development (IR&D) to understand what MAMLMs can & *can't* do and what the *realistic* commercial potential is.

