2 Comments

Re: MAMLMs

Just because OpenAI has led the pack of ML companies in going for "more data" to train their models does not mean that this is the best way forward. The Allen Institute for AI has released Molmo, which is trained on curated data rather than on content hoovered up from the internet. They claim their far smaller model is as good as the largest LLMs. If they are correct, this may be as game-changing as Watt's steam engine was compared to Newcomen's atmospheric engine.

OpenAI's models may be so big and expensive to run that serving them costs OpenAI more than the revenue they generate. Continuing with that business model is like trying to sell at a loss and make it up on volume. Better to make the models small enough to be hosted on the user's device, restoring a marginal cost of zero when scaling up. Of course, the model must be useful [ and profitable ], but so far no killer app has emerged for generative AI. We are likely at the peak of the "inflated expectations" stage of the hype cycle, about to enter the "trough of disillusionment" before reaching the "plateau of productivity".
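The "sell at a loss, make it up on volume" point can be sketched with a toy unit-economics model. All figures below are illustrative assumptions, not actual OpenAI revenue or inference costs:

```python
# Toy unit economics: hosted inference vs. on-device inference.
# The per-query revenue and cost figures are hypothetical assumptions.

def hosted_margin(queries: int, revenue_per_query: float, cost_per_query: float) -> float:
    """Profit when the provider pays inference cost for every query served."""
    return round(queries * (revenue_per_query - cost_per_query), 2)

def on_device_margin(queries: int, revenue_per_query: float) -> float:
    """Profit when inference runs on the user's device: marginal cost ~ 0."""
    return round(queries * revenue_per_query, 2)

# If each query earns $0.01 but costs $0.015 to serve, losses grow with volume:
print(hosted_margin(1_000_000, 0.01, 0.015))   # -5000.0
print(hosted_margin(10_000_000, 0.01, 0.015))  # -50000.0

# A model small enough to run on-device scales the same revenue at zero marginal cost:
print(on_device_margin(1_000_000, 0.01))       # 10000.0
```

Under these assumed numbers, greater volume only deepens the loss in the hosted case, while the on-device case grows profit linearly, which is the core of the comment's argument.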


Graeber was indeed a grifter.
