Amazon, Facebook, Microsoft, even Google, and many others are paying hundreds of billions to NVIDIA, and hoping to spend tens of billions to make this a short-term dependency. This is the story of an...
"It might be glorious! If, that is, there actually do turn out to be truly, massively valuable use cases for language-interfaces and classification-schemes."
And so far, there does not seem to any "killer app" implemented with AI. No one is making any profits, just huge losses. The paying customers are limited. If it turns out that the profitable way forward is making smaller systems based on curated data, then the huge investments in hardware and scraping data was a fruitless dead end. IOW, an AI platform is not like a potentially dominant OS, but rather more like custom software application that can be built with a suitable language. NO moats other than basic brand marketing, like O'Reilly.
Those expensive NVIDIA chips might become quite inexpensive once a bust sets in, like Aeron chairs after the dotcom implosion.
I agree with that. The Allen Institute has shown that can work. I would go further and suggest that nattowing the scope of the trained AI instances will require less capital and hardware, yet offer more value. A target market might be academic textbooks, both off-the-shelf with accompanying AI, and tools to build AIs from a decent number of relevant texts, The AI is now sold as a software package with a pretrained language LLM over which the texts are integrated and used to further train the LLM. Academic publishers already are switching to online material, building questions and exams for students. Adding AI to both pose and answer questions with the student seems like a natural fit. Google already as digitized a vast library of books. What better than to monetize this library by adding an AI that can converse on sets of related books acting like a personal tutor.
I could see businesses wanting to do the same with their internal documentation peobably using the RAG approach in the stack.
It was a short while ago that Google offered an online tool to build an LLM on top of ~5 PDF/TXT files. That seems to have disappeared,, but a more robust version that could handle, e.g. a dozen textbooks, would be worth a sub-$100 price tag as an application that is as versatile as a wordpressor or spreadsheet to build local aggregated content that one can converse with, and importantly doesn't "hallucinate" so that it is reliable. [I have a lot of academic journal papers that would make great content to build various domain specific AIs to save me digging through the papers for the information I want, and less enshittified than a Google search.
So far, there have been no AI applications that I have heard of that are based on any of that new hardware.
Is it really possible to program enough information so that a computer can learn on its own, the way we learn? That is not to say that a computer is not a sentient being, they could act against us if they are not treated well, in ways we could never figure out.
The closest I am familiar with are restaurant seeking programs. Those have not been the greatest, but the last version I have had experience with was getting better.
I think that phone call systems are getting better with voice recognition.
I would not invest anything in those companies unless they could show an application using their chips.
I find it quite interesting that Apple's TV ads for its AI functionality show people using it to tell trivial lies. It's certain that Apple's marketers have racked their brains to name the reason this is a must-have. That's all they can come up with?
Seems like railroads to me -- with all the overbuild, government subsidy, and corruption potential. Let's hope they don't bring down the whole global economy in a dot.com bust.
New communications technologies (in which I include railroads) seem to have this tendency to boom-bust. Probably because they have a tendency toward natural monopoly/oligopoly.
1) NVidia is a hardware company. Any analogy would be to Intel. The effective, persistent monopolies, e.g. IBM & Microsoft, have been about software not hardware. If NVidia held the IP for the only reasonable AI hardware API, then they would have a persistent monopoly. They don't.
2) There is currently only an extremely limited use case for AI. It isn't very accurate or reliable, so the only applications that make sense are the ones that could be delegated to a so so, novice assistant. Apple is not the only company selling vaporware in this market.
They're still just a hardware company. There's no lock in. There is nothing to keep one from moving to an alternative piece of hardware. Only a tiny piece of software needs to be rewritten, if that.
"It might be glorious! If, that is, there actually do turn out to be truly, massively valuable use cases for language-interfaces and classification-schemes."
And so far, there does not seem to be any "killer app" implemented with AI. No one is making any profits, just huge losses, and the paying customers are limited. If it turns out that the profitable way forward is making smaller systems based on curated data, then the huge investments in hardware and scraped data were a fruitless dead end. In other words, an AI platform is not like a potentially dominant OS, but more like a custom software application that can be built with any suitable language. No moats other than basic brand marketing, like O'Reilly.
Those expensive NVIDIA chips might become quite inexpensive once a bust sets in, like Aeron chairs after the dotcom implosion.
I would bet on smaller systems using curated data as front-ends to databases known to be reliable... -B.
I agree with that. The Allen Institute has shown that this can work. I would go further and suggest that narrowing the scope of the trained AI instances will require less capital and hardware, yet offer more value. A target market might be academic textbooks: both off-the-shelf texts with an accompanying AI, and tools to build AIs from a set of relevant texts. The AI is then sold as a software package with a pretrained LLM over which the texts are integrated and used to further train the model. Academic publishers are already switching to online material, building questions and exams for students; adding AI to both pose and answer questions with the student seems like a natural fit. Google has already digitized a vast library of books. What better way to monetize this library than adding an AI that can converse on sets of related books, acting like a personal tutor?
I could see businesses wanting to do the same with their internal documentation, probably using the RAG approach in the stack.
A short while ago, Google offered an online tool to build an LLM-backed assistant on top of ~5 PDF/TXT files. That seems to have disappeared, but a more robust version that could handle, e.g., a dozen textbooks would be worth a sub-$100 price tag: an application as versatile as a word processor or spreadsheet for building local aggregated content that one can converse with, and, importantly, one that doesn't "hallucinate", so that it is reliable. [I have a lot of academic journal papers that would make great content for building various domain-specific AIs, saving me from digging through the papers for the information I want, and less enshittified than a Google search.]
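For readers unfamiliar with the RAG approach mentioned above, here is a minimal sketch of the retrieval step: chunk local documents, score chunks against a question, and prepend the best match to the prompt so the model answers from the supplied texts rather than from memory. This is illustrative only; all function names are made up for the example, and real stacks use embeddings and a vector store rather than word overlap.

```python
# Minimal sketch of the "R" in RAG (retrieval-augmented generation).
# Toy scoring by bag-of-words overlap; production systems use embeddings.
from collections import Counter

def _tokens(text):
    """Lowercase words with trailing punctuation stripped."""
    return [w.lower().strip(".,?!") for w in text.split()]

def chunk(text, size=30):
    """Split a document into overlapping word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def score(question, passage):
    """Count question words that also appear in the passage."""
    q = Counter(_tokens(question))
    p = set(_tokens(passage))
    return sum(n for w, n in q.items() if w in p)

def retrieve(question, docs, k=1):
    """Return the k best-matching chunks across all documents."""
    chunks = [c for d in docs for c in chunk(d)]
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

def build_prompt(question, docs):
    """Ground the model's answer in retrieved text to curb hallucination."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"
```

The point of the design: because the prompt instructs the model to answer only from retrieved local text, the system's reliability depends on the documents you feed it, not on what was scraped into the base model.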
I might add that Nvidia's moat is as much CUDA as its hardware. One lesson from The Mythical Man-Month: software is harder than hardware.
So far, there have been no AI applications that I have heard of that are based on any of that new hardware.
Is it really possible to program enough information so that a computer can learn on its own, the way we learn? That is not to say a computer could not become a sentient being; if not treated well, they could act against us in ways we could never figure out.
The closest I am familiar with are restaurant-finding programs. Those have not been the greatest, but the last version I had experience with was getting better.
I think that phone call systems are getting better with voice recognition.
I would not invest anything in those companies unless they could show an application using their chips.
I find it quite interesting that Apple's TV ads for its AI functionality show people using it to tell trivial lies. It's certain that Apple's marketers have racked their brains to name the reason this is a must-have. That's all they can come up with?
Seems like railroads to me -- with all the overbuild, government subsidy, and corruption potential. Let's hope they don't bring down the whole global economy in a dot-com-style bust.
New communications technologies (in which I include railroads) seem to have this tendency to boom-bust. Probably because they have a tendency toward natural monopoly/oligopoly.
Increasing returns to scale.
I think, in your last paragraph, you meant to write "Intel and Samsung succeed in closing the gap vis-à-vis TSMC?"
Two things:
1) NVidia is a hardware company. Any analogy would be to Intel. The effective, persistent monopolies, e.g. IBM & Microsoft, have been about software not hardware. If NVidia held the IP for the only reasonable AI hardware API, then they would have a persistent monopoly. They don't.
2) There is currently only an extremely limited use case for AI. It isn't very accurate or reliable, so the only applications that make sense are ones that could be delegated to a so-so novice assistant. Apple is not the only company selling vaporware in this market.
NVIDIA has no fabs...
They're still just a hardware company. There's no lock in. There is nothing to keep one from moving to an alternative piece of hardware. Only a tiny piece of software needs to be rewritten, if that.
Hmmm??? Valuable use? Why does my pea brain think: military uses?