Well, maybe not: the first two conversations with the 2024-03-29 version at <https://chat.openai.com/g/g-L3OgqXJbL-sub-turing-bradbot-2024-03-29-public>...
"The… amount of tacit knowledge that's involved in successfully training a high-quality large model is still quite high. So you can read the papers, you can look at the open source, but getting these things to train and converg… over these large clusters and managing all of that—there's still quite a lot of that knowledge… not published, not written down…. The individual items are probably small, but they really add up…"
Ahem. Isn't this exactly the problem that ultimately sank Expert Systems? Too much handcrafted knowledge, missing the tacit knowledge that was needed. Training language models seems to be on a similar level to extracting expertise and encoding it in rules. What was the saying about "not learning from history..."?
I thought the Sub-Turing Bradbot was Larry's WaPo columns. Or were those the Sub-Turing (yes, Sun-Turing does sound like an excellent evil company name, actually, thank you fingers) LarryBit products?
(Yes, it would be helpful for the AI to reflect tense properly - the only way to do that would be to take queries phrased in the future tense and limit search sweeps to future dates. But you'd have to reliably understand the direction of the time arrow as indicated by phrasing. Failing to do so is probably the problem.)
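A minimal sketch of that idea, with everything (function names, the marker lists) being my own illustration rather than anything a real system does: guess the query's time direction from crude phrasing cues, then restrict dated search results to dates on that side of "today". Reliably reading the time arrow is of course the hard part this punts on.

```python
from datetime import date

# Hypothetical, crude tense cues. A real system would need genuine
# temporal parsing; substring matching like this will misfire.
FUTURE_MARKERS = ("will", "going to", "shall", "next year")
PAST_MARKERS = ("did", "was", "were", "had", "last year")

def tense_direction(query: str) -> str:
    """Guess whether the query points forward or backward in time."""
    q = query.lower()
    if any(m in q for m in FUTURE_MARKERS):
        return "future"
    if any(m in q for m in PAST_MARKERS):
        return "past"
    return "unknown"

def filter_by_tense(query: str, docs: list[tuple[date, str]],
                    today: date) -> list[tuple[date, str]]:
    """Keep only documents whose dates match the query's time direction."""
    direction = tense_direction(query)
    if direction == "future":
        return [d for d in docs if d[0] > today]
    if direction == "past":
        return [d for d in docs if d[0] <= today]
    return docs  # no clear tense signal: don't filter
```

So "what will the Fed do" would sweep only documents dated after today, while "what did the Fed do" sweeps the archive.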
As for the rest of it: "I have no idea what I'm doing, why any of this works, but I am going to keep building something and hope something works profitably" is certainly a thing you can do. But the proliferation of AI apps suggests that the excess unused compute being poured into 1) bitcoin & 2) AI reflects an inability to understand the way forward, so people are just throwing things (AI, blockchain) at other things to try to locate a use case. (Outside of the use case of "suckering investor money".)
We need some genuine thinking, designs, or whatever here.
Here's an April 1 post: https://www.rfc-editor.org/rfc/rfc9564.html h/t Gergely Nagy.
However, every day is April 1 in this area. And much else.
Also this:
https://www.youtube.com/playlist?list=PLcajvRZA8E08mP8_JddQQDZ3T25pDmXyW