> Keep building bigger and more expensive models, but then thwack them to behave by confining them to domains—Tim Lee says coding, and mathing—where you can automate the generation of near-infinite amounts of questions with correct answers for reinforcement learning. That would be a tremendous boon for programmers and mathematical modelers. But expensive:
I don't understand this claim. I.e., what DeepMind did for the math Olympiads [ https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ ] used the purely linguistic skills of an LLM to formalize problems and then applied (comparatively very lean, pun not intended) specialized engines to work on them.
To my eyes that shows that coding and maths are areas where we can/should/will get advantages from AI by using LLMs to bridge between informal language and specialized tooling (which I think we can do with much smaller specialized models than what we already have) and then leveraging existing hardware and software tools to build non-LLM models for those domains; basically LLMs as parsers and things like AlphaZero-for-maths/-quantum chemistry/-etc as domain-specific compilers.
I'm not saying the intellectual Jeeves isn't a good idea or business model, but that's like using electrical power *only* to use a conveyor belt to move pieces from manual workstation to manual workstation.
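To make Rinesi's parser-plus-compiler division concrete, here is a minimal sketch. The `formalize()` stub is an assumption standing in for a small LLM that maps informal language to a formal expression (it is not DeepMind's actual system); SymPy plays the role of the non-LLM domain engine:

```python
import sympy as sp

def formalize(informal_problem: str) -> str:
    # Stand-in for the small-LLM "parser": informal language in, formal expression out.
    # A real system would use a fine-tuned model; this toy lookup is an assumption.
    lookup = {
        "what value of x minimizes x squared minus four x": "x**2 - 4*x",
    }
    return lookup[informal_problem.lower().rstrip("?")]

def solve_with_engine(expr_text: str):
    # The "domain-specific compiler": an exact symbolic engine, no LLM involved.
    x = sp.symbols("x")
    expr = sp.sympify(expr_text)
    return sp.solve(sp.diff(expr, x), x)  # stationary points

print(solve_with_engine(formalize("What value of x minimizes x squared minus four x?")))
# -> [2]
```

The point of the division is that the engine's answer is exact and checkable; only the translation step carries LLM-style uncertainty.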
MAMLMs: I take your point to be that much smaller LLMs than we already have are more than sufficient as natural-language front-ends to structured and unstructured data, and that the Royal Road is then applying them as query interfaces to well-curated databases. That would imply that spending more money on LLMs is simply a waste of time. That is a very intriguing and, I think, quite possibly correct conclusion. A bigger and more complicated LLM would then just get us a slightly refined interpolation function from the space of training-data prompts to the space of answers. And to the extent that those corpora are unreliable, you have not gotten anything extra:
> Marcelo Rinesi
> > Keep building bigger and more expensive models, but then thwack them to behave by confining them to domains—Tim Lee says coding, and mathing—where you can automate the generation of near-infinite amounts of questions with correct answers for reinforcement learning. That would be a tremendous boon for programmers and mathematical modelers. But expensive:
> I don't understand this claim. I.e., what DeepMind did for the math Olympiads [ https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ ] used the purely linguistic skills of an LLM to formalize problems and then applied (comparatively very lean, pun not intended) specialized engines to work on them.
> To my eyes that shows that coding and maths are areas where we can/should/will get advantages from AI by using LLMs to bridge between informal language and specialized tooling (which I think we can do with much smaller specialized models than what we already have) and then leveraging existing hardware and software tools to build non-LLM models for those domains; basically LLMs as parsers and things like AlphaZero-for-maths/-quantum chemistry/-etc as domain-specific compilers.
> I'm not saying the intellectual Jeeves isn't a good idea or business model, but that's like using electrical power *only* to use a conveyor belt to move pieces from manual workstation to manual workstation.
One domain in which DeepMind seems to have excelled is protein folding. This is narrow AI and seems to work very well, AFAICS. Its AlphaFold server allows 20 protein sequences per day to be tested. It just begs for large-scale sequence testing to find likely interesting sequences that can be experimentally tested for functionality. True genetic-engineering design becomes possible with libraries of useful protein motifs and domains beyond those produced in nature. LLMs should just be the interface to such tools. Where I would focus LLMs is on understanding specific technical terms accurately, so that text or verbal prompts are interpreted precisely rather than as a compressed probability of meaning. This would be in domain-specific areas rather than a general LLM. That way the coding/math/engineering/science/etc. would generate better results. As for "reasoning", IMHO, OpenAI's approach makes little sense and is computationally costly. Far better to use "reasoning engines", again perhaps domain-limited, to do the needed work.
Anecdotally, I was using the free version of ChatGPT (3.5?) to test its basic reasoning, e.g., could an actor born after a movie was made have had a role in that movie? That required ChatGPT to look up the birth and death dates of the actor and compare them against the release date of the named movie (with the year, if needed, to disambiguate identical titles). I found that ChatGPT just recited the actor's characteristics and ignored whether the dates made an appearance possible. IOW, it was unable to reason. I hope to get better answers from later versions of ChatGPT, to determine whether they can reason through such a simple task or are just as clueless.
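For what it's worth, the check being asked for is trivial once the dates arrive as structured data rather than as text to be guessed at; a minimal sketch (the dates are invented, purely for illustration):

```python
from datetime import date
from typing import Optional

def could_have_appeared(birth: date, death: Optional[date], release_year: int) -> bool:
    """An actor can only appear in a film released between their birth and death."""
    if birth.year > release_year:
        return False  # born after the film was made
    if death is not None and death.year < release_year:
        return False  # died before the film was made (ignoring archive footage)
    return True

# Invented dates, purely illustrative:
print(could_have_appeared(date(1990, 5, 1), None, 1985))  # False: born after release
```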
As regards math, my difficulty has been with building models from scratch and, when they need to be partially differentiated to find turning-point values, with my weakness in doing this. A MAMLM that could do these tasks (building complex formulae from verbal input plus some [simple] equations, and taking equations as input to do the partial differentiation) would be a help. The latter task can be done with expensive math packages (I think), but they are not cost-effective for the use I would make of them. As "bicycles for the mind", calling on specialist MAMLMs for help would be a boon for me.
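The differentiation half of this can in fact be done today with the free SymPy package; a minimal sketch, with an invented two-input model standing in for one built from verbal input:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = 100*x + 80*y - x**2 - y**2 - x*y  # illustrative two-input model, invented here

# Set both partial derivatives to zero and solve for the turning point.
stationary = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(stationary)  # [{x: 40, y: 20}]
```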
The problem I see for much of the working world is that people have no idea how such tools could be used, given the lack of education. This seems a problem for really boosting GDP, and it results in firms hoping to bypass humans in the loop and replace them with generalized AIs. Firms thereby lose the creativity and intelligence of humans, which should be harnessed. The usual solution, "more education", doesn't seem to work, but maybe changing how we educate, and what for, might. For example, I repeatedly read that young children are naturally curious (good), but our teaching methods beat it out of all but a minority (bad). Constant testing hinders rather than helps. Teaching that encourages curiosity to find things out might be a better way to produce people who could benefit from AIs and related tools, and hence improve the productivity of firms, and, with enlightened owners, be paid for that productivity with wages and benefits. [Yes, I know, fanciful thinking.]
Protein folding is definitely one of the best results for generative AI methods; it's extremely useful, and the domain is very language-like in its underlying structure.
However, a point I'd make is that protein-folding prediction (while legitimately a great achievement and practical advance) wasn't at all the intellectual bottleneck for drug design, much less genetic engineering; it's simply not the case that what AlphaFold can do moves the needle there, and the same is true of the rest of the AI tools currently being pushed. Here's a slightly more detailed view from a hugely more knowledgeable person: https://www.chemistryworld.com/opinion/robots-queuing-up-to-fail/4020705.article
I'm less certain about the impact of LLMs in the wider world of work. My view of their potential is more negative than yours, and certainly than DeLong's, but I stand on shakier ground there.
I read the article, and while the author is not wrong about the process, time, and costs of launching a new drug on the market, picking good leads is actually very important. To give an idea: back in the 1990s, Merck built a lead factory to test vast numbers of chemicals for new drug candidates. That was a vast cost and did not fare well. Conversely, tools able to pick good candidates, with likely efficacy and low side effects in animals (rats) and single cells, could quickly find the winners by ruling out NCEs that would later fail in Phase I tests. After that, yes, there is no Star Trek way to save time, although AI used to plan clinical trials is still a useful tool.
But I am talking about something rather different. The possible protein space is vast, and that is just with the 20 amino acids our life uses. Nature has done much of the work of finding biologically useful proteins, but the search space has hardly been scratched. Being able to determine folding, and more importantly functionality, would be very useful when designing new proteins as biologic drugs, enzymes, and a range of biomimicry structures. With that, one can then engineer organisms that add these genes to their genomes to accomplish the new functions we want. Unlike drugs, the testing and commercialization phase is far faster. If an AI could determine functionality with substitute amino acids, or even confirm that removing some codon for translation into an amino acid doesn't break the cell's functioning, we could generate libraries of new genes, like MIT's BioBrick gene sets, to do certain tasks. This speeds up genetic engineering because it makes the slow trial-and-error methods obsolete.
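To illustrate the scale of even the simplest such screen, here is a minimal sketch that enumerates every single-residue substitution of a short motif, the kind of candidate library a folding/function predictor would then rank (the motif itself is invented for illustration):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def single_substitution_library(motif: str) -> list[str]:
    # Every variant of the motif differing at exactly one position.
    variants = []
    for i, original in enumerate(motif):
        for aa in AMINO_ACIDS:
            if aa != original:
                variants.append(motif[:i] + aa + motif[i+1:])
    return variants

library = single_substitution_library("MKTAYIA")  # 7 positions x 19 substitutions
print(len(library))  # 133 candidate sequences to screen computationally
```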
I don't disagree with your definition of the problem, but those "if"s carry a lot of weight in the argument. Anyway, time will tell. I disagree as a matter of expectation; hope-wise, I'm on your side.
Baldwin: good, no doubt, but omits:
a) skilled labor is also immobile internationally because of immigration restrictions. 85,000 H-1B visas is ridiculous!
b) the budget deficit as US anti-industrial policy
c) US trade negotiators prioritizing getting US owners paid for IP (Gordon's fault?) over accepting more US manufactured exports. (And I do mean "manufactured": I have seen USDA FAS people in embassies working on US exports of honey and chicken entrails. :))
"And I still think that, technocratically, that was the right decision for the Feb to have made. But I find myself in a very small minority here."
I'm with you up until Sept 2021. TIPS breakevens started showing above-target inflation on the 5- and 10-year horizons. [Why won't Treasury give us 1-, 2-, and 3-year TIPS?] That was the time to start dialing back inflation; relative prices had adjusted, or very soon would have. March 2022 was too late.
https://thomaslhutcheson.substack.com/p/fiscal-policy-pandemic-and-inflation
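For the arithmetic behind that claim: the breakeven rate, the market's implied expected inflation, is approximately the nominal Treasury yield minus the TIPS real yield at the same maturity. A minimal sketch with invented, illustrative yields, not the actual 2021 quotes:

```python
def breakeven_inflation(nominal_yield_pct: float, tips_real_yield_pct: float) -> float:
    # Approximate market-implied average inflation over the bond's horizon.
    return nominal_yield_pct - tips_real_yield_pct

# Invented, illustrative yields (not actual Sept 2021 quotes):
print(breakeven_inflation(1.30, -1.60))  # 2.9: comfortably above a 2 percent target
```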
"... grease[ing] the movement of real relative prices to their proper von Hayek resource allocation signaling values" is the WAY to achiever price stability and full employment (of all resources). The New Keynesian models that implicitly have only one relative price, stuff/labor --sorry Blanchard, Elmendorf -- cannot make sense of or policy for large sectoral supply/demand positive/negative shocks
And if Nunes is willing to admit that relative price adjustment may require above NGDPLT growth, why not just recognize that is what FAIT is all about?
"Roughly, Biden’s two big industrial failures were:
Prioritizing the power of special-interest groups over the good of the public.
Refusing to address regulatory barriers that inhibit government action."
No! The _fundamental_ flaw was not to treat this as an _economic_ issue. Or rather, to conflate three different economic issues into one:
1. Reduction in global emissions of CO2
2. National security issues arising from China's dominance of global manufacturing
3. Promotion of activities in which US firms could come to have monopoly power/reap global economies of scale in production (old fashioned Korea/earlier Japan style "industrial policy").
Then, having a clear idea of the benefits of each of these distinct objectives, to have given a few moments of thought to how to achieve them at lowest cost. (Not paying attention to lowest cost is where not dealing with special interest groups and regulatory issues comes in.)
To wit:
1. IRA does NOT mimic well the effects of a tax on net emissions.
2. Chinese dominance of global manufacturing requires freer trade with non-Chinese countries.
3. Industrial policy needs to encourage activities with export potential and with instruments -- subsidies not import restrictions -- that do not encourage domestic sales over exports.
And, since all of this requires investment, we really ought to do something about a fiscal system that taxes income rather than consumption, replacing the income tax with a progressive consumption tax and replacing the wage tax for social insurance with a VAT.
From an industrialist's viewpoint, Nippon Steel is actually interested in making steel, which is something that no majority owner of US Steel has really wanted to do since ~1970. Yes, it would be visually jarring to see NIPPON STEEL on the gate at Granite City Works - but if it is left to USS, all the workers will see is the USS logo being painted over by the demolition/scrapping contractor.
"Back up, and train a GPT LLM as a summarization engine on an authoritative set of information both through pre-training and RAG, and so produce true natural-language interfaces to structured and unstructured knowledge databases. That would be wonderful. But it is best provided not by building a bigger, more expensive model but rather by slimming down to keep linguistic fluency while reducing costs. Moreover, that would be profitable to provide: it would essentially be performing the service of creating a bespoke intellectual Jeeves for each use case. Doing that would produce profitable businesses. But it would not validate $3 trillion corporate market cap expectations."
There is a perfectly profitable market for bespoke information - books, textbooks, taught courses. The relevant intelligences behind these artifacts are authors, teachers, etc. Publishers are already adding media interfaces to these works - CD inserts, eTextbook links to online tests, etc. Publishers should find it easy to add value by grafting on AIs to summarize material and arguments, both for individual books and for aggregates of books (e.g., for a subject), as well as teachers doing the same for the aggregate materials of a course. This strikes me as the better way to go, and then the many competing domain-specific AIs can be rated, just like authors.
As for the current high market-cap values and well-paid "leaders", I couldn't care less about their fortunes. In their hubris, they spent huge sums to achieve the AGI and superintelligent-AI goals. It looks like that was a bridge too far, and that a bust will happen. Nemesis. We will be better off without AIs carrying the possible existential threat of the fictional "Colossus" computer. Bespoke AI assistants will better meet humanity's needs, becoming "bicycles for the mind" for each domain. Consider the recent doorstop econ books, including yours: large, and yet you admit you had to pare it down. An AI trained on the totality of the material could become a tutor, both summarizing the arguments and fleshing them out where desired. Even better would be an AI that could answer questions beyond the material, explaining why certain approaches were taken rather than others. The result might be a richer experience for the interested reader, more like a multi-track video game than a linear movie. [Also, movies are now sold with director voiceover tracks to explain the director's thoughts as the movie unfolds. Multiple voices are preferable to one overarching voice in most subjects, whether science or arts. Domain-specific AIs could be a useful interface for books and other media, and their competing voices would allow for variety and potential progress. Competing AIs in a political debate might shed more light than rhetorical heat, with facts rather than misinformation and slogans in a good debate.]
Let's not forget that LLMs, however hooked up to RAGs, are just the current AI technology. They are unlikely to be the last. Ideally, they should be as flexible as a human mind, with infinitely better recall, low resource use, and preferably better logical analysis of the data before responding. Less like the drunk at the bar mouthing off an opinion, and more like an expert with lower latency deliberation. IOW, intelligent experts on tap. [I appreciate this can all be gamed, but I prefer that the technology is accessible to the many, rather than the few, or the one.]
"Active reading" has long been THE way that those super-skilled in utilizing the technologies of writing and printing we have had for 5000 and 500 years, respectively, to supercharge the intellectual powers these technologies enable. It is in sharp contrast to passive readings, in which the words wash over you—as in listening to a speech, but with your eyes rather than your ears. This form of passive reading has all the flaws Platon's Sokrates puts in the mouth of King Thamos in his response to the God Theuth in the "Phaidros"—that it creates the trompe l'oeil appearance of thinking, but not the reality. (Not said in the Phaidros, but a subtext in much of Platon, is that the speechifyin' rhetoric of the sophist suffers from much the same problem: rather than helping you think, the speeches of the demagogue drive you like cattle to his desired conclusion).
In active reading, however, you are the master of the book. You dogear pages to return to them. You flip back and you flip forward. You write in the margins. And so, in fact, the good active reader will argue with the book: will take the codex, spend maybe three or four hours interacting with it, and from the black marks on the page spin up a sub-Turing instantiation of the author's mind, run it on their own wetware, and have in their mind's eye—and who is to say that is not as real as the actual eye—a Sokrates on the other end of the log, answering questions. As Machiavelli wrote in 1513, when he goes into his library: "I step inside the venerable courts of the ancients... where I am unashamed to converse with them and to question them about the motives for their actions, and they, out of their human kindness, answer me...".
But for only a small slice of society, only for the truly hyperliterate, is it the case that they—we—have managed to train our brains to make active reading second nature. The rest of humanity cannot do it.
The right use of GPT LLM technology is to provide a route-around: rather than having to train yourself for years to become a hyperliterate active reader and spinner-up of sub-Turing instantiations of authors' minds, you can have a dialogue with Sub-TuringAuthorBot(TM):
> **Alex Tolley**: 'There is a perfectly profitable market for bespoke information - books, textbooks, taught courses. The relevant intelligences behind these artifacts are authors, teachers, etc. Publishers are already adding media interfaces to these works - CD inserts, eTextbook links to online tests, etc. Publishers should find it easy to add value by grafting on AIs to summarize material and arguments for both individual books and aggregate books (e.g., for a subject), as well as teachers doing the same for the aggregate materials for a course. This strikes me as the better way to go, and then the many competing domain-specific AIs can be rated, just like authors.
> As for the current high market cap values and well-paid "leaders", I couldn't care less about their fortunes. Their hubris went for huge sums to achieve the AGI and superintelligent AI goals. It looks like that was a bridge too far, and that a bust will happen. Nemesis. We will be better off without AIs with the possible existential threat of the fictional "Colossus" computer. Bespoke AI assistants will better meet humanity's needs, by becoming "bicycles for the mind" for each domain. Consider the recent doorstop econ books, including yours. It is large, yet you admit you had to pare it down. An AI trained on the totality of material could become a tutor, both summarizing the arguments and fleshing them out where desired. Even better would be an AI that could answer questions beyond the material, explaining why certain approaches were taken rather than others. The result might be a richer experience for the interested reader. It's more like a multi-track video game than a linear movie. [Also movies are now sold with director voiceover tracks to explain the director's thoughts as the movie unfolds. Multiple voices are preferable to one overarching voice in most subjects, whether science or arts. Domain-specific AIs could be a useful interface for books and other media, and their competing voices would allow for variety and potential progress. Competing AIs in a political debate might shed more light than rhetorical heat in these debates, with facts rather than misinformation and slogans in a good debate.]
> Let's not forget that LLMs, however hooked up to RAGs, are just the current AI technology. They are unlikely to be the last. Ideally, they should be as flexible as a human mind, with infinitely better recall, low resource use, and preferably better logical analysis of the data before responding. Less like the drunk at the bar mouthing off an opinion, and more like an expert with lower latency deliberation. IOW, intelligent experts on tap. [I appreciate this can all be gamed, but I prefer that the technology is accessible to the many, rather than the few, or the one]...
> > Back up, and train a GPT LLM as a summarization engine on an authoritative set of information both through pre-training and RAG, and so produce true natural-language interfaces to structured and unstructured knowledge databases. That would be wonderful. But it is best provided not by building a bigger, more expensive model but rather by slimming down to keep linguistic fluency while reducing costs. Moreover, that would be profitable to provide: it would essentially be performing the service of creating a bespoke intellectual Jeeves for each use case. Doing that would produce profitable businesses. But it would not validate $3 trillion corporate market cap expectations.
So I think we agree that option 1 is the best solution for genAI for most of us. As the main players are going for option 2, the strategy is to wait for the bust that brings on the next "AI winter" and pick up the pieces for a song to build businesses on the bespoke model. I expect the publishers will be the initial entrants, but as the technology is democratized and the hardware continues to improve, we will all be able to build bespoke AIs on our home computers. This follows the same path as graphics, which at the beginning of the 1990s required expensive minicomputers and was run out of specialist shops, then moved to high-end Unix desktops like Silicon Graphics machines by the mid-1990s, and then to decent computers running a variety of OSs with affordable and even FOSS graphics packages. High-end graphics have moved up to video CGI, which in turn will migrate to the home computer by 2030, possibly running genAI locally to build decent video from scratch. I don't think we will see AGI or anything remotely superintelligent, but we will see locally run MAMLMs take off for a host of applications. It is just that the current crop of AI leaders will have burned up their profits pursuing a dream. However, I expect application companies like MSoft and Apple to integrate AI effectively eventually, though with unrecoverable losses from their earlier AI endeavors (although MSoft is playing the game with OpenAI far more cannily). I fully expect OpenAI to go bust. An interesting question is whether NVidia can continue to do well by integrating its technology into consumer hardware. I hope it can, although I would like to see very different neuromorphic approaches take that market rather than GPUs.
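As a concrete (and deliberately minimal) sketch of the locally run, bespoke "intellectual Jeeves" both sides of this exchange describe: retrieve the passages of a book most relevant to a question, then hand them to whatever local model does the summarizing. The `embed()` function below is a toy stand-in for a real local embedding model, and the passages are invented placeholders:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a local embedding model: hash words into a
    # normalized frequency vector. Any real embedder would slot in here.
    vec = np.zeros(512)
    for word in text.lower().split():
        vec[hash(word) % 512] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    # The "R" in RAG: rank the book's passages by cosine similarity to the
    # question; the top k become the context a local LLM answers from.
    q = embed(question)
    return sorted(passages, key=lambda p: float(embed(p) @ q), reverse=True)[:k]

# Invented placeholder passages standing in for a book's chunked text:
book_passages = [
    "Passage discussing pre-1870 economic growth rates ...",
    "Passage on the political economy of trade policy ...",
    "Passage asking why post-1870 growth did not end scarcity ...",
]
print(retrieve("How fast was economic growth before 1870", book_passages))
```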