Yes, it is the f---ing "New York Times" again; a Jonathan Weisman-environmental protection "opinions of shape of earth differ" garbage dumpster fire; plus the incredibly talented Alicia Keys; wheat...
When I found out that the voters in East Palestine, OH voted 78% for Trump and that their mayor is a Big Lie supporter, my first thought was "fuq'm, they got what they deserved." On further thought, they are the perfect example of the people who buy Republican bullshit because being lied to makes them feel good about living in a place with no future.
LOL regarding the NYT!!! We have subscriptions to print editions of both the NYT and WaPo. I pointed out to my wife that subscriptions are now hugely expensive. Daily delivery of the NYT is about $1200/yr and the WaPo close to $1000/yr. My wife insists that we continue to get the print editions as she likes to read the paper at the kitchen table. She has offered to pay for them and I might take her up on that!!!
To Brad's point, the NYT is abysmal these days. I can't stomach the nonsense on the op-ed pages, and their overemphasis on 'correctness' is making the paper less relevant to me. At least the WaPo has two pages of comics that have not been cut back because of belt tightening.
Regarding the Chat-GPT output, it’s actually not bad. First, the program knows and understands Brad DeLong’s areas of expertise — policy, economics, and history. It has a framework for current geopolitical and economic challenges faced by governments. It is also aware of certain platforms where he tends to publish. From these inputs, it created suitable outputs for the types of articles he would write. Having said that, the question was phrased in a certain way, and the answer was phrased with emphatic certainty. Hence we hear multiple journalists criticize that it confabulates, autocompletes, lies, etc. However, there are no criticisms of syntax — answers making no sense, knowing nothing of the subject, giving anachronistic and irrelevant answers, and so on. In short, as with any relationship, “it’s not what you say, it’s how you say it.”
So what if the answer to the question from the program was, “I don’t have access to his recent writings, but given his areas of expertise, he may be working on articles such as these…” Then our thoughts on the program may be different.
That’s why I think the developers are sitting in the break room sipping coffee and laughing as they read these articles. It is likely that they want people to treat it like a toy, rather than expose the types of logical and programming challenges that they have surmounted. But they very well may be looking for actual shortcomings, some of which have been uncovered, such as bias within the database, and for ways to deal with these. The short-term solution appears to be to block certain subjects. However, the real solution is for the program to cull and correct the errors and bias within the dataset. Of course, the first problem is for the program to notice when it has become biased or has run into an inconsistency within the data.
Regarding Chat-GPT, that’s an odd output for the question being asked. But it understands enough of DeLong's writings to make an inference of what could come next or what could have been.
But as a chat program, it could easily classify when it is doing a lookup, making a deduction, or drawing an inference. Using the proper sentence shell would be a simple matter for the program. So why did the developers choose to blur all of those lines of logic into factual-sounding sentences? Maybe they are just teasing the user, keeping the user from understanding how it works. But in any other application, inferences would be tagged as such for future testing or validation.
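The tagging idea above can be sketched as a toy. This is a hypothetical wrapper, not any real chat API: the labels, the `Answer` type, and the sentence "shells" are all illustrative names I am inventing to show how an answer's epistemic status could pick its phrasing.

```python
from dataclasses import dataclass

# Hypothetical epistemic-status labels a chat program could attach
# to each answer before choosing how to phrase it.
LOOKUP, DEDUCTION, INFERENCE = "lookup", "deduction", "inference"

# Sentence shells: how confidently each kind of answer is rendered.
SHELLS = {
    LOOKUP: "According to my data, {claim}.",
    DEDUCTION: "It follows that {claim}.",
    INFERENCE: "I don't have direct evidence, but plausibly {claim}.",
}

@dataclass
class Answer:
    claim: str
    status: str  # one of LOOKUP, DEDUCTION, INFERENCE

def render(answer: Answer) -> str:
    """Wrap the claim in the shell matching its epistemic status."""
    return SHELLS[answer.status].format(claim=answer.claim)

# An inference gets hedged phrasing instead of emphatic certainty:
print(render(Answer("he may be working on articles such as these", INFERENCE)))
```

The point of the sketch is only that the hedging lives in a thin rendering layer; the underlying claim is unchanged, so tagged inferences could still be logged for later testing or validation.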
I forget how long it took the non-MAGA media to transition from "XYZ voters believe the election was fraudulent..." to "XYZ voters falsely believe the election was fraudulent." But as advertising-supported, readership-based businesses, they've found it very uncomfortable to arbitrate the truth vs. the damage of what "sources" say. In my view, though, they generally do better than most. I can read between the lines when views are prefaced with "78% Trump...". Sometimes it is enough just to point out the sacred cow without having to kill it outright.
I don't see that disposable personal income has been rising at 10% since the summer. It is a volatile number, particularly in January, when Treasury withholding taxes fell significantly while weekly earnings rose. Everything in January spikes up. But the US economy turns as quickly as a supertanker. There's more afoot here.
*Of course* ChatGPT is not actually intelligent. The problem is how hard it makes it to ignore the ubiquity of natural stupidity. You are unusual in being a learned and brilliant professor with profound thoughts and something interesting to say! Sure, ChatGPT is mainly a bullshit generator, but most of us write bullshit most of the time! How difficult it is, then, to avoid the disquieting suspicion that we ourselves spend most of our time autocompleting, when we have the example of ChatGPT to prove that most of our activities that we consider proof of our humanity are nothing of the sort.
BTW, mea culpa, I have sinned: I succumbed to the NYT's Christmas promotion and purchased a year's online subscription for CAD 20 (which is, like, USD 15.) It was worth it just to get Krugman in my inbox. Now, if only I can remember to cancel in December ...
Lee: But before politicians can “learn” how to execute “industrial policy,” they need to have an idea of what problem they are trying to solve. Climate change is easy: industrial policy is a substitute for a Pigou tax on the negative externality of CO2 and methane emissions. The risk of politically motivated import disruption whose cost exceeds the losses of users? Nothing new. The WTO allows for national-security tariffs, and standard trade theory teaches that subsidies are preferable to tariffs. (That Biden has not repealed “national security” tariffs on imports from Canada and Europe does not give much ground for hope that this will be done well.) Eliminating regulatory obstacles to energy production and transportation projects that pass cost-benefit analysis? But we already knew that.
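For readers who want the Pigouvian logic spelled out: in standard textbook notation (my symbols, not the commenter's), the corrective tax is set equal to marginal external damage at the efficient emissions level, so that the private marginal cost plus the tax equals the social marginal cost:

```latex
% Pigouvian tax condition (standard textbook form; notation is illustrative):
% MPC = marginal private cost, MD = marginal external damage,
% MSC = marginal social cost, e^* = efficient emissions level.
\[
  t^{*} = MD(e^{*}),
  \qquad
  MPC(e^{*}) + t^{*} = MPC(e^{*}) + MD(e^{*}) = MSC(e^{*}).
\]
```

An industrial-policy subsidy to clean substitutes is then a second-best stand-in for charging emitters $t^{*}$ directly.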
Where’s the (new) beef?
So illuminating. Thank you.
Intrigued by Karlsson ... and forwarding the piece to a friend for whom it seems to have been written. (What could be more appropriate?)
Subscribed to Henrik Karlsson's substack after seeing the quote.
Watch out for Gary Marcus. He won't commit to the insanity of AGI. It's still possible, he says.
Promoting your being on the path to AGI is saying you're equivalent to God - the capital "G" one. Sam Altman needs you to believe in this to make billions. Gary Marcus needs you to tolerate such musing so he can get column inches with Ezra Klein. Beware anyone who uses words like "understanding" when describing AIs. And remember, AI only really ever meant Augmented Inference.
All the best, Professor DeLong.
How about if you get Chat-GPT to generate a list of essay titles you *may* write, you pick one, and write it? I'd pay extra dog treats for such an essay.