20 Comments

When I found out that the voters in East Palestine, OH voted 78% for Trump and that their mayor is a Big Lie supporter, my first thought was "fuq'm, they got what they deserved." On further thought, they are the perfect example of the people who buy Republican bullshit because being lied to makes them feel good about living in a place with no future.


LOL regarding the NYT!!! We have subscriptions to print editions of both the NYT and WaPo. I pointed out to my wife that subscriptions are now hugely expensive. Daily delivery of the NYT is about $1200/yr and the WaPo close to $1000/yr. My wife insists that we continue to get the print editions as she likes to read the paper at the kitchen table. She has offered to pay for them and I might take her up on that!!!

To Brad's point, the NYT is abysmal these days. I can't stomach the nonsense on the op-ed pages, and their overemphasis on 'correctness' is making the paper less relevant to me. At least the WaPo has two pages of comics that have not been cut back because of belt tightening.


The Times' refusal to run comics is at the core of their problem. They've always been obsessed with respectability politics. (Perhaps an amalgam of yekke uptightness, Suthun expat gentility, and Ivy infestation? But I repeat myself.) At best, this could generate a quirky aristocratic pose. But that's precluded by the demands of mass circulation. So what do you get? Abject deference to "legitimate" economic, social, and political power, and a complete unselfconsciousness about the process and product of legitimation. Y*U*K!

The Post only suffers from the Ivy infestation problem. It's still pernicious, but much less so.


I always say the NYT has the over-educated, under-intelligent, otherwise-unemployable Ivy League trust fund babies.


It sure isn't the collection of heroes who obtained and published The Pentagon Papers and dared Tricky Dick to come after them.


Regarding the Chat-GPT output, it’s actually not bad. First, the program knows and understands Brad DeLong’s areas of expertise — policy, economics, and history. It has a framework for current geopolitical and economic challenges faced by governments. It is also aware of certain platforms where he tends to publish. From these inputs, it created suitable outputs for the types of articles he would write. Having said that, the question was phrased in a certain way and the answer was phrased with emphatic certainty. Hence we hear multiple journalists criticize that it confabulates, autocompletes, and lies, etc. However, there are no criticisms of syntax, such as answers making no sense, knowing nothing of the subject, giving anachronistic and irrelevant answers, etc. In short, as with any relationship, “it’s not what you say, it’s how you say it.”

So what if the answer to the question from the program was, “I don’t have access to his recent writings, but given his areas of expertise, he may be working on articles such as these…” Then our thoughts on the program may be different.

That’s why I think the developers are sitting in the break room sipping coffee and laughing as they read these articles. Likely they want people to treat it like a toy, rather than expose the types of logical and programming challenges that they have surmounted. But they very well may be looking for actual shortcomings, some of which have been uncovered, such as bias within the database, and for ways to deal with them. The short-term solution appears to be to block certain subjects. However, the real solution is for the program to cull and correct the errors and bias within the dataset. Of course, the first problem is for the program to notice when it has become biased or has run into an inconsistency within the data.

author

"However, there are no criticisms of syntax, such as answers making no sense, knowing nothing of the subject, giving anachronistic and irrelevant answers, etc.... So what if the answer to the question from the program was, 'I don’t have access to his recent writings, but given his areas of expertise, he may be working on articles such as these…' Then our thoughts on the program may be different." Yes. Exactly.


Regarding Chat-GPT, that’s an odd output for the question being asked. But it understands enough of DeLong’s writings to make an inference about what could come next or what could have been.

But as a chat program, it can easily classify when it is doing a lookup, making a deduction, or drawing an inference. Using the proper sentence shell would be a simple matter for the program. So why did the developers choose to blur all of those lines of logic into factual-sounding sentences? Maybe they are just teasing the user, keeping the user from understanding how it works. But in any other application, inferences would be tagged as such for future testing or validation.
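To make the "sentence shell" point concrete, here is a rough sketch of my own of what such tagging might look like. All of the names (SHELLS, with_shell, the mode labels) are invented for illustration, the mode classifier is simply assumed to exist upstream, and none of this claims to be how the real system is implemented:

```python
# Rough sketch (invented names): wrap an answer in a "sentence shell" that
# signals how it was produced, so inferences don't read like lookups.

SHELLS = {
    "lookup": "According to the material I have on hand, {answer}",
    "deduction": "It follows from what I do have that {answer}",
    "inference": "I don't have this on record, but a plausible guess is that {answer}",
}

def with_shell(answer: str, mode: str) -> str:
    """Wrap an answer in an epistemic-status template; default to the most cautious shell."""
    return SHELLS.get(mode, SHELLS["inference"]).format(answer=answer)

# An inferred (not looked-up) answer gets the hedged framing:
print(with_shell("he may be working on articles about inflation and industrial policy", "inference"))
```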

author

That is a very interesting question. One would think that they would have held back release until they could prompt it with "taking this much smaller body of information as ground truth, answer the question truthfully."
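Something along these lines, to sketch it very roughly. The excerpt placeholders and the ask_model call below are stand-ins of my own, not any real API:

```python
# Very rough sketch: build a prompt that treats a small corpus as ground truth
# and tells the model to refuse rather than guess. ask_model() is a stand-in
# for whatever chat-completion call one actually uses.

def build_grounded_prompt(question: str, snippets: list[str]) -> str:
    context = "\n\n".join(snippets)
    return (
        "Treat the following excerpts as the only ground truth available.\n"
        "If they do not contain the answer, say that you do not know rather than guessing.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "What articles is the author currently working on?",
    ["<excerpt one from the smaller body of information>",
     "<excerpt two from the smaller body of information>"],
)
# answer = ask_model(prompt)  # stand-in; supply your own model call here
```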


I forget how long it took the non-MAGA media to transition from "XYZ voters believe the election was fraudulent..." to "XYZ voters falsely believe the election was fraudulent." But as advertising-supported, readership-based businesses, they've found it very uncomfortable to arbitrate the truth vs. the damage of what "sources" say. In my view, though, they generally do better than most. I can read between the lines when views are prefaced with "78% Trump...". Sometimes it is enough just to point out the sacred cow without having to kill it outright.

author

I disagree: pointing out the sacred cow pleases the right-wing grifters because the overwhelming majority of the audience is not reading carefully for sotto voce second-level messages...


I don't see that disposable personal income has been rising at 10% since the summer. It is a volatile number, particularly in January, when Treasury withholding taxes fell significantly while weekly earnings rose. Everything in January spikes up. But the US economy turns as quickly as a supertanker. There's more afoot here.


*Of course* ChatGPT is not actually intelligent. The problem is how hard it makes it to ignore the ubiquity of natural stupidity. You are unusual in being a learned and brilliant professor with profound thoughts and something interesting to say! Sure, ChatGPT is mainly a bullshit generator, but most of us write bullshit most of the time! How difficult it is, then, to avoid the disquieting suspicion that we ourselves spend most of our time autocompleting, when we have the example of ChatGPT to prove that most of our activities that we consider proof of our humanity are nothing of the sort.

BTW, mea culpa, I have sinned: I succumbed to the NYT's Christmas promotion and purchased a year's online subscription for CAD 20 (which is, like, USD 15.) It was worth it just to get Krugman in my inbox. Now, if only I can remember to cancel in December ...


Lee: But before politicians can “learn” how to execute “industrial policy,” they need to have an idea of what problem they are trying to solve. Climate change is easy: industrial policy is a substitute for a Pigou tax on the negative externality of CO2 and methane emissions. The risk of politically motivated import disruption whose cost exceeds the losses of users? Nothing new. The WTO allows for national security tariffs, and standard trade theory teaches that subsidies are preferable to tariffs. (That Biden has not repealed “national security” tariffs on imports from Canada and Europe does not give much ground for hope that this will be done well.) Eliminating regulatory obstacles to energy production and transportation projects that pass cost-benefit analysis? But we already knew that.

Where’s the (new) beef?
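To spell out the textbook Pigouvian point Lee is invoking, in a loose gloss of my own rather than his words: the first-best instrument taxes emissions at marginal external damage,

$$ t^{*} = D'(e), $$

and a per-unit subsidy of roughly $s \approx t^{*}$ to the clean substitute opens approximately the same price wedge between dirty and clean output. That is the sense in which industrial-policy subsidies can stand in for the tax, though at a fiscal cost the tax itself would not incur.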


So illuminating. Thank you.


Intrigued by Karlsson ... and forwarding the piece to a friend for whom it seems to have been written. (What could be more appropriate?)


Karlsson has a good point. A friend and fellow author who Makes A Pile from his Substack kept after me to start one. I wondered if anyone would read it. He commented that people have been buying what I write for 40 years, and some of them might be interested in what I think. It turns out he was right! And interestingly, when Parkinson's came along to complete its theft of my life partner, it turned out I had created a support community of people who had been there/done that, which has made that whole unbearable event bearable. And I did it by writing about what pisses me off/interests me.


How about if you get Chat GPT to generate a list of essay titles you *may* write, you pick one, and write it? I'd pay extra dog treats for such an essay.


Alternatively, write refutations of Chat GPT DeLong's views.


"Augmened Interference." Perfect! ChatGPT is the perfect example of the sign that used to hang over my father's desk in his laboratory: "Them who think computers think, don't"
