19 Comments

Brexit:

One reason for not talking about it was mentioned in the podcast: none of the parties wants to upset Brexit voters, whose votes they are all trying to win.

Let us not forget the Lib-Dems ran on a platform of returning to the EU and got crushed, with their leader losing her seat.

Let us also not forget that Labour under Corbyn's leadership was pro-Brexit. The Labour party voted in Parliament to ratify the referendum quickly and then played silly games that just resulted in a relatively hard Brexit. Starmer may want to renew relations with the EU, but he will have to do that piecemeal while not frightening the Brexit (anti-immigrant) believers in the onetime "red wall" constituencies.

Lastly, Britain remains delusional about its global status. Brexiteers claimed that the EU would offer Britain a Canada++ deal. It did not. Pols claimed post-Brexit trade deals would be the easiest to enact. They have been meager. Britain is no longer a colossus bestriding the globe, and it was quite clear it was not going to become one again. Britain's permanent seat on the UN Security Council is due only to its possession of nuclear weapons; its actual military capabilities are rather weak.

Relatively isolated, largely cut off from the far larger EU, Britain is now having the stupidity of the Brexit decision brought home to it, a decision that could have been averted had cooler heads persuaded their colleagues to discard the non-binding referendum as an undesirable course. But the ideologues in the parties just went along with the (barely) popular decision, and the rest is history.

Cognitive self-defense...: "(One of the worst parts of Threads is so many people here are still so obsessed with what happens there that there’s constant spillover!)…"

Ezra is being too mild (as he would be). It is more like a million screams passing through an amplifier larger than any that Bose could ever imagine.

Cohen: Why can't we break the silence on taxing net emissions of CO2?

Shapiro: I believe that freer trade as governed by trade agreements has raised real income, as explained by David Ricardo. I doubt that trade agreements have created jobs, although some of the increased real income may have raised the wages earned in some jobs. But "creating jobs" is something that only the Fed can do and undo.

Kristof: Americans (sample = Marginal Revolution commenters) understand US immigration procedures perfectly; we have open borders. :)

Viola Zhou & Nanchanok Wongsamuth: Has BYD refused to soft-pedal its criticism of the CCP, and is Tesla paying the price for Musk's courage? :)

No offense to Knut Wicksell, but I don't understand the desire to estimate an R*. A central bank wants to know at what values to set its policy instruments to achieve the real-income-maximizing rate of inflation (or whatever its objective function is). If it has achieved that trajectory, we could observe R*, the actual real value of one of its instruments, but what would that add? If it has not achieved its target, how would knowing R* indicate what the instrument settings should be? Wouldn't the model that estimated R* when actual inflation differs from target inflation spit out the "correct" instrument setting directly, without beating around the bush?

Two regimes:

1) Data => model0 => Instrument settings => Target

2) Data => model1 => R* => model2 => Instrument settings => Target

How is 2) better than 1)? Or wouldn't model0 = (model1 => R* => model2)?
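The composition point can be made concrete with a toy sketch. Everything here is a hypothetical stand-in (made-up linear functions and coefficients, not any actual central-bank model); it only illustrates that routing through an explicit R* estimate yields the same instrument setting as the direct map:

```python
# Toy illustration: regime 2 surfaces an intermediate R* estimate;
# regime 1 maps data to instrument settings directly.
# All functions and coefficients below are hypothetical.

def model1(data):
    """Estimate R* from the data (toy: a weighted average)."""
    return 0.5 * data["trend_growth"] + 0.5 * data["time_preference"]

def model2(r_star, data):
    """Map the R* estimate (plus the data) to an instrument setting
    (toy: a Taylor-style rule built around R*)."""
    return r_star + data["inflation"] + 1.5 * (data["inflation"] - data["target"])

def model0(data):
    """Direct data -> instrument-setting map: the composition of
    model1 and model2, with R* never surfaced."""
    return model2(model1(data), data)

data = {"trend_growth": 2.0, "time_preference": 1.0,
        "inflation": 3.0, "target": 2.0}

# Regime 2: data => model1 => R* => model2 => instrument settings
setting_2 = model2(model1(data), data)

# Regime 1: data => model0 => instrument settings
setting_1 = model0(data)

assert setting_1 == setting_2  # the explicit R* step adds nothing
```

By construction model0 is (model1 => R* => model2), so the two regimes cannot disagree; the only question is whether publishing the intermediate R* number is useful in itself.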

Bregman: Is this just that people can drive better bargains for housing themselves than can housing agencies renting them low-cost housing?

In the LLM/stochastic parrot (legal edition) today: https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers

Aren't we, by using stochastic parrots to facilitate (and hence on the margin increase) the creation of ritualistic zero-Shannon-content + nonzero-attention-demand text, *contributing* to the cognitive-wars problem? One can argue that we can also use stochastic parrots to simplify (or even filter out) ritualistic content on the reading side, but at that point I must protest. Dedicating untold megawatts, MIPS, programmer-hours, and our own tool-learning and usage bandwidth to a net-zero game of first creating ritual bull and then disposing of it is *not* what a rational society does.

It could be a Nash equilibrium, mind you - certainly optimal as a professional choice. But I chafe at the use of tools to help me write things that, by the very fact that they can be written (or at least drafted) with these tools, are proven to be things one should *not* [have to] write.

Author: Yes...

Humans have been bombarded with attention-seeking parasites from time immemorial: annoying loudmouths in drinking establishments, pamphlets, books, advertising on billboards (once a uniquely US phenomenon), advertising in books, magazines, TV... social media, loudmouths on social media, and now robots flooding all the interactive electronic media.

Like many tools, however imperfect, they have both good and bad points. I see LLMs as part of a toolkit to aid writing in more than just autocomplete, spell checking, and grammar checking. They could be useful machine interfaces to allow verbal interactions. The negatives are the automated outpourings of drivel, propaganda, etc. But is this truly worse than politicians and pundits filling the airwaves and media with their blather?

I want tools to act as a gatekeeper. Neal Stephenson's "Anathem" had the concept of trust as a means to navigate information. Musk broke even the basic trust model on Twitter. So why not have ML tools do the heavy lifting to screen out those with low trust indices? Operating like adblockers and ad filters, this is easier said than done. I would surely like a way to filter out comment noise on interesting posts, leaving just the better ones. Otherwise we end up like the Russians, ignoring everything, even the truth tellers.
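A minimal sketch of the trust-index idea, with everything hypothetical (the author names, scores, and threshold are made up; a real system would learn trust scores rather than hard-code them):

```python
# Adblocker-style comment filter keyed on per-author trust indices.
# All names, scores, and the threshold are hypothetical.

TRUST = {           # trust indices on a 0.0-1.0 scale
    "alice": 0.9,
    "bot_4711": 0.1,
    "carol": 0.7,
}
THRESHOLD = 0.5     # screen out anything below this

def filter_comments(comments, trust=TRUST, threshold=THRESHOLD):
    """Keep comments whose author's trust index clears the threshold.
    Unknown authors default to 0.0, i.e. default-deny, like an
    adblocker's blocklist stance."""
    return [c for c in comments
            if trust.get(c["author"], 0.0) >= threshold]

comments = [
    {"author": "alice", "text": "Thoughtful point about R*."},
    {"author": "bot_4711", "text": "BUY CRYPTO NOW"},
    {"author": "carol", "text": "Agreed, with one caveat."},
]

kept = filter_comments(comments)
# kept retains alice's and carol's comments; bot_4711 is screened out
```

The hard part, of course, is not this filtering step but computing trust indices that can't be gamed, which is where the "easier said than done" comes in.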

LLMs need not necessarily use up lots of energy once trained. They will be run on edge devices, which will undermine the current pricing model while coincidentally reducing privacy concerns. If search can be supported by ads, then there is no reason why the downloading of links cannot be done by a local machine and the content processed for output locally, saving the user time and effort.

1. I agree with your point about this being an iteration of an always-similar, never-the-same social process (what's a ritual but a device to compel attention, other people's and other *beings*'?)

2. I don't think the only options are reading everything, ignoring everything, or using AI to filter things out. I don't need an AI to filter in or out what Brad writes; if I ever believe or learn he's acting in bad faith, or is otherwise uninformative, I'll stop reading him and that's it. Yes, we might need AI to deal with world-writeable media like platform feeds and comments, but that's far from the only way to set up an information ecosystem, and I'd argue there's very little there you can't get more interestingly and usefully elsewhere. You can set up all sorts of automated systems to figure out who to pay attention to in your house party... or you can make your house party invitation-only, try to get the smartest good-faith people you can, and kick out anybody who comes bearing sewage.

> Musk broke even the basic trust model on Twitter. So why not have ML tools do the heavy lifting to screen out those with low trust indices?

Alternatively: don't read Twitter or any other world-writeable area, but rather the blogs, newsletters, papers, and books of a few dozen experts you trust; seek out those _they_ recommend, and so on and so forth. I'm not arguing for a 100% algorithm-free approach to maintaining your intellectual landscape [I track around 1,500 Google Scholar alerts, and had to write my own software to handle duplicates, filter out unreliable domains, and so on], but I do think the structure of knowledge is such that the bottleneck is less in our finding expertise than in using it.
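The alert-triage software mentioned in the brackets isn't shown anywhere, but the two operations described (handling duplicates, filtering unreliable domains) can be sketched roughly like this; the item fields, blocklist, and URLs are all made-up placeholders, not the commenter's actual code:

```python
# Rough sketch of alert triage: deduplicate items by normalized title
# and drop entries whose domain is on a blocklist.
# The blocklist entries and alert data below are hypothetical.

from urllib.parse import urlparse

BLOCKED_DOMAINS = {"contentfarm.example", "paper-mill.example"}

def triage(alerts):
    """Yield alerts with unseen titles and non-blocked domains."""
    seen_titles = set()
    for item in alerts:
        # Normalize case and internal whitespace so near-identical
        # titles from different alerts collapse to one key.
        title = " ".join(item["title"].lower().split())
        domain = urlparse(item["url"]).netloc
        if title in seen_titles or domain in BLOCKED_DOMAINS:
            continue
        seen_titles.add(title)
        yield item

alerts = [
    {"title": "A Survey of R* Estimates", "url": "https://journals.example/rstar"},
    {"title": "a survey  of r* estimates", "url": "https://mirror.example/dup"},
    {"title": "Ten Weird Tricks", "url": "https://contentfarm.example/x"},
]

kept = list(triage(alerts))  # only the first item survives triage
```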

I think that in my lifetime, I was used to having various trusted sources act as filters. Parents telling you who not to spend time with. Reading newspapers and magazines you felt you could trust based on the editors. Selecting film reviewers who seemed to match one's own experiences of movies, and, of course, using textbooks. I remember in the late 1990s[?] an early newsfeed that pushed out lots of news items, rather like a financial news feed for trading rooms. There were discussions about whether filtering the news or accepting the hosepipe was the way to go.

As I cannot even keep up with my reading - unread books (fiction and non-fiction) pile up and the web keeps grabbing my attention - filtering of some sort is needed. We know that "engagement" is used to trick one into following near-infinite paths that maintain attention, a parasitic problem that really needs better cognitive tools to deal with. I think it was Cory Doctorow who said that new forms of attention-grabbing initially cause a social problem, but then society as a whole learns to deal with it. [Of course there are the "information-aholics" who cannot stop.]

I don't use Twitter, but I do occasionally read a Twitter/X post that has been suggested as worth reading.

"...the structure of knowledge is such that the bottleneck is less in our finding expertise than in using it." - That is a very interesting statement, and at first glance I agree with it.

Do you have any links to support that assertion/observation? [As regards global heating/pollution/X issues, I agree - but I could be biased.]

> Do you have any links to support that assertion/observation?

Only my own growing pile of unread books and papers, each of which encodes a chunk of expertise I've already located and would very much like to be faster and better at integrating... (Something LLMs, at least so far, don't seem to be of much use with. It might well be that a Young Lady's Illustrated Primer, a la Neal Stephenson, would be a step up in educational technologies, and certainly something like that will require LLMs; but I suspect it's going to be built _with_ them, not just by feeding existing texts into a model and hoping it understands them in any functional sense. Still, figuring out really radical educational technology would certainly count as a long-term disruptive change, and I do hope LLMs have at least solved part of the linguistic aspects of the problem.)

"Dedicating untold megawatts, MIPS, programmer-hours, and our own tool-learning and usage bandwidth to a net-zero game of first creating ritual bull and then disposing of it is *not* what a rational society does."

But a rational society does create sewage and has sewage treatment plants, doesn't it? That is costly too.

That was meant for Marcelo. Maybe I misunderstood what he said.

Well, sewage is a side effect of useful activities. We don't deliberately add sewage to our products just to increase productivity measured by volume [or publication numbers, or readership stats, or... ] and therefore force consumers to build (or pay us for) sewage treatment plants.

ETA: We do have laws that frown on companies doing that (although post-Brexit Britain seems to have second thoughts about it); the same in the content/informational ecosystem, if not necessarily enforceable by law, can be at least partly enforced by a refusal to deal with spam-generating entities.

True, true. I was probably incorrectly thinking that all the junk (sewage) due to the current technology may have a technological solution (sewage treatment). Your original point is: why release the current technological junk in the first place?
