Jason Kottke & Ezra Klein & MarshallBermanChatBot on how to deal wiþ all that is solid melting into air; cost-disease & cost-shifting rule relative price changes; Davies and Levine snark about Sil...
Page-level autocomplete seems to work about as well as short-phrase autocomplete: it's accurate maybe 10%-15% of the time. The AI/ML systems that are useful are the product of extensive and focused data gathering, and they do an advanced form of compression or curve fitting. If you want to optimize a metal alloy or find a protein structure, they can work fairly well, but that's because of the clean, standardized data set.
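To make the "compression or curve fitting" point concrete, here is a toy sketch (my illustration, not from the comment): a clean, standardized dataset compresses into a couple of fitted parameters, and "prediction" is just evaluating the fitted curve. The data and model are invented for the example.

```python
# Toy illustration of the "curve fitting" view of ML: with clean,
# standardized data, 100 observations compress into 2 parameters.
# "True" process (assumed for the example): y = 3x + 1.
xs = [i / 10 for i in range(100)]
ys = [3 * x + 1 for x in xs]

# closed-form ordinary least squares for slope and intercept
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(round(slope, 3), round(intercept, 3))  # prints 3.0 1.0
```

With noisy or non-standardized data the same machinery recovers the parameters only approximately, which is the point the comment is making about alloys and protein structures versus messier domains.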
A friend of mine has been exploring various AI chat systems' literary abilities, asking them questions about English-language literature. She says the completion systems have access to interesting things, but you have to already know those interesting things in order to get their completions. It is a chicken-and-egg problem that leaves one stuck with a chicken or an egg and no way to get from one to the other.
It's a lot like internet search. If you know the right terminology and phrasing, you can - or at least could - find all sorts of interesting things, but if you don't, you'll have to keep trying alternative search phrasings and hope that hit #267 provides a useful key phrase that leads you to a more interesting place. Search systems have actually gotten worse in this regard as they have done a lot of work to "improve" search over the last decade. It is often impossible to find certain things without precise phrasing and priming.
What happened to search? It moved away from simple textual search and tried to understand semantics. Suddenly, things that had been accessible vanished, because their web pages didn't fall into the correct place in semantic vector space. It's much easier to figure out an appropriate textual search than to outsmart a relatively opaque semantic-space algorithm. Conversational search seems to have the same problem. At least with a web page one can look at the URL and original text to determine whether it is bogus, rather than having a credulous semantic algorithm accept and incorporate it. (Maybe these things should provide footnotes.)
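The textual-versus-semantic contrast can be sketched in a few lines. This is my toy illustration, not anyone's real search stack: plain term matching is predictable to the user, while a vector-space ranker (here crudely faked with bag-of-words cosine similarity) produces scores the user can't easily reason about. The documents and queries are invented.

```python
import math
from collections import Counter

docs = [
    "1972 vw beetle carburetor tuning notes",
    "vintage car restoration community forum",
    "modern fuel injection replaces the carburetor",
]

def textual_hits(query):
    # plain textual search: every query term must literally appear,
    # so the user can predict exactly what will match
    terms = query.lower().split()
    return [d for d in docs if all(t in d for t in terms)]

def cosine(a, b):
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_ranking(query):
    # stand-in for a vector-space ranker: the scores exist, but the
    # user has no direct way to see why a page lands where it does
    return sorted(docs, key=lambda d: cosine(query, d), reverse=True)

print(textual_hits("beetle carburetor"))
print(semantic_ranking("beetle carburetor")[0])
```

In this tiny example both methods agree, but as soon as the "embedding" gets richer than shared words, a page can drop out of the results without its words ever changing, which is the complaint above.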
To deal with this, most search engines now have a list of approved web sites that they use to resolve certain search queries. The same sites, often encyclopedias or standard reference works, appear again and again at the top of one's search results. The reason is obvious: the internet is full of incorrect and poorly indexed information. If the answer can be found in Encyclopedia Wikipedia, it is at least likely to be useful. On the other hand, there is a reason one learns to move beyond encyclopedias and other secondary sources if one is going to find things out.
RE: cut-rate MarshallBermanChatBot - like BradBot, an excellent demonstration & application of LLMs
Rubenstein: It would help if people questioned the practice of term transformation PLUS variable-fixed interest-rate mismatch, not term transformation per se.
On average, people mostly ignore accelerating modernity: roughly 25% are actively pushing it, roughly 25% are actively fighting it, and the majority doesn't really care either way. This coincidentally (or not) explains a lot of American politics, and judging from the rise of authoritarianism worldwide, it's not only the US. The war against the future is actually a war against ambiguity and modernist theories, and a demand to return to a simpler, less abstract, more concrete life and way of thinking. Get rid of Einstein and Heisenberg and go back to Newton (or Aristotle). That the idealized past never existed and that nature is ambiguous does not settle the argument, because the return remains its emotional goal despite the ahistoricity of it. That this would also change the US from a leading industrial and information power into a sidelined observer of progress raises the stakes enormously, as does the fact that ignoring global warming is already set to destroy trillions of dollars and millions of jobs in the US alone.
You are correct that this isn't anything new, but now it's not just threatening blue-collar workers but white-collar workers and professionals as well - the people whose unhappiness isn't generally dismissed by the media. This will affect reporters, editors, producers, the entire labor component of the media, which is why it's getting so much coverage. No one wants to discuss even the possibility of the imminent replacement of knowledge workers, so the issue is addressed by substituting issues that strongly appeal to about one quarter of the populace: a return to religious control over social life, the diminution of women's and children's rights, and attacks on anything that varies from the most simplistic (and incorrect) views of racism, human sexuality, and progress. They're actually trying to make it illegal even to discuss issues like this. I wouldn't be surprised by anti-AI legislation in the near future.
This battle isn't going away any time soon, and it's far too early to call a winner. The future will continue to happen even if some countries decide to drop out. But looking at what happened to European countries from 1900 to 1950 shows the scale of change possible in a 50-year period, and they weren't even facing some of the dangers we face now.
Also, I want to point out that the idea of verbal prompt programming and hacking is not new, I learned about it in high school 50 years ago - https://en.wikipedia.org/wiki/I_Think_We%27re_All_Bozos_on_This_Bus#Portrayal_of_theme_parks_and_computer_technology
I'd argue that the problems of the LAST 50 years are due to "things" (i.e. GDP/capita in rich countries) having changed too slowly, not too fast. The next 50? We'll see. Or you will. I'm outta here :))
AI and LLMs, yes, but climate change has not happened too fast for us to know what to do about it. Since at least the first Earth Day we have known that Pigou taxes are the way to deal with externalities. That CO2 "pollution" is just one more, albeit global, externality is not news. Likewise, COVID did not catch us off guard. The template for vaccine development existed and was put in place. That the CDC and FDA did not use cost-benefit analysis in making policy and in recommending how individuals and local policymakers could craft cost-effective measures to reduce the spread until vaccines were available is a structural issue, not a problem of COVID having happened "too fast." Having to message against the noise coming from the Trump press conferences was a novel and unexpected difficulty, but again, not linked to exponentials.
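The Pigou-tax logic the comment invokes fits in a back-of-envelope sketch. All numbers here are invented for illustration: a linear inverse demand curve, a constant private marginal cost, and a constant marginal external damage per unit.

```python
# Back-of-envelope Pigou sketch (illustrative numbers, not real data).
# Inverse demand: p = 100 - q; private marginal cost: 20;
# marginal external damage per unit (e.g. CO2 harm): 30.

def market_quantity(tax):
    # competitive equilibrium: demand price = private marginal cost + tax
    return 100 - (20 + tax)

q_private = market_quantity(0)    # firms ignore the damage
q_social = market_quantity(30)    # Pigou tax set equal to marginal damage
print(q_private, q_social)        # prints 80 50
```

Setting the tax equal to the marginal damage makes producers internalize the externality, shrinking output from the private optimum (80) to the social optimum (50); that this same mechanism applies to CO2 is precisely the "not news" part.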
The graph of price changes is splendid and enlightening. An odd anomaly, though: no food. Yes, we can compare beer and booze, but not foods, plain or fancy. Food seems a good category of goods to watch for its effect on people.
(I note, with my wife's aid, that our favorite artisanal bread has - we think - held steady through the pandemic, though it seems that somewhat less fancy ones have gone up; while our favorite olive oil has about doubled in price. But data is not the plural of anecdote.)