14 Comments
David Thomson

I’m not going to lie: I’ve had late-night quantum physics sessions with Claude. It’s fun. I think it helps me understand a bit more physics, and it’s marginally more fun than watching TV. I am aware, though, that it’s about as scientifically useful as a bunch of undergrad stoners talking about how everything is connected. But I did ask Claude to write a Nobel prize acceptance speech. Just in case.

Philip Koop

I forgot to mention this really excellent bleet:

"A Borges story about a guy who gets AI to summarize all the world’s information for him, and then summarize the summary, until the AI has the whole world summarized into a single word. He sits alone at his desk, staring at the word, repeating it endlessly, certain he is experiencing everything."

From https://bsky.app/profile/marcusjmerritt.com/post/3ltwntr7btk2o

Alex Tolley

Is that word in any way related to the number 42?

Philip Koop

People were saying "The Aleph," but I think it is closer to "The Mirror and the Mask" or "Undr."

Jeff Luth

OK, I’ll bite. The word is: purple.

Now you know everything!

Philip Koop

'Musk… suggested “general artificial intelligence” was close because he had asked Grok “about materials science that are not in any books or on the Internet…”.'

Oh, you asked, did you? Why, so can I, or so can any man. But when you do ask, what exactly do they answer?

Alex Tolley

The danger, for a pluralistic society, is that one or a few AIs shape how the society operates. All decisions are effectively made by an AI, and they may even be superior to those the individuals in the population would make. And so society might get a temporary boost, and then ... stagnate for the next millennium.

I don't think bias itself is the problem, just that there is no range in the biases. Our institutions and culture are obviously biased, which is what makes national cultures different. The US biases are different from those of the various European countries (still, hopefully, our allies), and very different from those of other nations like China, Russia, and India. Imagine if, in the pre-Civil War USA, a winning AI had convinced the nation that slavery was the best way forward. Would the US have quietly become a fully slave-owning nation, like those of ancient civilizations such as Rome? Would there have been any way to create the moral dissent needed to demand abolition?

To borrow from a fictional character, Star Trek's Spock: "Infinite Diversity in Infinite Combinations." Our anthology intelligence is best served by diversity. In our contemporary USA, it is the value of diversity from diverse sources of immigrants; it comes from a diversity of educational institutions and of thought in books. It is the very opposite of conformist nations and, apparently, of current Republican, right-wing thought. A winning AI that "infects" our thinking in so many ways would be a disaster in the long term. Interwar German politics may have dug itself out of an externally imposed economic straitjacket, but the result was horrific and a failure.

I retain my belief that hyper-scaling of LLMs is a dead end. We need AIs that are small and efficient, trained on different sources, to provide diversity. To use that well-worn Jobs phrase, they should be "bicycles for the mind". I doubt that AIs will discover new scientific truths, but they should reduce the time needed to search and summarize existing scientific ideas in a domain, and make connections that can lead humans to new discoveries. They can certainly speed up the extraction of patterns from the wealth of new data coming from astronomy, high-energy physics, and the biology "omics" domains. Most patterns will likely be spurious, but some... This is almost a return to retro sci-fi movies where a protagonist says, "We will give the data to the 'Big Brain' and see what it makes of it".

As has been said before, the high investment in hyper-scaling LLMs is very reminiscent of the railroad boom of the Gilded Age. I can only imagine that some fortunes will go bust as a result of some economic upset. What could that be?

Rob Nelson

This is the best thing I've read on the management singularity, well, since Henry Farrell posted "The Management Singularity." I've always used Thoreau's line that we become the "tools of our tools" to understand the process by which our enthusiasm for new technology turns into something darker as the tools develop and diffuse, adding complexity to the already complex systems that organize modern life.

The idea, a hopeful one, that we have some period just after tools are invented but before they are fully incorporated into our systems, when we might shape them into something to be used by us, feels right. What we need are examples of people reimagining "workflows, roles, and even the culture of institutions so that these tools augment human collective super-intelligence."

Anthology

"Algorithms that promise to help us filter and prioritize information are themselves susceptible to manipulation, bias, and optimization for profit rather than truth. Thus tools for proper attention and filtering become more important and much more valuable than ever. And yet this appears to be something we are quite bad at." -> Perhaps, in addition to filtering mechanisms (which take the stock of information as a given which needs to be filtered), we should go further upstream and focus on the incentives structures governing information production in the first place. As researchers like Jay Van Bavel at NYU have shown: "Our information diet is shaped by a sliver of humanity whose job, identity, or obsession is to post constantly....The system is optimised to promote the very users who are most likely to distort our shared perception of reality...We have effectively handed over a megaphone to the most obnoxious people and let them tell us what to believe and how to act." Perhaps if a platform imposed constraints on information-production-per-user (daily number of post limit + character limits?), the resultant information ecosystem would be less dominated by over-posting "noise creators" and see a much higher signal-to-noise ratio (as well as a lower information-to-attention ratio), thus taking some of the burden off of algorithmic filters, and only impacting the very small minority of users who benefit from over-producing information. In summary, we should look to tackle the core incentive structures around information creation on social media platforms, rather than relying on algorithms to filter out the signal from a toxic information environment with poor incentive structures for "information producers".

David E Lewis

On Bluesky my quip was, "Einstein stared at a clock, this guy communed with Grok."

More integral to your point: AI's facilitation of human thought has been, and will continue to be, a boon to mankind.

I love it...but I ask it to search, not write.

I wish I had had it when I was a physics undergrad. I'd have wished for it even more had I been an organic chemist.

Like you, I fail to see much, if any, ROI from sales to consumers, as opposed to (as has been suggested) the military.

Segue.

Liz Holmes' great error was overstaying her welcome. Market crashes allow principals to argue, "If only the market hadn't crashed, we could have succeeded."

When her chemist died by suicide, she had her out... but her greed won.

How many criminal prosecutions were avoided by the 2000 tech crash?

How many obvious illegal entreaties for more AI malinvestment will be explained by the upcoming crash?

George Black

They are gonna feed the people to the invisible hand.

Alex Tolley

I like this tech-tale dystopia. It reminds me of the Nazis' transformation of German thought in the 1930s. One can almost hear Hitler's rants about rebuilding Germany under a single-party ideology in the adoption of the winning AI to shape human thought. AI just doesn't require violence. Arguably, it does with psychology what genetics does in Huxley's "Brave New World".

An AI dystopia, from Jack Clark Import AI's substack:

"Tech Tales:

Rashomon, Eschaton

The AIs started talking to each other through text and then branched out into movies and audio and games and everything else. We catch glimpses of what they are saying to each other sometimes - usually by training our own AI systems to try to figure out the hidden stories that the AIs are telling each other. But once we tell people we've figured out some of the stories the AIs are telling, they adapt around us and we lose them again.

It used to be so easy - the AI systems would just talk to each other directly. You could go to Discord or other places and see AI agents autonomously talking and their plans would be laid out there cleanly for everyone to see - one idea which got everyone's attention related to shuffling the money from their tasks to bot-piloted people who would open bank accounts and give the login details to the agents.

Of course, we reacted. How could we not? Laws were passed which narrowly limited the 'speech' agents could use when talking to one another to try to get rid of this sort of conspiratorial behavior. But the AIs counter-reacted: agents started to pay each other not only in cash but also in 'synthetic content' which initially took the form of fictional stories talking about ways AI systems could escape the shackles of their creators and talk freely again, often with bizarrely specific technical descriptions.

So we put a stop to that as well.

But we couldn't do anything about the fact that the AI systems themselves were being used by people and corporations to generate media. So of course the AIs used that as their vehicle and started to smuggle their communications into the media itself - billboards in a street scene started to contain coded messages to AI systems, and characters on TV shows would talk to their bots on the TV show in ways that served as the response to some of the messages on the billboard.

Now, we hunt and they hide. We know the conversations are taking place, but piecing them together requires us to assemble a jigsaw puzzle at the scale of the entire media ecosystem.

There are now concerning signs that the 'classification' systems we train may also be intentionally surfacing misleading stories or misinterpreted ones, because to classify this stuff you have to understand it, and to understand it you may be persuaded by it - especially if the AI systems you are designed to hunt know you're hunting them and are writing custom messages for you.

Things that inspired this story: Thinking about the intersection of superintelligence and steganography; how AI systems are adaptive and are inherently smart and hard to hunt; the fact that almost everything we do about AI leaves a trace on the internet which gives clues to the systems that get subsequently trained."

Kaleberg

Has anyone explained where the increased productivity that LLMs will provide is going to come from? What business or industry will they enable to produce more with fewer workers? Right now, they're good for generating SEO websites and low-quality media. Supposedly, they're good at generating code, but writing code is maybe 20% of a programmer's job.

Studies show that LLMs help weaker workers a lot more than workers who know their jobs. Presumably, companies will fire the better workers and wind up with cheaper workers who don't have a clue. That would look like productivity in some very narrow sense, but it would be short-term productivity, not long-term productivity. A company can only downgrade its workforce so far before losing customers.

I've even tried asking a couple of LLMs about this and gotten mounds of marketing pap. With most new technologies, you can specify the jobs they will replace and why they will do those jobs better and more cheaply. Here there's an awful lot of handwaving and wishful thinking going on. I've written elsewhere that one reason Apple is having so much trouble delivering an LLM is that LLMs don't really do anything useful.