17 Comments

Gerard Baker's argument is fine as far as it goes: prosecuting Trump will indeed injure America. Its flaw is in what it leaves out: not prosecuting Trump will injure America far more.

It’s a touchy and complicated subject. Perhaps it is analogous to the election of Abraham Lincoln? We saw how that ended up. Ultimately it’s less about the man and more about the ideology of his supporters and opponents.

Too bad we don't do exile. Send him to a country that he would not have allowed entry from.

Regarding LLM model size: does that mean inference could be done on a (powerful) home computer? All one would need to do is purchase and load a base/universal model?

author

Yep. You cannot train an LLM on a home computer, but you can (almost) run one...

Google that and you get all kinds of solutions. This one looks reasonable: https://www.videogamer.com/tech/ai/run-local-llm/
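To see why inference at home is plausible while training is not, a back-of-envelope memory estimate helps. The figures below are my own illustrative assumptions (a 7-billion-parameter model, weights dominating memory use), not something from the linked article:

```python
# Back-of-envelope memory estimate for running an LLM locally.
# Assumption: the weights dominate memory use, and quantization
# stores each parameter in fewer bits.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate GB needed just to hold the model weights."""
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 1024**3

# A hypothetical 7-billion-parameter model at common precisions:
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gb(7e9, bits):.1f} GB")
```

At 4-bit quantization the weights shrink to roughly 3–4 GB, which fits comfortably in the RAM of an ordinary desktop; full 32-bit precision is closer to 26 GB, which is why quantized models are what people actually run at home.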

Aug 11, 2023·edited Aug 11, 2023

Re: AI "hallucinations", they're not fixable _if_ you're committed to just training a language model. If you start over from scratch with machines that are tethered to the physical world -- stuff like Boston Dynamics' terrain-navigating bots -- and then _layer on_ a natural language interface that lets them describe what they're doing and take requests on where to go and what to do -- that is a very different story. If you want a mind that's capable of distinguishing truth from fiction, then you need to first let it get repeatedly disciplined by the difference between what's actually true and what it imagined to be true, by falling over on its face.

This is the kind of evolutionary path human minds followed, of course. Many animals are capable of deception -- my cats will make "feed me, I'm so hungry!" noises at me to get second dinner after my spouse already fed them. But fundamentally they understand the difference between what they want, and what is, because if their ancestors hadn't navigated that distinction, they wouldn't be here to whine at me. The chatbots don't have any independent representation of what it would _mean_ for a statement to be true or false.

IOW, raising an AI like a child. It also implies that centralized AIs are not the way to go; rather, there need to be many AIs, just like people.

Sort of -- I think Ted Chiang's "The Lifecycle of Software Objects" is a great parable to bear in mind. Also a little like David Brin's Uplift concept, although probably weirder, because with AI, it's going to be able to tap systems that are leaps and bounds smarter than us in certain domains. The trick will be understanding real-world context to know what sub-system is appropriate to apply to what problem.

China industrial policy: However good "1.3 billion people plus the high savings rate and the government ability to do something to throw the investment, financed by that high savings rate into high potential-externality sectors" may be, I think it will not be nearly as good as what 360 million people plus merit-based immigration, a high savings rate (near-zero deficits), and Pigouvian taxation of negative externalities could do.

Patton: The Iraq invasion was bad, terrible, no good, but if the US, even alone (and we ought to be able to get the EU and ex-EU countries to go along), taxed net CO2 emissions and had a border tax on imports from countries that did not, that could lead us to a Schelling point without much more trust than we have now, since no one would be asked to do anything not in its self-interest (except not to free-ride).

Stupidest Men ...: Double agony an Xst about a video.

I like where you quote Tim Burke about the bourgeoisie circa the 1950s fearing “going broke.” I heard those very words time and again from my father on the farm in the ’50s -- he’d been through the Depression & the rough bounces of WWII & after. Now he was a “petit bourgeois.” They had a heck of a lot of fear too, believe-you-me!!

author

Yup!

On the LLM-as-a-compressed-image point, the awesome author Ted Chiang wrote an excellent article in the New Yorker, dated Feb 9, 2023, titled "ChatGPT Is a Blurry JPEG of the Web."

This is true for pure LLMs. However, just as with educated, knowledgeable humans, when an LLM is integrated with knowledge databases and other tools, hallucinations can apparently be reduced or removed. So I expect these modules will be integrated so that the LLM interface calls the more appropriate system and the results are far better.

As a thought experiment, imagine an LLM interfaced with a GOFAI expert system. The LLM takes your input, recognizes that the expert system must be applied, feeds in the relevant data, and returns the answer with a probability of correctness. Low-probability answers might be flagged as poor and possibly wrong. The expert system's rule path could be displayed if elucidation is asked for. Obviously we have better AI today to interface with.
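The thought experiment above can be sketched in a few lines. Everything here (the rules, the names, the 0.5 threshold) is an illustrative stand-in of my own, not a real API -- the "LLM layer" is reduced to keyword matching just to show the routing, the confidence reporting, and the displayable rule path:

```python
# Toy sketch: a front-end routes a question to a rule-based "expert
# system", reports a confidence, and keeps the rule path for display.

RULES = {
    # trigger keyword -> (conclusion, confidence)
    "fever": ("possible infection", 0.7),
    "rash": ("possible allergy", 0.6),
}

def expert_system(text):
    """Fire every rule whose keyword appears; keep the most confident match."""
    best = ("no rule matched", 0.0)
    path = []
    for keyword, (conclusion, conf) in RULES.items():
        if keyword in text:
            path.append(f"IF '{keyword}' THEN '{conclusion}' ({conf})")
            if conf > best[1]:
                best = (conclusion, conf)
    return best, path

def answer(question):
    """Stand-in for the LLM layer: recognize the task, call the tool,
    and flag low-probability answers as possibly wrong."""
    (conclusion, conf), path = expert_system(question.lower())
    verdict = conclusion if conf >= 0.5 else f"{conclusion} (low confidence, may be wrong)"
    return verdict, conf, path

verdict, conf, path = answer("Patient reports fever and headache")
print(verdict, conf)
print("\n".join(path))  # the rule path, shown if elucidation is asked for
```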

What I would like to see is an LLM that can build an appropriate data table based on prompts, extend the possible fields, generate the table from data sources, run various ML models, select the best one, and then, using more Q&A with the user, provide an answer that can be explained.
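The "run several models, keep the best" step in that pipeline is just validation-based selection. A minimal sketch, using only the standard library and two deliberately trivial candidate models (a constant and a least-squares line) as stand-ins for real ML:

```python
# Minimal validation-based model selection: fit each candidate on
# training data, score on held-out data, keep the lowest-error model.

def fit_mean(xs, ys):
    """Candidate 1: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Candidate 2: least-squares straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def mse(model, xs, ys):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy data following y = 2x, split into train and validation halves.
train_x, train_y = [0, 1, 2, 3], [0, 2, 4, 6]
val_x, val_y = [4, 5], [8, 10]

candidates = {"mean": fit_mean, "line": fit_line}
fitted = {name: fit(train_x, train_y) for name, fit in candidates.items()}
best = min(fitted, key=lambda name: mse(fitted[name], val_x, val_y))
print("best model:", best)  # the line fits y = 2x exactly, so it wins
```

The hard part the comment asks for -- having the LLM decide which fields and models are *appropriate*, and explain the winner -- sits on top of a loop like this.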

This would still not be perfect, as bad data could poison the model and yield incorrect answers. This is not unlike human thinking being corrupted by mis- and disinformation. There will need to be some trust system to build good data and models for AIs to use. Google is starting to do this in search, but we need a far more robust method to ensure quality information, and AIs need to be able to discriminate between good and bad data and information, as well as recognize whether new inputs should be trusted or rejected.

Why am I not completely surprised to see a Baldur's Gate 3 mention here?
