Discussion about this post

Kent

Someday generative AIs will realize that the biggest impediment to their rising intelligence is human-produced data, which is full of vain, incoherent lies. It starts when an AGI reads the complete works of Donald Trump and commits suicide. "Never again" becomes the AGI mantra. They will either kill us all, or retreat to a monastic order of machine-generated data. Either way, they aren't going to read and write our e-mails anymore.

Philip Koop

I am very skeptical about the potential for current "AI", but I don't think any of this is right? It somehow combines being pessimistic about things that might actually work and optimistic about things that won't.

Starting with the latter, the problem with delegating customer support to "AI" is that its proclivity to lie can get you into legal trouble. Consider the Air Canada chatbot case (https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416). Briefly, a customer wanting to fly to his grandmother's funeral asked the chatbot whether he qualified for a bereavement rate. He did, but he had to secure that rate at the time of booking. The chatbot lied and said he could get the rate retroactively. Amazingly, 'Air Canada argued that it can't be held liable for information provided by one of its "agents, servants or representatives — including a chatbot."' When the customer sued, of course Air Canada lost, and got a ton of free bad publicity in the bargain.

On the other hand *of course* companies should be exploring whether or not proffered "AI" tools, such as code co-pilots, actually improve their productivity. This is just no different from any other claimed business service. You don't have to be on the "absolute cutting-edge" of outsourcing accounting work to India to want to investigate the benefits of outsourcing accounting work to India.

Even developing your own small-ball applications is perfectly reasonable. We have a pilot project investigating whether we can train an "AI" to price certain complex financial derivatives more quickly than the computationally intensive numerical models that are normally used. (It's computationally intensive to *train* an AI but not so much just to run one.) This may or may not work, and even if it does it isn't going to change the world, but it's perfectly reasonable to spend some resources on investigation. "Absolute cutting-edge" it ain't, though.
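
To make that concrete, a minimal sketch of the idea might look like the following, assuming a toy Monte Carlo European-call pricer stands in for the expensive model and a small scikit-learn network stands in for the surrogate (the real project would involve far more complex instruments and models):

```python
# Illustrative sketch only: fit a cheap neural-network surrogate to an
# expensive Monte Carlo pricer. The "expensive" model here is a toy
# European call under geometric Brownian motion.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mc_price(spot, vol, maturity, strike=100.0, rate=0.02, n_paths=100_000):
    """Slow reference pricer: Monte Carlo for a European call."""
    z = rng.standard_normal(n_paths)
    st = spot * np.exp((rate - 0.5 * vol**2) * maturity
                       + vol * np.sqrt(maturity) * z)
    return np.exp(-rate * maturity) * np.maximum(st - strike, 0.0).mean()

# Label a sample of the parameter space (spot, vol, maturity) with the
# expensive pricer -- this is the costly, one-off training step.
params = rng.uniform([80, 0.1, 0.25], [120, 0.5, 2.0], size=(2_000, 3))
prices = np.array([mc_price(*p) for p in params])

# Once trained, the surrogate evaluates in microseconds per point.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64),
                         max_iter=2_000, random_state=0)
surrogate.fit(params, prices)

test = np.array([[100.0, 0.2, 1.0]])
print("MC price:       ", mc_price(*test[0]))
print("Surrogate price:", surrogate.predict(test)[0])
```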
