READING: Ludicity: I Will F***ing Piledrive You If You Mention AI Again
A rant that strikes me as wise this morning: Ludicity says: Executives everywhere are tripping over themselves to discuss how they will use generative AI to prepare for the future of work. But unless you are one of a tiny handful of businesses who know exactly what you're going to use AI for, you probably don't need to do anything at all. Focus on business fundamentals, not shiny objects. AI may change everything, or it may not. But most companies' tepid preparations will be futile either way. Rather than wasting resources on superficial AI projects just for show, focus on shoring up your core operations and culture…
This:
Ludicity: I Will F***ing Piledrive You If You Mention AI Again: “Sweet merciful Jesus, stop talking. Unless you are one of a tiny handful of businesses who know exactly what they're going to use AI for, you do not need AI for anything – or rather, you do not need to do anything to reap the benefits. Artificial intelligence… is probably already baked into your business's software supply chain. Your managed security provider is probably using some algorithms… They didn't do much AI work either, they bought software from the tiny sector of the market that actually does need to employ data scientists. I know you want to be the next Steve Jobs, and this requires you to get on stages and talk about your innovative prowess, but none of this will allow you to pull off a turtleneck, and even if it did, you would need to replace your sweaters with full plate to survive my onslaught….
I see executive after executive discuss how they need to immediately roll out generative AI in order to prepare the organization for the future of work. Despite all the speeches sounding exactly the same, I know that they have rehearsed extensively, because they manage to move their hands, speak, and avoid drooling, all at the same time!
Let's talk seriously about this for a second.
I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on.
However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.
The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we're all harvested for our constituent atoms…. I am open to the possibility of this happening…. I am not tremendously concerned with the company's bottom line in this scenario, so we won't pay it any more attention.
A second outcome is that it turns out that the current approach does not scale… not enough data… architecture doesn't work… the thing just stops getting smarter, context windows are… limiting…. [While] some industries will be heavily disrupted, such as customer support… your company does not need generative AI for the sake of it. You will know exactly why you need it if you do….
I keep track of my life administration via Todoist, and Todoist has a feature that allows you to convert filters on your tasks from natural language into their in-house filtering language. Tremendous! It saved me learning a system that I'll use once every five years. I was actually happy about this, and it's a real edge over other applications. But if you don't have a use case then having this sort of broad capability is not actually very useful. The only thing you should be doing is improving your operations and culture, and that will give you the ability to use AI if it ever becomes relevant. Everyone is talking about Retrieval Augmented Generation, but most companies don't actually have any internal documentation worth retrieving. Fix. Your. Shit.
The final outcome is that… we end up with something that actually can do things like replace programming… be broadly identifiable as general intelligence…. [If] generative AI goes on some rocketship trajectory, building random chatbots will not prepare you for the future…. Having your team type in
import openai
does not mean that you are at the cutting-edge of artificial intelligence no matter how desperately you embarrass yourself on LinkedIn…. Your business will be disrupted exactly as hard as it would have been if you had done nothing, and much worse than it would have been if you just got your fundamentals right. Teaching your staff that they can get ChatGPT to write emails to stakeholders is not going to allow the business to survive this. If we thread the needle between moderate impact and asteroid-wiping-out-the-dinosaurs impact, everything will be changed forever and your tepid preparations will have all the impact of an ant bracing itself very hard in the shadow of a towering tsunami…. You either need to be on the absolute cutting-edge and producing novel research, or you should be doing exactly what you were doing five years ago with minor concessions to incorporating LLMs. Anything in the middle ground does not make any sense unless you actually work in the rare field where your industry is being totally disrupted right now…” <https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/>
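For readers who haven't met the term, the Retrieval Augmented Generation that Ludicity jabs at above is a simple pattern: before asking a language model a question, look up the most relevant passages from your own documents and paste them into the prompt. Here is a minimal sketch of the idea, not anything from the original post; the toy corpus is invented, and a TF-IDF retriever stands in for the usual embedding model so it runs offline:

```python
# Minimal Retrieval Augmented Generation sketch (illustrative only).
# The "corpus" stands in for a company's internal documentation; the rant's
# point is that if these documents are thin or wrong, no retrieval step saves you.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN profile is rotated every quarter; contact IT for the current one.",
    "Production deploys require two approvals and a passing CI run.",
]

question = "How long do I have to file an expense report?"

# 1. Retrieve: rank documents by similarity to the question.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
best_doc = corpus[scores.argmax()]

# 2. Augment: stuff the retrieved text into the prompt.
prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"

# 3. Generate: hand the prompt to whatever LLM you use (call omitted here);
#    the retrieval step above is the part that depends on your documentation.
print(prompt)
```

The generation step can only paraphrase what the retrieval step finds, which is exactly the rant's complaint: if the internal documentation isn't worth retrieving, the chatbot built on top of it isn't worth consulting.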
Someday generative AIs will realize that the biggest impediment to their rising intelligence is human-produced data, which is full of vain, incoherent lies. It starts when an AGI reads the complete works of Donald Trump and commits suicide. "Never again" becomes the AGI mantra. They will either kill us all or retreat to a monastic order of machine-generated data. Either way, they aren't going to read and write our e-mails anymore.
I am very skeptical about the potential for current "AI", but I don't think any of this is right? It somehow combines being pessimistic about things that might actually work and optimistic about things that won't.
Starting with the latter, the problem with delegating customer support to "AI" is that its proclivity to lie can get you into legal trouble. Consider the Air Canada chatbot case (https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416). Briefly, a customer wanting to fly to his grandmother's funeral asked the chatbot whether he qualified for a bereavement rate. He did, but he had to secure that rate at the time of booking. The chatbot lied and said he could get the rate retroactively. Amazingly, 'Air Canada argued that it can't be held liable for information provided by one of its "agents, servants or representatives – including a chatbot."' When the customer sued, of course Air Canada lost, and got a ton of free bad publicity in the bargain.
On the other hand, *of course* companies should be exploring whether proffered "AI" tools, such as code co-pilots, actually improve their productivity. This is no different from evaluating any other claimed business service. You don't have to be on the "absolute cutting-edge" of outsourcing accounting work to India to want to investigate the benefits of outsourcing accounting work to India.
Even developing your own small-ball applications is perfectly reasonable. We have a pilot project investigating whether we can train an "AI" to price certain complex financial derivatives more quickly than the computationally intensive numerical models that are normally used. (It's computationally intensive to *train* an AI, but not so much to just run one.) This may or may not work, and even if it does work it isn't going to change the world, but it's perfectly reasonable to spend some resources on investigation. But "absolute cutting-edge" it ain't.
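To give a sense of the shape of that kind of pilot (the payoff, parameter ranges, and model below are invented stand-ins, not our actual derivatives or code): run the slow numerical pricer offline to label a training set, fit a small regression model to it, and then answer pricing queries from the cheap model.

```python
# Illustrative surrogate-pricing sketch: a slow Monte Carlo pricer generates
# training data once, and a small neural network then approximates it cheaply.
# The European call payoff and parameter ranges are toy stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mc_price(spot, strike, vol, rate, maturity, n_paths=20_000):
    """Slow reference model: Monte Carlo price of a European call."""
    z = rng.standard_normal(n_paths)
    terminal = spot * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    return np.exp(-rate * maturity) * np.maximum(terminal - strike, 0.0).mean()

# 1. Sample the parameter space and label it with the expensive model (done offline).
n_train = 2_000
params = np.column_stack([
    rng.uniform(50, 150, n_train),    # spot
    rng.uniform(50, 150, n_train),    # strike
    rng.uniform(0.1, 0.5, n_train),   # volatility
    rng.uniform(0.0, 0.05, n_train),  # rate
    rng.uniform(0.1, 2.0, n_train),   # maturity in years
])
prices = np.array([mc_price(*row) for row in params])

# 2. Fit the cheap surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000, random_state=0)
surrogate.fit(params, prices)

# 3. At query time, the surrogate answers without re-running Monte Carlo.
query = np.array([[100.0, 105.0, 0.2, 0.01, 1.0]])
print("surrogate:  ", surrogate.predict(query)[0])
print("monte carlo:", mc_price(*query[0]))
```

In practice you would also scale the inputs, validate the surrogate against the reference pricer on held-out parameters, and worry about how it behaves outside the training range; the sketch only shows the division of labor between the expensive offline step and the cheap online one.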