On the "Agriculture" image. Of course, mining is much bigger (and much more geographically impactful) than it area hectares indicates. Not counting the 1-2KM underground is sorta like saying the Empire State Building is only 400x200 feet. ;-)
From today's edition of Sayash & Arvind's substack "AI Snake Oil":
The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference.
Yet, there are three kinds of tasks for which ChatGPT and other LLMs can be extremely useful, despite their inability to discern truth in general:
Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.
Tasks where truth is irrelevant, such as writing fiction.
Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation.
On the "Agriculture" image. Of course, mining is much bigger (and much more geographically impactful) than it area hectares indicates. Not counting the 1-2KM underground is sorta like saying the Empire State Building is only 400x200 feet. ;-)
From today's edition of Sayash & Arvind's substack "AI Snake Oil":
The philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. By this measure, OpenAI’s new chatbot ChatGPT is the greatest bullshitter ever. Large Language Models (LLMs) are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic. But OpenAI is clear that there is no source of truth during training. That means that using ChatGPT in its current form would be a bad idea for applications like education or answering health questions. Even though the bot often gives excellent answers, sometimes it fails badly. And it’s always convincing, so it’s hard to tell the difference.
Yet, there are three kinds of tasks for which ChatGPT and other LLMs can be extremely useful, despite their inability to discern truth in general:
Tasks where it’s easy for the user to check if the bot’s answer is correct, such as debugging help.
Tasks where truth is irrelevant, such as writing fiction.
Tasks for which there does in fact exist a subset of the training data that acts as a source of truth, such as language translation.
Yea: this is very good...