I Now Have a Problem Assigning Take-Home Short-Answer Questions
No, AI cannot think. But it can lead us to think that it can think. And it is now so advanced that it can do so relatively easily...
I surf on over to <http://rytr.me>, which tells me:
The prompt was “Malthusian Economies”, with keywords: fertility, slow technological progress, élites, exploitation, fertility, patriarchy, positive check, preventative check.
Reading each of these under exam-grading conditions, I would be likely to conclude that there was a mind behind each answer, a mind that had a B-level understanding of the concept of Malthusian economies.
I would also say that they are all far from perfect:
In the first, fertility does not exceed mortality; rather, the two balance.
In the second, I would not say that a Malthusian economy relies on exploitation, but rather that it generates very powerful pressures that produce exploitation.
In the third, the sentence “The positive check is the death rate and the preventative check is the birth rate” is just wrong: the checks are forces acting on the rates, not the rates themselves, as the balance condition below makes precise.
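For the record, the textbook balance condition that the first and third answers fumble can be written compactly (my notation, a standard sketch rather than anything drawn from the answers themselves): with $y$ income per person and $L$ population,

$$\frac{\dot{L}}{L} = b(y) - d(y),$$

where the preventative check is whatever lowers the birth schedule $b(\cdot)$ and the positive check is whatever raises the death schedule $d(\cdot)$. The Malthusian equilibrium is the income $y^*$ at which $b(y^*) = d(y^*)$: fertility and mortality balance there, and the checks are what move the schedules toward that balance.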
Thus all three are well below A-level understanding. But they appear to be at B-level understanding. And this is a problem.
This is a problem, of course, because there is no mind back there. There is no understanding. There is just a bag of words and a set of numbers that are correlations.
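To make “a bag of words and a set of numbers that are correlations” concrete, here is a minimal sketch of the crudest such machine, a bigram Markov text generator in Python. It is purely illustrative—the corpus, function names, and seed are mine, and Rytr’s GPT-style machinery is enormously larger and more sophisticated—but the family resemblance is the point: next-word choice driven by learned co-occurrence statistics, with no understanding represented anywhere.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def generate(followers, seed, n_words=30):
    """Emit words by sampling each next word in proportion to how
    often it followed the current word in the training text.
    Nothing but words and counts: no meaning anywhere."""
    out = [seed]
    for _ in range(n_words - 1):
        counts = followers.get(out[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

# Toy corpus, invented for illustration only.
corpus = (
    "in a malthusian economy fertility and mortality balance "
    "so that slow technological progress raises population "
    "rather than living standards"
)
model = train_bigrams(corpus)
print(generate(model, seed="fertility"))
```

Run it a few times on a bigger corpus and you get fluent-looking fragments about Malthusian economies with, demonstrably, nothing behind them.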
And this fact, of course, provokes three reactions from me:
Perhaps this demonstrates how powerful is our tropism to attribute human mind-level competence to systems for which that attribution is a major category error. There is no more a human-level mind behind each of these answers than it is the case that the lightning is a very large red-haired guy with a big hammer and major anger management problems who drives a cart with two goats. You can make the sociobiology move that such attribution was useful in the environment of evolutionary adaptation, for the potential loss from assuming that something is much dumber than you can be very large. But whether you want to claim this tropism is “adaptive” or not, I do believe that it is a fact.
Perhaps this demonstrates how much “reading” is “taking black marks on a page, and from them spinning-up a sub-Turing instantiation of a human mind, which we then run in a separate sandbox on our wetware and interrogate”. Perhaps this demonstrates how enormously wide the gap can be between the mind of the author that wrote the words and the mind that we construct from the words. Perhaps this demonstrates how much reading takes place between the ears. (And, of course, perhaps this demonstrates how much success in schoolwork evaluations comes not from learning the material, but rather from learning how to spread out the chicken-feed in front of the instructor, which they will then glom onto, leading them to conclude that you know much, much more than you in fact do.)
Perhaps this should lead us to wonder how much of our own thought is really thought, really understanding, and how much is just that somewhere inside our brains there is a bag of words and a set of numbers that are correlations.
We very much have in our own brains a bag of words and a set of numbers which are correlations. We also have other things, which we apply in a very general way.
For instance, the ability to create a map of spatial, kinetic relationships - to predict that our path will intersect the path of the buffalo herd - can be applied to lots of other situations. It can even be applied to non-physical situations. (Will spending exceed taxes ten years from now? Will my memory-allocation strategy result in a buffer overflow?)
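Concretely, the buffalo prediction is a closest-approach computation: two positions, two constant velocities, when and how near do the paths meet? A minimal sketch in Python (the function name and the numbers are made up for illustration):

```python
import math

def closest_approach(p_hunter, v_hunter, p_herd, v_herd):
    """Given 2-D positions and constant velocities, return the
    (time, distance) of closest approach, looking forward in time."""
    px, py = p_herd[0] - p_hunter[0], p_herd[1] - p_hunter[1]
    vx, vy = v_herd[0] - v_hunter[0], v_herd[1] - v_hunter[1]
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0:               # same velocity: the gap never changes
        return 0.0, math.hypot(px, py)
    t = max(0.0, -(px * vx + py * vy) / speed_sq)
    return t, math.hypot(px + vx * t, py + vy * t)

# Hunter at the origin walking east; herd to the northeast drifting south.
t, d = closest_approach((0, 0), (1.0, 0.0), (10, 5), (0.0, -0.5))
print(f"closest approach at t={t:.1f}, distance={d:.2f}")
```

The same little forward model works unchanged whether the moving things are hunters and herds or spending and revenues projected along their current trends.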
I'm guessing a bit at just what other modeling apparatus we have; nobody knows for sure. We do know that our minds contain a complete, manipulable model of our bodies - which can also be used to model other bodies, and other situations as well ("the town is at the elbow of that river"). What else do we have?
As I read the three "responses," I don't see B-level understanding, or any understanding at all. Certainly not what I would have expected from a university undergraduate. As you say, it's just a bunch of words strung together. But OTOH perhaps that is the level of response expected in a short-answer identification of terms? But then, a cut-and-paste of the first paragraph of a Wikipedia article would clear that bar too.
Maybe I'm coming in at the middle of a longer series of questions.