The future of education depends on what we expect students to remember and do, not just what they can prompt chatbots to generate. So forget banning AI. Instead, teach students what it can't do...
There is a lot of question-begging going on in all this analogizing to the technological advances of the past. Oh yes, to be sure, *if* AI functioned as a nail gun does to a hammer, as a compiler does to an assembler, or as a calculator does to pencil and paper, then indeed the problem would be to master the new, higher level of abstraction. But in actuality, in order to integrate AI "similar to the way calculators have been integrated into math and science", we mainly have to account for the fact that the calculator gets a completely wrong answer about 15% of the time. The compensating skill to be developed would not be abstraction but rather estimation, being able to check whether the calculator's answer is in the right ballpark. But this is an advanced skill!
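To make the kind of ballpark check meant here concrete (my own illustration, borrowing the arithmetic from a comment further down):

$$45 \times 1{,}325{,}409 \;\approx\; 45 \times 1.3 \times 10^{6} \;\approx\; 6 \times 10^{7},$$

so if the tool comes back with an answer in the hundreds of thousands, or in the billions, you know at a glance that something has gone wrong, even without being able to reproduce the exact product.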
Once your students understand this point, will they really be so enthusiastic about AI? Will they be happy to leave the interesting bits to automation and take on the dull drudgery of copy-editing and correction themselves? It seems unlikely. I would remind you that you yourself, with your remarkably open and flexible mind, have publicly made several attempts to coax useful output from AI without any notable success so far.
Thank god I'm retired & don't have to worry about any of this, but as a kibitzer what worries me about AI is precisely that it gets so many wrong answers. I find AI useful as a fancy search engine for stuff I know is online, but every time I ask it about stuff I know, I get crap, simply because the answer isn't available to any search engine. You get the answer only by going to the library.
In terms of teaching, in fields where you have knowledge to transfer, one should put oneself imaginatively in the position of the student, understand where they are coming from, and then figure out how to analytically continue them from their state to the state of knowledge you'd like them to be in. This cannot be done by teaching axiomatically the first time around.
From Barenblatt's book on scaling:
"Of special importance is the following fact: the construction of models, like any genuine art, cannot be taught by reading books and/or journal articles (I assume that there could be exceptions, but they are not known to me). The reason is that in articles and especially in books the 'scaffolding' is removed, and the presentation of results is shown not in the way that they were actually obtained but in a different, perhaps more elegant way. Therefore it is very difficult, if not impossible, to understand the real 'strings of the work: how the author really came to certain results and how to learn to obtain results on your OWn."
The scaffolding-removal problem is one major blockage to upward mobility, I think. Which is why I think everything should have two versions—a scaffolding-removed and a scaffolding-shown version:
> Emmanuel Derman: In terms of teaching, in fields where you have knowledge to transfer, one should put oneself imaginatively in the position of the student, understand where they are coming from, and then figure out how to analytically continue them from their state to the state of knowledge you'd like them to be in. This cannot be done by teaching axiomatically the first time around.
> From Barenblatt's book on scaling: "Of special importance is the following fact: the construction of models, like any genuine art, cannot be taught by reading books and/or journal articles (I assume that there could be exceptions, but they are not known to me). The reason is that in articles and especially in books the 'scaffolding' is removed, and the presentation of results is shown not in the way that they were actually obtained but in a different, perhaps more elegant way. Therefore it is very difficult, if not impossible, to understand the real 'strings' of the work: how the author really came to certain results and how to learn to obtain results on your own."
This is a case where I disagree with Brad but do hope he's right [not a bad bet in general]. Abstractions are useful in whatever sense they are functorial; the question is "What is preserved by the shift to and from the abstraction?" LLMs, as interfaces with a corpus of text, by construction do not preserve either semantics or method. Ungodly amounts of money are spent every week on trying to make them reason, with ever-diminishing returns at a plateau that is unimpressive on average and fails in horrifying ways.
Now, the way they do "think" - via linguistic plausibility, not logic or careful-with-the-world analogy - is a way students can and do learn from them. And that's the problem! Not that students aren't learning how to think and do knowledge work with LLMs, but that they are: as LLMs do. This makes them, perhaps, able to succeed in many high-prestige jobs; right now, thinking like an LLM is perhaps helpful for entering the highest levels of the US government. But that does not make them good thinkers in the ways and traditions we're trying to pass on and improve.
I do believe AIs in their wider, proper sense are extremely useful for education. Terence Tao just publicly released a companion to parts of his Analysis I textbook with partial proofs in the Lean computational proof assistant system. The proofs have gaps for students to complete; those completions have to make the proof assistant effectively complete the proof, helping students improve their understanding of the mathematics _and_ training them in a style of mathematical practice that's likely to become more prevalent in the future. That's a potential AI-led educational revolution right there.
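To make the format concrete, here is a minimal sketch of what such a gapped exercise can look like in Lean 4 - a toy example of my own, not taken from Tao's companion:

```lean
-- A gapped exercise: the statement is supplied, and the proof is left as a
-- hole (`sorry`) that the student replaces so the file compiles without warnings.
theorem add_comm_example (a b : Nat) : a + b = b + a := by
  sorry
-- One possible completion the student might discover:
--   exact Nat.add_comm a b
```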
I acknowledge there's nothing preventing students from using ChatGPT to help generate those proofs, and I don't claim to have anything close to a feasible response to the equivalent problem in the current situation. Certainly the certification system is hardly sustainable in these conditions.
But I do disagree with the assumption that, because LLMs exist and are so available and so widely used, we need to adjust our methods in ways that specifically engage with them. The existence, popularity, and cognitive offloading of Fox News did force, or should have forced, people to be aware of it and of its impact, but/and not to include it in their regular media schedule.
Aren't we fighting the memorization argument that has been going on since antiquity? Writing - OMG, now no one needs to memorize poems, stories, etc., they can just look them up!
AI can produce poor answers, just like poor textbooks. Therefore, the key is to emphasize rigor, which means as much critical thinking as possible to assess the AI output. Check references. Do the references say what the AI states? That is what you need to instill in your students when you assign essays, etc. By all means, impress upon the student your learning, but their work should be to check some of your work for argumentation in class. E.g., you have formulas that you use to justify productivity and estimated GDP per capita. Rather than have the students accept these, have them search for the background of these equations, and perhaps use spreadsheets to test them against other formulae to see where differences lie, and to discuss whether the differences matter for the concept at hand.
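A hypothetical instance of that exercise, sketched in Python rather than a spreadsheet (the data and country names are made up):

```python
# Compute two related measures from the same made-up data and see where they
# diverge: GDP per capita divides output by everyone, while labor productivity
# divides it by hours actually worked, so the two can tell different stories.
data = {
    #            GDP ($bn)  population (m)  hours worked (bn)
    "Country A": (2_000,     60,             55),
    "Country B": (2_000,     60,             45),   # same GDP, fewer hours worked
}

for name, (gdp, pop, hours) in data.items():
    per_capita = gdp * 1e9 / (pop * 1e6)   # dollars per person
    per_hour = gdp * 1e9 / (hours * 1e9)   # dollars per hour worked
    print(f"{name}: GDP per capita ${per_capita:,.0f}; output per hour ${per_hour:,.2f}")
```

Identical GDP per capita, noticeably different output per hour: exactly the kind of gap students can be asked to notice and explain.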
There used to be a Harvard parody sweatshirt with the motto "veritas" replaced with "verisimilitude". The idea was that hanging around at Harvard taught one to speak and write authoritatively so that whatever one said or wrote had the ring of truth whether or not it was actually true. Now, verisimilitude is available for the masses for a modest monthly fee. Reading even the most abject nonsense and padding - so much padding - produced by an LLM, if one does not look closely and skips the padding, gives one the impression that one is reading an actual argument.
The sheer verbosity and glibness of LLM produced text suggests one possible solution which is to bring back the old Twitter, the one with a 140 character limit. Give students a tweet budget and see if they can make concise arguments. If nothing else, it would be easier to grade such an epistolary essay since it would be brief. Not everyone is going to ace it like Caldwell with his famous "monotremes oviparous, ovum meroblastic", but brevity is the soul of wit. In an age of cheap boilerplate, there is something to be said for concision.
P.S. There was an interesting example of abstraction leaking on this blog some years ago when our host was discussing the way electron shells are filled. Such filling seems to follow regular rules, inner shells first, but as the shells add up, the abstraction leaks. Electron shells are a useful abstraction, but they aren't the same as the solutions to Schroedinger's equations.
Except we cannot solve Schrödinger's equation, can we?
> There used to be a Harvard parody sweatshirt with the motto "veritas" replaced with "verisimilitude". The idea was that hanging around at Harvard taught one to speak and write authoritatively so that whatever one said or wrote had the ring of truth whether or not it was actually true. Now, verisimilitude is available for the masses for a modest monthly fee. Reading even the most abject nonsense and padding - so much padding - produced by an LLM, if one does not look closely and skips the padding, gives one the impression that one is reading an actual argument.
> The sheer verbosity and glibness of LLM produced text suggests one possible solution which is to bring back the old Twitter, the one with a 140 character limit. Give students a tweet budget and see if they can make concise arguments. If nothing else, it would be easier to grade such an epistolary essay since it would be brief. Not everyone is going to ace it like Caldwell with his famous "monotremes oviparous, ovum meroblastic", but brevity is the soul of wit. In an age of cheap boilerplate, there is something to be said for concision.
> P.S. There was an interesting example of abstraction leaking on this blog some years ago when our host was discussing the way electron shells are filled. Such filling seems to follow regular rules, inner shells first, but as the shells add up, the abstraction leaks. Electron shells are a useful abstraction, but they aren't the same as the solutions to Schroedinger's equations.
There are a few closed-form solutions to Schroedinger's equation. There's one for an electron in free space, one for an electron in a box, and one for an electron in a spherically symmetric electric field. That last is a good approximation for electrons around an atomic nucleus. It gets the basic hydrogen spectrum right, though it ignores things like the Lamb shift.
In general, only numerical solutions are available, though those can be useful. Right now, they can be used for analyzing crystal structures and for simulating simpler chemical reactions. There are approximations that are kind-of / sort-of useful for larger organic molecules, but good luck if you add water.
It's like the Navier-Stokes equation. There aren't any non-trivial closed-form solutions, but it is possible to get useful numerical solutions. Like the engineer in the old Zeno's Paradox joke, sometimes one can get close enough for practical purposes.
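To give a sense of what "useful numerical solutions" means in the simplest possible case, here is a minimal sketch (my own illustration, not the commenter's): a finite-difference computation of the lowest particle-in-a-box energy levels, checked against the closed-form E_n = n^2 π^2 ħ^2 / (2 m L^2).

```python
# Finite-difference solution of the 1-D particle-in-a-box Schroedinger equation,
# compared with the known closed-form energies.
# Units: hbar = m = 1, box length L = 1; psi = 0 at both walls.
import numpy as np

N = 500                      # interior grid points
dx = 1.0 / (N + 1)           # grid spacing for a box of unit length

# H = -(1/2) d^2/dx^2, discretized as a tridiagonal matrix on the interior points.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

numeric = np.linalg.eigvalsh(H)[:3]                          # lowest three levels
exact = np.array([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])  # n^2 pi^2 / 2

for n, (e_num, e_ex) in enumerate(zip(numeric, exact), start=1):
    print(f"n={n}: numerical {e_num:.4f}   exact {e_ex:.4f}")
```

With a few hundred grid points the lowest levels already agree with the closed form to about four significant figures; the hard part, as the comment says, is everything beyond toy problems like this one.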
A good start. Thank goodness I'm too old to be teaching again, because AI's challenge to educators now at all levels is ginormous! To me, it all boils down to this: AI is for the humanities what the calculator was for math. Get used to it, get over it, or whatever. Then move on, as you are attempting to do. Ask students: What are the right questions? What are the best ways to use AI to help answer them? Where did AI go astray (it makes mistakes)? What hasn't AI told us, and what do you still need to do to answer those original questions? (I'm making this up as I go...)
Yes: We should be, in every class, (a) presenting a question, (b) modeling how to discover the answer to that question, & then (c) modeling how to persuade people that the answer you have found to that question is in fact the correct one:
> Bob Litan: 'A good start. Thank goodness I'm too old to be teaching again, because AI's challenge to educators now at all levels is ginormous! To me, it all boils down to this: AI is for the humanities what the calculator was for math. Get used to it, get over it, or whatever. Then move on, as you are attempting to do. Ask students: What are the right questions? What are the best ways to use AI to help answer them? Where did AI go astray (it makes mistakes)? What hasn't AI told us, and what do you still need to do to answer those original questions? (I'm making this up as I go...)'
I do still mourn, sometimes, that the time I spent in my childhood learning the times table up to 30 x 30 and the time spent learning the rule of 72 now look wasted. But the time I spent learning street-fighting math I still find very valuable...
I very much resonate with your questions about what you want students to remember after 5 years, and what you want them to know and know how to do at the end of the class.
And your three questions about identifying a question, discovering an answer, and persuading others of the answer look to me like a promising approach.
My broader concerns about the impact of AIs on learning are rooted in how I view the role of memorization and practice. And the difference between short-term and long-term memory.
Thinking is about making connections between things: how A is like B, and how A is unlike B.
Let A be a new thing you're encountering: something you're reading about, an action or phenomenon you're observing in real life.
Our short-term memory holds 5 to 9 things - let's say 7, for convenience.
Our long-term memory doesn't seem to have a limit. Or rather, because of the efficiencies achieved by "chunking" and grouping things up, our long-term memory is sufficiently expandable that the functional constraint is not how much information you can meaningfully hold, but rather, how much time we have available to do the work needed to move things into long-term memory.
But not only is short-term memory minuscule compared to long-term; it also uses the same mental resources as we use to make connections among things - that is, to think.
So you can jam 7 small pieces of info into your short-term memory, and then have no cognitive capacity left to think about them.
Or you can put 4 small pieces of info into your short-term memory and have _some_ cognitive capacity left over.
Or use 2 pieces of info, but now the possible thoughts are extremely limited, because of having only A and B to compare.
When you bring the long-term memory into the game, it's fundamentally different.
Instead of moving among small pieces of info a, b, and c, you have access to large information sets A, B, C, ... Z, AA, AB, AC, ... AZ, BA, BB, BC, ..., CA, CB, CC, ...
When you're playing that game, thoughts are available to you that are impossible for a person somehow trying to use only short-term memory.
The person described above will have _more_ thoughts than a person whose long-term memory is A, B, C, ... Z, BA, BB, BC, ..., BZ, CA, CB, CC, ...
And this second person will have _different_ thoughts than a person whose long-term memory is A, B, C, ... Z, AA, AB, AC, ... AZ, CA, CB, CC, ...
I suppose it's possible to have lots of stuff in your long-term memory and be bad at thinking.
But I don't see how it's possible to have little in your long-term memory and be good at thinking.
It follows that a key part of learning to think is simply building up your long-term memory.
And now back to AI.
When LLMs interact with our conventional ways of teaching and of evaluating learning, it's often easy for the machines to replace the work that students (or anyone) need to do to move things into long-term memory. And so we undermine the development of the ability to think broadly.
The calculator example comes up often in these discussions, and I have a speculation about that. The claim when they were introduced was that they would remove the burden of doing low-level stuff like arithmetic, freeing up our mental energies for the real math of algebra or higher-level mathematical thinking.
On one hand, it's true that arithmetic takes time, and that if my mind is occupied figuring out 45 x 1,325,409, or working out an approximation of 78^(0.5), it can't be doing other, higher-order things, like solving the algebra of an economic model.
On the other, I wonder whether the _ability_ to solve those arithmetic problems is a substrate that makes higher-order mathematical thinking easier.
Confronted with a*(b + c), many students have a hard time not turning it into ab + c. And the idea that a^b * a^c = a^(b+c) is like witchcraft or gobbledegook, rather than something that becomes intuitive once demonstrated.
And I wonder about childhood practice doing sums in the head, memorizing times tables up to 12x12, working larger multiplication or division problems by hand on paper: does that activity build up a feel for quantitative relationships that then helps one feel the rightness of a*(b + c) = ab + ac?
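A concrete instance of the kind of demonstration meant here is just counting factors:

$$a^{3} \cdot a^{2} = (a \cdot a \cdot a)\,(a \cdot a) = a^{5} = a^{3+2},$$

and the distributive law yields to the same treatment with a quick numerical check: 4 x (10 + 2) is 40 + 8 = 48, not 40 + 2.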
Calculators are still great, because doing arithmetic _does_ take mental effort that then is not available for doing other, higher-order things.
But if we use calculators in ways that prevent learning _how_ to do arithmetic, does that impair the _ability_ to do higher order things that we now have the mental space for?
And are LLMs poised to do the same thing?
If your long-term memory is already well stocked, and your short-term memory is well practiced at reaching into the long-term memory to find things that relate to items a and b that are currently being held in short-term memory, then an LLM might be a tool that allows you to extend your reach.
But if you encounter LLMs early in the educational process, do they take the place of developing your own long-term memory and facility at reaching into that memory?
Do LLMs make it harder to learn how to think?
I think your 3-question scaffolding is a great approach, regardless of AI. Then ask the questions of an LLM and audit its answers for inaccuracies, bs, and glittering generalities. Verify the sources; those are sometimes made up. Maybe compare answers across LLMs. Ask why LLMs hallucinate and bs. Perhaps for the same reasons humans do (it's easier and often good enough). Finally, students answer the questions as best they can, differentiating themselves from the LLM answers.
Very helpful post and comments as I reflected on my practice of training AI to grade and respond like me by feeding it dozens of pages of my comments on past student work. With each assignment, I feed in my past outputs and let the AI crunch the first round of comments. These come back verbose and obviously lacking human reasoning, so I edit and expand them, adding corrective and deepening comments for the students in light of my reading of each of their responses. Numerically, I've trained the AI to an interrater reliability of 0.98, but I have to nudge it constantly with added training to maintain that, feeding back what I did with its output. I wish I could report improvement, but I find only decay over time. Still, the summarizing and hinting from the anthology intelligence saves me time and improves my accuracy. And like any professor worth their salt, I hope I'm still pretty good at spotting the students who are using AI less productively for real learning than I am.
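Assuming the 0.98 "interrater reliability" here is something like a Pearson correlation between the AI's first-pass scores and the instructor's final scores, a minimal sketch of that check (with hypothetical numbers) looks like this:

```python
# Hypothetical check of AI-vs-instructor grading agreement, treating
# "interrater reliability" as a Pearson correlation. All scores are made up.
from scipy.stats import pearsonr

ai_scores = [88, 92, 75, 81, 95, 67, 73, 90]      # AI first-pass grades
final_scores = [90, 91, 74, 83, 95, 70, 72, 89]   # instructor-adjusted grades

r, _p_value = pearsonr(ai_scores, final_scores)
print(f"interrater reliability (Pearson r): {r:.2f}")
```

Recomputing this after each assignment is one way to watch for the decay over time that the commenter describes.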
"entia non sunt multiplicanda praeter necessitatem"
Entia rationis as the necessary gap bridging tools from ontology through metaphysics by which humans acquire scientia is one framework I've used to contemplate such
Aquinas is fun to read as he grapples with the trinitarian homoousios in this context
no easy way out of the brain in a jar thing yet that brain in a jar thing is a most useful way to avoid the Korzybski trap
"Abstraction layers"? Isn't this just warmed-over Herb Simon?
Of course it is! But warmed-over Herb Simon is very tasty!
> Ziggy: "Abstraction layers"? Isn't this just warmed-over Herb Simon?