I’ve been thinking a lot about how AI is going to affect the future of learning. If you’re just arriving, I’ve previously written about which sorts of skills will matter in an AI future and vibe coding as a possible model of what’s to come for skilled work.
One of my tentative conclusions is that a lot of future work is going to look like AI collaboration. People will work with AI to do some of their job tasks, but until we have robust artificial general intelligence (AGI), there will be plenty of human beings steering AI and filling in the tasks that AI doesn’t yet perform well.
This suggests a follow-up question: how do we learn good AI collaboration skills, and how does that differ from learning in the previous era when the entire job was performed by people?
To skip to the punchline, I believe that the shift to AI collaboration doesn’t change as much about the path to mastery as you might guess. Getting good at AI collaboration is largely the same as getting good at doing the work yourself.
However, despite the similarity of the learning path, the skills used in practice with and without AI differ enough that overusing AI when learning a new skill presents a serious problem. I do believe AI can be a boon for learning, but there is also enormous capacity for misuse. I expect that the average user will end up learning much less, rather than more, as a consequence.
Consider the Calculator
While AI is new, automation is not. For centuries, we’ve been using machines to replace human labor, and for decades we’ve been using computers to replace human cognition.
Therefore, while it’s hard to predict the specific trajectory of large language models and machine learning progress, we already have examples of technology taking over previously human cognitive skills, and of what followed when it did.
To start, consider the calculator.
Before cheap calculators, every adult was expected to be able to mentally calculate a tip, confirm the change from a cash purchase or compare prices at the grocery store.
While some interesting anthropological work suggests that people often rely on informal heuristics rather than textbook algorithms to do these calculations, nobody doubted that the average person needed basic math skills to get by in society.
For decades, however, we’ve been able to calculate things quickly and easily without needing to do the math. I’ve very rarely used long division since learning it in grade school, and I usually just use a calculator if I need to solve a multi-digit division problem. Today, cashiers are told exactly what change to provide, and the tip is calculated for you on your bill.
Given that, why bother learning to do the math with a pencil and paper? Why learn long division if, in practice, we rarely use it?
A version of this argument was quite popular in educational research in the 1980s. It argued that many academic math skills aren’t used in practice, so perhaps we should teach kids to use calculators rather than waste years of their lives mastering mental calculations they’ll never use.
The impression I got from the literature when researching Get Better at Anything is that this argument is pretty much dead today.1
Why, in a world of easily-accessible calculators, do most educational experts still insist on teaching students arithmetic?
The simplest reason is that mental math doesn’t just build up fluency in performing the algorithms; it becomes the raw material for building mental models and quantitative understanding. In other words, even if you don’t use the long division algorithm, your grade school practice on it helped you develop the idea of what division is and how it works, and that intuition is powerful—even if you mostly rely on a calculator.2
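To make that concrete, here is a minimal sketch, in Python, of what the pencil-and-paper procedure is actually doing (the function name and example numbers are mine, purely for illustration): at each step you bring down a digit, ask how many times the divisor fits, and carry the remainder forward.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) the way you'd work it out on paper."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("sketch assumes a positive divisor and non-negative dividend")
    quotient, remainder = 0, 0
    for digit in str(dividend):                  # work left to right, digit by digit
        remainder = remainder * 10 + int(digit)  # "bring down" the next digit
        fits = remainder // divisor              # how many times does the divisor fit?
        remainder -= fits * divisor              # carry what's left to the next step
        quotient = quotient * 10 + fits          # append the new quotient digit
    return quotient, remainder

print(long_division(7256, 13))  # (558, 2): 7256 = 13 * 558 + 2
```

Spelling out the steps makes the intuition explicit: division is repeated fitting-and-carrying, one place value at a time, which is exactly the mental model the grade-school drill builds.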
Information in an external source, like a book or a calculator, is not the same as information stored in your long-term memory.
Case in point: Wikipedia has made much of the world’s knowledge more broadly accessible than at any point in history. But people today aren’t far more knowledgeable than they were before its invention. Knowledge is dead if it’s not in your head.
I take the near-universal retention of basic arithmetic in elementary curricula, despite the widespread availability of calculators, as confirming a basic point: just because a machine can perform a skill better than a person doesn’t mean the skill isn’t worth learning. Very often, learning skills that are automated in practice is a necessary stepping stone to cultivating uniquely human skills.
Learning with AI: Wall or Ladder?
This brings us to AI: given that AI can already do a lot of basic cognitive work, and that AI collaboration is a likely model for many kinds of knowledge work in the future, how should you invest in your own skills?
The answer, I believe, is not too different from before: if you want to learn to code, you need to start by creating programs by hand. Relying on ChatGPT will deprive you of the ability to internalize mental models of how code works, think through programming problems and otherwise develop your “taste” for good code.
Similarly, AI translation will prevent you from learning to speak another language, AI summaries will block you from really understanding the arguments laid out in a book, and AI analyses of your personal problems will make it harder for you to think through solutions to your own problems.
Learning is the process of internalizing information and weaving it together into coherent mental models and interconnected webs of ideas. Learning requires an investment of effort, and AI will make us stupider if that effort is avoided.
At the same time, not all effort in learning is helpful. Much of learning involves cognition that does not directly contribute to understanding and knowledge. Think of learning like powering a motor—all learning requires a source of energy to make progress, but not all energy is transformed into forward motion. Depending on the vehicle, much of it might be wasted as heat and noise.
Therefore, while I think AI is probably going to result in an incredible “dumbing down” of our self-education in the average case, it is probably also going to enable more careful students and teachers to facilitate learning much better than before. Because while AI can simply solve a problem for you, it can also generate worked examples, practice problems and feedback, and guide a problem-solving dialogue.
Getting this balance right is hard, and I don’t think it’s simply a matter of laziness. Even intelligent students are often wrong about what effort actually matters when it comes to learning, and many teachers are no better. This has always been the case, but AI has raised the stakes as the capacity both to enhance learning and to bypass it entirely has expanded.
Thus my personal prediction is that in domains that modern AI already largely masters, such as languages, programming or chess, we’re going to see a divergence in human abilities. The average person will rely on the AI more, robbing themselves of the chance to learn the underlying skills. More sophisticated students will use AI to learn better, removing inefficiencies that were unavoidable in pre-AI learning environments.
Some evidence of this is already emerging: preliminary results from AI tutoring have been quite impressive, even garnering enthusiasm from experts who are otherwise skeptical about previous eras of ed tech and their numerous failures. At the same time, many educators are sounding the alarm about a crash in student aptitude, with UCSD needing to redesign a remedial math course after discovering that many students were not only failing at high-school math but also lacked skills taught in middle school and even elementary grades.
In some ways, the future I imagined in Ultralearning is accelerating, with technology amplifying the opportunity to learn—as well as the danger of learning nothing at all.
Footnotes
- Educational science doesn’t really progress in the same way other sciences do. It has an unfortunate tendency to revive “zombie” ideas that were killed in one era, often by calling them something different, and to reopen debates that were conclusively settled decades ago. The rather sad history of phonics instruction is a case in point, with each generation seemingly needing to relearn its superiority to whole-word methods.
- Another reason is that arithmetic is a prerequisite to algebra, calculus and other more advanced mathematics. However, I treat this as a secondary motivation, as the evidence that most adults can use algebra is pretty disappointing. Thus, if this were the only reason to teach arithmetic, it would show that much of our math education is a waste of time.