- Scott H Young - https://www.scotthyoung.com/blog -

I Wrote Ultralearning. This is What I’d Change Because of AI

My book Ultralearning [1] was published in 2019. It documents the process of intensive self-education behind some of my self-guided projects: learning languages, computer science, art and more.

The book went on to become a surprise bestseller, with over 200,000 copies sold and dozens of translated editions. To this day, the bulk of new reader emails I get are from people who discovered me through Ultralearning.

A question I get asked a lot is how the book would change if it were published today. In 2019, the conversation about AI was still a whisper. Now, it’s deafening.

Today, I’d like to walk through Ultralearning and look at what’s changed, what hasn’t, and what I think the future holds for learning and education.

What Hasn’t Changed

The basic message of Ultralearning, I believe, still holds up pretty well:

Technology is widening the gulf between the haves and have-nots of human capital. Learning in school is insufficient. To achieve, we need to continually add to our skills and knowledge, and doing so efficiently is imperative given our information-saturated environment.

AI has only accelerated those trends.

While some early reports [2] suggested AI might be an equalizer, helping mediocre programmers and writers produce at a higher level, I think those early takes now seem naive. If anything, the fruitful branches of the skill tree for becoming a professional programmer have only gotten higher—with tasks that were previously for junior devs now wholly within the grasp of automated agents.

Some prognosticators suggest that the culmination of this process will be the devaluing of all human skills. Why bother learning anything at all if AI will soon do it better than you?



I’m skeptical of this as a final outcome. I tend to think there will continue to be humans doing human jobs far into the future, if only because certain kinds of work are inherently humanistic. But the medium-term outcome seems to clearly back the urgent need for humans to learn deeper and more robust skills to compete.

AI has not fundamentally changed the effort involved in learning. Ultralearning was written from a particular vantage point: a person eager to learn and willing to do the hard work required. These people have always been a minority, and AI cannot change the intrinsic effort required.

So, as a proportion of the population, I don’t expect an explosion in impressive autodidacts any more than we saw with the arrival of the Internet. The world’s knowledge is already at our fingertips, but most people will still prefer to watch funny videos instead. AI certainly isn’t changing that.

But, at a tactical level, AI has created new possibilities (and pitfalls) that didn’t exist when I wrote Ultralearning. So let’s look at some of those, following the nine principles of the book.

Principle #1: Meta-learning

This is probably the chapter most in need of a rewrite. Self-education has always stumbled on the bootstrapping problem of knowledge: how do you organize an effective learning project when you lack the knowledge needed to organize it?

My solution in the book was to encourage people to do research: figure out how a skill works, talk to experts and map out what you need to learn before you start.

AI has dramatically reduced the cost of doing this kind of research, and not only for academic subjects. Even obscure practical skills can now be broken down into discrete subtopics, practice activities, lists of facts, concepts and more.



My go-to approach to tackling a new topic these days is to fire up ChatGPT and have it run a Deep Research report on the topic, seeded with some of my major questions. The resulting document isn’t usually on par with genuine experts, but it lets me quickly home in on the directions I need to take to fill in my research.

Similarly, if you’re learning a less academic skill set, using AI can surface the current best practices and give you the basic building blocks for a learning project.

I very rarely stay totally within AI responses for meta-learning. It’s always good to get to the ground truth of some genuine expert or teacher’s curriculum. Finding those teachers and experts and the organizing paradigms that lead to them is much easier now with AI.

Principle #2: Focus

AI hasn’t changed this principle. Learning anything requires time. Even when you do projects efficiently, they’re still an enormous amount of work. If you can’t put the time in, you can’t get the results.

Learning also requires attention. If you can’t devote large chunks of undistracted time to a project, you’ll fail to build deep skills and understanding.



The attentional ecosystem has only gotten worse since Ultralearning was published. When I was doing projects in my early twenties, the major distractions were Reddit threads and the occasional Facebook post. Now, an endless treadmill of short-form video content on our phones means we can play the attentional slot machine all day without pause.


Currently, I see AI-generated content as less appealing than human-generated content, so I don’t see it making the problem of addictive social media much worse. Perhaps in a few years AI-generated feeds will be more enticing than human-created content, and I’ll need to revise this point.

Principle #3: Directness

Practice the skill you want to get good at. Do the real thing and avoid substitutes.

AI probably makes this harder. Because AI is so compelling, there’s a temptation to do AI-mediated practice rather than engaging in the hard, scary, and sometimes uncomfortable real-world practice that directness demands.

Take language learning, for instance. In Ultralearning, I was highly skeptical of the gamified drills offered by apps like Duolingo. To me, they simply omit so much of the actual skill of conversing in another language that you could play these games for years and still feel uncomfortable ordering food at a restaurant.

Since then, I’ve heard people claim that they’re using AI to learn languages, writing—and even social skills (!!).

Of course, one could easily imagine someone who is having real conversations, publishing essays and attending social events simply using AI to shore up some weak points. But, more often, I worry that people are using the verisimilitude that AI creates to try to avoid doing the real thing entirely.

Principle #4: Drill

The counterpart to directness is drill: breaking down a complex skill into smaller parts, focusing on those smaller parts either in isolation or with greater focus to make selective improvement. These drills can include conjugation exercises for Spanish, practicing layups for basketball, making value studies for painting, and more.

Here AI presents a whole range of new opportunities through AI-generated practice problems, flashcards, worksheets or feedback.

For instance, one of the major difficulties in my language learning projects had been how much weight to put on vocabulary study through flashcards. On the one hand, an efficient spaced-repetition system, backed by some careful mnemonics, can make it much faster to acquire a few thousand words of basic vocabulary. On the other hand, flashcards can lead to brittle knowledge that is difficult to generalize to real conversations.

A major cause of my ambivalence with flashcards is that the paradigm assumes each word is an atomic fact. But what we are actually learning [3] when we learn new words is not merely a definition or translation. Instead, we’re also learning contextual associations for how that word typically appears in spoken or written language. It’s how we know the difference between the words small and petite, or big and grand. These associations have to be learned implicitly, and can’t simply be memorized as part of the definition.

Now, with AI, we can generate flashcards that always place the to-be-learned word in a novel sentence, giving us the needed repetition alongside the variation required for learning contextual cues. This, to me, is a major upgrade over the flashcard paradigm.
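As a sketch of that upgrade, imagine each card carrying a pool of AI-generated example sentences and presenting a fresh one, as a cloze deletion, on every repetition. Everything here, from the class name to the French sentences, is illustrative rather than a description of any particular tool:

```python
from dataclasses import dataclass


@dataclass
class ContextCard:
    """A flashcard that shows its target word in a fresh sentence each review."""
    word: str
    translation: str
    sentences: list  # example sentences containing `word`, e.g. AI-generated
    seen: int = 0

    def next_prompt(self):
        # Cycle through the sentence pool so each repetition varies the
        # context, blanking out the target word as a cloze deletion.
        sentence = self.sentences[self.seen % len(self.sentences)]
        self.seen += 1
        return sentence.replace(self.word, "____")


card = ContextCard(
    word="petite",
    translation="small (often of something delicate)",
    sentences=[
        "Elle habite dans une petite maison près de la gare.",
        "Il a commandé une petite tasse de café.",
        "La petite fille joue dans le jardin.",
    ],
)

for _ in range(3):
    print(card.next_prompt())  # a different cloze sentence each time
```

The point of the design is that the word-to-translation link stays constant while the surrounding context never repeats back-to-back, which is exactly the variation that contextual learning needs.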

Conjugations are another area that is difficult to learn without premade practice questions. The issue is that what needs to be learned isn’t a fixed association (e.g., agua -> water) or a verbalized rule (e.g., “change -ar to -o for first-person present tense”) but rather a procedural mapping [4] that needs to take a variable input and give a variable output.

To learn procedures like this effectively, we need practice materials that vary the input/output relationship to cover all permutations of the pattern. The problem is that this was hard to produce before AI. Now, of course, we can generate endless variations of the same basic practice problems, closing the materials gap that exists for a lot of skills.
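For fully rule-governed patterns like regular conjugations, you can even enumerate the permutations in a few lines of code, reserving AI for richer, sentence-level variation. A minimal sketch for regular Spanish -ar verbs; the function names and drill format are mine, not anything from the book:

```python
import itertools
import random

# Present-tense endings for regular Spanish -ar verbs: a fixed rule-based
# mapping. The drill varies the input so the rule is practiced as a
# procedure, not memorized as isolated facts.
AR_ENDINGS = {
    "yo": "o", "tú": "as", "él/ella": "a",
    "nosotros": "amos", "vosotros": "áis", "ellos/ellas": "an",
}


def conjugate_ar(verb, person):
    """Strip the -ar infinitive ending and append the person's ending."""
    assert verb.endswith("ar"), "regular -ar verbs only"
    return verb[:-2] + AR_ENDINGS[person]


def drill(verbs, seed=0):
    """Yield every verb-by-person permutation in shuffled order."""
    rng = random.Random(seed)
    pairs = list(itertools.product(verbs, AR_ENDINGS))
    rng.shuffle(pairs)
    for verb, person in pairs:
        yield f"{person} + {verb} -> ?", conjugate_ar(verb, person)


for prompt, answer in drill(["hablar", "cantar"]):
    print(prompt, "|", answer)
```

Shuffling every verb-person pair is the key design choice: it forces you to apply the mapping to an unpredictable input each time, rather than rehearsing one fixed association.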

Principle #5: Retrieval

Memory is strengthened more by recall than by review. If you want to learn something by heart, you need to practice remembering it, not just looking at it.

I’ve seen a lot of claims that AI can be helpful with this aspect of learning. For instance, AI tools can generate quizzes based on the books you’re reading, allowing you to deepen your knowledge of the content.

I tend to be a bit skeptical about the utility here. Not because quizzes or practice questions are bad (they certainly aren’t), but because a lot of the value in retrieval comes from selecting what knowledge you ought to retrieve.

For instance, a naive way to do retrieval practice is simply to quiz yourself on every factual claim made in a text or book. But rarely is the main goal of learning a complete verbatim memory of every factual claim in a book. Instead, we typically want to be able to restate the main ideas and understand the key points and concepts.

Sometimes we may have more idiosyncratic goals, like remembering the authors of key studies for future research or knowing the dates to put historical events inside a chronological context. But memorizing every single fact in a text is almost never a good use of limited studying time.

This is not an idle concern. The world of knowledge is infinite. The effort needed to memorize every fact from one text is effort that cannot be spent on other texts. I’d much rather remember the gist of ten books—their big, important ideas—than know every bit of trivia contained in just one of them.

Practice problems and quizzes designed by a teacher avoid this problem because the teacher has clear educational goals in mind. When they ask a question on a test, it is because they think that fact or idea is important to know. But if we give an AI a random text without this pedagogical context, the chance that it will home in on what is important is much lower, not because the AI is insufficiently capable, but because it doesn’t have a useful goal. If you asked a human to generate a quiz from a random text absent any pedagogical goals, they’d also make a bad quiz.

Retrieval, of course, doesn’t need quizzes to work. Free recall, the paradigm where you simply try to remember as much as you can from a source, works remarkably well and definitely doesn’t require AI. So does writing essays about topics you’re learning, which may soon become a lost art. These are low-tech tools that work amazingly well for retrieving knowledge.

Principle #6: Feedback

Feedback is essential for learning. But we often get sparse or incomplete feedback in our learning efforts, which slows down progress.

In symbolic domains, where the skill is primarily mediated through tokens and text, I think currently-existing AI can do a ton to enhance feedback. If I’m trying to improve as a writer, I can get AI to critique my use of research, word choice and storytelling. If I’m trying to improve as a programmer, I can be shown more efficient design patterns or algorithms for solving the same task.


A while back, I recorded some promotional videos in Mandarin for a translation of my book. I wrote the script myself, but then I asked AI to offer suggestions, and it fixed some places where I wasn’t speaking very idiomatically. Before AI, I would have had to pay someone for that advice.

In non-symbolic domains, where AI still underperforms human beings, the value of AI feedback is a lot more limited. I can’t easily use AI to give me feedback on art, skiing or interviewing ability at the moment, so human feedback remains essential.

AI also can’t replace the need for direct feedback from the environment. Entrepreneurs need data about product-market fit. Comedians need to know whether their jokes are funny. Writers like me need to know what their audience already thinks and believes. That kind of feedback is essential to the skill, and AI can’t offer a substitute.

The more dangerous cases are areas where AI could give good feedback, in theory, but it’s been trained not to because people often don’t like getting true feedback. Sycophancy is rampant. For a lot of us, hearing nice things about our ideas and skills is more desirable than hearing the truth.

Principle #7: Retention

I’ve always had mixed feelings about mnemonics. They can be incredibly powerful. The right chaining of visual associations or spatial memories can make indelible links between hard-to-associate facts. But they also take a while to learn and can be time-consuming to apply.

AI has the potential to make mnemonics more valuable. My friend and language-learning inspiration, Benny Lewis [5], for instance, told me that he’s been using AI these days to help him generate “sounds like” associations for the keyword mnemonic.



For those unfamiliar with the method, the basic idea is to take a foreign language word and create a phonetic clue by mapping it to a similar sounding word or phrase in English (or another language you know well) and then visually mapping that to a highly memorable picture.

For instance, if you’re trying to remember the French word chavirer -> to capsize, you can make a phonetic clue of “shave an ear,” then you have a mental picture of an oversized ear sitting in a canoe, shaving its beard while the canoe flips over. Visualize that mentally once or twice and the association sticks, whereas it may take dozens of repetitions for the direct association to take root.

The keyword method works, but it hasn’t always performed well [6] in lab experiments. The reason is that it often takes too much time and training to get right. Modern LLMs are well-suited to the kind of wordplay tasks required to generate these sorts of images.
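As a toy illustration of the wordplay involved, even crude spelling similarity can surface candidate keywords from a word list. An LLM handles true sound-alike phrases far more flexibly; the word list and function name here are invented for the example:

```python
import difflib

# A tiny illustrative English word list. A real tool would use a large
# lexicon, or simply ask an LLM for "sounds like" phrases directly.
WORDS = ["shaver", "chauffeur", "cavalier", "shiver", "quiver", "revere"]


def keyword_candidates(foreign_word, n=3):
    """Return English words loosely resembling the foreign word's spelling,
    as raw material for building a keyword-mnemonic image."""
    return difflib.get_close_matches(foreign_word.lower(), WORDS, n=n, cutoff=0.4)


print(keyword_candidates("chavirer"))
```

The candidates are only raw material; the memorable part is still the bizarre image you build from them, which is exactly the step LLM wordplay makes cheap.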

Spacing is another area where I expect AI to be of some help, particularly the newer agentic AI paradigm. A major hiccup in applying spacing to learning is that it is a logistical nightmare to keep track of all the things you’ve learned and ensure some measure of regular re-exposure. Spaced-repetition software does this for flashcards, but, as already discussed, those have fairly narrow applications.

However, I can easily imagine a future where an AI agent helps you manage your workload by resurfacing questions and ideas from material you’ve recently studied. With some guidance, you may even solve some of the retrieval problems mentioned earlier by getting it to quiz you on the major ideas.
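The bookkeeping such an agent would need is essentially a spacing schedule over everything you have studied. A minimal Leitner-style sketch; the interval values and class name are illustrative assumptions, not a system the book prescribes:

```python
import datetime

# Leitner-style review intervals in days: each successful recall promotes an
# item to the next (longer) interval; a failure resets it to the first.
INTERVALS = [1, 3, 7, 14, 30]


class ReviewScheduler:
    def __init__(self):
        self.items = {}  # item -> (interval index, date it next comes due)

    def add(self, item, today):
        self.items[item] = (0, today + datetime.timedelta(days=INTERVALS[0]))

    def due(self, today):
        """Everything that should be resurfaced on or before `today`."""
        return [i for i, (_, when) in self.items.items() if when <= today]

    def review(self, item, correct, today):
        # Promote on success (capped at the longest interval), reset on failure.
        box, _ = self.items[item]
        box = min(box + 1, len(INTERVALS) - 1) if correct else 0
        self.items[item] = (box, today + datetime.timedelta(days=INTERVALS[box]))


sched = ReviewScheduler()
start = datetime.date(2025, 1, 6)
sched.add("spacing effect", start)
sched.add("retrieval practice", start)
print(sched.due(start + datetime.timedelta(days=1)))
```

An agent layered on top of this would only need to phrase the resurfaced item as a question and grade your answer; the scheduling logic itself is decades old.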

Principle #8: Intuition

Understanding is central to learning. But the process of gaining understanding is still somewhat mysterious and poorly understood.

While I’m generally in favor of a knowledge-in-pieces model [7] of conceptual learning, where understandings are built bit by bit through many exposures, it’s also clear that a well-chosen analogy, metaphor or explanation can suddenly make the entire idea “click.”

In Ultralearning, I shared the Feynman Technique [8], my somewhat-apocryphal method of self-explanation that I made heavy use of during the MIT Challenge [9]. The basic method is simple:

  1. Write down the concept or idea you want to explain.
  2. Write out an explanation as if you were teaching it to someone else.
  3. Whenever you get stuck, go back to your study material and notes and re-read until you understand.

The technique works, but it often runs aground at step #3. If you don’t understand, even after reading the notes more deeply, you may waste a lot of time trying to find a better explanation.



Similarly, the method can backfire when conceptual confusion is glossed over rather than dug into—you may maneuver around your own ignorance rather than confronting it. This is why the method benefits from specificity: if you’re having difficulty solving a problem, make the topic of your teaching that exact problem, not the concept it tests in general terms.

AI has massive power to resolve both of these problems. For starters, while I find AI explanations are still somewhat inferior to good teachers, the gap is closing, and well-posed questions can generally get accurate answers. Using AI as a Socratic tutor is one of the ways it can help build understanding.

Second, AIs can ask pointed follow-up questions that reveal gaps in knowledge you don’t even know you have. I now frequently upload portions of essays in which I explain some bit of science or history and ask the AI what I’m getting wrong. Often it nitpicks, but there are definitely occasions where it catches a basic misconception.

The pitfall, of course, is that an on-demand system that can explain anything can also make it easy to skip steps #1 and #2 of the Feynman Technique. It’s very easy to ask AI to generate the explanation, skim through it and convince yourself you could have generated it on your own.



The risk of using AI to learn is that not learning at all is always the lowest effort strategy, and most models are designed to allow you to do exactly that. Without guardrails, the default is to skip over the mental work needed to build intuition, even if the technology can, in theory, assist in constructing a deeper understanding.

Principle #9: Experimentation

Experimentation, the process of trying out different things and figuring out what works, both within the skill you’re trying to master and in the process of learning itself, is a recurring theme in Ultralearning.

The new AI tools accelerate these possibilities. Many new methods for learning now exist, such as on-demand Socratic tutoring, procedurally generated practice problems, knowledge management and mnemonic generation. But many of the seemingly useful applications are really pitfalls in disguise, and only experimentation will tell you which is which.

If I had to go back and redo any of the challenges I wrote about in Ultralearning, the possibilities for learning them would have changed dramatically. The MIT Challenge [9] could have used AI to fill in material gaps, given me extra practice problems and gotten me unstuck when my self-explanations only led to confusion. The Year Without English [10] could have had auto-generated flashcards, grammar explanations and corrective feedback on conversation recordings. I could have vibecoded software that could automatically give me detailed corrective feedback on the accuracy of my portrait drawings [11].

What wouldn’t have changed is the mental effort involved in learning skills, nor the joy and struggle in actually learning them. Despite the momentous technological changes we’re experiencing, I am still convinced that both the value and strain in learning new things will be an enduring constant.