I’ve been thinking a lot about the question of how AI shapes which skills are worth learning in the future.
It used to be a pretty safe bet that learning to code was a good career move. Programming was in demand, and it was a career path that paid a good salary without requiring expensive credentials. Now, there’s a lot of panic that programmers may soon be out of a job, replaced by AI coding agents.
Anxiety about AI being used to automate work isn’t limited to programmers. Most professionals I know are at least a little concerned that advancing AI might put them out of a job.
I’m certainly concerned. Many of my peers are already reporting their traffic has collapsed, in part due to search engines using AI to summarize their content so readers no longer need to visit their websites.1
I’m not sure what the future holds. I’m not even sure of the right way to think about it. Views online range from one hyperbolic extreme to another, and even many experts have wildly diverging forecasts of what’s to come.
But ignoring these issues doesn’t seem like a good solution either. So I’d like to try to lay out some of my thoughts now, however imperfect, in the hopes that writing about it (and the feedback I get from sharing) will sharpen my thinking on this topic.
Factors to Consider
For now, I will only consider career skills. While AI probably impacts the hobbies you do for fun, I think the analysis there is different, and it isn’t the aspect of the problem people seem most concerned about.
As someone who learns languages mostly for fun, for instance, the rise in AI translation has been an incredible boon. But I can also understand why professional translators may not feel so rosy.
The question of what’s worth learning, given the rapid changes in AI, is hard to answer on its own. Instead, it probably makes sense to look at some of the components separately and try to merge them together. I see a few different sub-questions that all have a bearing on this issue:
- How do currently-existing AI tools change the demand for certain kinds of labor?
 - What do the possible trajectories for future AI development suggest for the demand for labor?
 - How does the path to economically viable skills change as a result of AI?
 
Let’s think through each of these.
1. How does currently-existing AI impact labor demand?
A good starting point for thinking about AI is simply to assess the situation as it exists today. What’s the economic impact of current AI technology? How has that impacted the demand for labor?
Here the data are somewhat comforting for those worried about mass job displacement. Research from Yale found that, in the roughly three years since ChatGPT’s explosive debut, the impact on employment has been basically invisible.
This doesn’t appear to be because companies are slow to adopt AI technology. Another study found that nearly a third of companies had begun to use AI to automate some worker tasks. Despite that, only 5% of those companies have actually had changes in employment as a result, over half of which were employment increases.
Analysts at Goldman Sachs estimate that, given current AI technology, 6-7% of current jobs may be displaced. However, they believe that this will be accompanied by the rise of new jobs.
Historical analogues also support a somewhat less-alarmist view about the effects of transformative technologies. While AI is new, machine-based automation is not. Technological change results in what the economist Joseph Schumpeter called creative destruction, where new opportunities are created as previous ones are destroyed.
But even if demand for labor as a whole is unlikely to drop given current AI capabilities, that doesn’t mean there are no causes for concern. The Goldman Sachs report also noted that many executives admitted AI was slowing their recruitment in tech and finance.
Another report from the Stanford Digital Economy Lab notes that early-career workers in AI-exposed fields (such as programming) have seen a relative decline in employment, even as employment among workers aged 30 years and older increased. This matches my intuition that AI coding agents can do a lot of junior developer tasks pretty well, but struggle to match the experience needed to tackle more serious work.
Overall, this paints a more optimistic picture than what you might expect, given that 60% of workers are afraid of having their jobs replaced by AI. At least at current capabilities, the impact of AI doesn’t seem to be all that different from normal technological changes—some benefit, and some lose out, but there’s no collapse in the demand for skilled work.
2. What about the future trajectory of AI?
Of course, it’s naïve to say that, simply because AI isn’t having a major impact on jobs today, it never will. Most of my concerns about AI aren’t about the currently-existing technology, but about where the technology is headed.
And, because investments in career skills need to pay out over years and decades to be worthwhile, we can’t simply dismiss questions of what kinds of capabilities will exist in ten years as being too speculative. While radiologists and programmers today may not be facing mass unemployment, if such a displacement is likely in ten years, it would still be important to factor in when investing in career skills.
In thinking about where AI is headed, I’ve benefited a lot from Tyler Cowen’s book, Average is Over, and the model of human-computer collaboration he describes. Using chess as an example, Cowen argues that the progress of technology goes through four phases:
- In the first phase, humans are simply better than computers. Early-era chess programs couldn’t beat even modestly adept human players, although they might fool novices.
 - In the next phase, the best humans are better than the best computers, but computers can trounce non-elite experts. With pre-Deep Blue chess computers, Kasparov could give the AI a run for its money, but a non-grandmaster wouldn’t stand a chance.
 - After this, computers are better than the best humans, but humans plus computers do better than computers alone. Cowen describes “centaur chess” where players have access to multiple AI recommendations. Humans alone may be inferior, but they can form a superior team when augmented with AIs.
 - Finally, when AI skills sufficiently exceed human capabilities, even collaboration isn’t fruitful. Human input makes decisions worse. We are likely already at this point with chess.
 
My sense of the current landscape is that most commercial applications of AI are in the first phase. Humans are better, but cost considerations may still shift decisions in favor of AI. For instance, call center work will likely be increasingly taken over by AI, not because talking to an AI is “better” than a real person, but simply because it’s cheaper.2
In other areas, we’re closer to the second phase. Coding may be one of those, with the best coding agents doing better than non-elite programmers (at least on short-horizon tasks). But there are still lots of opportunities for collaboration as the AIs have different strengths and weaknesses than human coders.
Experience from previous AI successes in narrower fields suggests the third “centaur” stage of AI, where human-computer teams do better than either alone, may last for longer than we might think. Radiology seems to be a good example where, even if machine vision can beat the human eye, the more wide-ranging facets of the job still require radiologists.
Even if we reach artificial general intelligence (AGI), and computers can truly perform all of the cognitive and physical work of human beings, we may still end up in a situation where the many differences between AI agents and human beings allow humans and computers to work side by side for some time. If chess is any example, it might look like a human being “managing” a team of AI agents, understanding their strengths and weaknesses.
Of course, if we reach artificial superintelligence (ASI), where computers can do far better than humans at all tasks (including, presumably, the task of managing other AI agents), then it’s hard to say what the landscape for career skills will look like. Both extinction and utopia no longer seem like far-fetched possibilities, and making predictions about what the world will be like seems almost impossible.
Thus, a bit like the story of a drunk looking for his keys underneath the streetlamp—because that’s where the light is, if not necessarily his keys—I’m going to focus more on how to think about skill investments in three different scenarios:
- AI plateaus at roughly the level we see now. In other words, the 2022-2025 era of massive leaps in practical capabilities and usefulness flattens out. New versions of GPT become like new versions of an iPhone—new bells and whistles, but few radical changes.
 - AI continues to improve, but AGI remains elusive. We get better and better agents that can handle more tasks, but they remain (relatively) narrow tools. This could be because some tasks remain difficult to automate, so there is still plenty of work for humans to do, or because getting enough 9s of reliability is hard, so human beings need to supervise and step in, even if the AI is in principle capable of doing the entire job.
 - We reach AGI. All cognitive (and physical?) tasks can be performed by machines, but there are enough differences in relative strengths and weaknesses that collaboration is still possible. In this stage, there’s an extended “centaur” period where humans collaborating with and managing AI becomes a dominant mode of work.
 
The most extreme scenario, where we move rather briskly to the fourth phase of Cowen’s model, is probably the most alarming, but it also seems the least tractable for analysis.
3. How does the path to valuable skills change?
Even assuming the more moderate AI scenarios, this still leaves open the question of where best to spend your time in building skills and human capital. In those scenarios, people still work for a living, but the nature of work is likely to change. Some skills will rise in importance, others will fade away.
A first distinction that’s important to note is that tasks are automated, not jobs. Real jobs involve many different tasks and responsibilities. So long as some of the tasks require human performance, there can still be a place for human skill.
Therefore, it’s a mistake to simply look at a job that currently has a large AI exposure and assume that its workers are inevitably headed towards unemployment. While that’s a possible scenario, it could also go the other way: AI automating some tasks could dramatically increase productivity, thereby increasing the value of the people doing that work.
To make that concrete, consider programmers. If demand for software is fixed, and AI can do most of the work of current coders, then that demand can be met by employing far fewer developers. Alternatively, if AI enables current programmers to make far more (or better) software, the industry’s productivity could go up dramatically. Cheaper, better software may in turn increase demand, actually creating more jobs for programmers than before. Something similar happened with automobiles: once automation made cars widely affordable, employment in manufacturing increased rather than decreased.
But even if the amount of people employed doesn’t change, the skills needed to perform valuable work may change. I can imagine a few possibilities:
- If AI augments mediocre workers more than the most-skilled workers, it might lead to wage compression. There’s already some evidence of this, with ChatGPT increasing productivity while reducing the variance between employees in a simulated writing task.
 - If AI automates the work that lower-skilled employees do, but fails to automate the work that the highest-skilled workers do, that might lead to increasing wages for the most skilled with fewer gains for the bottom. The Stanford report on reduced early-career tech hires cited earlier may be evidence of that, with entry-level programming opportunities falling even while mid-career programmers are better off.
 - Alternatively, AI might automate many jobs entirely, but create new ones in their stead. Perhaps the future of programming doesn’t involve writing or understanding code, but rather being adept at managing a small army of coding agents and coaxing out their ideal behavior.
 
In the first scenario, AI may reduce the need for learning a particular skill, because the AI helper can do much of it for you. This would represent a kind of “dumbing down” of the craft and, while that would certainly be bad for the currently-skilled, it might create opportunities for economic mobility, just as the switch from skilled mechanics to assembly lines created middle-class jobs for large swaths of the population.
In the second scenario, AI extends current trends in skill polarization. Because the simpler parts of the job get automated away, what is left requires greater levels of skill. Something similar happened in the last few decades with ever-greater requirements for college degrees and post-graduate training. In such a world, getting to the frontier of a skill is both more pressing and more rewarding, but also much harder to do as there are fewer opportunities for apprenticeship, since little of the simpler work is of economic value.
In the third scenario, it could be that a completely different group of people comes to dominate a career specialty. Maybe tomorrow’s programmers will need to be AI whisperers, understanding the quirks of an AI’s “psychology” more than the underlying codebase those agents are creating.
Depending on the exact outcome, it’s easy to imagine AI making investing in a particular skill less valuable (because AI augmentation will raise the productivity floor), more valuable (because AI automation will mean only the very best or highest-skill components remain), or even valueless (because the nature of the skill needed changes so dramatically it has essentially no overlap with the current skills required to perform the same work).
Some Tentative Conclusions
Given the wide uncertainty in both the progress of AI technology and what wide-scale adoption and diffusion of that technology will mean for actual jobs, it’s hard to form any firm conclusions about what an individual should prioritize learning right now.
Some ideas do seem reasonable to me:
1. Learning to work better with AI.
As the old saying goes, “if you can’t beat ‘em, join ‘em.” In many scenarios, human-plus-AI collaboration would be an important aspect of work, perhaps continuing decades into the future if the more extreme rapid take-off scenarios are discounted.
In this sense, learning to use AI is not so different from previous technological innovations, such as learning to use personal computers or the Internet. Eventually, work will be so suffused with AI interactions that the major divide will be between those who feel comfortable with using AI and those who do not.
For my own sake, I’ve been trying to explore using AI to boost my own work. From my experience, AI is pretty lousy as a ghostwriter, but it has been tremendous in helping me surface research and curate my reading lists. It’s hard for me to imagine going back to the pre-AI research strategies I used when preparing my last book.
2. Learning to work away from AI.
Another possibility is to retreat from the advancing wave. If you can focus on skills and strengths that are further from the frontier of AI capabilities, you’re less threatened by changes in technology. This could involve shifting career directions. I expect gardening and nursing to be among the last jobs to be significantly automated, as they have large non-routine physical components that current AI handles poorly.
I’m also trying to take proactive steps to AI-proof my career by shifting my work in ways that AI has trouble replicating. Part of my motivation for returning to public projects, like my recent Foundations project, was the belief that, in a future with cheap intelligence, my career as a writer will be better protected if I focus more on sharing personal experiences rather than just aggregating information.
3. Learning more general skills.
I’ve long been skeptical of the idea of content-free generic skills. It seems much of what we know is grounded in specifics.
But even if the idea that we ought to abandon domain knowledge and simply teach generalities like “problem solving” or “critical thinking” is a pipe dream, skills and knowledge do vary in their degree of generality. Calculus is more generally applicable than dishwasher repair, and being good at giving presentations is more general than knowing the ins and outs of the latest version of PowerPoint.
In an environment of change, it’s better to be the hardy dandelion than the hothouse orchid. Similarly, I expect that with AI-induced change, people who have maintained diverse interests and skills will be best positioned to take advantage of it, whereas extreme specialists will face a greater risk of extinction.
===
At least, that’s how I’m thinking about these issues, as of right now. But I’m curious about your position. Would you describe your feelings about AI as more excited or nervous? Do you think AI is undersold or overhyped? Have you made any changes in your career investments as a result of AI? I’d like to know! Write your thoughts in the comments.
Footnotes
1. Whether this counts as useful innovation (AI reads content so you don’t have to) or as parasitic on the actual work (the creator still makes the content, but AI reaps the value of it) is another question.
2. In this sense, mediocre AI progress may actually be more dangerous. It can automate away real jobs, but the productivity improvements may be so small that they don’t create new opportunities for labor in their wake.