Last week [1], I shared some tentative thoughts on how AI changes the nature of learning useful skills and developing human capital.
It’s tough to make predictions, especially about the future. Thus, many of my early conclusions try to answer the question of what we should learn given the large amount of uncertainty, rather than adapting narrowly to a specific vision of the future.
I don’t know what’s coming in terms of AI capabilities. And even if I somehow knew, the second- and third-order effects of technological change would still be hard to predict. When the Internet was nascent, experts easily predicted the rise of global communication. But nobody, as far as I can tell, predicted social media and its effects on politics and discourse.
Similarly, noting that AI might automate or speed up some tasks is much easier than trying to determine what society will look like once that automation has been taken into account.
As a trivial example, I remember when early AI image and video generators were demoed. I thought they were a near-magical technology, but I hadn’t anticipated slop, where the telltale signs of AI generation become synonymous with low quality.
Of course, art is a somewhat unusual case. The meaning behind art has always gone beyond its superficial characteristics. A Leonardo da Vinci painting can sell for hundreds of millions [2], while a nearly-identical copy is worthless. So it wouldn’t be unprecedented if AI art were devalued, even when it is technically competent.
But what about something that has more utilitarian, not merely aesthetic, properties, such as programming? Is “vibe coding”—where you instruct a coding agent to write code on your behalf—a democratization of programming ability? Or is it just more AI slop, flooding the software market with junk?
Vibe Coding: Magic or Slop?
From talking to a lot of professional programmers, I gather there is some skepticism about the quality of vibe-coded software. And since professional programmers are better able to discern code quality than non-coders, this judgment carries some weight.
However, we also can’t rule out self-interest. Just as many artists and illustrators deplore image generation, partly because of its aesthetic inferiority and partly out of concern that it lowers the value of their hard-won skills, I suspect many programmer protests against vibe coding reflect a recognition that it may be an existential threat to their livelihood.
As for me, I’m a somewhat unusual case. I’m a programmer, but I’m an amateur. I learned some coding in high school and university, and I learned a lot more theory and did more hands-on work during my MIT Challenge [3]. Since then, however, I’ve mostly dabbled.
As a result, I’m a lousy judge of code quality. I don’t have decades of experience managing large codebases, so I haven’t developed a taste for good code. I defer to more experienced practitioners.
But, I also don’t have skin in the game. If the entire industry happily agreed to let coding agents write all code, it wouldn’t make much difference to my career. Thus, I’m more dispassionate about vibe coding than people whose livelihoods depend on having mastered this particular rare and valuable skillset.
My Experiences Vibe Coding
When coding agents first arrived several months ago, I was happy to use them to create a few simple programs that I wanted to make but lacked the time and motivation to pursue on my own:
- I made a helper script for watching videos in Chinese. It can download a raw video from YouTube, transcribe it, and generate a study guide which includes an English summary and key words. The goal was to reduce the cognitive load for content I was interested in but found hard to follow.
- I made a tool to help with planning watercolor paintings. It automatically splits a reference photo into lights, midtones and darks and can smooth out features for added simplification.
- I created a reading game for my son. It matches simple phonetic words to pictures. He enjoyed playing it for a little while before he got bored.
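The watercolor planner’s core operation, splitting an image into three tone bands, is a good example of how small these tools really are. Here is a minimal sketch of that logic in plain Python; the cutoff values are invented for illustration, and the actual tool worked on real image files via an image library rather than raw pixel lists:

```python
def split_tones(pixels, dark_cutoff=85, light_cutoff=170):
    """Map grayscale values (0-255) into three flat tone bands:
    darks, midtones, and lights, as a watercolor planning aid."""
    def band(v):
        if v < dark_cutoff:
            return 0      # darks
        if v < light_cutoff:
            return 128    # midtones
        return 255        # lights
    return [[band(v) for v in row] for row in pixels]

# A toy 2x3 "photo": each value falls into one of the three bands.
print(split_tones([[10, 100, 200], [84, 170, 255]]))
# → [[0, 128, 255], [0, 255, 255]]
```

A fuller version would also blur the image before banding, which is where the “smoothing out features for added simplification” comes from.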
In all of these cases, vibe coding was a major improvement over the alternative, since I could “make” a script or simple application with only a handful of back-and-forth calls to the coding agent. I probably could have written most of this code myself, but it would have taken a lot longer, and the end product likely wouldn’t have been worth my time.
One thing that struck me, however, even with those simple applications, was how much conceptual knowledge was required. I wasn’t writing the code, but I still needed to understand how code works and which design constraints to set up to get a good result. Vibe coding may not be the same as writing and understanding code from the bottom up, but it requires more technical savvy than AI image generation or chatbot Q&A.
This seems even more true once you move from simple scripts to bigger projects. My latest vibe coding effort has been trying to design a flashcard app for language learning. This is largely to scratch my own itch—I love Anki [4], but I find its usefulness drops off at intermediate levels and above.
I had the idea of creating an application that would generate study material by drawing on the actual corpus of words in a language, thus allowing me to fine-tune the difficulty automatically. This would help with languages where I have some proficiency, but native media consumption still has a frustratingly high cognitive load.
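To make that idea concrete: one simple way to tune difficulty from a corpus is to rank words by frequency and draw study items from just beyond the learner’s frequency frontier. This is a hypothetical sketch, not the app’s actual logic, and the rank window sizes are invented:

```python
from collections import Counter

def difficulty_window(corpus_tokens, known_rank, width=500):
    """Pick study words just past the learner's frequency frontier.

    Words are ranked by corpus frequency; a learner who knows roughly
    the `known_rank` most frequent words studies the next `width`
    ranks, keeping new material near the edge of comprehensibility.
    """
    ranked = [w for w, _ in Counter(corpus_tokens).most_common()]
    return ranked[known_rank:known_rank + width]

tokens = "the cat sat on the mat the cat ran".split()
# Learner knows the 2 most frequent words; study the next 3 ranks.
print(difficulty_window(tokens, known_rank=2, width=3))
# → ['sat', 'on', 'mat']
```

With a large corpus this gives a dial you can turn: slide the window toward rarer words as proficiency grows.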
This experience of vibe coding has been interesting. It’s basically removed all of the actual implementation difficulty, but I’m still left with a lot of the conceptual difficulty of deciding what the actual behavior of the software should be. In this sense, vibe coding may not involve writing code, but it seems to rely on having abstract knowledge of how software works.
For instance, in trying to design a feature that weighed whether to give the user a new card or have them review an old card, I had an extended back-and-forth with ChatGPT, trying to integrate research findings like Zipf’s law [5] and ACT-R [6] modeling of skill acquisition, and to apply Bayes’ rule [7] to estimate parameters like the learner’s level of prior knowledge. When I brought these ideas up, ChatGPT was happy to follow my thinking and suggest helpful additions. But if I didn’t nudge the conversation in those directions, the AI never suggested them spontaneously.
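I won’t reproduce that whole back-and-forth, but the flavor of the resulting decision rule can be sketched. Suppose each card carries a Beta-distributed estimate of recall probability, seeded by a prior (which corpus frequency could set higher for common words) and updated by review outcomes via Bayes’ rule; a new card is introduced only once no tracked card looks shaky. Every parameter below is invented for illustration, not taken from the actual app:

```python
def recall_estimate(successes, failures, prior_know=0.3, strength=4):
    """Posterior mean recall probability under a Beta prior.

    prior_know seeds a Beta(a, b) prior; each review outcome
    (success or failure) updates the posterior via Bayes' rule.
    """
    a = prior_know * strength + successes
    b = (1 - prior_know) * strength + failures
    return a / (a + b)

def next_card(review_history, threshold=0.8):
    """Introduce a new card only when every tracked card's estimated
    recall clears the threshold; otherwise review the shakiest one."""
    est = {card: recall_estimate(s, f)
           for card, (s, f) in review_history.items()}
    weakest = min(est, key=est.get)
    return "new" if est[weakest] >= threshold else ("review", weakest)

history = {"gato": (5, 0), "perro": (1, 3)}
print(next_card(history))  # "perro" has failed often, so review it
```

The appeal of the Beta-prior form is that the prior does real work when a card has few reviews, then washes out as evidence accumulates.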
In some sense, vibe coding, far from producing slop, was a more rarefied activity. I’m spending all of my time thinking about the underlying algorithms and the central “puzzle” I’m trying to solve, and very little time debugging things like user-interface problems or stumbling over elements of unfamiliar programming languages.
At the same time, despite ChatGPT’s incredible breadth of knowledge, it is weak at bringing together insights. I’m confident that if I wanted a generic flashcard application, the coding agent would deliver something suitable, but it can only do something original with much prodding and guidance.
Centaur Coding and What Skills Remain
This recent experience toying around with a flashcard app fits most closely the collaborative regime I described in my last essay [1]: AI as a tool to augment human intelligence and skill, rather than a straight substitute.
In some ways, vibe coding dramatically reduces the skill level required to produce working software, at least software of minimal complexity. It speeds up prototyping and lets us explore bespoke software that would previously have been prohibitively expensive to create.
But if vibe coding becomes reliable enough in the future that most code is written by coding agents, it also shifts more burden onto human beings to have better conceptual understanding, product vision and design taste. A world where everyone can code is probably one where there’s a lot more slop, but perhaps also a much higher bar for what counts as genuinely useful software. In that sense, vibe coding may impact the variance more than the mean—making software both a lot worse and a lot better at the same time.
Another takeaway is that, to the extent coding agents take over the low-level programming work, software developers will have to add value through higher-level concerns: architecture, algorithms, user experience and design. A successful computer science education may become more theory-heavy and less hands-on.
This was a bias I noted in MIT’s open classes [8], which are heavy on theory but light on practical coding skills. At the time, I felt like that bias hurt me as a programmer, as I learned the theory of computation and multivariate calculus but never learned JavaScript [9].
But now it feels like that bias might actually be more useful in the long run. Perhaps AI-mediated careers will increase the value of a more general liberal arts/science education as opposed to more vocationally focused schooling?
I’m curious what professional programmers think of the recent vibe coding trend. Do you use coding agents in your work? What do you think is the future of programming skill as things like Copilot and coding agents become more adept at handling the routine parts of tasks? Which aspects of the job do you think are more dependent on human strengths?