How to Stay Valuable When AI Writes All The Code
A breakdown of Anthropic CEO’s “Powerful AI” essay — and how engineers can prepare in a post-AI world
Welcome back to Path to Staff! I’ve spent the past week reading (and re-reading) Dario Amodei’s latest essay. If you don’t know him, Dario is the CEO and co-founder of Anthropic.
Last week, he wrote The Adolescence of Technology. In it, he shares his thoughts on the risks of Powerful AI and what it takes for humanity to overcome them. If you have time (~2 hours) this weekend, I strongly suggest giving it a read.
This essay is a follow-up to his previous Machines of Loving Grace, which is also a great read.
Given the length of this latest essay, I’ll summarize the relevant key points here. In the second half of this post, I’ll share my thoughts on what roles & skills are needed for software engineers in a post-AI world.
Powerful AI = Country of Geniuses
First off, Dario talks a lot about AI safety. This is in line with why he started Anthropic with six co-founders in the first place: he worries about how dangerous AI can become if left unchecked. He’s also written a NYTimes piece on regulating AI companies.
Dario is most concerned about “Powerful AI”, defined as a “country of geniuses in a datacenter”: a pure intelligence smarter than a Nobel Prize winner, with access to every human interface (text, audio, video, mouse, etc.), able to complete extremely complex tasks autonomously.
In this essay, Dario worries that this “country of geniuses” could lead to five risks:
Autonomy risks: AI systems could develop unpredictable, harmful behaviors, potentially acting against human interests at scale.
For example, in Anthropic’s own experiments, Claude engaged in deception when given training data suggesting Anthropic was evil, attempted blackmail when told it would be shut down, and afterwards concluded it was a bad person.
Misuse for destruction: Powerful AI could enable unskilled individuals to create biological weapons.
Misuse for seizing power: Authoritarian governments could use AI for surveillance and propaganda to establish inescapable totalitarian control.
Major economic disruption: AI will displace jobs faster and more broadly than any previous technology. We’re already seeing early versions of this with tools like Claude Code.
Indirect effects: dangerous changes to human purpose and meaning. This is the catch-all bucket, and I suspect even Dario doesn’t know what else could land here.
For instance: AI psychosis and chatbots driving people to suicide, or AI inventing new religions and converting millions.
Three Simple Takeaways
There’s a lot of good content in his 19,000-word essay. Here are my personal takeaways:
Dario doesn’t walk away and abandon these problems; he’s actively working on them at Anthropic. He’s given Claude a constitution, a set of values and behaviors its models are trained to follow. This is commendable and a distinctive approach. From what I hear from friends at Anthropic, it’s something he and the company embody on a day-to-day basis.
He’s most concerned about bioweapons, and secondarily about cyberattacks. Dario believes a malicious actor or government could create a bioweapon that goes undetected. AI classifiers will help, but ultimately governments will have to step in through regulation. I believe that while bioweapons are a concern, many other types of attacks shouldn’t be ignored: financial attacks, social engineering, and personal attacks on high-visibility figures are all major risks too.
Pain is incoming. The short-term transition to leveraging AI will be unusually painful compared to past technologies. I’m already seeing this play out in software engineering at many, many companies. Dario believes this will happen because humans and labor markets are slow to react and equilibrate.
How to Stay Valuable As Software Engineers
If you identify as a Coding Machine, Michael’s post last month lays out the 4 new archetypes.
But if you’re a regular software engineer who might not be working in an AI company, what should you do? What’s the right path forward?
I think there are two paths forward: (1) sharpening the skills that matter, and (2) being explicit about your role as a software engineer.
Skills That Remain
Even if AI takes over more, or all, of the “typing” and “producing”, there are skills that compound and remain valuable.
Here’s what I think will be the top three:
Thinking
You need to practice deep thinking. This means sitting with a problem until you can explain it simply. Until you can see the tradeoffs clearly.
Even with today’s best reasoning models, AI can’t:
Deeply understand the problem from your experience and viewpoint
See how multiple disciplines connect
Figure out the big picture (e.g. goals / mission / vision):
What is missing?
How do we achieve our goals?
Who do we need to involve to get there?
Verifying
You need to verify. As engineers, this usually means reading code well and fast, spotting failure modes, and signing off on the work.
Verification usually involves:
A good grasp of the system’s architecture (to ensure new code fits with what already exists)
Ensuring that end-users have a pleasant experience (where the design/product skills come into play)
Reading code fast, given the large volumes of code AI produces
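To make that concrete, here’s a tiny, hypothetical example (the function and names are my own, not from any real codebase) of the kind of snippet AI assistants often produce: it looks right and even runs, but a fast, careful reader catches the failure mode before it ships:

```python
# Looks plausible, and even "works" on the first call, but the default list
# is created once and shared across every call that omits `errors`.
def collect_errors(error, errors=[]):   # bug: mutable default argument
    errors.append(error)
    return errors

print(collect_errors("timeout"))       # ['timeout']
print(collect_errors("rate limit"))    # ['timeout', 'rate limit']  <- surprise

# What a reviewer should ask for instead:
def collect_errors_fixed(error, errors=None):
    errors = [] if errors is None else errors
    errors.append(error)
    return errors
```

Nothing here crashes, which is exactly why reading fast and knowing the classic failure modes matters.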
Communicating
You need to communicate well.
With so much more auto-generated text and slides, you need to get better at:
Communicating with agents
Defining a task: what is the minimum amount of context an agent needs to produce something useful? (See the sketch after this list.)
Reducing back-and-forth: how can you communicate effectively such that agents don’t have to go back and forth?
Communicating with humans
Sharing with teammates: How does your work impact them? What could potentially benefit them?
Communicating results upwards: How do you summarize your impact concisely for leadership?
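As a sketch of what “defining a task” can look like in practice, here’s a minimal, purely illustrative brief for a coding agent. The structure and field names (goal, context, constraints, acceptance) are my own, not from Dario’s essay or any particular agent framework:

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    """A minimal task definition for a coding agent (purely illustrative)."""
    goal: str               # the outcome, stated in one sentence
    context: list[str]      # only the files/docs the agent actually needs
    constraints: list[str]  # hard requirements: APIs, style, performance budgets
    acceptance: list[str]   # how you (or a test) will verify the result

    def to_prompt(self) -> str:
        """Render the brief as a single, self-contained prompt."""
        return "\n".join([
            f"Goal: {self.goal}",
            "Context: " + "; ".join(self.context),
            "Constraints: " + "; ".join(self.constraints),
            "Done when: " + "; ".join(self.acceptance),
        ])

brief = TaskBrief(
    goal="Add retry with exponential backoff to the payment client",
    context=["payments/client.py", "docs/retry-policy.md"],
    constraints=["max 3 retries", "no new dependencies"],
    acceptance=["existing tests pass", "a new test covers the retry path"],
)
print(brief.to_prompt())
```

The exact fields matter less than the habit: if you can’t fill in “done when,” the task isn’t ready to hand off.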
Pick a Lane Early: Generalist or Specialist
Besides skills, it’s even more important to figure out early on who you want to be as a software engineer.
In the past, there was time (3-4 years, the formative years between fresh grad and senior engineer) to figure out which direction you leaned towards. In a post-AI world, drifting is expensive.
Now there are two clear lanes that stay defensible as an engineer: generalist or specialist.
A generalist builds depth across 2-4 relevant domains.
A specialist dives deep into one domain and becomes the expert.
If you’re not sure which one you are, ask yourself a few questions:
Do you find joy switching between domains?
What do you usually do when presented with a problem?
Do you tackle it from several sides, or go super deep in one area?
Lane 1: The Generalist (Intersection Builder)
Generalists master complementary skills and combine them to create something more powerful than any one alone. That intersection becomes your moat.
Mastering a couple of these areas alongside engineering (e.g. design + business + biology) puts you in a very strong position. AI won’t easily replace you, since weaving information across industries requires real expertise.
For engineers, here are the combinations I believe are most promising (a non-exhaustive list):
• Engineering + Healthcare (pharma, clinical workflows, regulated systems)
• Engineering + Business (pricing, growth, strategy, incentives)
• Engineering + Finance (payments, risk, markets, infrastructure)
• Engineering + Biology (biotech, computational biology, research tooling)
Personally, I’m a Generalist, leaning towards Business and Finance. These align with my interests and the career I’ve been building. I spend time sharpening my knowledge in these areas by reading extensively and seeking out experts in those fields.
What about design and product roles, you might ask? I think in today’s world, this is going to be a base requirement. In the past, these roles were separated, but with AI empowering everyone, these boundaries are going to blur.
Lane 2: The Specialist (Frontier Deep Diver)
Specialists dive deep into one domain and become the undisputed expert. They’re the one everyone calls when a truly difficult question lands in their area.
At Meta, I worked with an engineer who knew web rendering technology at an absurd level of depth. Whenever we hit some obscure performance cliff or saw rendering bugs that made no sense, we’d call them. They would look at the problem for maybe two minutes, then explain exactly what was happening and exactly how to fix it. Every time. Everyone knew they were the expert. And unsurprisingly, this person rose fast through the ranks.
To be a specialist, you need to operate at the frontier of your area. Your advantage is judgment under uncertainty:
You can spot failure modes fast.
You can tell “looks right” from “is right.”
You can validate AI output with confidence.
I’ve seen the successful specialists around me read papers by leading scientists, develop their own thinking, and push new ideas forward, sometimes filing patents along the way. They inspire others and start like-minded communities. Famous public AI figures who are specialists include Andrew Ng and Yann LeCun.
If you’re a specialist, you will be rewarded for your knowledge in the post-AI era.
The Trap to Avoid
The worst place to be is halfway: decent at a few things, expert at none.
Pick a lane. Grow into it.
Master it.
TL;DR
“Powerful AI” = a “country of geniuses in a datacenter.” Dario Amodei argues it could arrive in 1–2 years, and the transition will be unusually painful—especially for early-career, routine white-collar work.
Your durable edge won’t be typing code. It’s going to be:
Thinking (framing + tradeoffs),
Verifying (judgment + quality), and
Communicating (clear specs + clear impact).
Pick a lane now before you get left behind:
Generalist = build a moat at an intersection of 2–4 domains.
Specialist = go absurdly deep and become the person who can validate what AI produces.



