OpenAI Codex CLI AMA: Key Insights from the Team's Internal Usage
Fellow developers,
OpenAI's Codex team just did an AMA, and I read through the entire conversation. The information density was remarkably high, and not just about the tool itself: there was a lot about what's coming for those of us who write code for a living.
If you're not familiar with Codex, it's basically an AI programming assistant that can write code, refactor it, even plan entire projects. But what made this AMA really valuable was hearing how OpenAI's own engineers use this tool, and their thoughts on the future of programming work.
OpenAI Team's Internal Practice: 99% of Code Written by AI
What shocked me most was what team member Eason Goodale shared: 99% of his code changes are now made by Codex, and his goal for next year is to not type a single line of code by hand. It sounds exaggerated, but looking at how the whole team works, this might actually be the future norm.
Joseph Trasatti described a fascinating work pattern: he uses Codex to build prototypes rapidly, needing only about 5 prompts to produce 3 different versions of a feature in a single day. He doesn't mind when these prototypes get thrown away, because the cost of building them is so low. Think about the painful feeling of spending days writing code, realizing it's the wrong direction, and having to start over. That psychological burden is largely gone.
Even more interesting is designer Ed Bayes' approach. He now spends 70% of his time in Codex and only 30% in design tools. Designers can work directly in the code and implement UI interactions themselves, and the boundary between frontend and design is blurring.
Product manager Alexander Embiricos' example might be even more extreme. He doesn't know Rust, yet Codex wrote almost all of his Rust code. He can even start tasks from ChatGPT on his phone during meeting breaks, then pick the code up on his desktop through the VS Code extension when he's back. This way of working would have been pure science fiction just a few years ago.
Evolution of Developer Roles: From Writing Code to Designing Systems
So what will we developers become? Just "prompt engineers" who write prompts?
Joseph Trasatti's perspective is enlightening. He believes abstraction levels will keep rising, and our work will move closer to the "system level" than the "code level". He already hands all simple CRUD interfaces to Codex and doesn't want to write them any other way. His vision for the future: individual engineers owning large product spaces, becoming true full-stack generalists who bring not just technical skill but product thinking and design sense.
He used a great metaphor: Iron Man and Jarvis. Even if AI becomes much better at programming than humans, we're still the ones in charge, with AI as our super assistant. There might be AR interfaces in the future that let us see the system design visually and direct the AI to build different parts, as if we were talking to a colleague. In that mode, engineering remains "the coolest job with the highest autonomy."
Hanson Wang raised an interesting direction: designing new programming languages for AI. Since more code is being written by AI, designing languages that AI won't easily misuse makes a lot of sense.
Real User Pain Points: Speed, Limits, and Fine Control
Of course, reality is always harsh. The most common user complaints in the AMA happen to be the same ones I've encountered:
Speed issues. The new GPT-5-Codex is smarter but noticeably slower, sometimes taking 15 to 20 minutes on a single task. For developers used to rapid iteration, that wait is genuinely hard to accept.
Usage limits. Many users complain that a single 25-minute task can exhaust their quota, and then they have to wait 3 or 4 days to continue. Worst of all, there's no middle ground between the $20 Plus and $200 Pro plans; users are strongly asking for a tier somewhere around $50 to $100.
Fine control. This is an "extremely important" but overlooked feature: granular accept/reject of AI-generated code. GitHub Copilot lets you accept or reject each code block individually, which is crucial in real work. AI output is usually good but not perfect, so we need this level of control.
Cross-platform experience. Users want to move work seamlessly between ChatGPT, Codex CLI, and the IDE plugins. Imagine starting a task on your phone and continuing it immediately on your desktop; that's the experience we really need.
Practical Advice for Developers
Based on this AMA and my own experience, here are some suggestions:
Learn to write good prompts. This really is a craft. Expressing what you need clearly and precisely matters more than you think. Start with simple tasks and gradually build the skill of talking to an AI.
Try local deployment. Codex CLI is open source, and you can run it locally with ollama and codex --oss without worrying about API costs; it's a great learning opportunity (a command sketch follows this list).
Upgrade your systems thinking. Since AI can handle the low-level code, where's your value? In system design, architecture decisions, and business understanding. Those are things AI won't replace in the short term.
Stay critical. AI-generated code may "look beautiful," but you need to be the quality inspector. Treat the AI as an efficient junior developer: you're the senior, you're the architect.
Automate everything automatable. Dependency updates, code audits, test generation: all of these repetitive tasks can go to AI, freeing your time for more creative work (see the script sketch after this list).
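Here's a minimal sketch of the local setup mentioned above, assuming you already have ollama installed. The model name below and the exact behavior of codex --oss are assumptions on my part, so check the current Codex CLI docs before relying on them.

    # Pull an open-weight model for ollama to serve locally
    # (gpt-oss:20b is an example; use whatever model your hardware can run)
    ollama pull gpt-oss:20b

    # Make sure the local ollama server is running
    ollama serve

    # Run Codex CLI against the local model instead of the OpenAI API
    codex --oss

Running this way trades quality and speed for privacy and zero API cost, which is exactly what makes it a good sandbox for learning how the tool behaves.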
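And here's a rough illustration of what "automate everything automatable" could look like as a scheduled maintenance script. The non-interactive codex exec invocation and these prompts are assumptions on my part, not something the team showed in the AMA; check codex --help for the equivalent in your installed version.

    # Hypothetical nightly maintenance run; `codex exec` (non-interactive mode)
    # and these prompts are assumptions -- adapt them to your own CLI and workflow.
    codex exec "Upgrade outdated dependencies, run the test suite, and summarize any breakage"
    codex exec "Generate unit tests for files changed since the last release tag"

The point isn't these specific prompts; it's that repetitive chores become something you schedule rather than something you do.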
Final Thoughts
The AI programming era is really here. Codex is just the beginning, but its direction is clear: we won't be replaced, but our work methods will completely change.
In the future, AI programming tools will sit naturally in the background, the way compilers do today, and we'll stand on their shoulders to build systems we couldn't have imagined before.
This isn't a threat, it's an opportunity. The key is how you seize it.
Want to read the full AMA? Here's the original Reddit thread.