DHH Is Wrong: Rails Isn’t a Great Fit for AI‑Era Development
In recent years, the debate over whether “Rails is still the best productivity framework” has resurfaced. My conclusion is straightforward: under an AI‑native product and engineering paradigm, Rails’s core assumptions and bottlenecks make it hard to remain the best choice.
This isn’t a dismissal of the Ruby language. It’s an examination of the “monolithic full‑stack, convention‑over‑configuration, server‑rendered views + glue‑style ActiveRecord” philosophy and its structural mismatch with the practical needs of AI engineering.
Why Rails/Ruby Is No Longer Ideal for AI Programming
Rails is a poor fit for AI programming for three main reasons.
First, Ruby lacks static typing by default, which complicates engineering at scale. Yes, Shopify and Stripe run Ruby at massive scale, but both lean on a bolt‑on type checker (Sorbet) to do it. Without types, AI tools are far more likely to generate subtly incorrect code that slips past early checks and compounds into expensive failures much later.
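A contrived TypeScript illustration (the Invoice type and field names are hypothetical): the kind of field an LLM plausibly invents is a compile‑time error with types, and a silent runtime bug without them.

```ts
interface Invoice {
  id: string;
  amountCents: number; // integer cents, deliberately not `amount`
}

function total(invoices: Invoice[]): number {
  // An AI assistant might plausibly emit `inv.amount` here. With static
  // types that is a compile error caught immediately; in a dynamic
  // language the failure shows up at runtime, far from where it was
  // introduced.
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}
```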
Second, Rails imposes a tightly coupled front‑end/back‑end worldview. DHH openly dislikes modern front‑end toolchains, so the first‑party path is Hotwire and import maps; if you want a contemporary TypeScript/React pipeline, there is no official answer. Good community options exist, but none are first‑party, and with a framework as opinionated as Rails, leaving the official path tends to make future migration and maintenance disproportionately costly. That is one of my biggest concerns.
Third, the ecosystem lags well behind Node.js/TypeScript and Python for AI work. Ruby has nothing that rivals Vercel's AI SDK in breadth and cohesion. Beyond AI, mobile has React Native and Expo, while Ruby's answer, RubyMotion, is long dated. As a developer with 10+ years of Ruby/Rails experience, I wouldn't start a new project on it today.
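For a sense of what that cohesion buys, here is a minimal streaming call with the AI SDK (the model id is illustrative, and exact APIs vary across SDK versions):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Streams tokens as they arrive; the same interface works across
// providers by swapping the `model` argument.
const result = await streamText({
  model: openai('gpt-4o'), // illustrative model id
  prompt: 'Summarize this changelog for release notes.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```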
Framework Design: Why AI‑Native Development and Rails Don’t Mesh
1) Architectural assumptions clash:
- Rails assumes a synchronous request–response cycle, tightly coupled MVC, and a database‑centric design.
- AI workflows are long‑running, event‑driven, streaming, multi‑agent, and tool‑orchestrated.
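To make the mismatch concrete, here is a minimal sketch of the loop an agent workflow actually runs (callLLM and runTool are hypothetical stand‑ins). Nothing about it resembles a single MVC request cycle:

```ts
// Hypothetical types and helpers, for illustration only.
type ToolCall = { name: string; args: unknown };
type LLMStep = { text: string; toolCalls: ToolCall[] };

declare function callLLM(history: string[]): Promise<LLMStep>;
declare function runTool(call: ToolCall): Promise<string>;

// An agent "request" is not one request-response cycle: it is a loop
// that may run for minutes, stream partial output, and call tools.
async function runAgent(goal: string): Promise<string> {
  const history: string[] = [goal];
  for (let step = 0; step < 10; step++) { // bounded iterations
    const result = await callLLM(history);
    history.push(result.text);
    if (result.toolCalls.length === 0) return result.text; // done
    for (const call of result.toolCalls) {
      history.push(await runTool(call)); // tool round-trip feeds context
    }
  }
  throw new Error('agent did not converge');
}
```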
2) Throughput and concurrency model:
- Heavy LLM/retrieval/function‑calling workloads need high concurrency, non‑blocking I/O, job queues, and strong observability.
- The traditional Rails stack (Puma + ActiveRecord + callbacks) tends to bottleneck on I/O‑bound workloads and amplifies N+1 patterns.
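The workload shape, sketched in a non‑blocking runtime (fetchChunk and embed are hypothetical): dozens of I/O‑bound calls in flight at once, which an event loop shares on one thread while a blocking worker‑per‑request model ties up a worker for each.

```ts
// Hypothetical I/O-bound helpers standing in for retrieval and embedding.
declare function fetchChunk(id: string): Promise<string>;
declare function embed(text: string): Promise<number[]>;

// Typical retrieval step: fan out many network calls concurrently
// instead of serializing them through blocking workers.
async function embedChunks(ids: string[]): Promise<number[][]> {
  const texts = await Promise.all(ids.map(fetchChunk));
  return Promise.all(texts.map(embed));
}
```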
3) Context and state management:
- AI apps revolve around “long‑term memory,” “conversation context,” and “work logs with replayability.”
- That aligns better with event sourcing, append‑only logs, vector indexing, and composable pipelines—not CRUD‑driven page/form models.
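A sketch of the data model this implies, with illustrative types: an append‑only event log per conversation, replayed to rebuild context rather than mutated in place.

```ts
// Illustrative event-sourced conversation log: state is derived by
// replaying events, never updated in place.
type AgentEvent =
  | { kind: 'user_message'; text: string }
  | { kind: 'retrieval'; query: string; docIds: string[] }
  | { kind: 'tool_result'; tool: string; output: string }
  | { kind: 'assistant_message'; text: string };

interface LogEntry {
  conversationId: string;
  seq: number; // monotonically increasing per conversation
  at: string;  // ISO-8601 timestamp
  event: AgentEvent;
}

// Replay: rebuild prompt context from the log; no CRUD updates needed.
function rebuildContext(entries: LogEntry[]): string[] {
  return [...entries]
    .sort((a, b) => a.seq - b.seq)
    .map((e) => JSON.stringify(e.event));
}
```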
4) Multimodality and edge collaboration:
- Voice, vision, document agents often coordinate across edge and cloud.
- A monolith is ill‑suited to fine‑grained splitting across cloud functions, workers, and GPU services.
5) Cost and observability:
- AI cost lives in inference and retrieval paths, demanding span‑level tracing and dynamic policies (tiered models, backoff, guardrails).
- Traditional Rails monitoring emphasizes page requests and DB queries, not “prompt → retrieval → tool → agent iteration” end‑to‑end traces.
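What that end‑to‑end tracing can look like with OpenTelemetry's JS API (the span names, attributes, and the retrieve/complete helpers are all illustrative):

```ts
import { trace, SpanStatusCode } from '@opentelemetry/api';

// Hypothetical helpers standing in for real retrieval and model calls.
declare function retrieve(query: string): Promise<string[]>;
declare function complete(context: string[]): Promise<string>;

const tracer = trace.getTracer('agent-service');

// One trace per agent turn: prompt -> retrieval -> model call, each a
// child span carrying the attributes cost analysis actually needs.
async function answer(question: string): Promise<string> {
  return tracer.startActiveSpan('agent.turn', async (turn) => {
    try {
      const docs = await tracer.startActiveSpan('retrieval', async (span) => {
        const found = await retrieve(question);
        span.setAttribute('retrieval.doc_count', found.length);
        span.end();
        return found;
      });
      return await tracer.startActiveSpan('llm.call', async (span) => {
        const text = await complete([question, ...docs]);
        span.setAttribute('llm.output_chars', text.length); // rough cost proxy
        span.end();
        return text;
      });
    } catch (err) {
      turn.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      turn.end();
    }
  });
}
```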
What Stacks Fit AI‑Native Better
Don’t chase fads—choose designs aligned with events, streams, jobs, and observability:
- Runtime: cloud‑native (serverless/workers/queues) with parallelizable orchestration (Temporal/Cloud Tasks/Workflows)
- Communication: HTTP/3, WebSockets, and Server‑Sent Events for long‑lived, streaming connections (a minimal SSE sketch follows this list)
- Data: Postgres + vector indexing (pgvector) or a dedicated vector store; event logs (Kafka/Pub/Sub) for replayable context
- Service boundaries: split “human‑in‑the‑loop,” “retrieval,” “tool execution,” and “evaluation/observability” into independently scalable units
- Languages/frameworks: prioritize non‑blocking I/O and lightweight composition—TypeScript/Node, Go, Elixir, Rust, or server‑side Swift (for Apple ecosystems)
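As promised above, a minimal Server‑Sent Events endpoint in plain Node; tokenStream stands in for any model's token stream:

```ts
import { createServer } from 'node:http';

// Stand-in for a model's token stream.
async function* tokenStream(): AsyncGenerator<string> {
  for (const t of ['Hello', ' ', 'world', '!']) yield t;
}

// Minimal SSE endpoint: one long-lived response, tokens flushed as
// they arrive instead of one buffered page render.
createServer(async (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  for await (const token of tokenStream()) {
    res.write(`data: ${JSON.stringify({ token })}\n\n`);
  }
  res.write('data: [DONE]\n\n');
  res.end();
}).listen(3000);
```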
Rails Can Work—But You’ll Swim Upstream
If your team is deeply experienced with Rails, you can adapt with add‑ons:
- Offload long tasks to Sidekiq/queues
- Use ActionCable/SSE for streaming outputs
- Add OpenTelemetry to trace LLM/tool spans
- Externalize retrieval and agent services to avoid ActiveRecord‑centric coupling (a sketch of one such service follows this list)
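For that last item, one workable shape is to keep Rails as the front door and move the agent loop into a small external service it calls over HTTP. A hedged TypeScript sketch of that service's edge, with a hypothetical runAgent:

```ts
import { createServer } from 'node:http';

// Hypothetical agent loop living outside the Rails monolith.
declare function runAgent(goal: string): Promise<string>;

// A small external agent service: Rails POSTs a goal and gets JSON back,
// so ActiveRecord models never leak into the agent code.
createServer((req, res) => {
  if (req.method !== 'POST' || req.url !== '/agent/run') {
    res.writeHead(404);
    res.end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', async () => {
    const { goal } = JSON.parse(body) as { goal: string };
    const output = await runAgent(goal);
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ output }));
  });
}).listen(4000);
```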
The catch: once you’ve done this, you’re already routing around Rails’s main path and paying complexity and consistency costs.
Conclusion
AI‑native development is not “slapping an LLM onto an old MVC.” It requires rethinking request models, state management, observability, cost control, and deployment topologies. Rails remains elegant for classic web CRUD, but for multi‑agent, streaming, and job‑orchestrated AI apps, it’s rarely the least‑effort choice.
Choose tools that go with the grain of your problem—not just the ones you know best.