January 27, 2026

The Just-in-Time Software Revolution: What It Means for Engineering Organizations

The shift from static codebases to on-demand, intent-driven software is reshaping how engineering teams operate. Here's what leaders need to understand.

AI · Engineering Leadership · Agentic AI · Process Improvement

Something fundamental has changed in how software gets made, and most engineering leaders I talk to haven't fully reckoned with it yet. We've crossed a threshold where software is no longer a static artifact — something you write, ship, and maintain. It's becoming a transient, on-demand response to human intent. You describe what you want. An agent builds it. The code exists for as long as it's needed, in exactly the shape it needs to take.

This is the Just-in-Time software revolution. And it changes everything about how engineering organizations need to operate.

The Shift from Writing Code to Steering Intent

For decades, the unit of engineering work has been the code commit. An engineer understands a problem, translates that understanding into syntax, tests it, reviews it, and ships it. The entire machinery of modern software engineering — version control, CI/CD, code review, sprint planning — is built around this workflow.

That workflow is being replaced. Not gradually, not theoretically — right now. We've moved from "writing code" to "steering intent." The engineer's job is increasingly to describe what should happen, provide the right context, set the constraints, and evaluate whether the output meets the need. The actual generation of code is becoming a commodity operation performed by reasoning agents that can decompose complex problems, gather information across data silos, execute code, and iteratively refine their output until it meets the stated objective.

This isn't science fiction. It's the daily reality for a growing number of engineering teams.

From Copilots to Agents

The trajectory here matters. We started with autocomplete — GitHub Copilot suggesting the next line of code. That was useful but incremental. It didn't fundamentally change how teams worked. It just made individual engineers slightly faster at the mechanical parts of their job.

What's happening now is categorically different. We've moved from simple LLM-powered suggestions to autonomous reasoning agents capable of multi-step planning, memory persistence across sessions, and direct interaction with digital environments. These agents don't just complete your code — they decompose complex problems into subtasks, gather information from multiple sources, execute and test code, observe the results, and refine their approach.

Look at what Google is doing with their code completion agent — it's not just generating code. It's functioning as a coordination layer, understanding the relationships between files, managing dependencies, and orchestrating changes across an entire codebase. The agent has context that spans far beyond the single file your cursor happens to be in.

This is the leap from tool to collaborator. And it demands a fundamentally different organizational posture.

The "Vibe Coding" Reality

The numbers are hard to argue with. Somewhere between 30% and 40% of new enterprise code is now AI-generated, and 92% of US-based developers report using AI coding tools daily. The modern IDE doesn't just syntax-highlight your code — it understands entire dependency graphs, can reason about architectural implications, and generates solutions that account for the broader system context.

The workflow has shifted to what some are calling "vibe coding" — the developer provides intent, and the agent executes. Under the hood, the generation process has become remarkably sophisticated: an abstract syntax tree gets created from the intent, then translated into the target framework and language, then optimized through a self-reflection loop where the agent evaluates its own output for correctness, performance, and adherence to existing patterns.
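The self-reflection loop described above can be sketched in a few lines. Everything here is an illustrative stand-in — `generate_draft` and `evaluate` are hypothetical placeholders for model calls and pipeline checks, not any vendor's actual API — but the control flow is the point: generate, evaluate, feed findings back, repeat until the output is acceptable.

```python
# Minimal sketch of a generate -> evaluate -> refine loop.
# generate_draft and evaluate are placeholders for model calls and
# automated checks (tests, linters, scanners); the loop itself is real.

def generate_draft(intent: str, feedback: list[str]) -> str:
    # Placeholder: a real agent would call a model with intent + prior feedback.
    return f"code for: {intent} (rev {len(feedback)})"

def evaluate(draft: str) -> list[str]:
    # Placeholder: a real agent would run tests, linters, and security scans.
    return [] if "rev 2" in draft else ["does not yet meet acceptance criteria"]

def refine_until_acceptable(intent: str, max_rounds: int = 5) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        draft = generate_draft(intent, feedback)
        issues = evaluate(draft)
        if not issues:               # loop exits when the evaluation is clean
            return draft
        feedback.extend(issues)      # findings feed into the next attempt
    raise RuntimeError("intent not satisfied within iteration budget")
```

The essential design choice is that the evaluator, not the generator, decides when the loop stops — which is exactly where organizational quality standards get encoded.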

These "flow-state" IDEs are not toys. They're producing production-grade code at a pace that would have been unthinkable two years ago. And they're getting better every month.

But here's where I part ways with the hype cycle.

The Process Problem Underneath

The technology is moving fast. Organizations are not. And that gap is where the real risk lives.

This is the point I keep coming back to in every conversation with engineering leaders: companies that lack solid engineering processes will see AI amplify their dysfunction, not solve it. The agents don't care whether your architecture is sound. They don't know that your team has no code review standards, or that your security scanning pipeline has been "on the roadmap" for three quarters. They'll generate code at whatever standard you implicitly set — which, for most organizations, is lower than they'd like to admit.

The security debt problem is a perfect illustration. Studies show that 24.7% of AI-generated code contains security vulnerabilities. Not because the models are inherently insecure — but because there's no process to catch the flaws. No automated scanning in the pipeline. No security review gate calibrated for the volume of AI-generated code. No feedback loop that teaches the agent what your organization considers acceptable.

When an engineer writes 50 lines of code a day, a manual security review might catch the issues. When an agent generates 500 lines in an hour, that same manual process collapses. The volume breaks whatever ad-hoc quality gates you had in place.

And the organizational shift compounds this. Engineers are transitioning from specialists who deeply understand the code they write to orchestrators who direct agents and evaluate output. That's a fundamentally different skill set. It requires different training, different tooling, different management practices, and different career ladders. Most organizations haven't even started thinking about this transition, let alone managing it.

The 3x Rule and ROI Discipline

There's a useful heuristic emerging around AI feature economics: any AI-powered capability should create at least three times its compute cost in value. This sounds simple, but it forces a rigor that most organizations skip.

The cost model for AI-generated software is different from traditional development. You're shifting from CapEx — hiring engineers, investing in their growth, building institutional knowledge over time — to OpEx, where compute costs scale with usage. Every API call, every token processed, every retrieval operation has a direct cost. Input tokens, output tokens, retrieval augmentation — these all add up, and they add up differently depending on how well-structured your prompts, your context, and your data are.
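As a back-of-envelope illustration, the 3x check is simple arithmetic over those token and retrieval costs. The per-unit prices below are made-up placeholders, not any vendor's actual rates:

```python
# Back-of-envelope 3x rule check. All prices are illustrative assumptions.

def feature_compute_cost(input_tokens: int, output_tokens: int,
                         retrieval_ops: int,
                         price_in: float = 3e-6,      # $/input token (assumed)
                         price_out: float = 15e-6,    # $/output token (assumed)
                         price_retrieval: float = 1e-4) -> float:
    return (input_tokens * price_in
            + output_tokens * price_out
            + retrieval_ops * price_retrieval)

def passes_3x_rule(value_created: float, compute_cost: float) -> bool:
    # The heuristic: value delivered must be at least 3x the compute cost.
    return value_created >= 3 * compute_cost

cost = feature_compute_cost(input_tokens=200_000, output_tokens=40_000,
                            retrieval_ops=50)
# roughly $1.21 per invocation with these placeholder prices
```

Note how the heuristic punishes sloppy context: bloated prompts and unstructured retrieval inflate `input_tokens` and `retrieval_ops` directly, which is why prompt and data hygiene show up in the ROI math.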

But here's what I keep telling leadership teams: the real ROI from this revolution doesn't come from the technology deployment. It comes from the process improvement that the technology forces you to do. The organizations that are seeing genuine 3x or better returns aren't just plugging AI into their existing workflows. They've redesigned their workflows to be AI-native — structured inputs, clear acceptance criteria, automated quality gates, tight feedback loops.

The technology is the easy part. The process redesign is where the value actually lives.

What to Do About It

If you're an engineering leader staring at this shift — and you should be — here's the sequence that actually works.

Diagnose your processes first. Before you hand your codebase to an army of AI agents, understand where your current engineering processes are strong and where they're held together by heroics and tribal knowledge. Map the actual workflows, not the documented ones. Identify every quality gate, every handoff, every point where information degrades or gets lost.

Understand where agents help vs. where they create faster chaos. AI agents are exceptional at well-defined tasks with clear inputs, structured outputs, and measurable success criteria. They are terrible at navigating ambiguous organizational dynamics, undefined requirements, and processes that depend on someone "just knowing" how things work. If you can't articulate the process clearly enough for a new hire to follow it, an AI agent won't do any better — it'll just fail faster.

Invest in governance and guardrails before scaling agentic autonomy. This means automated security scanning in the pipeline, not as an afterthought. It means code quality standards that are enforced programmatically, not through hope and code review. It means clear policies about what agents can and cannot do autonomously, and at what point a human must review. Get this infrastructure in place before you turn up the dial on AI-generated code volume.
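One concrete, deliberately simplified shape for such a policy is a gate function that decides whether an agent-generated change can ship autonomously or must be routed to a human. The fields and thresholds here are invented for illustration — the point is that the policy is code, not hope:

```python
# Sketch of a programmatic review gate for agent-generated changes.
# Field names and thresholds are illustrative assumptions, not a standard.

from dataclasses import dataclass

@dataclass
class ChangeReport:
    lines_changed: int
    security_findings: int   # output of automated scanning in the pipeline
    touches_auth_code: bool  # sensitive areas always escalate

def requires_human_review(report: ChangeReport,
                          max_autonomous_lines: int = 100) -> bool:
    if report.security_findings > 0:
        return True          # never auto-merge with open security findings
    if report.touches_auth_code:
        return True          # sensitive paths always get a human reviewer
    return report.lines_changed > max_autonomous_lines
```

In practice this kind of check runs as a required step in the pipeline, so turning up agent autonomy is a matter of tuning thresholds rather than rewriting process.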

Build feedback loops from day one. The organizations that are struggling with AI adoption are the ones that deployed agents and then have no systematic way to evaluate whether the output is good. You need metrics — not vanity metrics about how many lines of code were generated, but outcome metrics about defect rates, security findings, time-to-production, and customer impact. If you can't measure whether the AI is helping, you're flying blind.
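A minimal version of that measurement loop, assuming you log each shipped change with its outcome (the schema below is invented for illustration), lets you compare AI-assisted and human-written work on outcomes rather than volume:

```python
# Sketch of outcome metrics for AI-generated vs. human-written changes.
# The ShippedChange schema is an assumption for illustration.

from dataclasses import dataclass

@dataclass
class ShippedChange:
    ai_generated: bool
    defects_found: int          # post-release defects traced to the change
    security_findings: int
    hours_to_production: float

def defect_rate(changes: list[ShippedChange], ai_only: bool) -> float:
    subset = [c for c in changes if c.ai_generated == ai_only]
    if not subset:
        return 0.0
    return sum(c.defects_found for c in subset) / len(subset)

history = [
    ShippedChange(True, 1, 0, 4.0),
    ShippedChange(True, 0, 0, 2.0),
    ShippedChange(False, 1, 0, 12.0),
]
# Compare defect_rate(history, True) vs. defect_rate(history, False)
# instead of counting how many lines the agent produced.
```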

The Organizations That Win

The just-in-time software revolution is real, it's accelerating, and it will fundamentally reshape engineering organizations over the next few years. But the winners won't be the organizations that adopt AI the fastest. They'll be the ones that fix their processes AND adopt AI — not one or the other.

I've spent over twenty years watching engineering organizations try to solve process problems with technology. It never works. New tools on top of broken processes just give you faster, more expensive broken processes.

The organizations that will thrive in this new era are the ones that treat the AI revolution as a forcing function — a reason to finally do the hard, unglamorous work of getting their engineering processes right. Because once your processes are sound, the AI becomes almost trivially easy to integrate. And when it does, the results are genuinely transformative.

The revolution is here. The question isn't whether to participate. It's whether your organization is ready to participate without making a mess. And if the honest answer is "not yet" — that's fine. Start with the process work. The agents will wait.

Need help with your AI transformation?

We help engineering organizations build the processes and guardrails needed to adopt AI without creating chaos.