A year in

Twenty-five years managing software delivery. SaaS companies, enterprise programs, delivery organisations that reported to regulators and boards. Tools came and went. The underlying work stayed the same. Strategy sets direction. Tactical layer plans. Teams execute. Somebody, somewhere, signs for every decision that ships.

I have been using AI in software development for over a year now. Every day. Primary tool, no longer an experiment.

The first months felt like what social media keeps claiming. A real step change. Refactors I would have deferred because they weren't worth a weekend got done between coffees. Tasks that used to take an evening started taking an hour.

Then I noticed something I didn't like.

The discipline was thinning

The reflexes I had spent twenty-five years building into teams (who decided what, who reviewed what, why one architecture and not another) did not have anywhere to attach.

On a solo project, fine. I am the architect. I am the reviewer. I am the one who takes the 2am call if something breaks. The agent ships, I sign, it is all me.

A thoughtful organisation does not work that way. A real delivery org has a CTO explaining a production incident to the CEO on Monday morning. A Change Advisory Board asking why the encryption library was swapped. An auditor asking which named human approved the change that touched the billing service. A platform team finding out three weeks later that a new dependency got written into four services and disagreeing with the choice.

All of that machinery assumes every decision has a name attached to it. Pull Claude Code, Cursor, Copilot, or any of them into that environment and the names start getting fuzzy. The agent produced it. The developer pressed accept. Who actually decided? What did they actually review? In most organisations right now the honest answer is: not sure.

The full-agentic answer doesn't scale

A serious movement is betting that all of this gets solved by the models getting better. Cleaner code, fewer hallucinations, agents running longer without going off the rails. Fine. The models will improve. Model quality does not solve this problem.

Accountability is not a model-quality problem. An auditor asking who approved a change does not accept "the agent decided, and the agent was very confident." A board member asking why this vendor was chosen does not accept "the agent evaluated the options." Accountability is a human concept. A governance concept. A smarter model does not supply it.

The second issue with the full-agentic position is scale. The current demo is one developer running agents to build a prototype on a Saturday. I have built that way myself. It is a real capability, not a gimmick. But it lacks a governance model. It is not how serious organisations ship software. Complex products are built by teams, by trains of teams, by coordinated groups of humans, each with specific expertise, working across shared systems and shared consequences. One developer with an agent does not generalise to that world by stacking more agents on top. It generalises by adding governance.

Every new capability opens a governance gap

I have watched this pattern three or four times now. Every capability that genuinely changes execution speed opens a governance gap. The internet opened one, cloud opened one, the smartphone opened one, and now AI has. Each time, the response that actually worked was not "slow down." It was: build the governance model that makes the new speed safe. Build the audit trail. Build the attribution. Build the review discipline. Let the teams keep going fast.

AI-generated code is the same pattern at a bigger scale. Faster than anything before it. The gap opens faster because the code volume is higher and the trace from decision to commit is shorter. Governance has to move faster too, or the gap wins.

Conductor contains no AI

This is the part people keep getting wrong.

They see "AI-governed SDLC platform" and they assume Conductor itself must be an AI platform. Another Cursor. Another Copilot with opinions. Something trying to beat the frontier labs at their own game.

It isn't. Conductor contains no AI in its internals. You bring your own.

Claude. GPT. A local Qwen. Whichever model your security team has approved this quarter. Conductor wraps the model you chose in governance. That is the whole product.

What Conductor supplies is the scaffolding. Every change is attributable to a named human architect. Every decision has an audit trail that outlives the session. Work flows through a mode sequence (Interrogation, Context Interpretation, Exploration, Plan, Execution, Validation) so nothing reaches production without passing each gate. Project Guardrails constrain what the agent is allowed to touch. Skills encode how your organisation actually wants software delivered, selectable per project. ThreadBriefs capture the reasoning behind every piece of work, not just the diff.
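The mode sequence can be pictured as an ordered set of gates, each one signed by a named human before the next opens. A minimal sketch of that idea, in plain Python — the class, field names, and gate logic here are illustrative assumptions, not Conductor's actual API:

```python
# Hypothetical sketch (not Conductor's API): a change record that must pass
# an ordered mode sequence, each gate approved by a named human, before it
# counts as shippable. The approvals dict is the audit trail.
from dataclasses import dataclass, field

MODES = [
    "Interrogation",
    "Context Interpretation",
    "Exploration",
    "Plan",
    "Execution",
    "Validation",
]

@dataclass
class ChangeRecord:
    summary: str
    architect: str  # the named human accountable for the change
    approvals: dict = field(default_factory=dict)  # mode -> approver name

    def approve(self, mode: str, approver: str) -> None:
        # Gates must be passed in order; skipping ahead is rejected.
        expected = MODES[len(self.approvals)]
        if mode != expected:
            raise ValueError(f"cannot skip ahead: next gate is {expected!r}")
        self.approvals[mode] = approver

    def shippable(self) -> bool:
        # Shippable only once every gate carries a named approver.
        return list(self.approvals) == MODES

change = ChangeRecord("swap encryption library", architect="a.chen")
for mode in MODES:
    change.approve(mode, approver="a.chen")
assert change.shippable()
```

The point of the sketch is the shape, not the code: every gate records a name, the record outlives the session, and "who approved this" is always answerable by reading `approvals`.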

The AI is commoditised. Governance is not.

AI agents without receipts are liabilities. The speed is real. The liability is real too. Both can be true at once.

Old-world discipline, new-world agents

Decades of software and IT delivery management did not become obsolete when agentic coding arrived. The opposite happened. The old discipline is exactly what new-world agents need, because the old discipline was never really about humans. It was about making decisions traceable, reviewable, and defensible. Humans happened to be the ones doing the work.

If you cannot answer "who changed what, and why" about what your agents shipped last sprint, you do not have an AI problem. You have a governance problem. Governance problems get expensive in specific, predictable ways: audit findings, compliance breaches, architectural drift, outages nobody can explain, customer questions nobody can answer.

Old-world discipline. New-world agents. That is all Conductor is.

Your agents ship. Who signs?

Conductor is waitlist-only for now. We are taking a small number of pilot customers: engineering organisations where "the AI did it" is not an acceptable answer when something goes wrong.

Join the Waitlist