00 Why we rewrote our method
Most software-delivery methodologies were designed when AI was a library you imported. Today AI is a colleague you collaborate with — one that reads, reasons, writes, tests, deploys, and monitors alongside humans.
Our previous method (2019) served us well for over 700 projects. It was faithful to agile doctrine: Kick-off, Planning, Development, QA, Delivery. It treated AI as a tool someone might use inside a phase.
That model is obsolete. In 2026, the teams shipping great software don't add AI to a phase. They put AI in the loop continuously — across discovery, architecture, build, assurance, and operations — with humans setting direction and owning the decisions that matter.
The ITSense Method is our rewrite of software delivery for that reality.
01 The seven principles
These are the commitments behind every engagement. They aren't aspirational — they're enforced by tools, not by slides.
Context is the product.
Before we write a line of code, we build shared, lasting context — every meeting, every document, every existing system — and we keep it in sync for humans and AI alike.
AI acts. Humans decide.
AI agents do the work. Humans own direction, trade-offs, and irreversible actions. The moment a decision is material — to users, to money, to security — a human signs it.
Shippable by default.
We work in weekly increments of usable value, not quarterly milestones. If an increment can't be demoed to a real user, it doesn't count.
Observability from day zero.
Telemetry, costs, model performance, and error budgets are instrumented before the first production deploy — not after an incident.
Multi-model, not monoculture.
We pick the best model per task. Claude, OpenAI, Gemini, Cohere, Meta, open weights. Vendor resilience is a feature, not a concession.
Security and compliance as code.
Regulation is enforced in pipelines, not in memos. SOC 2, PCI-DSS, HIPAA, BSA/AML, NYDFS Part 500, and local regulation ship as automated checks.
Every decision is documented.
Architecture Decision Records are co-produced by humans and AI, versioned in Git, and treated as first-class artifacts.
02 The substrate — always-on context
Every engagement runs on a persistent substrate that is in place before the first phase and stays active through every one after it. It's the nervous system that lets AI participate instead of just assist.
| Layer | What it does | Implemented with |
|---|---|---|
| Persistent context | Shared memory across all AI agents and humans on the engagement. Every artifact, transcript, decision, and PR enters context. | Claude Cowork |
| Tool access | AI agents read and act on real systems — GitHub, Jira, cloud consoles, data warehouses — without a human middleman. | Model Context Protocol (MCP) |
| Meeting intelligence | Every discovery, review, and retro is captured, transcribed, and turned into structured intelligence cards. | Plaud + ITSense pipeline |
| Multi-model routing | Each task goes to the best model — Opus for reasoning, Haiku for fast loops, OpenAI/Gemini/Cohere/Meta for specific strengths. | LangChain + custom router |
| Source of truth | Code, infrastructure, and decisions live in Git. Nothing important is unversioned. | GitHub + IaC |
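The routing layer above can be sketched as a simple task-class-to-model map. This is a minimal illustration, not the real LangChain-based router: the task classes, model names, and default are all assumptions made up for the example.

```python
# Hypothetical multi-model routing table. Task classes and model
# names are illustrative; the real router sits behind LangChain.
ROUTES = {
    "deep_reasoning": "claude-opus",   # architecture, ADR critique
    "fast_loop":      "claude-haiku",  # inline suggestions, quick checks
    "embedding":      "cohere-embed",  # retrieval over the context substrate
    "vision":         "gemini-pro",    # screenshots, diagrams
}

DEFAULT_MODEL = "claude-sonnet"  # assumed fallback for unclassified tasks

def route(task_class: str) -> str:
    """Return the model assigned to a task class, with a safe default."""
    return ROUTES.get(task_class, DEFAULT_MODEL)
```

The point of the sketch is the shape, not the entries: routing is declarative configuration, so swapping a vendor is a one-line change rather than a rewrite — which is what "vendor resilience is a feature" means in practice.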
03 Sense — Discovery with AI in the loop
Goal: understand the problem deeply before proposing a solution.
- Claude ingests every relevant artifact — existing codebases, documentation, prior ADRs, interview transcripts, data schemas, system maps — via MCP.
- AI agents produce a first problem brief. Humans question, correct, extend.
- We explicitly map which tasks AI will own, which humans will own, and which are joint. It's a deliverable, not a guess.
- Risks are enumerated by AI (faster, more exhaustive) and prioritized by humans (judgment is still ours).
3.1 Deliverables
Problem brief with stakeholder map · AI-human assignment map · prioritized risk register · measurable success criteria.
Typical duration: 1–2 weeks.
04 Shape — AI-paired architecture
Goal: decide how the system will be built and what we ship when.
Architects pair with Claude Opus to produce Architecture Decision Records (ADRs). One decision per record. Explicit trade-offs.
Specialist AI agents take on named roles — Senior Backend Engineer, Security Reviewer, Data Architect, FinOps Analyst, Compliance Officer — and critique the architecture from their angle. Humans resolve.
The roadmap is structured as shippable weekly increments, not multi-sprint epics. Infrastructure, observability, and compliance are designed up front — not bolted on later.
05 Forge — Build with AI pair-programming as default
Goal: write and ship code. Fast, reviewed, tested, observable.
Every engineer pairs with Claude Code on every task. It's the default — not an option. Claude Cowork runs parallel agents that write unit tests, documentation, migration scripts, and security checks concurrently with feature development.
Every pull request goes through an AI review before reaching human review. The AI review checks: style, security, correctness against specs, test coverage, performance risk, accessibility, and alignment with ADRs.
Humans approve every merge. Humans deploy every change to production. Weekly demos to the Product Owner — one shippable increment per week is the cadence.
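The ordering above — AI review first, human review second — can be expressed as a simple gate. A minimal sketch, assuming a per-category pass/fail result from the AI reviewer; the category names mirror the checklist, but the data shape is an assumption for illustration.

```python
# Categories the AI review checks before a PR can reach a human.
AI_REVIEW_CATEGORIES = [
    "style", "security", "spec_correctness", "test_coverage",
    "performance", "accessibility", "adr_alignment",
]

def eligible_for_human_review(ai_results: dict) -> bool:
    """A PR is queued for human review only if every AI check passed.

    `ai_results` maps category name -> "pass" | "fail"; a missing
    category counts as a failure, so new checks fail closed.
    """
    return all(ai_results.get(cat) == "pass" for cat in AI_REVIEW_CATEGORIES)
```

Failing closed on a missing category is the design choice that matters: adding a new check to the list tightens the gate everywhere without touching pipeline code.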
06 Prove — AI-assisted assurance and security
Goal: prove that the system works, is secure, and meets regulation — with evidence, not assertions.
AI generates test suites at unit, integration, and end-to-end levels based on specs and production traces. Adversarial testing: AI agents act as a red team, trying to break the system — injection, bypass, abuse, edge cases the team didn't imagine.
Compliance is enforced with automated gates: SOC 2, PCI-DSS, ISO 27001, HIPAA, BSA/AML, NYDFS Part 500, and local financial regulation where it applies. UAT is run by the Product Owner with AI-assisted scenario generation.
07 Operate — AI-native operations
Goal: run the system in production and improve it continuously.
Observability with LLM-summarized incidents. When an alert fires, Claude produces a 3-line summary, probable cause, and proposed remediation in under 60 seconds.
Incident response is partially automated — Claude can run non-destructive remediation via MCP (restart a service, clear a cache, rotate a credential) under a strict allow-list. Destructive or customer-facing actions require human approval.
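The allow-list logic reads as a three-way decision: allow-listed actions run autonomously, everything else is blocked unless a human has approved it. A minimal sketch — the action names are illustrative, and the real list is defined per engagement.

```python
# Non-destructive actions the AI may run without a human in the loop.
# Illustrative entries; the real allow-list is scoped per engagement.
NON_DESTRUCTIVE_ALLOWLIST = {"restart_service", "clear_cache", "rotate_credential"}

def execute_remediation(action: str, human_approved: bool = False) -> str:
    """Run allow-listed actions autonomously; gate everything else."""
    if action in NON_DESTRUCTIVE_ALLOWLIST:
        return f"executed:{action}"
    if human_approved:
        return f"executed-with-approval:{action}"
    return f"blocked:{action}"
```

Note the default: any action not explicitly allow-listed is blocked, which is the "strict allow-list" posture the protocol describes.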
Model and prompt observability is a first-class concern. We trace latency, cost, hallucination rates, and model/version drift per workflow.
08 Human Oversight Protocol
AI-first doesn't mean AI-out-of-control. A defined set of actions always requires explicit human approval. They're documented per engagement and enforced by tools — not by trust.
| Action class | Example | Approved by |
|---|---|---|
| Customer-visible change | Content, copy, UI flows, emails, notifications | Product Owner |
| Irreversible data action | Schema migration, purge, mass update | Data + Engineering Leads |
| Security-sensitive change | Auth, access control, secrets rotation, WAF rules | Security Lead |
| Financial / legal consequence | Pricing, billing, contracts, terms | Business owner |
| Production deploy · critical path | Main business flow, payments, KYC | Engineering Lead |
| New vendor / new model | Switching LLM provider, adding a dependency | CTO / Architect |
| Compliance-regulated change | AML rules, PHI flows, financial reporting | Compliance Officer |
Every approval is logged, attributable, and auditable. AI proposes, humans dispose. Always.
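"Enforced by tools, not by trust" implies the approval matrix lives as configuration a pipeline can check. A minimal sketch of that idea, assuming simplified role identifiers and an in-memory log; the real mapping follows the table above and the audit trail would be persisted, not a Python list.

```python
# Approval matrix as configuration: action class -> role that must sign.
# Role identifiers are simplified stand-ins for the table above.
APPROVERS = {
    "customer_visible":  "product_owner",
    "irreversible_data": "data_and_eng_leads",
    "security":          "security_lead",
    "financial_legal":   "business_owner",
    "critical_deploy":   "engineering_lead",
    "new_vendor_model":  "cto_architect",
    "compliance":        "compliance_officer",
}

audit_log = []  # stand-in for a persisted, append-only audit trail

def record_approval(action_class: str, approver: str) -> bool:
    """Accept an approval only from the role that owns the action class.

    Every attempt is logged, accepted or not, so the trail shows who
    tried to sign what — not just who succeeded.
    """
    approved = approver == APPROVERS.get(action_class)
    audit_log.append({"action": action_class, "approver": approver,
                      "approved": approved})
    return approved
```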
09 What we measure
Every ITSense Method engagement is instrumented against these metrics. They show up in the weekly status — not just at the end.
| Metric | Target | Why it matters |
|---|---|---|
| Weekly-increment hit rate | ≥ 90% | Validates that "shippable by default" is real |
| AI review coverage | 100% of PRs | Enforces pair-programming as the default |
| Human review latency | < 24h | Stops AI work from waiting on humans |
| Time to incident summary | < 60s | AI-assisted ops, delivered |
| Change failure rate | < 10% | DORA — shipping quality |
| Lead time for change | < 48h commit → prod | DORA — shipping speed |
| Compliance pipeline | 100% green pre-deploy | Non-negotiable gate |
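Surfacing these metrics in a weekly status amounts to evaluating each one against its target and flagging misses. A small sketch of that check — the threshold values mirror the table, but the metric keys and the shape of the input dict are assumptions for illustration.

```python
# SLO targets from the table, expressed as pass/fail predicates.
TARGETS = {
    "weekly_increment_hit_rate": lambda v: v >= 0.90,  # >= 90%
    "ai_review_coverage":        lambda v: v == 1.0,   # 100% of PRs
    "human_review_latency_h":    lambda v: v < 24,     # hours
    "incident_summary_s":        lambda v: v < 60,     # seconds
    "change_failure_rate":       lambda v: v < 0.10,   # DORA
    "lead_time_h":               lambda v: v < 48,     # commit -> prod
    "compliance_green":          lambda v: v is True,  # non-negotiable
}

def failing_metrics(metrics: dict) -> list:
    """Return the names of metrics that miss their target this week."""
    return [name for name, target in TARGETS.items()
            if not target(metrics[name])]
```

An empty return is a green week; anything else names exactly which commitment slipped, which is what makes the weekly status actionable rather than decorative.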
10 The shape of a typical engagement
┌─────────────────────────────────────────────┐
│ Context substrate (always active) │
│ Claude Cowork · MCP · Plaud · Multi-LLM │
└─────────────────────────────────────────────┘
│
┌──────────┬───────────┼───────────┬──────────┐
▼ ▼ ▼ ▼ ▼
┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐ ┌───────┐
│ SENSE │→ │ SHAPE │ → │ FORGE │ → │ PROVE │→ │OPERATE│
└───────┘ └───────┘ └───────┘ └───────┘ └───────┘
│
▼
┌─────────────────────────────────────────────┐
│ Human Oversight Protocol │
│ Every irreversible action · signed by │
│ the human who owns the outcome │
└─────────────────────────────────────────────┘
11 Mapping to the prior #ITSenseMeth
For clients and team members familiar with the 2019 method, here's the crosswalk:
| Old phase (2019) | New phase (2026) | Key change |
|---|---|---|
| Kick Off | Sense | Was a meeting. Now it's a 1–2 week engagement with AI-assisted context ingestion. |
| Planning | Shape | Was backlog refinement. Now it's AI-paired architecture + specialist-agent critique. |
| Development | Forge | Was Scrum. Now it's AI pair-programming as the default with continuous AI review. |
| Quality Assurance | Prove | Was QA + UAT. Now it's AI-generated tests, adversarial testing, compliance-as-code. |
| Production Delivery | Operate | Was a handover. Now it's a phase we start and never leave. |
The old method treated delivery as a terminal event. The new method treats production as the start of operations — which is where the system actually earns its keep.
12 Standing commitments
- We won't hide AI behind human work. Where AI did it, we say so.
- We won't claim AI autonomy we don't have. Human oversight is real and logged.
- We won't recommend a model because of a partnership. We pick the right tool for the task.
- We'll document every material decision when we make it — not in retrospect.
- We'll measure ourselves publicly against the SLOs on this page.