The ITSense Method

AI-native delivery for mission-critical software.

Seven principles. An always-on context substrate. Five phases with AI in every loop. A Human Oversight Protocol on every irreversible action. Replaces #ITSenseMeth v1 (2019).

Authors: ITSense — SoHo NY + Bogotá
Version: 2.0 · April 2026
Scope: All new engagements

00 Why we rewrote our method

Most software-delivery methodologies were designed when AI was a library you imported. Today AI is a colleague you collaborate with — one that reads, reasons, writes, tests, deploys, and monitors alongside humans.

Our previous method (2019) served us well for over 700 projects. It was faithful to agile doctrine: Kick-off, Planning, Development, QA, Delivery. It treated AI as a tool someone might use inside a phase.

That model is obsolete. In 2026, the teams shipping great software don't add AI to a phase. They put AI in the loop continuously — across discovery, architecture, build, assurance, and operations — with humans setting direction and owning the decisions that matter.

The ITSense Method is our rewrite of software delivery for that reality.

01 The seven principles

These are the commitments behind every engagement. They aren't aspirational — they're enforced by tools, not by slides.

1.1 · Context is the product.

Before we write a line of code, we build shared, lasting context — every meeting, every document, every existing system — and we keep it in sync for humans and AI alike.

1.2 · AI acts. Humans decide.

AI agents do the work. Humans own direction, trade-offs, and irreversible actions. The moment a decision is material — to users, to money, to security — a human signs it.

1.3 · Shippable by default.

We work in weekly increments of usable value, not quarterly milestones. If an increment can't be demoed to a real user, it doesn't count.

1.4 · Observability from day zero.

Telemetry, costs, model performance, and error budgets are instrumented before the first production deploy — not after an incident.

1.5 · Multi-model, not monoculture.

We pick the best model per task. Claude, OpenAI, Gemini, Cohere, Meta, open weights. Vendor resilience is a feature, not a concession.

1.6 · Security and compliance as code.

Regulation is enforced in pipelines, not in memos. SOC 2, PCI-DSS, HIPAA, BSA/AML, NYDFS Part 500, and local regulation ship as automated checks.

1.7 · Every decision is documented.

Architecture Decision Records are co-produced by humans and AI, versioned in Git, and treated as first-class artifacts.

02 The substrate — always-on context

Every engagement runs on a persistent substrate that exists before phase 1 and outlives any later phase. It's the nervous system that lets AI participate instead of just assist.

| Layer | What it does | Implemented with |
| --- | --- | --- |
| Persistent context | Shared memory across all AI agents and humans on the engagement. Every artifact, transcript, decision, and PR enters context. | Claude Cowork |
| Tool access | AI agents read and act on real systems — GitHub, Jira, cloud consoles, data warehouses — without a human middleman. | Model Context Protocol (MCP) |
| Meeting intelligence | Every discovery, review, and retro is captured, transcribed, and turned into structured intelligence cards. | Plaud + ITSense pipeline |
| Multi-model routing | Each task goes to the best model — Opus for reasoning, Haiku for fast loops, OpenAI/Gemini/Cohere/Meta for specific strengths. | LangChain + custom router |
| Source of truth | Code, infrastructure, and decisions live in Git. Nothing important is unversioned. | GitHub + IaC |
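
The routing layer above can be sketched as a task-to-model lookup with a safe fallback. The task categories and model names below are illustrative assumptions, not the production ITSense router (which is built on LangChain):

```python
# Minimal sketch of per-task model routing. Task types and model names
# are illustrative assumptions, not the production configuration.
ROUTES = {
    "architecture_review": "claude-opus",   # deep reasoning
    "inner_dev_loop":      "claude-haiku",  # fast, cheap iterations
    "code_generation":     "claude-sonnet",
}
DEFAULT_MODEL = "claude-sonnet"

def route(task_type: str) -> str:
    """Return the model assigned to a task type, falling back to a default."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(route("inner_dev_loop"))  # a fast, cheap model for tight loops
print(route("unknown_task"))    # unknown tasks fall back to the default
```

The point of the explicit default is vendor resilience: swapping a provider means editing one table, not every call site.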

03 Sense — Discovery with AI in the loop

Goal: understand the problem deeply before proposing a solution.

  • Claude ingests every relevant artifact — existing codebases, documentation, prior ADRs, interview transcripts, data schemas, system maps — via MCP.
  • AI agents produce a first problem brief. Humans question, correct, extend.
  • We explicitly map which tasks AI will own, which humans will own, and which are joint. It's a deliverable, not a guess.
  • Risks are enumerated by AI (faster, more exhaustive) and prioritized by humans (judgment is still ours).

3.1 Deliverables

Problem brief with stakeholder map · AI-human assignment map · prioritized risk register · measurable success criteria.

Typical duration: 1–2 weeks (the full scope of a standalone two-week Discovery engagement).

04 Shape — AI-paired architecture

Goal: decide how the system will be built and what we ship when.

Architects pair with Claude Opus to produce Architecture Decision Records (ADRs). One decision per record. Explicit trade-offs.

Specialist AI agents take on named roles — Senior Backend Engineer, Security Reviewer, Data Architect, FinOps Analyst, Compliance Officer — and critique the architecture from their angle. Humans resolve.

The roadmap is structured as shippable weekly increments, not multi-sprint epics. Infrastructure, observability, and compliance are designed up front — not bolted on later.

05 Forge — Build with AI pair-programming as default

Goal: write and ship code. Fast, reviewed, tested, observable.

Every engineer pairs with Claude Code on every task. It's the default — not an option. Claude Cowork runs parallel agents that write unit tests, documentation, migration scripts, and security checks concurrently with feature development.

Every pull request goes through an AI review before reaching human review. The AI review checks: style, security, correctness against specs, test coverage, performance risk, accessibility, and alignment with ADRs.
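
As a sketch, the pre-human AI review can be modeled as a gate: a PR becomes eligible for human review only when every check on the list passes. The check names mirror the list above; the pass/fail logic itself is an assumption about how such a gate could be wired:

```python
# Hypothetical PR gate: every AI check must pass before human review.
# Check names follow the method's review list; the gate logic is a sketch.
AI_CHECKS = [
    "style", "security", "spec_correctness", "test_coverage",
    "performance_risk", "accessibility", "adr_alignment",
]

def eligible_for_human_review(results: dict) -> bool:
    """A PR reaches human review only if every AI check passed."""
    return all(results.get(check, False) for check in AI_CHECKS)

results = {check: True for check in AI_CHECKS}
print(eligible_for_human_review(results))   # all checks green

results["test_coverage"] = False            # one failed check blocks the PR
print(eligible_for_human_review(results))
```

Missing checks count as failures, so a misconfigured pipeline blocks rather than silently passes.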

Humans approve every merge. Humans deploy every change to production. Weekly demos to the Product Owner — one shippable increment per week is the cadence.

06 Prove — AI-assisted assurance and security

Goal: prove that the system works, is secure, and meets regulation — with evidence, not assertions.

AI generates test suites at unit, integration, and end-to-end levels based on specs and production traces. Adversarial testing: AI agents act as a red team, trying to break the system — injection, bypass, abuse, edge cases the team didn't imagine.

Compliance is enforced with automated gates: SOC 2, PCI-DSS, ISO 27001, HIPAA, BSA/AML, NYDFS Part 500, and local financial regulation where it applies. UAT is run by the Product Owner with AI-assisted scenario generation.

07 Operate — AI-native operations

Goal: run the system in production and improve it continuously.

Observability with LLM-summarized incidents. When an alert fires, Claude produces a 3-line summary, probable cause, and proposed remediation in under 60 seconds.

Incident response is partially automated — Claude can run non-destructive remediation via MCP (restart a service, clear a cache, rotate a credential) under a strict allow-list. Destructive or customer-facing actions require human approval.
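
The allow-list behavior reduces to: run a remediation automatically only if its action is on the non-destructive list; queue everything else for a human. A minimal sketch, with action names as assumptions:

```python
# Sketch of the strict remediation allow-list (action names are illustrative).
ALLOWED = {"restart_service", "clear_cache", "rotate_credential"}

def dispatch(action: str) -> str:
    """Auto-run allow-listed actions; escalate everything else to a human."""
    if action in ALLOWED:
        return "auto-executed"
    return "queued for human approval"

print(dispatch("clear_cache"))  # on the allow-list: runs automatically
print(dispatch("drop_table"))   # not listed: waits for a human
```

The default path is escalation, so any action the list doesn't name is treated as destructive until a human says otherwise.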

Model and prompt observability is a first-class concern. We trace latency, cost, hallucination rates, and model/version drift per workflow.

08 Human Oversight Protocol

AI-first doesn't mean AI-out-of-control. A defined set of actions always requires explicit human approval. They're documented per engagement and enforced by tools — not by trust.

| Action class | Example | Approved by |
| --- | --- | --- |
| Customer-visible change | Content, copy, UI flows, emails, notifications | Product Owner |
| Irreversible data action | Schema migration, purge, mass update | Data + Engineering Leads |
| Security-sensitive change | Auth, access control, secrets rotation, WAF rules | Security Lead |
| Financial / legal consequence | Pricing, billing, contracts, terms | Business owner |
| Production deploy · critical path | Main business flow, payments, KYC | Engineering Lead |
| New vendor / new model | Switching LLM provider, adding a dependency | CTO / Architect |
| Compliance-regulated change | AML rules, PHI flows, financial reporting | Compliance Officer |
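
"Enforced by tools" can be sketched as a lookup that maps each action class to its required approver and rejects anything unclassified. The class keys and approver roles follow the table; the lookup itself is an illustrative sketch:

```python
# Sketch: route each action class to its required approver (roles per the
# oversight table; the class keys and rejection logic are assumptions).
APPROVERS = {
    "customer_visible":     "Product Owner",
    "irreversible_data":    "Data + Engineering Leads",
    "security_sensitive":   "Security Lead",
    "financial_legal":      "Business owner",
    "prod_deploy_critical": "Engineering Lead",
    "new_vendor_or_model":  "CTO / Architect",
    "compliance_regulated": "Compliance Officer",
}

def required_approver(action_class: str) -> str:
    """Unclassified actions are rejected rather than silently allowed."""
    if action_class not in APPROVERS:
        raise ValueError(f"unclassified action: {action_class!r}; block it")
    return APPROVERS[action_class]

print(required_approver("security_sensitive"))  # Security Lead
```

Raising on unknown classes is the tooling version of "documented per engagement": an action with no owner cannot proceed.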

Every approval is logged, attributable, and auditable. AI proposes, humans dispose. Always.

09 What we measure

Every ITSense Method engagement is instrumented against these metrics. They show up in the weekly status — not just at the end.

| Metric | Target | Why it matters |
| --- | --- | --- |
| Weekly-increment hit rate | ≥ 90% | Validates that "shippable by default" is real |
| AI review coverage | 100% of PRs | Enforces pair-programming as the default |
| Human review latency | < 24h | Stops AI work from waiting on humans |
| Time to incident summary | < 60s | AI-assisted ops, delivered |
| Change failure rate | < 10% | DORA — shipping quality |
| Lead time for change | < 48h commit → prod | DORA — shipping speed |
| Compliance pipeline | 100% green pre-deploy | Non-negotiable gate |
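
Two of these metrics reduce to simple ratios over the delivery log; a sketch of how they could be computed, with input names as assumptions:

```python
# Sketch of two scoreboard metrics (input names are illustrative).
def hit_rate(weeks_with_increment: int, total_weeks: int) -> float:
    """Weekly-increment hit rate: share of weeks that shipped a demoable increment."""
    return weeks_with_increment / total_weeks

def change_failure_rate(failed_deploys: int, total_deploys: int) -> float:
    """DORA change failure rate: share of production deploys that caused a failure."""
    return failed_deploys / total_deploys

print(hit_rate(11, 12) >= 0.90)           # meets the >= 90% target
print(change_failure_rate(2, 25) < 0.10)  # meets the < 10% target
```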

10 The shape of a typical engagement

                      ┌─────────────────────────────────────────────┐
                      │      Context substrate (always active)      │
                      │   Claude Cowork · MCP · Plaud · Multi-LLM   │
                      └─────────────────────────────────────────────┘
                                     │
              ┌──────────┬───────────┼───────────┬──────────┐
              ▼          ▼           ▼           ▼          ▼
          ┌───────┐  ┌───────┐   ┌───────┐   ┌───────┐  ┌───────┐
          │ SENSE │→ │ SHAPE │ → │ FORGE │ → │ PROVE │→ │OPERATE│
          └───────┘  └───────┘   └───────┘   └───────┘  └───────┘
                                     │
                                     ▼
                      ┌─────────────────────────────────────────────┐
                      │   Human Oversight Protocol                  │
                      │   Every irreversible action · signed by     │
                      │   the human who owns the outcome            │
                      └─────────────────────────────────────────────┘

11 Mapping to the prior #ITSenseMeth

For clients and team members familiar with the 2019 method, here's the crosswalk:

| Old phase (2019) | New phase (2026) | Key change |
| --- | --- | --- |
| Kick Off | Sense | Was a meeting. Now it's a 1–2 week engagement with AI-assisted context ingestion. |
| Planning | Shape | Was backlog refinement. Now it's AI-paired architecture + specialist-agent critique. |
| Development | Forge | Was Scrum. Now it's AI pair-programming as the default with continuous AI review. |
| Quality Assurance | Prove | Was QA + UAT. Now it's AI-generated tests, adversarial testing, compliance-as-code. |
| Production Delivery | Operate | Was a handover. Now it's a phase we start and never leave. |

The old method treated delivery as a terminal event. The new method treats production as the start of operations — which is where the system actually earns its keep.

12 Standing commitments

We won't hide AI behind human work. Where AI did it, we say so.

We won't claim AI autonomy we don't have. Human oversight is real and logged.

We won't recommend a model because of a partnership. We pick the right tool for the task.

We'll document every material decision when we make it — not in retrospect.

We'll measure ourselves publicly against the SLOs on this page.

— ITSense · SoHo NY + Bogotá · 2026

Next step

See the method in production.

Read a case — South America's leading airport, ANALYZER, the challenger bank — to see what the ITSense Method looks like in the field.