Intent Refinement for Software Delivery¶
This page extends CDS Intent Refinement for commitments executed through software delivery.
In software delivery, intent must survive a reality where constraints surface late, dependencies are indirect, and “done” can be mistaken for value. This profile adds the minimum intent structure needed to make commitments executable and governable in delivery runtimes (e.g., 3SF).
Purpose (software delivery focus)¶
Software-delivery Intent Refinement exists to turn meaning into delivery-grade intent:
- Outcomes and success signals that are observable in software realities
- Boundaries and constraints that reflect architecture, data, security, and operations
- Explicit treatment of quality attributes (NFRs) as first-class intent
- Explicit dependency obligations (access, environments, approvals, platform teams)
- A feasibility and learning plan that reduces technical and organizational unknowns
- Clear decision rights for inevitable tradeoffs during implementation
Boundaries¶
Software-delivery Intent Refinement is complete when:
- Intent includes quality attributes and operational expectations (not only features).
- Constraints include security/privacy/data realities and their validation ownership.
- Dependencies are explicit, owned, and time-aware.
- At least one feasibility path exists for major unknowns (spike/prototype/validation).
- Evidence expectations include instrumentation/measurement approach where relevant.
- Decision rights exist for scope changes, tradeoffs, and acceptance evidence.
It is not complete when:
- Intent is primarily a backlog of features.
- Non-functional requirements are “later.”
- Feasibility is assumed rather than validated.
- Dependency work is invisible or treated as “someone else’s problem.”
Additions to the Intent Package (software profile)¶
Use the standard Intent Package fields, and add the following software-delivery fields.
Quality attributes (NFRs) as first-class intent¶
Define the quality bar in intent terms:
- reliability / availability expectations (if applicable)
- performance expectations (latency, throughput, batch windows)
- security and privacy requirements (incl. threat posture where relevant)
- observability expectations (logs/metrics/traces, alerting)
- maintainability expectations (ownership boundaries, upgrade posture)
- operability expectations (on-call, runbooks, incident response posture)
Keep this lightweight: a few “must be true” statements plus their owners.
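As an illustration only (field names are assumptions, not a CDS schema), the “must be true plus owner” shape can be as small as:

```python
# Illustrative sketch: quality attributes as a few "must be true"
# statements, each with an owner who validates it at acceptance.
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityAttribute:
    area: str           # e.g. "reliability", "security", "operability"
    must_be_true: str   # the statement the delivery must satisfy
    owner: str          # who confirms it holds

# Hypothetical example entries for one intent package.
quality_bar = [
    QualityAttribute("reliability", "p99 checkout latency stays under 800 ms", "platform-team"),
    QualityAttribute("security", "no PII leaves the EU data region", "security-officer"),
    QualityAttribute("operability", "on-call has a runbook before go-live", "service-owner"),
]

# Lightweight completeness check: every statement has a named owner.
assert all(q.owner for q in quality_bar)
```

The point of the structure is not the tooling: three owned statements are enough to surface a quality conflict during refinement instead of during release.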
Reversibility classification¶
Classify key parts of intent by reversibility:
- easy to reverse (low cost)
- costly to reverse
- effectively irreversible (locks architecture/data/contracts/user behavior)
For irreversible elements, require:
- explicit tradeoff ownership
- revisit triggers
- feasibility validation before commitment deepens
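A minimal sketch of how these requirements could be checked per intent element (all field names are hypothetical, chosen here for illustration):

```python
# Hypothetical reversibility check: irreversible elements must carry a
# tradeoff owner, a revisit trigger, and validated feasibility.
REVERSIBILITY = ("easy", "costly", "irreversible")

def check_element(element: dict) -> list[str]:
    """Return the governance gaps for one intent element."""
    gaps = []
    if element["reversibility"] not in REVERSIBILITY:
        gaps.append("unknown reversibility class")
    if element["reversibility"] == "irreversible":
        if not element.get("tradeoff_owner"):
            gaps.append("irreversible element has no tradeoff owner")
        if not element.get("revisit_trigger"):
            gaps.append("irreversible element has no revisit trigger")
        if not element.get("feasibility_validated"):
            gaps.append("feasibility not validated before deepening commitment")
    return gaps

# Example: an architecture choice that locks in a data contract.
schema_choice = {
    "name": "event schema v2",
    "reversibility": "irreversible",
    "tradeoff_owner": "lead-architect",
    "revisit_trigger": "schema change requests exceed two per month",
    "feasibility_validated": False,
}
# One gap remains: feasibility has not been validated yet.
```

A non-empty gap list is a signal to pause commitment deepening, not a build failure.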
Feasibility probes (delivery-grade learning plan)¶
When uncertainty is material, define probes such as:
- architecture spike / proof-of-concept
- data profiling / migration rehearsal
- integration test with external systems
- security review pre-check (policy fit, evidence requirements)
- performance baseline measurement
Each probe should specify:
- timebox
- decision it enables
- pass/fail signals
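The three required probe fields can be sketched as a record like the following (names and the example probe are assumptions, not part of the CDS profile):

```python
# Illustrative feasibility probe record: timebox, enabled decision,
# and explicit pass/fail signals.
from dataclasses import dataclass

@dataclass
class FeasibilityProbe:
    name: str
    timebox_days: int       # hard stop; the probe ends even without an answer
    decision_enabled: str   # what the result lets us decide
    pass_signals: list[str]
    fail_signals: list[str]

# Hypothetical example: a migration rehearsal probe.
migration_rehearsal = FeasibilityProbe(
    name="data migration rehearsal",
    timebox_days=5,
    decision_enabled="commit to big-bang cutover vs. phased migration",
    pass_signals=["full copy completes within the release window"],
    fail_signals=["row-count mismatches", "copy exceeds the window"],
)

def is_well_formed(p: FeasibilityProbe) -> bool:
    """A probe without a decision or signals is exploration, not a probe."""
    return (p.timebox_days > 0 and bool(p.decision_enabled)
            and bool(p.pass_signals) and bool(p.fail_signals))
```

Tying each probe to the decision it enables keeps spikes from drifting into open-ended research.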
Dependency obligations¶
Make dependencies part of intent, not a delivery surprise:
- required access (roles, lead times, approvals)
- environment readiness (test data, parity, release windows)
- platform team work requests (what, when, who owns follow-up)
- procurement/legal gates (if any)
- stakeholder availability constraints (SME time, acceptance cadence)
Include:
- owner
- expected lead time
- what “ready” means
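One way to make lead times actionable (a sketch under assumed field names, not a prescribed format) is to compute, for each dependency, the last date it can still be requested:

```python
# Hypothetical dependency records: owner, expected lead time, and an
# explicit "ready" definition, used to sequence requests before delivery.
from datetime import date, timedelta

dependencies = [
    {"what": "production-like test data", "owner": "data-platform",
     "lead_time_days": 15, "ready_means": "anonymized snapshot refreshed weekly"},
    {"what": "SSO integration approval", "owner": "security-team",
     "lead_time_days": 30, "ready_means": "signed-off client registration"},
]

def latest_request_date(target: date, dep: dict) -> date:
    """Last date the dependency can be requested and still land by `target`."""
    return target - timedelta(days=dep["lead_time_days"])

go_live = date(2025, 9, 1)
for dep in dependencies:
    dep["request_by"] = latest_request_date(go_live, dep)
```

Any `request_by` date already in the past marks a dependency that blocks the commitment today, not mid-flight.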
Evidence and instrumentation expectations¶
If value is expected to be measured, define:
- what signals will be captured
- where they come from (analytics, logs, business reports)
- who owns instrumentation
- when measurement begins
Avoid perfection. Aim for “measurable enough to decide.”
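“Measurable enough to decide” can be made concrete with a small evidence plan; the structure and names below are illustrative assumptions:

```python
# Hypothetical evidence plan: signals, their source, an owner, and when
# measurement begins (baseline before release, not after).
evidence_plan = {
    "claimed_outcome": "support ticket volume drops after self-service rollout",
    "signals": [
        {"name": "tickets_per_week", "source": "helpdesk report", "owner": "support-lead"},
        {"name": "self_service_completions", "source": "product analytics", "owner": "feature-team"},
    ],
    "measurement_starts": "two weeks before release (baseline)",
}

def measurable_enough(plan: dict) -> bool:
    """Minimum bar: at least one signal with both a source and an owner."""
    return any(s.get("owner") and s.get("source") for s in plan["signals"])
```

If `measurable_enough` is false while the intent claims an outcome improvement, that is exactly the “unmeasurable value” smell described below.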
Technical decision rights¶
Explicitly define who can approve:
- scope reduction to meet constraints
- acceptance of technical debt (and under what conditions)
- performance/security tradeoffs
- architecture changes that affect reversibility
- release/cutover decisions (if applicable)
This prevents engineering from being blocked by unclear authority when tradeoffs surface mid-delivery.
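A decision-rights map can be as simple as a lookup from likely tradeoff to approver; the keys and roles below are assumptions for illustration:

```python
# Illustrative decision-rights map: each likely tradeoff has a named approver.
decision_rights = {
    "scope_reduction": "product-owner",
    "technical_debt_acceptance": "lead-engineer",  # with a recorded repayment condition
    "performance_security_tradeoff": "architect-and-security-jointly",
    "reversibility_affecting_change": "architect",
    "release_cutover": "service-owner",
}

def unresolved(rights: dict, likely_tradeoffs: list[str]) -> list[str]:
    """Tradeoffs that are likely to occur but have no named approver."""
    return [t for t in likely_tradeoffs if not rights.get(t)]
```

An empty `unresolved` list for the tradeoffs the team expects is the condition the quality checks below ask for.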
Process (software-delivery emphasis)¶
Run the core Intent Refinement process, with these additional substages.
Step: Make quality attributes explicit¶
Ask:
- “What must be true operationally for this to be acceptable?”
- “What failure modes are unacceptable?”
- “What will support/on-call refuse to inherit?”
Step: Classify reversibility early¶
Ask:
- “Which choices lock us in?”
- “What becomes hard to undo once we start?”
- “What do we need to validate before we cross that line?”
Step: Convert unknowns into feasibility probes¶
Ask:
- “What do we need to learn first to avoid expensive rework?”
- “What would disconfirm our current intent?”
- “What is the smallest test that clarifies feasibility?”
Step: Pull dependencies into intent¶
Ask:
- “What work must other teams do for us to succeed?”
- “What lead times can block delivery?”
- “Who owns the dependency outcomes?”
Step: Define evidence capture¶
Ask:
- “How will we prove value beyond shipping?”
- “Do we need instrumentation changes, and who owns them?”
Step: Confirm decision rights¶
Ask:
- “When constraints collide with scope, who decides?”
- “Who can approve a meaningful tradeoff quickly?”
Software profile quality checks (Intent)¶
Must¶
- At least one quality attribute is explicitly stated (usually reliability/security/operability).
- Any major irreversible element is identified and owned.
- At least one feasibility probe exists for the biggest unknown.
- Key dependencies are listed with owners and lead times (even rough).
- Acceptance evidence is more than “shipped”; instrumentation expectations are stated where relevant.
- Technical decision rights exist for tradeoffs that are likely to occur.
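The Must checks above can be expressed as predicates over an intent package; this is a sketch only, and every field name here is an assumption rather than a CDS API:

```python
# Hypothetical gate: the "Must" checks as predicates over an intent-package dict.
MUST_CHECKS = {
    "quality_attribute_stated": lambda p: bool(p.get("quality_attributes")),
    "irreversible_elements_owned": lambda p: all(
        e.get("tradeoff_owner")
        for e in p.get("elements", [])
        if e.get("reversibility") == "irreversible"
    ),
    "probe_for_biggest_unknown": lambda p: bool(p.get("feasibility_probes")),
    "dependencies_owned": lambda p: all(d.get("owner") for d in p.get("dependencies", [])),
    "evidence_beyond_shipped": lambda p: p.get("acceptance_evidence") not in (None, "", "shipped"),
    "decision_rights_defined": lambda p: bool(p.get("decision_rights")),
}

def failed_must_checks(package: dict) -> list[str]:
    """Names of Must checks the package does not yet satisfy."""
    return [name for name, check in MUST_CHECKS.items() if not check(package)]
```

Used during refinement, a non-empty result names exactly which part of the software profile is still missing.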
Should¶
- A minimal operational posture is defined (who runs it, what “safe” means).
- Constraint validation ownership is explicit (security/compliance/architecture sign-off).
- Dependency obligations include “definition of ready” per dependency.
Smells¶
- “NFRs later.”
- “Security review at the end.”
- “We’ll figure out environments/access as we go.”
- “We can change architecture later” without reversibility awareness.
- “Analytics/measurement isn’t necessary” while claiming outcome improvement.
Common software-delivery failure patterns¶
- Feature-only intent: quality attributes missing, causing late conflict.
- Irreversible surprise: lock-in discovered after work begins.
- Probe avoidance: uncertainty treated as confidence; spikes seen as waste.
- Dependency denial: external teams become bottlenecks mid-flight.
- Unmeasurable value: outcomes claimed, but no evidence plan exists.
Transition to Commitment Formalization (software profile)¶
The extended Intent Package becomes input to software-delivery Commitment Formalization, where commitments must include:
- governance cadence and decision forums
- change protocol that matches delivery reality
- operational ownership and acceptance evidence
- dependency obligations as part of the commitment envelope