Story sizing when AI compresses timelines
The old estimation models assumed human-speed development. That assumption is now false.
Story points were invented to abstract away time and focus on relative effort. A 5-point story was roughly 5x the effort of a 1-point story. The team's velocity — how many points they completed per sprint — was the planning unit.
This model breaks when AI compresses effort non-uniformly. A story that was an 8 (complex feature, multiple integration points, edge cases) might now be a 2 with AI assistance — but a story that was a 3 (requires deep domain knowledge and human judgment) might still be a 3. AI doesn't compress all work equally. It compresses the mechanical parts and leaves the judgment parts unchanged.
What to do instead
Stop sizing by effort. Start sizing by uncertainty.
Three categories:
Clear — requirements understood, validation criteria defined, AI can generate most of the implementation. Measured in hours. Don't estimate, just do them.
Uncertain — requirements partially understood, some unknowns. Need a spike or prototype first. AI accelerates the spike.
Discovery — we don't yet know what to build. Requires customer research, stakeholder alignment, or technical exploration. AI helps with research but doesn't compress the human judgment needed.
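To make the classification concrete, here is a minimal Python sketch of the three buckets. Everything in it is hypothetical (the `Story` fields, the `triage` rules); it's one way to encode "size by what you don't know," not a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Uncertainty(Enum):
    CLEAR = "clear"          # requirements + validation criteria defined
    UNCERTAIN = "uncertain"  # partial requirements; needs a spike first
    DISCOVERY = "discovery"  # we don't yet know what to build

@dataclass
class Story:
    title: str
    problem_understood: bool      # do we know what to build?
    requirements_defined: bool    # do we know the requirements?
    validation_defined: bool      # do we know how to verify it?

def triage(story: Story) -> Uncertainty:
    """Classify a story by what we don't know, not how long it takes."""
    if not story.problem_understood:
        return Uncertainty.DISCOVERY
    if story.requirements_defined and story.validation_defined:
        return Uncertainty.CLEAR
    return Uncertainty.UNCERTAIN
```

The point of the sketch: the inputs are questions about knowledge, and effort never appears.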
The shift
Instead of estimating how long things take, invest in reducing uncertainty before you build. AI makes building cheap — but building the wrong thing is still expensive.
The implication: planning should focus on classifying work by uncertainty level, not estimating effort. "Clear" work doesn't need planning — it needs doing. "Uncertain" work needs a spike. "Discovery" work needs research.
Sprint planning becomes uncertainty triage. That's a fundamentally different activity than point estimation.
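A sketch of what that triage might look like as a planning step, assuming stories have already been classified into the three buckets (the backlog items and next-action wording here are illustrative, not a standard):

```python
# Hypothetical mapping from uncertainty bucket to the planning action,
# following the three categories described above.
NEXT_ACTION = {
    "clear": "do it; measured in hours, no estimate needed",
    "uncertain": "run an AI-accelerated spike first",
    "discovery": "invest in research or stakeholder alignment",
}

def triage_backlog(backlog: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Group (title, bucket) pairs; the output is actions, not points."""
    buckets: dict[str, list[str]] = {}
    for title, bucket in backlog:
        buckets.setdefault(bucket, []).append(f"{title}: {NEXT_ACTION[bucket]}")
    return buckets

backlog = [
    ("add CSV export", "clear"),
    ("migrate auth provider", "uncertain"),
    ("rethink onboarding flow", "discovery"),
]
plan = triage_backlog(backlog)
```

Note what's absent: no velocity, no point totals. The output of planning is a next action per story, which is the shift the section describes.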