Your Product Scope Is a Feature List. That's Why You're Over Budget.

A founder came to us with 47 features for version one. We shipped 11. The product was better for it, and so was their budget. Here's the scoping framework we use on every project.

Lanos Technologies · 8 min read

A founder came to us with a spreadsheet. Forty-seven features, organized by category, color-coded by priority. She'd spent three weeks on it. It was thorough, well-structured, and completely wrong.

Not wrong because the features were bad. Most of them were reasonable ideas for a mature product. Wrong because she was trying to build all of them for the first version. The spreadsheet told you everything about what the product could do and nothing about what the product must do to prove the business works.

This is the most common mistake we see in product development: treating the first version as a miniature version of the final product instead of what it actually is: the simplest possible thing that tests whether the core idea works.

Start with the loop, not the features

Every product has a core user loop. One thing the user does, repeatedly, that creates the value your product promises.

For a scheduling tool, the loop is: browse availability, book a slot, show up for the meeting. For an e-commerce marketplace, the loop is: search for a product, compare options, purchase, receive it. For a project management tool, the loop is: create a task, assign it, update status, close it.

Your first version needs to support this loop and nothing else. Not analytics about the loop. Not admin tools to manage the loop. Not a recommendation engine to optimize the loop. Just the loop itself, end to end, working reliably.

When that founder described her product to us, we asked one question: "Walk me through exactly what a user does from the moment they open this app to the moment they've gotten value from it."

It took her about 90 seconds to describe the core loop. Everything she said in those 90 seconds became the scope for version one. Everything else went on a later list.

The four-bucket framework

We sort every feature into one of four buckets. This isn't original. It's a variation of MoSCoW prioritization that we've adapted for early-stage products.

Must-have, validated

Features backed by real evidence that users need them. Someone told you, in words or behavior, that they need this specific capability. They tried to do it and couldn't. An existing competitor has it and users cite it as the reason they use that competitor. Validated doesn't mean "seems logical." It means real humans demonstrated a need.

For that founder, the validated must-haves were: user authentication, the core scheduling workflow, and email confirmations. She knew these were essential because she'd been running the process manually for a year and those were the three things she actually did for every booking.

Must-have, assumed

Features you believe are necessary but haven't validated with users. These feel obvious, but you'd be surprised how often they turn out to be unnecessary.

Her assumed must-haves included a payment system, multi-timezone support, and a mobile-responsive design. We kept payments (can't run a booking business without getting paid). We deferred multi-timezone (all her current customers were in one timezone) and kept mobile-responsive (because that's just good engineering, not a feature decision).

Should-have

Features that would noticeably improve the experience but whose absence wouldn't prevent users from getting value. Recurring bookings. Calendar integration. Custom branding. Analytics dashboard.

These are the features that make a product feel polished. They're real and they matter. But they're version 1.1, not version 1.0. Building them before you know whether the core loop works is spending money to polish something that might need to pivot.

Won't-have (yet)

Features that are explicitly out of scope. Not "deprioritized" or "later." Out. This is the most important bucket because it's the one that protects your timeline and budget.

Her won't-haves included: a marketplace where multiple service providers could list (she was the only provider), team management (she worked alone), a mobile app (the web app was sufficient), and AI-powered scheduling suggestions.

Writing things down in the won't-have bucket is psychologically difficult. It feels like giving up on ideas. But it's actually the opposite. It's choosing to focus your limited resources on the things that determine whether the business works. You're not abandoning those features. You're sequencing them so they get built after you have evidence that the core product has traction.
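The four-bucket sort is easy to represent concretely. Here's a minimal sketch in Python using the founder's features from this article; the `Bucket` and `Feature` types are illustrative, not a real library:

```python
from dataclasses import dataclass
from enum import Enum

class Bucket(Enum):
    MUST_VALIDATED = "must-have, validated"
    MUST_ASSUMED = "must-have, assumed"
    SHOULD = "should-have"
    WONT = "won't-have (yet)"

@dataclass
class Feature:
    name: str
    bucket: Bucket

features = [
    Feature("user authentication", Bucket.MUST_VALIDATED),
    Feature("core scheduling workflow", Bucket.MUST_VALIDATED),
    Feature("email confirmations", Bucket.MUST_VALIDATED),
    Feature("payments", Bucket.MUST_ASSUMED),
    Feature("recurring bookings", Bucket.SHOULD),
    Feature("calendar integration", Bucket.SHOULD),
    Feature("multi-provider marketplace", Bucket.WONT),
]

# Version-one scope is only the two must-have buckets.
# Everything else is explicitly sequenced for later.
v1_scope = [f.name for f in features
            if f.bucket in (Bucket.MUST_VALIDATED, Bucket.MUST_ASSUMED)]
```

The point of making the buckets explicit, even in a spreadsheet rather than code, is that a feature can't sit ambiguously between "later" and "now": it's in the version-one list or it isn't.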

The cost of complexity audit

Each feature has a visible cost (engineering time to build it) and hidden costs that only show up later:

Testing surface area. Every feature needs to be tested. Not just "does it work when you click the button." Does it work on different browsers? Does it handle edge cases? What happens when two users trigger it at the same time? A feature that takes one week to build often takes an additional two to three days to test properly.

Interaction effects. Features interact with each other. A "recurring bookings" feature doesn't exist in isolation. It interacts with payments (do you charge per booking or per series?), with cancellations (does canceling one cancel all of them?), with notifications (how many emails does a recurring booking generate?). Each interaction becomes engineering work that wasn't in the original estimate. This is why MVP costs escalate so quickly when scope expands.

Maintenance burden. Features don't stop costing money when they ship. They need bug fixes, updates when dependencies change, and customer support when users get confused. A feature you build for version one is a feature you maintain forever (or until you deliberately remove it, which almost never happens).

Cognitive load. Every feature makes the product harder to understand for new users. More options, more screens, more things to click. Products don't fail because they lack features. They fail because they have so many features that users can't figure out the one thing they came to do.

Before we finalize scope, we do a cost-of-complexity audit. For each feature, we ask: what's the fully loaded cost of this feature, including testing, interactions, maintenance, and cognitive load? Features that seemed worth building in isolation frequently get cut when you account for their total cost.
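As a back-of-the-envelope model, the audit above can be sketched as a simple cost function. The multipliers here are assumptions for illustration, not numbers from our process, except testing: "one week to build, two to three days to test" works out to roughly a 1.4–1.6x factor on build time:

```python
def fully_loaded_cost(build_days: float,
                      testing_factor: float = 1.5,    # build + testing overhead
                      interactions: int = 0,           # other features it touches
                      interaction_days: float = 2.0,   # extra work per interaction
                      yearly_maintenance: float = 0.15 # ongoing upkeep share
                      ) -> float:
    """Estimate a feature's first-year cost in engineering days."""
    build_and_test = build_days * testing_factor
    interaction_cost = interactions * interaction_days
    return (build_and_test + interaction_cost) * (1 + yearly_maintenance)

# "Recurring bookings": 5 build days in isolation, but it touches
# payments, cancellations, and notifications.
isolated = fully_loaded_cost(5)                  # no interactions
realistic = fully_loaded_cost(5, interactions=3)  # roughly double
```

Even with made-up multipliers, the shape of the result is the useful part: a feature's realistic cost is often close to double its in-isolation estimate once interactions and upkeep are counted, which is exactly when "worth building" flips to "cut it."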

The scoping conversation that changes everything

When we scope a product with a founder, the conversation usually goes like this:

Us: "If this product could only do one thing, what would it be?"

Founder: "Well, it needs to do at least three things. You can't X without also Y."

Us: "Okay. What are those three things?"

Founder: Describes three capabilities that form the core loop.

Us: "Great. Now. Everything else on your list. If we shipped just those three things, would a user get value from the product?"

Founder: (long pause) "...yes, but it would feel really basic."

Us: "Basic is fine. Basic that works is better than comprehensive that ships in six months."

The pause is always the turning point. That's the moment the founder realizes they've been designing version five and calling it version one.

What happened with those 47 features

We shipped 11. The core scheduling workflow. Payment processing. Email confirmations. User profiles. A basic admin view so the founder could see her bookings. And a few supporting features that were genuinely necessary for the loop to work (availability settings, booking confirmation pages, a cancellation flow).

The build took 8 weeks. The 47-feature version would have taken, by our estimate, 5 to 6 months. At $15,000 per month in development costs, that's the difference between roughly $30,000 and $75,000 to $90,000.

More importantly, she launched and started getting real users. Within the first month, she learned that the feature users asked for most was Google Calendar sync, which was on the should-have list. She also learned that multi-timezone support, which she'd assumed was essential, was completely unnecessary because 94% of her bookings were in a single timezone.

Version 1.1 shipped four weeks after launch with Google Calendar sync and a couple of UX improvements based on real user feedback. She never built multi-timezone support.

That's the value of aggressive scoping: it gets you to real information faster. And real information is what turns good guesses into good products. Once you've launched, knowing what to do in the first 90 days is just as important as the scoping that got you there.

The framework, summarized

  1. Define the core user loop in one or two sentences.
  2. Sort every feature into four buckets: must-have validated, must-have assumed, should-have, won't-have.
  3. Run the cost-of-complexity audit on everything in the first two buckets.
  4. Cut aggressively. If a feature isn't required for the core loop to work, it goes to should-have or won't-have.
  5. Build the smallest version that tests the core hypothesis.
  6. Ship. Learn. Then decide what to build next based on what real users actually need, not what you assumed they would.

The hardest part isn't knowing what to build. It's knowing what not to build. And that discipline is the difference between a product that ships in 8 weeks and learns from real users, and a product that ships in 6 months and learns that half its features were unnecessary.


We do product scoping workshops as part of our technical consulting work. If you have a feature list and you're not sure what version one should look like, let's figure it out together.


Topics: Product Scoping, MVP, Startup, Prioritization, Product Management
