Engineering

A Founder Wanted Kubernetes for 200 Users. Here's the Conversation We Had.

We talk to a founder at least once a month who wants microservices for their MVP. The pitch is always the same: we need to build for scale. Here's why that thinking costs you 6 months and $80K before you've validated anything.

Lanos Technologies · 6 min read

A few months ago, a founder walked into a call with a Miro board full of boxes and arrows. Eight services. A message queue connecting everything. Kubernetes orchestrating the whole thing. He had spent two weeks designing this.

His product had 200 active users.

We spent the next hour having the same conversation we have at least once a month. And by the end of it, he agreed to throw out the Miro board and start over with something that would actually ship.

Monolith vs microservices: when does each make sense?

It usually starts with some version of "We need to build for scale." The founder has read the Netflix engineering blog. They've watched the Uber architecture talks. They've been told by their previous agency that microservices are "the modern way to build software."

None of that is wrong, exactly. It's just not relevant to a product with 200 users, three engineers, and six months of runway.

Microservices solve one specific problem: organizational scaling. When you have 50 or 100 engineers working on the same codebase, you need independent teams deploying independently. That's the entire value proposition. That's it.

If you have three engineers, microservices are not a benefit. They're a tax.

What microservices actually cost at the MVP stage

This specific founder's architecture would have required:

Infrastructure he didn't need yet. Service discovery, load balancing, container orchestration, health checks, and monitoring for each of the eight services. That's operational complexity for things that have nothing to do with his product or his users.

Distributed debugging. When a bug spans three services, you're not just reading a stack trace anymore. You're correlating logs across services, tracing requests through a message queue, and trying to reproduce timing-dependent failures. A bug that takes 20 minutes to fix in a monolith takes half a day across services.

Data consistency headaches. The moment your data lives in multiple databases, you've signed up for eventual consistency, distributed transactions, and sagas. These are genuinely hard computer science problems. They have nothing to do with booking appointments, which is what his product actually did.

Deployment complexity. Eight services means eight CI/CD pipelines, eight sets of health checks, eight rollback strategies. His three-person team would spend more time managing deploys than writing features.

We've seen startups burn 60% of their engineering time managing infrastructure instead of building product. At the MVP stage, that is a death sentence.

We told him to build a modular monolith. One deployable unit, but with clean internal structure.

The code is organized into domain modules, each owning its own logic. Auth, billing, scheduling, notifications. Each module communicates through well-defined interfaces, never through direct database queries across module boundaries. It looks something like this:

src/
├── modules/
│   ├── auth/
│   │   ├── auth.service.ts
│   │   ├── auth.controller.ts
│   │   └── auth.repository.ts
│   ├── billing/
│   │   ├── billing.service.ts
│   │   └── billing.repository.ts
│   └── scheduling/
│       ├── scheduling.service.ts
│       └── scheduling.repository.ts
├── shared/
│   ├── database/
│   └── middleware/
└── app.ts

One database. Schemas for logical separation if needed, but no separate database instances. Feature flags for experimentation, not new services.
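That last point can be as lightweight as a typed flag map checked at the call site. A minimal sketch of the idea (the flag names here are invented for illustration; in production the map might be backed by a config service or environment variables):

```typescript
// A minimal typed feature-flag map. Here it's a constant; in production
// it might be loaded from config or the environment.
const flags = {
  newPricingPage: false,
  smsReminders: true,
} as const;

type FlagName = keyof typeof flags;

function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}

// Gate an experimental code path without deploying a new service.
function reminderChannel(): "sms" | "email" {
  return isEnabled("smsReminders") ? "sms" : "email";
}
```

The type on `FlagName` means a typo'd flag name fails at compile time instead of silently returning `undefined` at runtime.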

The key principle: design internal boundaries as if you might extract them later, but don't actually extract anything until real, measurable pain tells you to.
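To make that principle concrete, here's a minimal TypeScript sketch. The module and method names are illustrative, not from the founder's codebase: scheduling depends on billing's public interface, never on billing's tables.

```typescript
// billing/billing.service.ts -- the module's public interface.
// Other modules import this interface, never billing's repository.
export interface BillingService {
  hasActiveSubscription(userId: string): Promise<boolean>;
}

// scheduling/scheduling.service.ts -- depends on the interface via
// constructor injection, so billing can later be swapped for a remote
// client without touching scheduling's logic.
export class SchedulingService {
  constructor(private readonly billing: BillingService) {}

  async bookAppointment(userId: string, slot: Date): Promise<string> {
    if (!(await this.billing.hasActiveSubscription(userId))) {
      throw new Error("Subscription required to book");
    }
    // ...persist the appointment via scheduling's own repository...
    return `booked:${userId}:${slot.toISOString()}`;
  }
}
```

Nothing here requires a network hop. The boundary exists only in the type system and the dependency graph, which is exactly the point.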

What happened

He shipped in 11 weeks. One codebase, one deployment, one database. The product worked. Users started paying. He hired a fourth engineer who was productive on day two because there was one repo to understand, not eight.

Six months later, his scheduling module was getting hammered during peak hours while the rest of the system was basically idle. That was a real, observable scaling problem. So we extracted the scheduling service into its own process with its own database. The extraction took about three weeks, and it was done surgically, based on actual data, not on architecture diagrams drawn before anyone had used the product.

That's one service extracted from a monolith based on evidence. Not eight services designed on a whiteboard based on vibes.

When microservices actually make sense

We're not anti-microservices. We use them in production. But they make sense in specific situations:

You have 20 or more engineers and the coordination cost of working in a single codebase is slowing everyone down.

You have genuinely different scaling profiles. Your real-time messaging system needs 100x the compute of your user profile service.

You need different technology stacks for different parts of the system. One component genuinely needs a graph database while another needs a time-series database.

You've outgrown your monolith and you're extracting services based on observed problems, not predicted ones.

Notice the pattern. These are all symptoms of success. You solve them when you have them.

The extraction path

The architecture strategy we recommend to every MVP-stage founder is the same:

Start monolithic. Ship fast, iterate fast, debug fast.

Enforce module boundaries from day one. Each module communicates through its public interface, never through direct database access to another module's tables.

Monitor for bottlenecks. When a specific module becomes a scaling problem, you'll know because you'll see it in your metrics.

Extract surgically. Pull out one service at a time, based on actual data.
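Because the modules already talk through interfaces, that last step is mostly mechanical. A sketch of what it can look like (the HTTP endpoint and class names are hypothetical): the in-process implementation is swapped for a remote client behind the same interface, and callers never change.

```typescript
// The interface every caller already depends on.
export interface SchedulingClient {
  freeSlots(providerId: string): Promise<string[]>;
}

// Before extraction: the in-process module implements it directly.
export class InProcessScheduling implements SchedulingClient {
  async freeSlots(providerId: string): Promise<string[]> {
    // ...in the real module, this queries the monolith's own database...
    return [`${providerId}:09:00`, `${providerId}:09:30`];
  }
}

// After extraction: same interface, now backed by the new service.
// Only the wiring at startup changes.
export class RemoteScheduling implements SchedulingClient {
  constructor(private readonly baseUrl: string) {}

  async freeSlots(providerId: string): Promise<string[]> {
    const res = await fetch(`${this.baseUrl}/providers/${providerId}/slots`);
    if (!res.ok) throw new Error(`scheduling service: HTTP ${res.status}`);
    return res.json();
  }
}
```

If the boundary was never enforced, this swap means untangling direct table queries first, which is where multi-month extraction projects come from.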

Our rule of thumb is simple: if you can't point to a specific, measurable scaling problem that a monolith can't solve, you don't need microservices. Full stop.

When you do need to handle asynchronous workloads within your monolith, something like Redis Streams for event-driven processing can give you the decoupling benefits without the operational overhead of a full microservices deployment.
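The underlying pattern is an append-only stream with independent consumers. Here's an in-memory TypeScript sketch of that idea; in a real deployment, `publish` and `subscribe` would map onto Redis Streams commands (XADD to append, XREADGROUP to consume) through a client library rather than this toy class.

```typescript
// A tiny in-process event stream: producers append, consumers are
// notified asynchronously. This is a stand-in for a Redis Stream,
// meant only to show the decoupling.
type Handler<T> = (event: T) => Promise<void>;

class EventStream<T> {
  private log: T[] = [];
  private handlers: Handler<T>[] = [];

  publish(event: T): void {
    this.log.push(event); // append-only, like XADD
    // Deliver asynchronously so the producer never blocks on consumers.
    for (const h of this.handlers) queueMicrotask(() => void h(event));
  }

  subscribe(handler: Handler<T>): void {
    this.handlers.push(handler);
  }
}

// Usage: scheduling emits events; notifications consumes them without
// scheduling ever knowing the notifications module exists.
interface BookingEvent { userId: string; slot: string }

const bookings = new EventStream<BookingEvent>();
const sent: string[] = [];

bookings.subscribe(async (e) => { sent.push(`email:${e.userId}`); });
bookings.publish({ userId: "u1", slot: "09:00" });
```

The producer and consumer live in the same process and the same deploy, but neither imports the other. That's the decoupling benefit without the operational bill.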

The bottom line

Your MVP has one job: validate your business hypothesis as fast as possible. Every engineering decision should serve that goal.

Microservices don't make your product better. They don't make your users happier. They don't make your revenue grow faster. At the MVP stage, they actively slow you down.

Build a clean monolith. Ship fast. Extract services when the data tells you to, not when a conference talk tells you to.


We've shipped over a dozen products. Every single one started as a monolith. The ones that needed to scale? We extracted services when the time was right. Talk to us about your architecture.


Topics: Microservices, Monolith, SaaS, MVP, Architecture
