
The First 90 Days After Launch Are the Hardest. Here's the Playbook.

Launching is the easy part. The next 90 days determine whether your product survives. Here's the week-by-week playbook we give every client on launch day.

Lanos Technologies · 8 min read

The day you launch is a celebration. Everything after that is work.

Most founders spend months (sometimes years) building toward launch day. Then it arrives, and they realize they have no plan for what comes next. The product is live. Users start showing up. And suddenly you're dealing with a dozen things you've never dealt with before, all at the same time.

Bugs you didn't catch in testing. Feature requests that contradict each other. Users who can't figure out the onboarding. An analytics dashboard that says you have traffic but doesn't tell you whether the product is working.

We've launched enough products to know that the first 90 days follow a predictable pattern. Here's the playbook we share with every client on launch day.

Weeks 1 to 2: Fix what's broken and don't panic

The first two weeks are about stabilization. Things will break. That's normal. The goal is not perfection. The goal is making sure the critical path works reliably.

Focus on blocking bugs only. A blocking bug is anything that prevents a user from completing the core action. If users can't sign up, that's blocking. If users can't complete a purchase, that's blocking. If the font size on the settings page is slightly off, that's not blocking. Ignore it for now.

Set up error monitoring if you haven't already. You need Sentry or equivalent running in production before you launch, not after. If you didn't set it up, do it on day one. You need to know about errors before users tell you, because most users won't tell you. They'll just leave.
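If you're on a Node or browser stack, the setup is a few lines. Here's a minimal sketch using the Sentry Node SDK; the DSN is a placeholder, and handleSignup and createAccount are hypothetical names standing in for your own code:

```typescript
import * as Sentry from "@sentry/node";

// Initialize once, as early as possible in your app's startup.
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder: use your own project's DSN
  environment: "production",
  tracesSampleRate: 0.1, // sample a fraction of transactions to keep event volume manageable
});

// Hypothetical signup logic, stubbed here for illustration.
async function createAccount(email: string): Promise<void> {
  // ...your real signup code goes here
}

// Report anything that escapes your own error handling, so you hear
// about failures before users do.
async function handleSignup(email: string): Promise<void> {
  try {
    await createAccount(email);
  } catch (err) {
    Sentry.captureException(err);
    throw err;
  }
}
```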

Resist the urge to add features. In the first two weeks, you'll get feature requests. Users will say "I love it, but it needs X." Some of those requests will be great. Write them down. Don't build any of them yet. You don't have enough data to know which requests represent real needs versus individual preferences.

Check every day: Is the application up? Are users completing the core action? Are there errors in the logs? That's it. Don't over-analyze engagement metrics in week one. The sample size is too small to draw conclusions.
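The daily check can be as small as a script you run with your morning coffee. A sketch, assuming your app exposes some kind of health endpoint; the URL below is a placeholder:

```typescript
// Placeholder URL: point this at whatever health or status endpoint your app has.
const HEALTH_URL = "https://app.example.com/health";

async function dailyCheck(): Promise<void> {
  const res = await fetch(HEALTH_URL);
  if (!res.ok) {
    console.error(`Application check failed: HTTP ${res.status}`);
    return;
  }
  console.log("Application is up.");
  // Pair this with a glance at your error monitor and a count of users
  // who completed the core action today. That's the whole week-one routine.
}

dailyCheck().catch((err) => console.error("Health check did not complete:", err));
```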

Weeks 3 to 4: Start measuring what matters

By week three, you should have enough data to start paying attention. Not a lot of data. But enough.

Define your activation metric. This is the single action that indicates a user "got" the product. For a scheduling tool, it might be "booked their first meeting." For a project management app, it might be "created their first project and invited a team member." For a marketplace, it might be "completed their first purchase."

If 30% of signups reach activation in the first week, that's decent for an early product. If it's under 10%, you have an onboarding problem, not a product problem. The core loop might be fine, but users aren't getting to it. We've found that most early products fail at activation more than at retention. The product often works; users just never figure out how to start.

Track retention at day 1, day 7, and day 30. Day-1 retention tells you if users come back after trying the product once. Day-7 tells you if they're forming a habit. Day-30 tells you if they're sticking around. At this stage, you just want to establish baselines. You'll optimize these numbers later.
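Both the activation rate and the day-N retention numbers fall out of a plain event log. Here's a sketch of the two calculations in TypeScript; the UserEvent shape and event names like "signed_up" are assumptions to map onto whatever your analytics actually stores:

```typescript
// Assumed shape of an analytics event; adjust names to your own schema.
interface UserEvent {
  userId: string;
  name: string; // e.g. "signed_up", "booked_meeting"
  timestamp: Date;
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Share of signups that performed the activation event within their first week.
function activationRate(events: UserEvent[], activationEvent: string): number {
  const signups = new Map<string, Date>();
  for (const e of events) {
    if (e.name === "signed_up") signups.set(e.userId, e.timestamp);
  }
  if (signups.size === 0) return 0;

  const activated = new Set<string>();
  for (const e of events) {
    const signedUpAt = signups.get(e.userId);
    if (
      signedUpAt &&
      e.name === activationEvent &&
      e.timestamp.getTime() >= signedUpAt.getTime() &&
      e.timestamp.getTime() - signedUpAt.getTime() <= 7 * DAY_MS
    ) {
      activated.add(e.userId);
    }
  }
  return activated.size / signups.size;
}

// Share of signups that came back on day N. Only counts users whose
// day-N window has already closed, so recent signups don't drag the number down.
function dayNRetention(events: UserEvent[], n: number): number {
  const signups = new Map<string, Date>();
  for (const e of events) {
    if (e.name === "signed_up") signups.set(e.userId, e.timestamp);
  }

  let eligible = 0;
  let retained = 0;
  for (const [userId, signedUpAt] of signups) {
    const windowStart = signedUpAt.getTime() + n * DAY_MS;
    if (Date.now() < windowStart + DAY_MS) continue; // window not over yet
    eligible++;
    const cameBack = events.some(
      (e) =>
        e.userId === userId &&
        e.name !== "signed_up" &&
        e.timestamp.getTime() >= windowStart &&
        e.timestamp.getTime() < windowStart + DAY_MS
    );
    if (cameBack) retained++;
  }
  return eligible === 0 ? 0 : retained / eligible;
}

// activationRate(events, "booked_meeting") near 0.3 is the "decent" bar above;
// dayNRetention(events, 1), (events, 7), and (events, 30) give the three baselines.
```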

Watch session recordings. Tools like FullStory or PostHog let you watch how real users interact with your product. This is the single most valuable source of information in the first month. You'll see users clicking on things that aren't clickable, getting stuck in flows that seemed obvious to you, and ignoring features you thought were prominent.

Don't watch every session. Watch 10 to 15 per week, focusing on users who signed up but didn't reach activation. Understanding why people don't activate is more useful in this phase than understanding why people do.

Weeks 5 to 8: Talk to users and make decisions

By week five, you have real data and real users. This is when the important work begins.

Have at least 10 direct conversations with users. Not surveys. Not feature request forms. Actual conversations where you ask open-ended questions and listen. "What were you trying to do when you signed up?" "What's frustrating about the current version?" "If this product disappeared tomorrow, what would you use instead?"

The answers to these questions will frequently surprise you. Users are using your product in ways you didn't anticipate. They value features you thought were minor. The feature you spent three weeks building? Some of them haven't noticed it exists.

Decide what the first meaningful iteration looks like. Based on usage data, session recordings, and user conversations, you should have a short list of things that would materially improve the product. Pick one or two. The ones that improve activation or retention, not the ones that add capabilities.

This is hard because feature requests feel urgent. Users are asking for things. But at this stage, improving the core loop is almost always more valuable than extending it. Make the existing experience better before making it broader. We wrote about how to scope your product to focus on what matters, and the same principles apply post-launch.

Check your infrastructure costs. Your free tiers and startup credits will start running out. Review what you're actually using and what you're paying for. Some services you set up during development might not be needed in production. Others might need to be upgraded. We've seen founders get surprised by their first real cloud bill around week six.

Weeks 9 to 12: Build the feedback loop

By week nine, you should have a repeatable process for learning from users and turning those learnings into product decisions.

Establish a release cadence. We recommend weekly or bi-weekly releases for early-stage products. Frequent releases keep the feedback loop tight and prevent the engineering team from working on features in isolation for too long. Every release should include at least one thing that directly responds to user feedback.

Build a request tracking system. This doesn't need to be fancy. A spreadsheet works. The point is to record every feature request with context: who asked for it, what problem they were trying to solve, and how many other users have mentioned something similar. After two months, patterns emerge. The features that matter are the ones that keep coming up from different users trying to solve the same problem.
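If you want slightly more structure than a spreadsheet, a typed record plus one helper is enough. A sketch; the field names here are illustrative, not a prescribed schema:

```typescript
// One row per request, recorded with the context described above.
interface FeatureRequest {
  requestedBy: string; // who asked
  problem: string;     // the underlying problem they were trying to solve
  rawRequest: string;  // what they literally asked for
  date: string;        // e.g. "2025-03-14"
}

// Rank problems by how many *different* users have raised them.
function topProblems(requests: FeatureRequest[], limit = 5): [string, number][] {
  const usersByProblem = new Map<string, Set<string>>();
  for (const r of requests) {
    const users = usersByProblem.get(r.problem) ?? new Set<string>();
    users.add(r.requestedBy);
    usersByProblem.set(r.problem, users);
  }
  return [...usersByProblem.entries()]
    .map(([problem, users]): [string, number] => [problem, users.size])
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit);
}
```

Ranking by distinct users, not raw mentions, is the point: one loud user asking five times is a different signal from five users asking once.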

Run a retention analysis. At the 90-day mark, look at your first cohort. Of the users who signed up in week one, how many are still active? If the number is above 40%, you have something that works. Focus on growth. If it's between 20% and 40%, you have something promising that needs improvement. Focus on the drop-off points. If it's below 20%, you have a problem worth investigating before you invest more in acquisition.
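Those thresholds translate into a quick cohort check. A sketch, assuming you can pull sign-up and last-active dates per user; the 14-day "still active" window is an assumption you should tune to your product's natural usage frequency:

```typescript
// Assumed per-user summary pulled from your own database or analytics.
interface CohortUser {
  userId: string;
  signedUpAt: Date;
  lastActiveAt: Date;
}

const DAY_MS = 24 * 60 * 60 * 1000;

function weekOneCohortHealth(users: CohortUser[], launchDate: Date): string {
  const weekOneEnd = new Date(launchDate.getTime() + 7 * DAY_MS);
  const cohort = users.filter(
    (u) => u.signedUpAt >= launchDate && u.signedUpAt < weekOneEnd
  );
  if (cohort.length === 0) return "No week-one signups to analyze.";

  // "Still active" = active within the last 14 days (assumption; tune per product).
  const cutoff = Date.now() - 14 * DAY_MS;
  const stillActive = cohort.filter((u) => u.lastActiveAt.getTime() >= cutoff).length;
  const rate = stillActive / cohort.length;
  const pct = `${(rate * 100).toFixed(0)}%`;

  if (rate > 0.4) return `Retention ${pct}: it works. Focus on growth.`;
  if (rate >= 0.2) return `Retention ${pct}: promising. Focus on the drop-off points.`;
  return `Retention ${pct}: investigate before investing more in acquisition.`;
}
```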

Evaluate whether you need more engineering capacity. If you're a solo founder doing engineering, the first 90 days will tell you whether you can sustain that or need to bring on help. If you have a team, you'll know whether the team size matches the velocity the product needs. This is the right time to think about whether a fractional CTO makes sense to provide strategic guidance without the commitment of a full-time hire.

The support infrastructure you need on day one

Some founders launch without any of this in place. Don't.

A way for users to report bugs. An email address works. A chat widget works. A feedback form works. The mechanism doesn't matter. What matters is that there's an obvious path from "something is wrong" to your inbox.

A changelog or update log. Users who sign up early want to know the product is improving. A simple changelog page that shows what you've shipped each week builds confidence. It doesn't need to be elaborate. "Week 3: Fixed login flow for Safari users. Added password reset. Improved dashboard load time." Stuff like that.

An honest status page. When things break (they will), users need to know you're aware and working on it. A status page that you update during incidents is the difference between "this product is down and nobody's home" and "this product is down and the team is fixing it."

The 90-day milestone

At the 90-day mark, you should be able to answer three questions:

  1. Is the core loop working? Are users successfully completing the main action your product enables?
  2. Are users coming back? Is there any evidence of retention, even if the numbers are modest?
  3. Do you know what to build next? Not from your imagination. From data and user feedback.

If you can answer yes to all three, you have a product. It might be small, rough around the edges, and missing features that you'll eventually need. But it works, people use it, and you know where to go next.

If the answer to any of them is no, that tells you where to focus. A broken core loop means you need to simplify or fix the product. No retention means you need to understand why users leave. No clarity on what to build next means you need more conversations with users.

The first 90 days aren't about building more. They're about learning enough to know what "more" should look like.


We support clients through launch and the critical first 90 days after it. If you're approaching launch and want a plan for what comes next, let's put one together.


Topics: Launch, Post-Launch, SaaS, Product Management, Growth
