The mobile app market is evolving rapidly, and competitors are always on the move. This makes it crucial to thoroughly validate hypotheses during the early stages of product development — this way, you avoid wasting time and resources on an irrelevant app. Semyon Polozov, CEO, manager, and technical director of the development studios Bladestorm and Dreamfrost, will explain which methods to use for this purpose and which metrics indicate the success of new features.

What’s Happening in the Mobile App Market
Every year, users are becoming more reliant on smartphones. According to Similarweb’s 2024 data, over 60% of internet traffic comes from mobile devices. Smartphones make it easy to perform everyday tasks — from ordering a taxi and checking routes to grocery shopping. As a result, the mobile app market continues to grow.
However, experts note that the app market is nearing saturation. This means users are paying more attention to quality and usability. It’s no longer enough to create an app that merely meets user needs — it must also be intuitive and more user-friendly than competitors’ offerings.
What Is a Product Hypothesis
Improvements of this kind usually start with a product hypothesis: an assumption about how a specific change will affect user behavior or business metrics. For example, suppose users frequently complain about the speed of food delivery. You might hypothesize that increasing couriers’ wages will motivate them to complete orders faster and more willingly, thereby improving delivery speed and, consequently, customer satisfaction.
In other words, a product hypothesis is not just a random change to the product but rather a carefully thought-out action aimed at enhancing the user experience or business metrics. Moreover, it is crucial not only to formulate a hypothesis but also to test its impact on key performance indicators to confirm its effectiveness or make adjustments. All in all, hypothesis testing helps minimize risks, reduce costs, and focus on solutions that truly work.
In my opinion, proper product and project management is built on continuous testing and hypothesis validation — whether in marketing or development. For example, when we launched Skybringer at Dreamfrost and the first wave of traffic came in, we saw that the installation conversion rate was only 10%, while our competitors were achieving 30%. Therefore, I conducted a competitor analysis, collaborated with a designer to create several store page layouts, and ran A/B tests on screenshots, icons, and descriptions. One of our variants achieved a 20% conversion rate, which was double the previous result.
Another example involves the GC.Skins app, which allowed users to earn coins by completing various tasks and purchase skins in Counter-Strike. We conducted several customer development interviews and qualitative studies, which led us to formulate a hypothesis: providing users with a certain amount of coins at the start would immediately spark their interest in the app.
To test this, we ran an experiment where new users received 19 coins upon registration. This amount was enough for them to complete just one task and purchase a starter skin. The hypothesis proved effective: conversion to the target action increased, and we adopted the change as part of our working strategy.
How to Effectively Validate Hypotheses
The simplest way to validate a hypothesis is to analyze existing product and user behavior data. These insights often contain answers to key questions. If users have already interacted with the product, you should have access to factual data and metrics that can be leveraged for hypothesis testing.
However, if publicly available data is insufficient, you can conduct marketing tests to gauge user reactions to new features or products. For instance, before investing in new inventory or expanding your product range, you can run social media ads and track the number of clicks, visits, and requests to estimate demand. Another approach is to create temporary “placeholders” for new products on your website and monitor cart additions and conversion rates.
If your hypothesis requires deeper validation, consider conducting user interviews and surveys directed at your target audience. This research method provides insights into user opinions and expectations, helping you refine or discard hypotheses. If the changes are minor, 5–7 interviews may suffice. The larger the innovation, the more responses you’ll need — for example, companies typically conduct 50–100 interviews when launching a new product.
A key method for hypothesis validation is A/B testing. This approach involves showing a new interface or feature to a segment of users while the rest continue using the existing product. User behavior is then analyzed to determine the impact of the change. A/B testing allows you to validate hypotheses cost-effectively, identify successful ideas, and implement necessary adjustments.
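To make this concrete, below is a minimal Python sketch of how the outcome of such a test is often evaluated with a two-proportion z-test. The numbers and the use of statsmodels are illustrative assumptions, not details of the experiments described above.

```python
# A minimal sketch of evaluating an A/B test on conversion, assuming you
# already have the number of exposed users and conversions for each variant.
# All numbers below are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [310, 370]    # users who completed the target action (A, B)
users       = [3000, 3050]  # users who saw each variant (A, B)

z_stat, p_value = proportions_ztest(count=conversions, nobs=users)
print(f"variant A conversion: {conversions[0] / users[0]:.2%}")
print(f"variant B conversion: {conversions[1] / users[1]:.2%}")
print(f"p-value: {p_value:.4f}")

# A common convention is to treat the difference as meaningful only if the
# p-value is below a pre-agreed threshold (e.g. 0.05) AND the sample size
# planned before the experiment has actually been reached.
```

Fixing the threshold and the minimum sample size before the experiment starts is what protects you from the premature conclusions discussed later in this article.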
Key Performance Indicators for Hypothesis Validation
There are no universal KPIs: the right indicators depend on the stage of the user journey where the experiment takes place and on the type of hypothesis being tested. Below are the most commonly used metrics for each stage.
Registration Stage
- Registration Conversion: The percentage of users who complete the registration process after an implemented change. This metric helps determine whether the update effectively attracts new users (see the sketch after this list).
- User Drop-offs in Registration: Identifies where users face difficulties during registration, allowing for targeted improvements.
- Registration Time: A shorter registration time is an indirect indicator of UX improvements and smoother user interaction.
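As a rough illustration, the first two of these metrics can be derived straight from a registration event log. The snippet below is a sketch under assumed step and column names, not a description of any particular product’s analytics.

```python
# An illustrative sketch: registration conversion and per-step drop-off
# computed from a raw event log. Step names and data are assumptions.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["open_form", "enter_email", "confirm",
                "open_form", "enter_email",
                "open_form", "enter_email", "confirm",
                "open_form"],
})

funnel_order = ["open_form", "enter_email", "confirm"]
users_per_step = (events.groupby("step")["user_id"].nunique()
                        .reindex(funnel_order))

# Overall conversion: users who finished registration vs. users who started it.
registration_conversion = users_per_step["confirm"] / users_per_step["open_form"]
# Drop-off: share of users lost at each step compared to the previous one.
drop_off = 1 - users_per_step / users_per_step.shift(1)

print(f"Registration conversion: {registration_conversion:.0%}")
print(drop_off.rename("drop-off vs. previous step"))
```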
Payment Stage
- Payment Completion Conversion: The percentage of users who successfully complete a payment after a change has been introduced. This is a key metric for assessing the effectiveness of modifications.
- Click-through Rate (CTR) for New Payment Methods: If clicks on newly introduced payment options increase, it suggests user interest in the feature.
- Adoption Rate of New Payment Methods: Indicates the percentage of users who prefer the new option compared to previously available payment methods.
Hypotheses for Increasing Retention
- Retention Rate: The percentage of users returning to the app within a specific period, such as the second, seventh, or thirtieth day after registration.
- Cohort Analysis: Allows for the examination of user behavior across different time periods and comparisons of how implemented changes affect retention (see the sketch after this list).
- Session Duration & Active Days in the App: If changes are aimed at increasing user engagement, these metrics help evaluate whether users are spending more time in the app.
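For illustration, day-N retention and a simple cohort table can be built from an activity log along these lines. The data, column names, and daily cohorts are assumptions made for the sake of the example.

```python
# A minimal sketch of day-N retention and a cohort table, assuming a log
# with one row per active user per day. All data here is illustrative.
import pandas as pd

activity = pd.DataFrame({
    "user_id":     [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "signup_date": pd.to_datetime(["2024-05-01"] * 5 + ["2024-05-08"] * 4),
    "active_date": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-08",
                                   "2024-05-01", "2024-05-07",
                                   "2024-05-08", "2024-05-09", "2024-05-10",
                                   "2024-05-15"]),
})

# Days elapsed between registration and each active day.
activity["day_n"] = (activity["active_date"] - activity["signup_date"]).dt.days
cohort_sizes = activity.groupby("signup_date")["user_id"].nunique()

# Share of each signup cohort that is active exactly N days after registration.
retention = (activity.groupby(["signup_date", "day_n"])["user_id"].nunique()
                     .unstack(fill_value=0)
                     .div(cohort_sizes, axis=0))

print(retention.round(2))  # rows = signup cohorts, columns = day 0, 1, 2, ...
```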
Control metrics (“product health metrics”) are essential for assessing the overall impact of a hypothesis on the product, ensuring that changes intended to improve one aspect do not negatively affect others.
Examples of Product Health Metrics:
- Financial Metrics: Revenue, average order value (AOV), and average transaction size. These indicators are crucial for any experiment, as even seemingly positive changes — such as discounts for new users — may temporarily impact financial performance.
- Average Order Value: Helps determine whether the experiment influences per-user revenue, particularly if the hypothesis involves upselling additional services or products.
- Retention Metrics: It is crucial to monitor whether retention decreases due to changes. For example, adding extra steps to the checkout process may initially boost conversion rates but later harm overall retention due to increased friction.
Before testing out a hypothesis, it is vitally important to define the key metrics you will evaluate and set target outcomes.
Common Mistakes in Hypothesis Testing
First, a common mistake is incorrectly conducting qualitative research, such as customer development interviews, and misinterpreting the feedback. For example, during one of our interviews, we showcased a new mobile app design. Users provided positive feedback on all planned features, so we decided to implement them. However, after launch, it became clear that many features remained unused, because the users themselves did not really know what they wanted.
The lesson is that users’ opinions about future features are highly subjective. Instead of asking about hypothetical functionality, focus interviews on evaluating existing features, and base decisions about new functionality on your own analysis and data.
Another common mistake is drawing conclusions from A/B tests too soon. For instance, we once added several fields to the mobile app’s registration form and wanted to assess whether this negatively impacted the registration process. After analyzing the first 100 leads, we saw similar results for both the old and new forms. Based on this small dataset, we concluded that the additional fields had no negative impact and permanently changed the form.
However, two weeks later, when tens of thousands of users had interacted with the product, we observed a 20% drop in conversion to registration. So, we had rushed to conclusions, forcing us to revert to the old form and restart testing.
In both cases, we wasted time, money, and resources. In the second example, we even harmed the performance of new traffic. These types of mistakes should be avoided.
Which Hypothesis Testing Methods Are the Most Effective?
Ideally, all significant product changes should go through A/B testing, as it is a reliable and data-driven approach. After A/B testing, I always have a clear understanding of what improved my average order value or boosted conversion rates.
However, A/B testing is not always feasible as it requires time, and for smaller products with low daily traffic (e.g., 100 users per day), achieving statistically significant results may take too long.
Regardless of the testing method, you can increase the success rate by following these best practices:
1) Document the experiment thoroughly. Clearly define what you are testing, why, and which KPIs you aim to influence. This will also help you calculate the required sample size (a sketch of such a calculation follows this list).
2) Avoid checking results too early. Sometimes, waiting a bit longer ensures more accurate results.
3) Have colleagues review your experiment. Even if they work in a different field, a fresh perspective can reveal blind spots. Reviewing others’ experiments is also a great way to develop an eye for future work.
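For the first point, the required sample size can be estimated before the experiment starts. The snippet below is an illustrative calculation with an assumed baseline conversion and target uplift; the use of statsmodels is my own choice here, not a tool prescribed by any of the examples above.

```python
# A rough sketch of a pre-test sample size estimate for a conversion
# experiment. Baseline rate and target uplift are assumed for illustration.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_cr = 0.10   # current conversion rate, e.g. 10%
target_cr   = 0.12   # smallest uplift worth detecting

effect = proportion_effectsize(baseline_cr, target_cr)
users_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"~{users_per_variant:.0f} users needed in each variant")

# With roughly 100 new users per day split between two variants, a test like
# this would run for over a month, which is exactly why peeking at early
# results (as in the registration form example above) is so tempting.
```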
