Being data driven should appeal instinctively to engineers. At last, there is a tool to settle the never-ending disputes between designers and developers, and a way to decide who is truly ‘right’.
Other positive consequences include a product that better fits the target market, faster validation of new features, and countless hours saved by not developing the wrong thing.
Eventually we will feel the effects of all of these benefits, but as this article from Microsoft illustrates, there are multiple stages one must go through before becoming truly data driven.
Crawl — Walk — Run — Fly
At Lunar Way, we are currently somewhere between the crawl and walk stages. We are starting to improve the tracking in our apps, and are able to run experiments. Still, a single experiment takes a lot of effort. This post is therefore not about Lunar Way’s fantastic setup, but should serve as a snapshot of our process, and act as an inspiration for others who would like to become more data driven.
Before we started experimenting on our signup flow, we had to make some changes to the tech. A recent redevelopment of our signup flow left us with an extremely flexible system. It gave us the ability to insert and rearrange screens dynamically from our backend, but it had not been used in an experimental context. In order to start experimenting we had to have a goal to work towards, so we pinpointed three qualities we wanted our signup flow to possess:
We needed to incorporate as much flexibility as possible, both in the order in which the screens were displayed and in how their content was used.
We needed a control group to compare our experiment results against.
Performing an experiment without being able to measure the result is pointless. Therefore, we needed a way to measure and visualise the results.
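The second quality, a control group, implies some way of splitting users into variants. A minimal sketch of one common approach is to hash the user ID together with the experiment name, so the same user always lands in the same split without any extra state. This is an illustrative pattern, not Lunar Way’s actual implementation, and all names are made up:

```python
import hashlib

def assign_split(user_id: str, experiment: str,
                 variants=("control", "variant")) -> str:
    """Deterministically map a user to a variant for one experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always gets the same answer, so every tracked event
# in a session can be labelled with the split it belongs to.
split = assign_split("user-123", "signup-intro")
print(split)
```

Because the assignment is a pure function of the user and experiment, it needs no database lookups, and different experiments split the user base independently of one another.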
Building such a feature requires time and effort.
The first hurdle we had to overcome was that our tracking was insufficient and inaccurate: it was not aligned across platforms (iOS/Android). Because of Apple’s review cycles, fixing such a fundamental problem took weeks.
Our next hurdle was our reliance on third-party app tracking tools. We use Google Analytics, which is fine for seeing how many sessions performed a specific event, but not accurate enough for building a funnel and inspecting drop-off rates in a 20-step financial signup flow. What if you want to visualise and compare multiple funnels in a split? Forget about it! Furthermore, how do you label which split a session belongs to?
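Conceptually, the funnel we wanted is simple to compute once every event carries a split label: for each split, count how many distinct sessions reached each step. The sketch below illustrates the idea with hypothetical step names and event tuples; it is not the tooling we actually use:

```python
# Hypothetical funnel steps for a signup flow (illustrative names).
FUNNEL = ["intro", "phone", "terms", "done"]

def funnel_counts(events):
    """events: iterable of (session_id, split, step) tuples.

    Returns, per split, how many distinct sessions reached each
    funnel step, in funnel order.
    """
    reached = {}  # (split, step) -> set of session ids
    for session, split, step in events:
        reached.setdefault((split, step), set()).add(session)
    splits = {split for (split, _step) in reached}
    return {
        split: [len(reached.get((split, step), set())) for step in FUNNEL]
        for split in splits
    }

events = [
    ("s1", "control", "phone"), ("s1", "control", "terms"),
    ("s2", "variant", "intro"), ("s2", "variant", "phone"),
]
print(funnel_counts(events))
```

With counts per split side by side, drop-off between any two steps is just the ratio of adjacent numbers, and two funnels can be compared step for step.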
After some time (and a couple of app updates) we finally had our base funnel, which highlighted some of our serious drop-off screens.
The base funnel for the onboarding of new users
This became the basis for our experiments.
Over the past six months we have run 10 different experiments. These range from simply changing the background colours of the signup flow, to completely re-implementing some of the more fundamental screens, like ‘Terms’.
As an example, one minor experiment we ran involved the introduction of an intro screen. The purpose of the intro screen was to prime the user on what they were signing up for.
The split funnel for this experiment looks like this:
A graph showing our onboarding with and without an intro screen
In the end, the experiment showed a 15% worse conversion rate when adding the intro screen. This result was statistically significant at the 95% confidence level.
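The significance check behind a result like this is typically a two-proportion z-test: compare the conversion rates of the two splits and ask whether the difference could plausibly be chance. A minimal sketch follows; the sample counts are invented for illustration and are not the real experiment’s numbers:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative numbers: 40% conversion in control vs 34% in the variant.
z = two_proportion_z(400, 1000, 340, 1000)
print(abs(z) > 1.96)  # |z| > 1.96 means significant at the 95% level
```

If |z| exceeds 1.96, the two-sided p-value is below 0.05, so the difference is significant at the 95% level; smaller samples or smaller differences leave the result indistinguishable from noise.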
Had we not tested this, we would never have guessed adding an intro screen would affect conversions in such a negative way. This clearly shows the power of running experiments and tests on your products, and ultimately, builds a clear path to what customers really want.
As mentioned before, these are just the initial steps towards becoming data driven. We have gathered everything we’ve learnt from the first experiments and produced some ‘next steps’, which we will be working with going forward.
Taking ownership of your data is crucial. Our reliance on third parties means we do not own our data, and that is one of our biggest issues: we are unable to structure and display it the way we would like to. Once we can fully utilise our data, we will be able to trigger backend behaviour based on user behaviour in the app.
Taking ownership of our own data will be empowering, and will allow us to combine backend and frontend data when evaluating our experiments.
Currently, creating and running experiments at Lunar Way can be difficult. However, it shouldn’t be. We are organised into feature teams with a ‘You build it, you own it’ mentality, so iterating on our products shouldn’t be as tricky as it is.
In order to change this, we need to come up with a general solution for running split tests. Hopefully, such a system will make it extremely easy to set up, run, and evaluate an experiment.
More companies than ever are using data to drive their decisions. After all, facts are facts, and as long as you’re analysing them correctly, they will always guide the way.
If there is an internal dispute about a developmental or design decision, the data settles it.
It provides companies with the ultimate form of feedback. Every time a user performs an action, they are telling us something about our app. When we have access to the data of all our users, patterns quickly start to emerge, and we can then test those patterns with experiments.
Ultimately, being data driven allows us to quickly adapt, fine-tune, and upgrade our app in ways the users will like. We waste less time and get less stressed. Our users get a product that fits better with their needs and wants.
That’s a win-win situation.