How we rebuilt our onboarding process and boosted donations by 25%


There are plenty of articles online about the experiments companies run to boost their conversion. They usually describe how a certain design change caused an uplift, but rarely talk about the process behind the change. With this in mind, I want to take a step back and look at the team and the workings behind the 25% uplift in donations on our JustGiving Crowdfunding platform.


How did we identify the problems users were having? What was the process of coming up with the ideas? How did we validate? How did the team work together to get things done?

By doing this, I hope to help you understand how you can apply the process to your own product. Going through the detail of the design change itself is not always useful, especially as each product is different.

I’ll talk about user psychology, being data-driven and the dedication to solving users’ problems and needs.

Let’s start with who the team was.


team awesome

Customer service heroes — They are the people that talk to your users and deal with their problems every day, whether by email, phone or IM.

User experience researchers — They are the ones that delve deeper into the real meaning of a customer problem. They unmask both the problem and the user to build a picture of what is needed to solve the issue.

User experience designers and UI designers — They are the ones who build beautiful interfaces and help solve customer problems.

Developers — They are the wizards who bring concepts to reality.

Analysts — They are the ones who dig deep into the data. They always ask ‘why?’ and do everything they can to get you the right answer.

Product managers — They are the ones who are responsible for facilitation, driving these changes through and making sure they deliver the right result to the user and to the company.

Just a note on senior stakeholders: it is always a good idea to involve them in this whole process. They need to buy into your cause, so giving them real data and insight is the best way to motivate the team long term. It doesn’t need to be a lot, just regular soundbites.



1. Find out what the biggest pain points are in your flow and where users are dropping off

This should be your starting point because this is what is stopping users from converting. If you understand the user problem, you can find a way to fix it.

Don’t get sucked into randomly changing button colours and the like; there’s usually no reason to do this other than it being a common test.

In order to uncover these pain points, you need to use a mixture of quantitative and qualitative data. This is crucial: you need to triangulate these sources with your own expertise to work out what the exact issues are.

Where are the pain points — Quantitative data


(This is an example funnel from Google images, not our own)

If you just use customer feedback and verbatims, you can get sucked into thinking there are major issues everywhere. This is why you need quantitative data to balance the picture. Using this data allows you to see where the drop-offs are in the flow, so you can rank the severity of the issues you are trying to solve.

We used Kissmetrics for web analytics as well as our own database data to see where users were dropping off. It is important to look at your data at both a visit and a visitor level so you can detect things like multiple pages being created. We could then look to the qualitative data to see if there was any evidence of this pain point to help us triangulate.
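To make the drop-off analysis concrete, here is a minimal sketch of ranking the severity of each funnel step. The step names and visitor counts are entirely made up for illustration; they are not our actual data.

```python
# Hypothetical funnel: (step name, unique visitors reaching that step).
funnel = [
    ("landing", 10000),
    ("signup", 6200),
    ("page_setup", 4100),
    ("share", 2500),
    ("first_donation", 1400),
]

def drop_offs(steps):
    """Return per-step conversion and drop-off rates for a funnel."""
    results = []
    for (name_a, n_a), (name_b, n_b) in zip(steps, steps[1:]):
        conv = n_b / n_a
        results.append({
            "from": name_a,
            "to": name_b,
            "conversion": round(conv, 3),
            "drop_off": round(1 - conv, 3),
        })
    return results

for row in drop_offs(funnel):
    print(f"{row['from']} -> {row['to']}: "
          f"{row['conversion']:.1%} convert, {row['drop_off']:.1%} drop off")
```

Sorting the output by `drop_off` gives you a first-pass severity ranking to set against the qualitative feedback.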

Why do these pain points exist — Qualitative data

We used multiple qualitative data sources to uncover the problems.

Customer service feedback — Our customer service team document conversations with customers and produce their own data to share with the team which details the main pain points users are having.


We ran user testing sessions — Our UX researcher ran user testing sessions where he observed users and uncovered pain points with them. It is critical not to guide a user through your process but to let them show you where the pain points are through their behaviour and by thinking out loud.


NPS surveys — This is something we run often, but we also put surveys at different potential pain points and asked users what problems they were having.


By the end of this, you should have a wealth of data upon which to act.

2. Triangulate the data and prioritise

What did we do?

You now have pain points and quant data to start delving deeper. The next step is to map out your user flow on a whiteboard and attach user problems to particular points in the flow. Be aware that some user problems may be global across the whole process or even the platform, so be sure to map those out too. We did this with the UX team to ensure we were aligned.


Once you have this, map out the most severe drops in your flow based on your key metrics and then see if there are direct relationships between what the user says and does, i.e. the qual and quant data.

You then need to work out what are the highest priority issues.

Why did we do it?

Any PM worth their salt should know that this piece of work must be aligned with the company’s vision and the KPIs you are trying to improve that quarter. Therefore, you need to triangulate this with qual and quant data to see where the largest opportunity is.

This is where you can use prioritisation techniques such as ‘fag packet’ (back-of-the-envelope) maths or more detailed analysis to assess what impact moving a KPI in one part of the funnel would have.
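The back-of-the-envelope maths can be as simple as multiplying funnel rates through. Here is a sketch under invented numbers (the entrant volume and step rates below are illustrations, not our figures):

```python
# "Fag packet" sizing: if we lift one step's conversion rate,
# how many extra funnel completions does that produce?

def extra_completions(monthly_entrants, step_rates, step_index, new_rate):
    """Compare funnel output before and after changing one step's rate."""
    def output(rates):
        total = monthly_entrants
        for r in rates:
            total *= r
        return total

    before = output(step_rates)
    after_rates = list(step_rates)
    after_rates[step_index] = new_rate
    return output(after_rates) - before

# Hypothetical funnel: signup 60% -> page setup 70% -> first donation 50%.
# What if better onboarding lifted page setup from 70% to 80%?
gain = extra_completions(10_000, [0.60, 0.70, 0.50], 1, 0.80)
print(f"~{gain:.0f} extra completions per month")
```

Running the same calculation for each candidate step lets you rank opportunities by expected impact before committing the team.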

You should then have a good idea of the priority problems you need to solve as a team. Invite your team to challenge your calculations; this is an important part of the teamwork, ensuring the team’s voice is heard over any individual’s. It is also critical for a PM to be able to justify why they have prioritised certain things.

3. What can we do to solve these?

What did we do?

We use agile methodologies, so we set up shaping sessions to allow the team to solve the user problem we specified. This usually follows the format of presenting the user problem and the data to support it, then asking the team to start ideating on how they can solve that problem for the user. I like following techniques such as Crazy 8s that allow quick ideas to come out on paper, which can then be explored further.

crazy 8

Why did we do it?

This is very important for team morale and for getting original ideas, as every person has a different perspective on how to solve the user problem. It is common in waterfall organisations for the designers and product managers to solutionise and then pass the solution on to the developers. This ignores the creativity of the whole team and makes people feel excluded. Shaping together allows the team to solve and grow together.

From there, you can take a few ideas that the team have voted to explore and flesh them out. Write out your hypothesis based on these solutions. The team can then vote on what solution they think will make the biggest impact.

You then have a hypothesis, solution and are ready to test.

4. Prototyping quickly to solve

What did we do?

We used lean UX principles to create prototypes that can be tested on users. These are usually low fidelity prototypes to test a hypothesis.

You can read more about lean UX here

There are two ways to go here. The first is to iterate on each page or section of your flow and follow a more traditional conversion optimisation process; the second is to prototype a complete flow using the solutions to the problems you have identified. We chose the latter because we had so much feedback and data that it didn’t make sense to make small changes and test them one by one.

You have to make a call as a PM here: do you move more carefully, or do you trust your gut and move forward fast? We are a high-growth product, so we cannot afford to spend time testing and tweaking every little change, which is why we chose to create a low fidelity prototype and test it. In previous companies, I followed a tighter hypothesis-driven process where small tweaks made big differences, but in this scenario small changes were not going to drive growth. Therefore, we actually rolled a few hypotheses into one prototype.


The UX researchers tested these prototypes by running user testing sessions. We used InVision to build our prototypes. They asked users to think out loud and paid very close attention to the detail. We could then make tweaks on the fly based on what users said and present the changes to the next person in house. This rapid process allows you to iterate very quickly. We recruited users from our key persona demographic to be as accurate as possible. It is important that the whole team sees the user testing sessions so everyone understands why certain changes were made.

Why did we do it?

We wanted to move quickly and iterate as soon as possible. By getting a prototype in front of users, we could discover what users were thinking and make changes without having to touch code. This means the iteration cycles are much shorter and we can be confident we have put a strong product into the users’ hands.


5. Build a lean version of your prototype and get to market asap

What did we do?

The next step was to actually start coding up our MVP.

Once again, in order to move quickly you must focus on getting a lean version of the product out first. Don’t obsess over every minute detail if you are trying to grow fast; focus on getting the key improvements into users’ hands.

We did this through our usual shaping sessions: we wrote the stories as a team and thought through the test scenarios so we could ensure stable delivery of code.

We constantly asked the question ‘Is this our MVP?’, or are there things in here we don’t need to focus on? Remove those things and write clear, definitive stories so everyone understands what is going on.

I wrote an article which explains why you should focus on releasing early and often, read it here. We employed these principles throughout to make sure we could test our hypotheses as soon as possible.

Why did we do it?

User research sessions can uncover usability issues but to really test if our hypotheses are true or false we need to release the product improvements to a wider audience. We wanted to get the right product to the user as soon as possible, so iterating quickly and being lean allows us to get feedback asap.

6. A/B test

What did we do?

It is extremely important you have a reliable analytics framework in place to measure your changes. Our analysts defined the naming conventions for the events so we could accurately determine whether our changes had caused the desired uplift.
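As an illustration of what such a convention can look like, here is a hypothetical sketch; the prefix, experiment name and payload shape are assumptions for the example, not JustGiving’s actual schema. The idea is that every funnel event carries the experiment and variant so the uplift can be attributed cleanly.

```python
# Hypothetical event-naming convention for experiment analytics.
EVENT_PREFIX = "crowdfunding"  # assumed product-area prefix

def event_name(step):
    """Build a consistent snake_case event name, e.g. 'crowdfunding.page_created'."""
    return f"{EVENT_PREFIX}.{step}"

def track(step, variant, properties=None):
    """Shape of a payload you might send to an analytics tool like Kissmetrics."""
    payload = {
        "event": event_name(step),
        "experiment": "onboarding_v2",  # assumed experiment identifier
        "variant": variant,             # "A" (control) or "B" (treatment)
    }
    payload.update(properties or {})
    return payload

print(track("donation_completed", "B", {"amount_gbp": 25}))
```

Agreeing names like these up front means the analyst can slice every metric by `experiment` and `variant` without any post-hoc data cleaning.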

We then set up an A/B test by calculating the desired length of the experiment. We did this by firstly defining our success metrics, then measuring how much traffic we would need to reach statistical significance for those metrics.

We used more complex statistics to determine these numbers, but you can use something less regimented if you are starting out; there are free tools here.
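For a sense of what sits behind those tools, here is a rough per-variant sample size calculation for a two-proportion test using the standard normal approximation. The baseline donation rate and target uplift below are illustrative numbers, not ours.

```python
# Rough visitors-per-variant needed to detect a lift from p1 to p2
# at a given significance level and power (normal approximation).
from math import sqrt
from statistics import NormalDist

def sample_size(p1, p2, alpha=0.05, power=0.8):
    """Per-variant sample size for a two-sided two-proportion test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p2 - p1) ** 2

# e.g. baseline 10% conversion, hoping to detect a lift to 12.5%
n = sample_size(0.10, 0.125)
print(f"~{n:.0f} visitors per variant")
```

Dividing this number by your daily traffic per variant gives the experiment length you should commit to before peeking at results.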


Why did we do it?

A/B testing is the greatest indicator of success when it comes to improving metrics. Our analyst sent out the information about how we were going to measure this before we started, so everyone was clear on what would determine success.

7. The results — be brave and iterate

The next step is to sit back and wait for the test to reach significance. After all our hard work, we were happy with the results, as we saw a 25% uplift in donations. It feels extra nice when you know that the changes will make an impact on someone’s life.


Do not be disheartened if you do not receive the desired outcome. It is still a positive if you work out why it didn’t have an impact. Can you learn any more from this test? You can send out a survey or watch session recordings to try to deduce learnings. If not, then pick up the next hypothesis and test.

From this

To this



About the author

Anish Hallan

Product Manager
