
A/B Testing the Bidding Experience at Ricardo

The Ricardo “Purchasing” team collectively decided to improve the bidding experience on their platform. Their web engine carries a lot of legacy code and originated in a hackathon, whereas the mobile experience was developed in a user-centric way with many rounds of testing. From previous user interviews focusing on the purchasing journey, they knew that the web bidding experience was not great, and the opportunity to improve it received the highest priority (RICE) score of all the opportunities in their backlog. This sparked the idea and implementation of A/B testing. Product Designer Gemma Cardus and UX Writer Yuliya Denysenko of Ricardo, together with Anna Niedbala, UX Researcher in the central TX Markets Product and UX team, talked about this project and gave us insight into how they defined their testing practices and goals.

Goal Setting

To find starting points for ideas, the team consulted old reports with insights from user research. Anna had previously conducted research on the purchasing experience of Ricardo users. From reviewing the UX research reports, it became clear that there were two main areas that could be improved:

  1. User Interface (UI) related issues that could be tackled
  2. Users didn't seem to understand all aspects of the auction feature

Everything was put on a Miro board, and the Purchasing team started ideating on how to improve the bidding flow. The goals were to solve the central problems that had been identified and to make the bidding flow more intuitive. For inspiration, the team looked at competitors to see how they approached the bidding part of the user journey on their platforms. They also compared the UI of Ricardo's web version with that of their new mobile app. Based on this ideation, they created several designs for a new bidding flow which they believed would solve the identified problems.

A/B Testing? A/B/C/+ Testing!

To verify the improvements in these new designs, they conducted A/B tests. This was easier said than done! The solutions they had identified spanned multiple areas of the UI, including the copy, the sequence of steps, and the bidding flow itself. This meant that testing was not as simple as making one change and seeing whether it performed better or worse than the current version: a complex mix of combinations of various ideas had to be verified in order to produce meaningful results.

The complexity of the project sparked multiple discussions to decide what should be tested first and which solutions should be put together in each of the tests.

Yuliya said that this was the biggest challenge in this project. She didn't know whether to A/B-test all UX writing changes across all steps at once or to implement the wording changes in small steps. Eventually the team decided to mix UI and copy changes in six different variants for A/B/C/+ tests. This approach was meant to simultaneously test every possible combination of changes to the identified problem areas. However, it also made the testing process complex to define and, eventually, to set up.
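The article doesn't describe Ricardo's testing infrastructure, but a common way to run such a multi-variant test is to bucket users deterministically by hashing their ID together with an experiment name. A minimal TypeScript sketch, with hypothetical variant and experiment names:

```typescript
import { createHash } from "crypto";

// Hypothetical variant names: one control plus five treatments mixing
// UI and copy changes, mirroring the six variants described above.
const VARIANTS = ["control", "B", "C", "D", "E", "F"] as const;
type Variant = (typeof VARIANTS)[number];

// Deterministic bucketing: hashing the user ID together with the
// experiment name means a user always sees the same variant, and
// traffic splits roughly evenly across all six buckets.
function assignVariant(userId: string, experiment: string): Variant {
  const digest = createHash("sha256")
    .update(`${experiment}:${userId}`)
    .digest();
  return VARIANTS[digest.readUInt32BE(0) % VARIANTS.length];
}

console.log(assignVariant("user-42", "bidding-flow-test")); // stable per user
```

The simpler two-variant tests described later are the same mechanism with the variant list reduced to a control and a single treatment. The drawback of six variants is that each bucket only receives about a sixth of the traffic, which foreshadows the conclusiveness problem described next.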

First Results and Insights

The first A/B test showed that the simultaneous approach was not a good idea. After running the test for two weeks there was no conclusive result, and there was no way of telling which variant was or wasn't working. The conclusion of the first round was that not everything could be tested at the same time: the more specifically the A/B tests were set up, the more conclusive they were. This led to a change in testing approach: testing in smaller steps. The second learning was that if a test is not fully conclusive, qualitative research can help understand the why.
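“Conclusive” here is ultimately a question of statistical significance. The post doesn't say which analysis tooling Ricardo used; a standard check for whether a variant's conversion rate genuinely differs from the control's is a two-proportion z-test, sketched below with made-up numbers:

```typescript
// Two-proportion z-test: is the variant's conversion rate significantly
// different from the control's? All counts below are illustrative.
function twoProportionZ(
  convControl: number, usersControl: number,
  convVariant: number, usersVariant: number,
): number {
  const pC = convControl / usersControl;
  const pV = convVariant / usersVariant;
  const pPooled = (convControl + convVariant) / (usersControl + usersVariant);
  const se = Math.sqrt(
    pPooled * (1 - pPooled) * (1 / usersControl + 1 / usersVariant),
  );
  return (pV - pC) / se; // |z| > 1.96 ≈ significant at the 5% level
}

// 4.8% vs 5.6% conversion on 10,000 users each:
console.log(twoProportionZ(480, 10_000, 560, 10_000).toFixed(2)); // "2.55"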

New Testing Approach

The first “small” A/B test targeted the wording of the bidding CTA button. From the user research it had become clear that users were not sure whether clicking this button would already confirm their bid. In this round, the status quo CTA "Make a bid" (Gebot abgeben) was tested against a new term, "Bid" (Bieten). This small but very specific test on the CTA came back with conclusive results in favour of the new term.

With this successful example of the new testing approach, the team continued the A/B tests with a focus on other microcopy, such as field titles ("Your next bid" and "Your auction limit"). The tests showed that the microcopy was not really understandable to users, which prompted the team to conduct qualitative tests to understand the why. These showed that the problem didn't lie in the wording itself but in the concept of an "auction limit", which users did not understand.

During the process of iterating and testing endless variations, the team eventually came to a realisation that simplified their decision-making: they needed to keep their focus on the Purchasing team’s KPI, the overall bidding conversion rate. Their goal was not to collect more bids per user or to reach a higher winning price. Remembering this helped them conclude that the key priority was to make it as simple as possible for users to place their first bid. Keeping this specific goal in mind enabled the team to refine the designs and conduct more focused A/B tests.
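The post doesn't define the metric formally, but a bidding conversion rate of this kind is typically the share of users who reach the bidding flow and go on to place at least one bid. A small sketch with a hypothetical event schema:

```typescript
// Hypothetical funnel events; the real tracking schema isn't described
// in the article.
interface FunnelEvent {
  userId: string;
  type: "viewed_bidding_flow" | "placed_bid";
}

// Conversion = unique users who placed a bid / unique users who saw the
// bidding flow. Counting unique users (not bids) matches the team's
// framing: the KPI moves when more users place a first bid, not when
// the same users bid more often or bid higher.
function biddingConversionRate(events: FunnelEvent[]): number {
  const viewed = new Set<string>();
  const bidders = new Set<string>();
  for (const e of events) {
    if (e.type === "viewed_bidding_flow") viewed.add(e.userId);
    else bidders.add(e.userId);
  }
  return viewed.size === 0 ? 0 : bidders.size / viewed.size;
}
```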

To Be Continued

Finally, two different variants of the whole bidding flow were defined: an updated version of the current flow based on the previous tests, and a new, simplified design. This test is now ongoing and should be more conclusive. But the work doesn’t end here: the team already plans to introduce further improvements into the bidding flow little by little and to continuously improve the quality of the purchasing journey.

Learnings So Far

  • Keep it simple! For the analysis, it is easier to focus on one change at a time. The more specific the A/B tests are, the more conclusive the results can be.
  • Remember the goal. It is important to keep the aims of your tests in mind. In this case: Which KPI do I want to impact with the solutions I am testing?
  • If in doubt, ask a user. Involving qualitative research helps understand the why.

Thank you to Gemma Cardus, Yuliya Denysenko and Anna Niedbala for their insight into this project.

Check out our discussion with Benjamin Ligier, User Analytics Manager at Ricardo, and Johannes Schaber from the Product & UX team about A/B Testing in User Research.
