A/B Test

Run experiments on your tours by creating multiple versions and splitting your audience between them. Compare results to identify the best-performing tour and make it the default for all users.

Set up experience variations

Improving user onboarding and walkthroughs requires testing different approaches. With Jimo, you can create experience variations and let results decide which one works best.

  • Try alternative flows, wording, or visuals.

  • Compare performance against your success goal (tour completion or tracked event).

  • Deploy the winning version to all users with confidence.

This experimentation process helps ensure your tours continuously improve and better support your users.

A/B Test

The current way to create variations in Jimo is through A/B testing.

  • You can create A/B versions of your tours (only Tours are supported for now).

    • Max 3 variants: Original + Variant A + Variant B.

    • One test per tour at a time.

  • Your audience is split across variants according to the distribution you choose (e.g. 50/50).

  • As the test runs, Jimo tracks performance for each variant based on your defined goal.

  • Once you gather enough data, you can review results and declare a winner, making that variant the default tour for all future and remaining users.


Start a test

Begin from your tour’s Target & Publish → Goal section.

  • Click Start A/B Test.

  • Jimo automatically creates:

    • The Original (your current tour flow).

    • A new Variant A (duplicate of the Original).

  • At this stage, the test is unpublished.

  • You may Cancel Test before publishing (this deletes all variants and resets back to Original only).


Configure variants

Once your test is created, you’ll see the Variants panel where all versions of your tour are listed. This is where you manage each variant — preview and edit their flows, add or remove alternatives, adjust how traffic is split between them, and finally publish your setup.

Think of this step as shaping the actual experiment: defining what’s being tested and how users are distributed.

Variants list

  • Each variant is shown in the list with:

    • Preview (open a read-only flow modal).

    • Edit (open the builder on that variant).

    • Delete (only on Variant A/B; the Original cannot be removed).

Adding new variants

  • Click Add variant to create another version.

  • Choose to duplicate the Original or an existing Variant.

  • You can test up to 3 versions in total.

Builder quick switch

  • Inside the builder, use the top-bar Variant switch to move between versions.

  • Changes made in one variant do not affect the others.

Audience distribution

How it works: the distribution is applied on top of the “Who” targeting rules defined for the tour.

  • Jimo randomly assigns all eligible users (those matching your “Who” field) to variants according to the selected proportions.

  • With the default 50/50 split, the full eligible user list is divided evenly between the variants.

  • Allocation limits: no variant may exceed 90%, and each must receive at least 10%.

Use the slider to decide how much traffic each version receives.

  • The UI shows approximate user counts for each allocation.
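
Conceptually, the split behaves like weighted random assignment over eligible users. The sketch below is a hypothetical Python illustration (not Jimo's actual implementation; the function name and user-id seeding are assumptions) that enforces the 10%–90% allocation limits described above:

```python
import random

def assign_variant(user_id, weights):
    """Assign an eligible user to a variant by weighted random draw.

    weights: dict mapping variant name -> percentage share. Shares must
    sum to 100, and each must be between 10 and 90, mirroring the
    slider constraints. Hypothetical sketch, not Jimo's real code.
    """
    if sum(weights.values()) != 100:
        raise ValueError("shares must sum to 100")
    if any(not 10 <= share <= 90 for share in weights.values()):
        raise ValueError("each variant needs between 10% and 90%")
    # Seed with the user id so the same user always lands in the same
    # variant across sessions (sticky assignment).
    rng = random.Random(user_id)
    roll = rng.uniform(0, 100)
    cumulative = 0
    for variant, share in weights.items():
        cumulative += share
        if roll < cumulative:
            return variant
    return variant  # floating-point edge case: fall back to last variant

assign_variant("user-42", {"Original": 50, "Variant A": 50})
```

Seeding by user id is what keeps the experiment consistent: a visitor never flips between variants mid-test.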

Publishing

  • Click Publish to make the test live.

    • All active variants are then served to users based on your distribution.

    • You can add variants to an already live test later, but you must re-publish for the change to take effect.

  • If you Cancel Test after publishing, you’ll need to publish again to return to a single-variant tour.


Review results

Once live, head to your tour’s Analytics → Insights to see performance.

Variant filter in Statistics

  • Filter the Experience statistics by variant: Original / A / B.

  • Compare step drop-off, clicks, and completions across versions.

A/B Test panel

Dedicated panel for experiment results:

  • Status & Confidence: the gauge shows “Too Early” until there is enough data, then a rising confidence percentage.

  • Variant table with:

    • Started

    • Goal Reached

    • Conversion Rate

    • Set as Default button (click a variant to select it)
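
The Conversion Rate column is Goal Reached divided by Started, and a rising confidence figure can be derived from a standard two-proportion z-test. The Python sketch below shows one common way to compute both (Jimo's exact statistical method is not documented here, so treat this as illustrative):

```python
import math

def conversion_rate(started, goal_reached):
    """Conversion Rate as shown in the variant table."""
    return goal_reached / started if started else 0.0

def confidence(started_a, reached_a, started_b, reached_b):
    """Confidence that two variants truly differ, via a two-sided
    two-proportion z-test. One common approach; Jimo's may differ.
    Returns a value in [0, 1): 0 when the rates are identical,
    approaching 1 as the evidence for a real difference grows.
    """
    p_a = reached_a / started_a
    p_b = reached_b / started_b
    pooled = (reached_a + reached_b) / (started_a + started_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / started_a + 1 / started_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # confidence = 1 - two-sided p-value = erf(z / sqrt(2))
    return math.erf(z / math.sqrt(2))
```

With few users, the z statistic stays small and confidence stays low, which is why the gauge reads “Too Early” at first.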

Choosing a winner

  • When confident, click Set as Default on the best performer.

  • That variant becomes the only version shown to all users.

  • Ending a test is irreversible; the other variants are retired.

  • Test data is archived in history for later reference.

Starting a new test

  • Open Goal → Start A/B Test again.

  • The current tour (default) becomes the new control.

  • Add a challenger and repeat.

  • Only one test at a time is allowed per tour.


Best practices & tips

Keep your experiments reliable and easy to interpret:

1. One change at a time

Focus your test on a single variable (like copy, step order, or design). This way you know exactly what caused the difference in performance.

2. Split fairly

A 50/50 traffic split is usually the best starting point for faster, unbiased results. Use uneven splits (e.g. 80/20) only if you want to limit risk from a bold experiment.

3. Don’t peek too soon

Early numbers can be misleading. Wait until enough users have seen each variant and the confidence level is high before picking a winner.
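
As a rough guide for what “enough users” means, the textbook sample-size formula for comparing two conversion rates can be sketched in Python. This is a generic statistical rule of thumb, not a Jimo feature; it assumes roughly 95% confidence and 80% power:

```python
import math

def sample_size_per_variant(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Rough users needed per variant to detect an absolute lift of
    `mde` over a `baseline` conversion rate (~95% confidence, 80%
    power). Standard two-proportion formula; illustrative only.
    """
    p1 = baseline
    p2 = baseline + mde
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / mde ** 2)
```

For example, detecting a 5-point lift over a 20% baseline takes roughly a thousand users per variant, while a 10-point lift needs only a few hundred; smaller expected differences demand far more traffic before the confidence gauge is trustworthy.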

4. Use the winner as the new baseline

Once a variant proves better, set it as default and treat it as the new control. From there, you can continue testing new improvements.

5. Communicate with your team

Let colleagues know when a test is running, since users may see different versions. This avoids confusion and ensures everyone understands the experiment in progress.
