A/B Testing Overview
Learn how A/B testing works in SortLab to compare sorting strategies and find the one that drives the most revenue for each collection.
Not sure which sorting strategy is best for a particular collection? A/B testing takes the guesswork out of the equation. Instead of committing to a single approach, you can run a controlled experiment that compares two strategies side by side and lets real customer behavior decide the winner.
What is A/B testing for collection sorting?
A/B testing (also called split testing) is a method of comparing two versions of something to see which one performs better. In SortLab, A/B testing lets you pit two sorting strategies against each other on the same collection. One strategy is the Control (your current approach), and the other is the Challenger (a new strategy you want to evaluate).
SortLab alternates between the two strategies at a regular interval you choose, then measures which one generates more revenue, orders, and conversions. After the test runs for the configured duration, you get a clear report showing which strategy won.
Why A/B test your sorting strategies?
Even small changes to product order can have a measurable impact on your store's performance. Here's why A/B testing matters:
- Data-driven decisions — Stop guessing which sorting strategy works best. Let your actual customers tell you through their behavior.
- Risk-free experimentation — Try a new strategy without fully committing to it. If the challenger underperforms, your current strategy is still running half the time.
- Collection-level optimization — Different collections attract different shoppers. A strategy that works for "Best Sellers" might not be ideal for "New Arrivals." A/B testing helps you find the right fit for each collection.
- Continuous improvement — Your store's product mix and customer preferences change over time. Regular A/B testing ensures your sorting stays optimized as your business evolves.
- Revenue impact you can measure — Every test gives you concrete numbers: how much more (or less) revenue a strategy generated, so you can quantify the value of your sorting decisions.
How SortLab's A/B testing works
SortLab uses a time-based alternation approach to A/B testing. Here's the process:
1. You pick two strategies — The Control (A) is your collection's current sorting strategy. The Challenger (B) is a new strategy you want to test.
2. SortLab alternates between them — At the switch interval you set (for example, every 6 hours), SortLab automatically swaps the active sorting strategy on your collection. This ensures both strategies get equal exposure across different times of day and days of the week.
3. Metrics are tracked for each variant — SortLab records revenue, orders, and conversion rate separately for each strategy, so you can see exactly how each one performs.
4. Results are reported — After the test duration ends, SortLab presents the results with clear performance comparisons so you can decide which strategy to keep.
Time-based alternation ensures that both strategies are tested across peak and off-peak hours, weekdays and weekends, giving you a fair comparison that accounts for natural traffic fluctuations.
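To make the alternation concrete, here is a minimal sketch of per-variant attribution: every order is credited to whichever strategy was active when it was placed. The types and function names are assumptions for illustration, not SortLab's actual internals.

```typescript
// Illustrative sketch only; these types and names are assumptions,
// not SortLab's actual internals.

type Variant = "A" | "B";

interface VariantTotals {
  revenue: number;
  orders: number;
}

interface Order {
  placedAt: Date;
  total: number;
}

// Credit each order to whichever variant was active when it was placed.
// `variantAt` encapsulates the time-based alternation schedule.
function attributeOrders(
  orders: Order[],
  variantAt: (placedAt: Date) => Variant
): Record<Variant, VariantTotals> {
  const totals: Record<Variant, VariantTotals> = {
    A: { revenue: 0, orders: 0 },
    B: { revenue: 0, orders: 0 },
  };
  for (const order of orders) {
    const variant = variantAt(order.placedAt);
    totals[variant].revenue += order.total;
    totals[variant].orders += 1;
  }
  return totals;
}
```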
Key concepts
Understanding these terms will help you get the most out of A/B testing in SortLab:
Control (A)
The Control is your collection's current sorting strategy — the one that's already active. It serves as the baseline for comparison. During the test, SortLab labels this as variant A with a "Current" badge.
Challenger (B)
The Challenger is the new sorting strategy you want to evaluate. You choose this when creating the test. SortLab labels it as variant B with a "New" badge. If the Challenger outperforms the Control, you can adopt it as the new default.
Switch Interval
The switch interval determines how often SortLab swaps between the Control and Challenger strategies. For example, with a 6-hour interval, Strategy A runs for 6 hours, then Strategy B takes over for the next 6 hours, and so on. Shorter intervals mean more frequent switching; longer intervals give each strategy a longer uninterrupted run before swapping.
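The interval math itself is simple. As a rough sketch (the helper below is hypothetical, not part of SortLab), the active variant at any moment follows from how many full intervals have elapsed since the test started:

```typescript
// Hypothetical helper, not SortLab's API. Even-numbered windows
// run the Control (A); odd-numbered windows run the Challenger (B).
function activeVariant(
  testStart: Date,
  switchIntervalHours: number,
  now: Date = new Date()
): "A" | "B" {
  const elapsedHours = (now.getTime() - testStart.getTime()) / 3_600_000;
  const windowIndex = Math.floor(elapsedHours / switchIntervalHours);
  return windowIndex % 2 === 0 ? "A" : "B";
}

// With a 6-hour interval and a midnight start: A runs 00:00-06:00,
// B runs 06:00-12:00, A runs 12:00-18:00, and so on.
```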
Duration
The duration is the total length of the test. A longer duration gives you more data and more reliable results. The default is 14 days, which typically provides enough data for most stores to reach a meaningful conclusion.
What metrics are tracked?
During an A/B test, SortLab tracks the following metrics for each variant:
| Metric | What it measures |
|---|---|
| Revenue | Total revenue generated from the collection while each strategy was active |
| Orders | Number of orders that included products from the collection |
| Conversion Rate | Percentage of collection visitors who made a purchase |
These metrics are tracked independently for each variant, so you can make a direct comparison and see which strategy drives better results.
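For illustration, here is one way the conversion rate from the table above could be computed for a single variant. The field names are assumptions made for this sketch, not SortLab's tracking schema:

```typescript
// Illustrative only; field names are assumptions, not SortLab's schema.
interface VariantStats {
  revenue: number;   // total collection revenue while this variant was active
  orders: number;    // orders that included products from the collection
  visitors: number;  // collection visitors during the same windows
}

// Conversion rate = purchasing visitors / total visitors, as a percentage.
// (Using orders as a proxy for purchasers keeps the sketch simple.)
function conversionRate(stats: VariantStats): number {
  return stats.visitors === 0 ? 0 : (stats.orders / stats.visitors) * 100;
}

// Example: 42 orders from 1,200 visitors is a 3.5% conversion rate.
conversionRate({ revenue: 3150, orders: 42, visitors: 1200 }); // 3.5
```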
Prerequisites
Before you can create an A/B test for a collection, it must have an active sorting strategy already applied. This is because the A/B test uses your current strategy as the Control (A) to compare against the Challenger (B).
Only collections with an active sorting strategy can be A/B tested. If you haven't set up sorting for a collection yet, head to Quick Start to get started.
Available on all plans
A/B testing is included in every SortLab plan, including the Free plan. There's no need to upgrade to start experimenting with your sorting strategies. See Plans & Pricing for full details on what each plan includes.
Next steps
Ready to run your first experiment? The guides below will get you started.
- Creating Tests — Step-by-step guide to setting up an A/B test
- Interpreting Results — Learn how to read your test results and pick a winner