Creating a Split Test
- Navigate to the project you want to test and open the A/B Tests tab.
- Click New Test and give it a descriptive name (e.g., "VSL v1 vs v2 - shorter cut").
- Select your Control (the original video) and add one or more Variants.
- Set the traffic split. A 50/50 split is standard when testing one variant against the control; with a control and two variants, try 34/33/33.
- Choose your primary metric - typically completion rate or a conversion event (e.g., CTA click, purchase).
- Set a minimum sample size or let VSLStats calculate one based on your current traffic (the sketch after this list shows the underlying math).
- Click Start Test. Traffic will split immediately upon the next page load.
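If you'd like a feel for how that sample-size calculation works, here is a minimal sketch of the standard two-proportion sample-size formula in Python. The function name and the example numbers (4% baseline, 1-point lift) are illustrative assumptions, not VSLStats internals.

```python
import math
from scipy.stats import norm

def sample_size_per_arm(baseline_rate, min_detectable_effect,
                        alpha=0.05, power=0.80):
    """Visitors needed in EACH arm to detect an absolute lift of
    `min_detectable_effect` over `baseline_rate` with a two-sided
    two-proportion z-test at the given alpha and power."""
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for 95% confidence
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 4% baseline conversion rate, detecting a lift to 5%
print(sample_size_per_arm(0.04, 0.01))  # 6745 visitors per arm
```

Note how fast the requirement grows as the detectable effect shrinks: halving the effect size roughly quadruples the sample you need, which is one more reason to test bold changes rather than tiny tweaks.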
What to Test
Not everything is worth testing - focus on changes that are likely to move the needle meaningfully. Here are the highest-impact test categories:
Thumbnail / Poster Frame
The thumbnail is the first thing a visitor sees before pressing play. Test a talking-head close-up vs. a text-overlay frame vs. a result screenshot. A compelling thumbnail can increase play rate by 20–40%.
Video Length
Test a full-length VSL against a condensed version. Shorter videos often win on cold traffic; longer videos can outperform on warm audiences who already trust the brand. Run separate tests per traffic source if possible.
CTA Placement
The timing of your call-to-action matters. Test showing the CTA button at 60% completion vs. 80% vs. at the very end. Early CTAs capture eager buyers; later CTAs filter for highly engaged viewers.
Opening Hook (First 30 Seconds)
The first 30 seconds determine whether viewers watch the rest. Test different opening frames - problem-first, result-first, or story-first - and measure the 30-second retention rate as your primary metric for this experiment.
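If you export raw watch-time data, the 30-second retention rate is simply the share of plays that reach the 30-second mark. A minimal sketch, assuming a hypothetical export of (variant, seconds-watched) rows rather than any specific VSLStats format:

```python
from collections import defaultdict

# Hypothetical export rows: (variant, seconds watched before drop-off)
plays = [("control", 12), ("control", 45), ("control", 31),
         ("hook_v2", 8), ("hook_v2", 90), ("hook_v2", 38)]

totals, retained = defaultdict(int), defaultdict(int)
for variant, seconds in plays:
    totals[variant] += 1
    if seconds >= 30:   # viewer survived the opening hook
        retained[variant] += 1

for variant in totals:
    print(f"{variant}: {retained[variant] / totals[variant]:.0%} retention at 30s")
```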
Audio / Voiceover
Same script, different narrator or delivery style. Energy level, pacing, and tone of voice influence perceived credibility and excitement. This is a surprisingly high-impact variable that most marketers overlook.
Reading Test Results
VSLStats displays live results in the A/B Tests dashboard. You'll see:
- Views per variant - confirms traffic is splitting as expected.
- Primary metric rate - completion rate or conversion rate per variant.
- Relative uplift - how much better or worse each variant performs vs. the control (a worked sketch follows this list).
- Statistical confidence - expressed as a percentage (e.g., 95% confident).
- Projected winner - VSLStats flags the leading variant once confidence crosses your threshold.
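Two of these numbers are easy to sanity-check yourself. The sketch below uses made-up counts to verify the traffic split (a chi-square goodness-of-fit test against the configured 50/50 ratio, a common check for sample-ratio mismatch) and to compute relative uplift; it is an illustration, not the exact computation VSLStats performs.

```python
from scipy.stats import chisquare

views = {"control": 5_040, "variant_b": 4_960}    # made-up counts
conversions = {"control": 202, "variant_b": 248}

# 1. Sanity-check the split: are the view counts consistent with 50/50?
total = sum(views.values())
_, p = chisquare(list(views.values()), f_exp=[total / 2, total / 2])
print(f"split looks healthy: {p > 0.01}")   # a tiny p-value => check tracking

# 2. Relative uplift of the variant over the control
control_rate = conversions["control"] / views["control"]
variant_rate = conversions["variant_b"] / views["variant_b"]
print(f"relative uplift: {variant_rate / control_rate - 1:+.1%}")  # +24.8%
```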
Statistical Significance
Don't call a winner early. VSLStats uses a two-proportion z-test to calculate confidence. The default confidence threshold is 95%, meaning a difference that large would appear less than 5% of the time if the variants truly performed identically.
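You don't need to reproduce the math yourself, but it's simple enough to check by hand. Here is a minimal sketch of a pooled two-proportion z-test, reusing the made-up counts from the uplift example above; it illustrates the standard test, not necessarily VSLStats's exact implementation.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, views_a, conv_b, views_b):
    """Two-sided pooled z-test for a difference in conversion rates."""
    p_pool = (conv_a + conv_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (conv_b / views_b - conv_a / views_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    # Treating 1 - p as the "confidence" percentage (an assumption
    # about how the dashboard reports it)
    return z, 1 - p_value

z, confidence = two_proportion_ztest(202, 5_040, 248, 4_960)
print(f"z = {z:.2f}, confidence = {confidence:.1%}")  # z = 2.39, 98.3%
```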
If a test has been running for more than 30 days without reaching significance, the difference is likely too small to matter in practice. Consider testing a more dramatic change.
Ending a Test and Applying the Winner
- Once your test reaches statistical significance and you're satisfied with the sample size, click End Test.
- VSLStats will show a final summary with confidence, uplift, and recommended action.
- Click Apply Winner to automatically set the winning variant as the default for 100% of traffic.
- The losing variant is archived - you can reference its data anytime, but it will no longer serve traffic.
Best Practices
- Run one test at a time per video to avoid interaction effects between variables.
- Document your hypothesis before starting: "We believe X change will improve Y metric by Z% because…"
- Keep tests running through at least one full week to account for weekday/weekend traffic differences.
- Segment your results by traffic source - a variant that wins on paid traffic may lose on email traffic (see the sketch below for one way to do this).
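If you export results with a traffic-source column, a quick groupby shows whether the winner holds across segments. A sketch assuming a hypothetical per-view export with source, variant, and converted columns:

```python
import pandas as pd

# Hypothetical export: one row per view
df = pd.DataFrame({
    "source":    ["paid", "paid", "email", "email", "paid", "email"],
    "variant":   ["control", "b", "control", "b", "b", "control"],
    "converted": [0, 1, 1, 0, 1, 1],
})

# Conversion rate and sample size per (source, variant) cell
summary = (df.groupby(["source", "variant"])["converted"]
             .agg(rate="mean", views="count"))
print(summary)
```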