Creating a Split Test

  1. Navigate to the project you want to test and open the A/B Tests tab.
  2. Click New Test and give it a descriptive name (e.g., "VSL v1 vs v2 - shorter cut").
  3. Select your Control (the original video) and add one or more Variants.
  4. Set the traffic split. A 50/50 split is standard for two variants. With three variants, try 34/33/33.
  5. Choose your primary metric - typically completion rate or conversion event (e.g., CTA click, purchase).
  6. Set a minimum sample size or let VSLStats calculate it based on your current traffic.
  7. Click Start Test. The split takes effect on each visitor's next page load.
Note: Visitors are assigned to a variant on their first view and remain in that variant for the duration of the test (cookie-based assignment). This prevents the same person from seeing both versions and skewing your data.
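
VSLStats doesn't publish its assignment internals, but sticky, cookie-based bucketing is commonly implemented by hashing a visitor ID into a 0–99 bucket and mapping buckets to the traffic-split weights. The sketch below illustrates the idea; the function and parameter names are hypothetical, not part of the VSLStats API.

```python
import hashlib

def assign_variant(visitor_id: str, test_id: str, weights: dict) -> str:
    """Deterministically bucket a visitor into a variant.

    The same (visitor_id, test_id) pair always yields the same variant,
    which is the property cookie-based assignment relies on.
    weights maps variant name -> percentage (should sum to 100).
    """
    # Hash visitor + test together so the same visitor can land in
    # different buckets across different tests.
    digest = hashlib.sha256(f"{test_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99

    cumulative = 0
    for variant, weight in weights.items():
        cumulative += weight
        if bucket < cumulative:
            return variant
    return variant  # fallback if weights don't sum to exactly 100

# A 50/50 split between the control and one variant:
variant = assign_variant("visitor-abc123", "vsl-test-1", {"control": 50, "v2": 50})
```

Storing the visitor ID in a cookie and re-running this function on each view reproduces the same assignment without any server-side lookup.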

What to Test

Not everything is worth testing - focus on changes that are likely to move the needle meaningfully. Here are the highest-impact test categories:

Thumbnail / Poster Frame

The thumbnail is the first thing a visitor sees before pressing play. Test a talking-head close-up vs. a text-overlay frame vs. a result screenshot. A compelling thumbnail can increase play rate by 20–40%.

Video Length

Test a full-length VSL against a condensed version. Shorter videos often win on cold traffic; longer videos can outperform on warm audiences who already trust the brand. Run separate tests per traffic source if possible.

CTA Placement

When your call-to-action appears matters. Test showing the CTA button at 60% completion vs. 80% vs. at the very end. Early CTAs capture eager buyers; later CTAs filter for highly engaged viewers.

Opening Hook (First 30 Seconds)

The first 30 seconds determine whether viewers watch the rest. Test different opening framings - problem-first, result-first, or story-first - and measure the 30-second retention rate as your primary metric for this experiment.

Audio / Voiceover

Same script, different narrator or delivery style. Energy level, pacing, and tone of voice influence perceived credibility and excitement. This is a surprisingly high-impact variable that most marketers overlook.

Reading Test Results

VSLStats displays live results in the A/B Tests dashboard. You'll see:

Statistical Significance

Don't call a winner early. VSLStats uses a two-proportion z-test to calculate confidence. The default threshold is 95%: if there were truly no difference between variants, a gap this large would appear by chance less than 5% of the time.
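
To make the confidence number concrete, here is a minimal two-proportion z-test using only the standard library. This is a standard textbook formulation, not VSLStats' exact code; the function name is our own.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for a difference in conversion rates.

    Returns (z, confidence), where confidence = 1 - two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, 1 - p_value

# Control converts 200/1000, variant converts 250/1000:
z, confidence = two_proportion_z_test(200, 1000, 250, 1000)
# confidence above 0.95 clears the default 95% threshold
```

Note how the pooled rate is used for the standard error: the test asks how surprising the observed gap would be if both variants shared one true conversion rate.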

Rule of thumb: Wait until each variant has at least 200 conversions (or 500 plays if measuring engagement) before declaring a winner. Ending tests early is the most common A/B testing mistake.

If a test has run for more than 30 days without reaching significance, the difference is likely too small to matter in practice. Consider testing a more dramatic change.
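
VSLStats doesn't document the formula it uses when it calculates a minimum sample size for you (step 6 above), but the standard approximation for two proportions shows why small effects take so long to detect. The function name and defaults below are illustrative, assuming 95% confidence and 80% power.

```python
from math import ceil

def sample_size_per_variant(base_rate: float, mde: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant.

    base_rate: control conversion rate (e.g. 0.03 for 3%)
    mde: minimum detectable effect, absolute (e.g. 0.006 for +0.6 points)
    Defaults z_alpha=1.96, z_beta=0.84 correspond to 95% confidence, 80% power.
    """
    p_bar = (base_rate + (base_rate + mde)) / 2   # average of the two rates
    # Standard approximation: n = (z_a + z_b)^2 * 2 * p̄(1 - p̄) / mde^2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (mde ** 2)
    return ceil(n)

# Detecting a lift from a 3.0% to a 3.6% conversion rate:
n = sample_size_per_variant(0.03, 0.006)
```

Because the required sample grows with the inverse square of the effect size, halving the detectable lift roughly quadruples the traffic needed - which is why a test chasing a tiny difference can run past 30 days without resolving.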

Ending a Test and Applying the Winner

  1. Once your test reaches statistical significance and you're satisfied with the sample size, click End Test.
  2. VSLStats will show a final summary with confidence, uplift, and recommended action.
  3. Click Apply Winner to automatically set the winning variant as the default for 100% of traffic.
  4. The losing variant is archived - its data remains available for reference, but it no longer serves traffic.

Best Practices