A/B Testing
Run experiments to optimize your website’s performance.
Overview
A/B Testing (also called split testing) lets you compare two or more versions of a page or element to see which performs better. Instead of guessing what works, you can make data-driven decisions based on actual visitor behavior.
How A/B Testing Works
- Create variations — Define different versions of a page or element
- Split traffic — Ghost Metrics randomly shows each variation to different visitors
- Measure results — Track which variation achieves better outcomes
- Pick a winner — Use the data to implement the best-performing version
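As an illustration of the split-traffic step, the sketch below shows one common approach: a deterministic hash of a visitor ID decides the variation, so a returning visitor always sees the same version. This is not Ghost Metrics code, and the visitor ID is hypothetical:
// Illustrative only: deterministic 50/50 split based on a visitor ID
function hashString(str) {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    hash = (hash * 31 + str.charCodeAt(i)) >>> 0; // keep as an unsigned 32-bit value
  }
  return hash;
}

function assignVariation(visitorId) {
  return hashString(visitorId) % 2 === 0 ? 'control' : 'variation1';
}

console.log(assignVariation('visitor-12345')); // 'control' or 'variation1'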
Common A/B Tests for Healthcare
Landing Pages
- Different headlines or value propositions
- Various hero images
- Alternative form placements
Calls-to-Action
- Button text (“Schedule Now” vs “Book Appointment”)
- Button colors or sizes
- CTA placement on page
Forms
- Number of form fields
- Form layout (single column vs multi-column)
- Required vs optional fields
Content
- Long-form vs short-form content
- Different content structures
- Video vs text explanations
Creating an A/B Test
- Navigate to A/B Tests in the main menu
- Click Create a new experiment
- Configure your test:
- Name — Descriptive name for the test
- Hypothesis — What you expect to happen
- Variations — Define your control and test versions
- Success metric — What defines a “win” (goal conversion, engagement, etc.)
- Traffic allocation — Percentage of visitors to include
- Implement the variations on your website
- Start the test
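As a mental model, the settings above amount to a small test plan. The object below is purely illustrative; its field names are not an actual Ghost Metrics configuration format:
// Hypothetical test plan for a CTA wording experiment (illustrative only)
const ctaTest = {
  name: 'Homepage CTA wording',
  hypothesis: 'A more specific CTA will increase appointment form submissions',
  variations: ['control: "Contact Us"', 'variation1: "Schedule Appointment"'],
  successMetric: 'appointment form goal conversions',
  trafficAllocation: 50, // percent of visitors included in the test
};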
Test Configuration Options
Traffic Allocation
Choose what percentage of visitors participate in the test:
- 100% — All visitors are part of the experiment
- 50% — Half of visitors see the experiment, half see the original
- Lower percentages let you limit exposure to unproven changes
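One way to picture traffic allocation: a deterministic check decides whether a visitor enters the experiment at all, and only included visitors are assigned a variation. A sketch (not Ghost Metrics internals; the visitor ID is hypothetical):
// Illustrative only: include roughly `allocationPercent` of visitors in the test
function isInExperiment(visitorId, allocationPercent) {
  let hash = 0;
  for (let i = 0; i < visitorId.length; i++) {
    hash = (hash * 31 + visitorId.charCodeAt(i)) >>> 0;
  }
  return hash % 100 < allocationPercent;
}

if (isInExperiment('visitor-12345', 50)) {
  // assign a variation and apply the experiment changes
} else {
  // visitor sees the original page and is excluded from the results
}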
Success Metrics
Define what you’re trying to improve:
- Goal conversions
- Pageviews
- Time on page
- Bounce rate
- Revenue
Scheduling
- Start date — When the test begins
- End date — When the test automatically stops
- No end date — Run until statistical significance is reached
Implementing Variations
A/B tests require changes to your website code. You can implement variations using:
JavaScript-Based Changes
For simple changes like text, colors, or visibility:
// Example: Change button text based on variation
if (variation === 'control') {
  document.getElementById('cta-btn').innerText = 'Contact Us';
} else if (variation === 'variation1') {
  document.getElementById('cta-btn').innerText = 'Get Started Today';
}
Server-Side Changes
For more complex changes, implement variations in your backend code and use Ghost Metrics to track which variation each visitor sees.
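A minimal server-side sketch (Node.js) of that idea, assuming you have a stable visitor ID; the rendering and logging here stand in for your own templates and for however you report the assignment to Ghost Metrics:
// Illustrative only: assign and render a variation on the server
const crypto = require('crypto');

function assignVariation(visitorId) {
  // Deterministic split so a visitor always gets the same version
  const firstByte = crypto.createHash('md5').update(visitorId).digest()[0];
  return firstByte % 2 === 0 ? 'control' : 'variation1';
}

function renderCta(variation) {
  const label = variation === 'control' ? 'Contact Us' : 'Get Started Today';
  return `<button id="cta-btn">${label}</button>`;
}

const visitorId = 'visitor-12345'; // hypothetical ID from a cookie or session
const variation = assignVariation(visitorId);
console.log(variation, renderCta(variation)); // also record the assignment in your analytics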
Tag Manager Integration
Use Ghost Metrics Tag Manager to deploy variation code without modifying your website directly.
Contact support for guidance on implementing your specific test.
Reading Test Results
Key Metrics
For each variation, you’ll see:
- Visitors — Number of participants
- Conversions — Goal completions
- Conversion rate — Percentage that converted
- Improvement — Change compared to control
Statistical Significance
Ghost Metrics calculates whether results are statistically significant:
- Significant — Results are reliable, not due to chance
- Not significant — Need more data or difference is too small
Wait for statistical significance before declaring a winner. Ending tests too early can lead to incorrect conclusions.
Confidence Level
The confidence level indicates how likely it is that the observed difference reflects a real effect rather than chance:
- 95% confidence — Standard threshold for most tests
- Higher confidence — More reliable results, but more data is needed to reach it
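Ghost Metrics performs these calculations for you, but the sketch below shows the underlying idea: a two-proportion z-test on made-up numbers, where |z| ≥ 1.96 corresponds to 95% confidence:
// Illustrative only: conversion rates, relative improvement, and a z-test
function compareVariations(control, variation) {
  const rateA = control.conversions / control.visitors;
  const rateB = variation.conversions / variation.visitors;
  const improvement = (rateB - rateA) / rateA; // relative lift vs control

  // Pooled rate and standard error of the difference in rates
  const pooled = (control.conversions + variation.conversions) /
                 (control.visitors + variation.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) *
                       (1 / control.visitors + 1 / variation.visitors));
  const z = (rateB - rateA) / se;

  return { rateA, rateB, improvement, significantAt95: Math.abs(z) >= 1.96 };
}

console.log(compareVariations(
  { visitors: 2000, conversions: 100 },  // control: 5.0% conversion rate
  { visitors: 2000, conversions: 130 }   // variation1: 6.5% conversion rate
));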
Best Practices
Test One Thing at a Time
Changing multiple elements makes it impossible to know what caused the difference. Test one variable per experiment.
Run Tests Long Enough
- Wait for statistical significance
- Run for at least 1-2 weeks to account for day-of-week variations
- Ensure enough visitors for meaningful data (at least 100 conversions per variation)
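If you want a rough sense of "enough visitors" before starting, a standard planning approximation is sketched below; the baseline rate and hoped-for lift are hypothetical inputs you would estimate for your own site:
// Illustrative only: approximate visitors needed per variation to detect a
// relative lift at 95% confidence and 80% power
function visitorsPerVariation(baselineRate, relativeLift) {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil((zAlpha + zBeta) ** 2 * variance / (p2 - p1) ** 2);
}

// e.g. a 5% baseline conversion rate and a hoped-for 20% relative lift
console.log(visitorsPerVariation(0.05, 0.20)); // 8146 visitors per variation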
Document Everything
Keep records of:
- What you tested
- Your hypothesis
- Results and learnings
- Actions taken based on results
Have a Clear Hypothesis
Before testing, state what you expect:
“Changing the CTA button from ‘Contact Us’ to ‘Schedule Appointment’ will increase form submissions because it’s more specific and action-oriented.”
Don’t Peek Too Often
Checking results frequently can lead to stopping tests too early. Set a schedule for reviewing results.
After the Test
If You Have a Winner
- Implement the winning variation permanently
- Document the results
- Consider follow-up tests to optimize further
If Results Are Inconclusive
- The variations may perform equally well
- Consider a different test approach
- Move on to test something else
If the Original Wins
That’s valuable data too! You’ve confirmed your current approach works better than the alternative.
Limitations
- Tests require sufficient traffic to reach significance
- Complex tests may need developer support to implement
- Results apply to your specific audience and context