Most CRO “best practices” are not universal truths. Many are context-dependent, and some are just myths. Use the scorecard below to turn generic advice into testable hypotheses — then benchmark what actually works for your audience.
CRO myths vs evidence (scorecard)
This is a “what to test” scorecard, not a set of universal answers. Use it to avoid cargo-cult optimization.
| Claim | Evidence | Verdict | What to test instead |
|---|---|---|---|
| “Red buttons convert better” | Weak | Myth | Contrast + label clarity + button placement relative to intent. |
| “Above the fold matters most” | Mixed | Context-dependent | Above-fold clarity + scroll cues + progressive disclosure. |
| “Shorter forms always win” | Mixed | Context-dependent | 2-step form, progressive profiling, optional fields, inline validation. |
| “Social proof always helps” | Mixed | Context-dependent | Proof proximity to CTA, specificity, third-party trust badges, case studies. |
| “More options = more conversions” | Weak (the evidence often points the other way) | Usually myth | Default recommendations, fewer tiers, clear “most popular,” guided selection. |
| “Urgency always works” | Mixed | Context-dependent | Truthful deadlines, inventory-based scarcity, risk reversal, guarantees. |
| “Long-form always beats short-form” | Mixed | Context-dependent | Modular long-form (accordion), comparison tables, objection handling blocks. |
| “Video always increases conversions” | Weak | Usually myth | Silent autoplay vs click-to-play, placement, transcript + key benefits. |
What Actually Works
The most repeatable improvements tend to come from these levers (still test them, but start here):
- Clarity: Make the value proposition instantly understandable.
- Friction: Remove steps, hesitation, and cognitive load.
- Trust: Increase perceived safety with proof, guarantees, and transparency.
- Intent match: Align landing page content with what users searched for or clicked.
- Offer: Improve the actual deal (pricing, bundles, trials), not just the UI.
Copy-paste playbook: turn any “best practice” into a real test
1. Write the mechanism: “Why would this change affect behavior?”
2. Define one primary metric: the single number that decides win/lose.
3. Pick a minimum detectable effect: don’t test for 0.5% lifts unless you have huge traffic.
4. Ship a meaningful variant: make it big enough to plausibly move the metric.
5. Record learnings: even neutral results are useful if the hypothesis was clear (one way to capture all five steps is sketched below).
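As an illustration only, the five steps can live in a small structured record that gets filled in before the test ships. The field names and example values below are hypothetical, not taken from this article:

```python
# Hypothetical sketch: a minimal record for one test, mirroring the five steps above.
from dataclasses import dataclass

@dataclass
class TestPlan:
    mechanism: str                 # 1. why the change should affect behavior
    primary_metric: str            # 2. the single number that decides win/lose
    min_detectable_effect: float   # 3. smallest relative lift worth detecting
    variant_description: str       # 4. what actually ships
    learnings: str = ""            # 5. filled in after the test, win or lose

plan = TestPlan(
    mechanism="Proof next to the CTA reduces hesitation at the decision point",
    primary_metric="signup_conversion_rate",
    min_detectable_effect=0.10,    # 10% relative lift
    variant_description="Customer logos and review count moved directly under the CTA",
)
```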
If your tests keep coming back “inconclusive,” read A/B test duration benchmarks and plan sample size up-front.
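For step 3, you can estimate the required sample size before launch. The sketch below uses a standard two-proportion power calculation; the baseline rate, lift, and traffic figures are made-up examples, not benchmarks from this article:

```python
# Minimal sample-size sketch for a two-proportion A/B test.
# Assumed example inputs: 3% baseline conversion, 10% relative MDE,
# alpha = 0.05 (two-sided), power = 0.80.
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift of `relative_mde`."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

n = sample_size_per_variant(0.03, 0.10)
daily_visitors_per_variant = 1_000  # hypothetical traffic after the 50/50 split
print(f"~{n:,} visitors per variant "
      f"(~{ceil(n / daily_visitors_per_variant)} days at 1,000/day per variant)")
```

With these example numbers the answer lands in the tens of thousands of visitors per variant, which is exactly why chasing tiny lifts without heavy traffic produces inconclusive tests.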
The Real Takeaway
CRO "best practices" are starting points, not rules. What works for one site may fail for another. Your audience, product, and context are unique. Test everything. Trust nothing without data.
Find What Works for You
Stop guessing. Start testing. ExperimentHQ makes it easy to run tests and get real answers.