Fix What’s Broken First — The CRO Layer Nobody Wants to Do

I once walked into a company — mid-size SaaS, decent traffic, paid acquisition running at scale — and asked a simple question: "When was the last time someone here completed a purchase on mobile?"
Silence.
Not uncomfortable silence. Genuine confusion. The kind where people look at each other trying to remember if that's someone's job. I pulled out my phone, went to the site, added a product to cart, tapped through to checkout — and watched the submit button disappear behind the sticky footer on my screen size. It had been like that for months. Maybe longer. Mobile was 55% of their traffic.
Nobody was hiding it. Nobody was negligent. It just... wasn't visible. The desktop experience worked fine. The analytics showed "checkout abandonment" as a metric, but that metric had always been high, so it read as normal. The problem was invisible because nobody walked the path their users walked.
That's what Layer 1 is about.
Why this layer exists
If you read the framework post, you know I think about CRO as a three-layer pyramid, done in order. Layer 1 — fix what's broken — is the foundation. And it's the layer most teams skip entirely.
I get it. Fixing broken things doesn't feel like optimization. It doesn't produce a case study. You can't present "I found a form that didn't work and fixed it" in a quarterly review and expect applause. But here's the math that should change your mind: a bug that affects the checkout page hits 100% of the traffic that reaches it. Every single session. Every single day. No A/B test you'll ever run has that kind of reach.
Most CRO programs jump straight to testing because testing feels like progress. There's a backlog, a velocity, a win rate. But running experiments on a broken funnel is like testing paint colors on a house with a cracked foundation. You're measuring the wrong thing.
Layer 1 isn't optimization. It's the prerequisite for optimization.
The audit I actually run
I've done this enough times that I have a repeatable process. It's not a downloaded checklist — it's the checklist that formed by doing the work, over and over, at different companies with different stacks. Here's what it looks like.
Technical audit
Walk the funnel yourself. Not a synthetic test. You, on your phone, on your laptop, on a slow connection. With an ad blocker. Without an ad blocker. On Chrome, Safari, Firefox, and yes, Edge — because some of your customers use Edge and you don't get to decide otherwise.
What I'm looking for:
- Page speed. Not just the Lighthouse score — the actual experience. Does the page feel slow? Does the CTA appear above the fold before the user scrolls away? A 4-second load time on a landing page is a silent funnel killer, and your traffic sources won't tell you about it.
- Console errors. Open dev tools on every key page. JavaScript errors that don't crash the page still break things — a silent error on form validation means the form looks fine but never submits. I've seen this more times than I'd like to admit.
- Broken links and 404s. Not just "does the link go somewhere" but "does it go where it should." And check HTTP status codes — 404 pages returning 200 are invisible to monitoring tools, which means nobody knows they're broken.
- Mobile rendering. Don't just check "does it look okay." Tap every button. Fill every field. Does the keyboard cover the input? Does the fixed header overlap the form? Do dropdown menus work with touch?
- Cross-browser behavior. Safari handles form autofill differently than Chrome. Firefox renders certain CSS differently. These aren't edge cases — they're significant chunks of your user base behaving differently than you think.
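Parts of this audit automate well. Here's a minimal Python sketch that checks whether key URLs return the status code they should, which catches both broken links and the soft-404 problem (error pages served with a 200 that monitoring never sees). The URLs and expected codes below are placeholders; swap in your own funnel pages.

```python
# Minimal status-code audit: catches broken links and soft 404s
# (error pages served with a 200). URLs here are hypothetical placeholders.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def fetch_status(url: str) -> int:
    """Final HTTP status for a URL, following redirects."""
    req = Request(url, headers={"User-Agent": "funnel-audit/1.0"})
    try:
        with urlopen(req, timeout=10) as resp:
            return resp.status
    except HTTPError as e:
        return e.code

def audit(expected: dict, actual: dict) -> list:
    """List every URL whose real status differs from what we expect."""
    return [
        f"{url}: expected {want}, got {actual.get(url)}"
        for url, want in expected.items()
        if actual.get(url) != want
    ]

expected = {
    "https://example.com/checkout": 200,              # key funnel page
    "https://example.com/retired-landing-page": 404,  # should be a real 404
}

if __name__ == "__main__":
    try:
        actual = {url: fetch_status(url) for url in expected}
        for failure in audit(expected, actual):
            print("FAIL", failure)
    except OSError:
        print("network unavailable; run against your own site")
```

Run it on a schedule against every retired URL and every key funnel page, and the invisible-404 problem stops being invisible.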
Analytics hygiene
This one scares people, because the implication is uncomfortable: the data you've been making decisions on might be wrong.
Check every conversion event. Fire it manually and verify it appears in your analytics tool. Check that the value is correct. Check that it's not double-firing. Check that it fires on the thank-you page and not on the form page. Check that your UTM parameters carry through the entire funnel and don't get stripped at a redirect.
I once audited a company's analytics and found their conversion tracking was undercounting by roughly 40%. A tag manager update three months earlier had broken the event firing on one of their two checkout paths. The "good" path still tracked correctly, so the overall numbers looked plausible — just low. They'd spent an entire quarter optimizing based on data that was fundamentally wrong. Every decision, every "insight," every test result — contaminated.
If your data is broken, your optimization is fiction. Full stop.
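The defense against this failure mode is boring: reconcile analytics counts against a source of truth (backend orders, CRM records) on a schedule. A minimal sketch, with hypothetical numbers and an assumed 5% tolerance:

```python
# Daily reconciliation sketch: compare conversions reported by analytics
# against orders in the backend (the source of truth). The data shapes and
# the 5% tolerance are assumptions -- adapt to your own exports.
def reconcile(analytics: dict, backend: dict, tolerance: float = 0.05) -> list:
    """Flag days where analytics disagrees with the backend by more than tolerance."""
    alerts = []
    for day, true_count in backend.items():
        tracked = analytics.get(day, 0)
        if true_count and abs(tracked - true_count) / true_count > tolerance:
            alerts.append(f"{day}: backend={true_count}, analytics={tracked}")
    return alerts

# Hypothetical exports: a tag-manager break on one checkout path
# shows up immediately as a large gap on the second day.
analytics = {"2024-03-01": 118, "2024-03-02": 71}
backend   = {"2024-03-01": 120, "2024-03-02": 119}
print(reconcile(analytics, backend))
```

A check like this takes an afternoon to wire up and would have caught the undercount above within a day instead of a quarter.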
User path walkthrough
This is the one that sounds obvious and almost nobody does thoroughly.
Complete every conversion action on your site. Every form. Every checkout. Every signup flow. On every device you can get your hands on. Use real data where you can — not "test@test.com" but a real email, a real phone number, a real address. You'd be surprised how many forms choke on international phone formats or addresses with apartment numbers.
Don't just check that it works. Check what happens after. Does the confirmation email arrive? Does it arrive quickly? Does the confirmation page actually confirm anything, or does it leave the user wondering if it went through? The post-conversion experience is still part of the funnel — a confusing confirmation creates support tickets and buyer's remorse.
How to prioritize what you find
You will find more broken things than you expected. You always do. The question is: what do you fix first?
I use a simple formula: severity × traffic × proximity to conversion.
- Severity: Does it block the action entirely, or just make it harder? A hidden submit button is worse than a slow-loading image.
- Traffic: How many people hit this page? A bug on the checkout page matters more than a bug on the about page, even if the about page bug is more visually obvious.
- Proximity to conversion: How close is this to the money? Problems near the end of the funnel are more expensive per occurrence than problems at the top, because every user who reaches them has already survived every other drop-off point.
Multiply those three, rank the list, and start at the top. Don't over-engineer the scoring — a rough estimate is fine. The point is to avoid the trap of fixing whatever you noticed most recently instead of whatever matters most.
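The scoring really can be this rough. A sketch of the formula in Python, with made-up issues and 1-to-3 scales for severity and proximity (the scales and numbers are assumptions, not a standard):

```python
# Rough priority scoring: severity x traffic x proximity.
# The issues, traffic figures, and 1-3 scales are hypothetical examples.
issues = [
    # (name, severity 1-3, daily traffic, proximity to conversion 1-3)
    ("Submit button hidden on mobile checkout", 3, 4_000, 3),
    ("Slow hero image on blog landing page",    1, 9_000, 1),
    ("Broken FAQ link on pricing page",         2, 2_500, 2),
]

# Multiply the three factors and sort descending: highest score gets fixed first.
ranked = sorted(issues, key=lambda i: i[1] * i[2] * i[3], reverse=True)
for name, sev, traffic, prox in ranked:
    print(f"{sev * traffic * prox:>7}  {name}")
```

Note how the blog page loses despite having the most traffic: the multiplication is doing exactly what the prose above describes, weighting proximity and severity against raw volume.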
War stories from the audit trenches
Some of the worst conversion killers I've found were things nobody knew existed. A few patterns come up again and again.
The CSS overlap. A mobile checkout where the "Complete Purchase" button sat right behind a sticky promotional banner on screens between 375px and 414px wide — basically every iPhone. Desktop worked fine. Tablet worked fine. The exact devices that made up 40%+ of mobile checkout traffic couldn't submit the form. The fix was one line of CSS. The lift in mobile checkout conversion was double-digit.
The silent Safari error. A lead generation form that used a JavaScript method Safari didn't fully support at the time. Chrome, Firefox — no issues. Safari users saw a form that looked perfectly normal, filled it out, hit submit, and... nothing. No error message, no loading spinner, no feedback at all. The form just sat there. These users made up roughly 25% of the site's traffic. Every single one of them was being silently rejected.
The invisible 404s. A site where a URL restructure had left dozens of old pages returning 200 status codes with the homepage content instead of proper 404s. From a user perspective, clicking a link just showed the homepage — confusing, but not catastrophic. From an SEO and analytics perspective, it was a disaster. Search engines were indexing duplicate content. Internal link equity was leaking everywhere. Analytics showed inflated homepage traffic that was actually error traffic. Nobody noticed because the monitoring tools only check for 4xx and 5xx status codes.
The 40% undercount. Analytics showed a steady conversion rate that the team was optimizing against. Turned out, one of the two checkout paths had stopped firing conversion events after a tag manager update. The real conversion rate was roughly 40% higher than what the dashboard showed. Every optimization decision for the past quarter had been based on a denominator that was wrong. Tests that "won" might not have. Tests that "lost" might have been winners.
Each of these was invisible from a dashboard. Each of them was obvious to anyone who actually walked the funnel.
When Layer 1 is "done enough"
Perfection is a trap. You're not trying to build a flawless site. You're trying to reach a state where a real person can walk the entire funnel on any device without hitting something that blocks or confuses them.
My test: I hand my phone to someone who's never seen the site. I ask them to complete the primary conversion action. If they can do it without asking me a question or encountering something broken, Layer 1 is done enough.
"Done enough" means:
- No conversion-blocking bugs on any major device/browser
- All conversion events firing correctly and reconcilable against backend data
- Page speed on key pages under a reasonable threshold (I aim for under 3 seconds on mobile, but the specific number matters less than "not noticeably slow")
- No confusing dead ends in the funnel
It doesn't mean every page is fast. It doesn't mean every edge case is handled. It doesn't mean the design is good. It means the plumbing works. That's Layer 1.
The hidden value nobody talks about
The obvious value of Layer 1 is the conversion lift. In my experience, fixing what's broken accounts for 5-15% improvement — sometimes more if the problems have been compounding for a while. That's real revenue, recovered by fixing things that should have worked in the first place.
But the less obvious value is what it does for everything that comes after.
Layer 2 (removing friction) requires you to study user behavior — session recordings, funnel analysis, form analytics. If your funnel is broken, that behavioral data is contaminated. You're watching users struggle with bugs and interpreting it as "friction." You're measuring drop-off rates that include error-induced abandonment and treating them as behavioral signals. Your entire understanding of how users move through the funnel is distorted.
Layer 3 (testing) requires clean data and consistent traffic flow to reach statistical significance. If some percentage of your traffic is silently failing due to technical issues, your sample sizes are effectively smaller than you think. Tests take longer to reach significance. Results are noisier. The math doesn't work as well because the inputs are dirty.
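You can put numbers on this. Using the standard two-proportion sample-size approximation (95% confidence, 80% power), here's a sketch of what a hypothetical 20% silent-failure rate does to the sessions needed to detect a 10% lift on a 5% baseline — all four figures are illustrative assumptions:

```python
# Sample-size inflation from silent technical failures -- an illustrative
# sketch. Uses the textbook two-proportion z-test approximation.
from math import ceil

def sample_size(p1: float, p2: float,
                z_alpha: float = 1.96,   # two-sided 95% confidence
                z_beta: float = 0.84) -> int:  # ~80% power
    """Approximate per-arm sample size to distinguish rates p1 and p2."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

base, lift = 0.05, 1.10   # 5% baseline conversion, testing for a 10% lift
clean = sample_size(base, base * lift)

# If 20% of sessions silently fail (hypothetical), both arms' observed
# rates shrink, the absolute effect shrinks, and required n grows.
fail_rate = 0.20
degraded = sample_size(base * (1 - fail_rate), base * lift * (1 - fail_rate))

print(f"per-arm sessions needed: {clean} clean vs {degraded} with failures")
```

The degraded funnel needs meaningfully more sessions per arm for the same test, which is exactly the "tests take longer, results are noisier" cost described above.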
Fixing what's broken doesn't just improve conversion directly — it cleans the lens you use to see everything else. It makes Layer 2 more accurate and Layer 3 more reliable. That compounding effect is worth more than the direct lift.
Start here. Seriously.
If you're running any kind of CRO program — or thinking about starting one — do this first. Before you brainstorm test ideas. Before you set up a testing tool. Before you hire an agency. Walk your own funnel. On your phone. Right now.
You will find something broken. You always do.
Fix it. Then fix the next thing. Then check your analytics. Then walk it again. It's not glamorous. You won't write a case study about it. But it's where the money is, and it's the foundation everything else gets built on.
This is Layer 1 of the CRO pyramid. If you want the full framework, start with The CRO Framework I've Used at Every Company Since 2010.
Up next: Layer 2 — Remove Friction. Once the plumbing works, you can start studying how people actually behave — and clearing the path between them and the outcome you want.
