When Users Said “More” and the Data Said Otherwise
Improving event page conversion by reconciling what users asked for with what they actually did.
The conversion problem hiding in plain sight
Event discovery platforms live and die by registration conversion. A user can find an event, feel genuinely interested, and still leave without signing up. The event detail page is the moment that determines whether intent becomes action — and for most platforms, it’s one of the least-examined parts of the funnel.
This case study traces how I investigated a persistent conversion gap on an event detail page by running user interviews and behavioral analytics in parallel — and what I did when the two sources appeared to tell completely different stories.
Users were arriving. They weren’t registering.
Registration conversion on the event detail page was lower than expected — and lower than comparable benchmarks for the event category. Roughly 60–70% of users who landed on an event page left without registering, including many who had clicked through from a personalized recommendation or a targeted campaign. Both signals pointed to genuine, prior intent.
This wasn’t an attention or awareness problem. Users were already in the door. Something was happening on the page itself that prevented commitment, and the team needed to understand what.
“They just need more information.”
The team’s initial read was intuitive: if someone visits an event page and doesn’t register, they probably didn’t have enough to make a confident decision. Early user feedback seemed to support this — people mentioned wanting to know more about the speaker, more about the agenda, more about format and time commitment.
The proposed response was to enrich the page: expand the agenda section, surface longer speaker bios, add an FAQ. More information means more confidence means more registrations. It was a reasonable hypothesis — but it needed validation before anyone started building toward it.
Two parallel tracks: what users said vs. what they did
I ran the investigation using two methods simultaneously. On the qualitative side: twelve user interviews with a mix of recent event attendees and users who had visited event pages but not registered. Sessions focused on the decision-making moment — what they were looking for, what made them feel uncertain, what would have made them more confident earlier.
On the quantitative side: behavioral analysis of a sample of event pages across scroll depth, funnel drop-off, CTA interaction patterns, time-on-page, and device type. The goal was to understand not just what users experienced, but how far into the page they actually got.
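To make the quantitative track concrete, the rollup below is a minimal sketch of the kind of per-session aggregation involved. The `SessionLog` shape and every field name are assumptions for illustration; the real analysis ran against production analytics, not an in-memory array.

```ts
// Hypothetical per-session log; field names are illustrative.
type SessionLog = {
  device: "desktop" | "mobile";
  maxScrollDepth: number; // 0..1, furthest point reached on the page
  timeOnPageMs: number;
  clickedCta: boolean;
  registered: boolean;
};

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  if (sorted.length === 0) return NaN;
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Per-device rollup: median scroll depth, median time on page, and
// drop-off rate (sessions that neither clicked the CTA nor registered).
function summarize(sessions: SessionLog[], device: SessionLog["device"]) {
  const subset = sessions.filter((s) => s.device === device);
  return {
    device,
    medianScrollDepth: median(subset.map((s) => s.maxScrollDepth)),
    medianTimeOnPageMs: median(subset.map((s) => s.timeOnPageMs)),
    dropOffRate:
      subset.filter((s) => !s.clickedCta && !s.registered).length /
      Math.max(subset.length, 1),
  };
}
```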
Running both methods in parallel wasn’t just about thoroughness — it was the only way to catch a discrepancy if one existed. Qualitative research captures what users want and why. Behavioral data captures what users actually do. Neither alone gives you the full picture.
Five patterns from twelve conversations
Relevance before everything. Before reading anything in depth, users wanted to quickly assess whether the event was right for them. “Is this for someone at my level?” came up in multiple forms. Audience fit was the first filter — not the topic or the agenda.
Confidence had a specific shape. Users described wanting confidence before registering, but confidence didn’t mean volume of information. It meant the right information arriving fast. Several described scanning for a specific answer rather than reading the page top to bottom.
Speaker credibility, but briefly. Speaker credentials mattered, but not in the way the page assumed. Users didn’t want long bios. They wanted one or two anchoring facts: industry, role, a recognizable credential. Enough to assess quickly, not read at length.
Outcomes over agenda. What users consistently said they wanted most was clarity on what they’d walk away with. Practical takeaways and learning outcomes outweighed session titles or scheduling details by a significant margin.
“More details” often meant faster clarity. When users said they wanted more information, that phrase deserved scrutiny. In several cases, the information they said was missing was already on the page — just positioned below the fold or inside an expandable section. What they were describing wasn’t a content gap. It was a surfacing problem.
Users weren’t reading what they said they needed
The behavioral data told a sharper story:
- Median scroll depth on desktop: most users never reached the agenda, speaker bios, or FAQ.
- Median scroll depth on mobile: nearly 3 in 4 mobile users left before reaching mid-page content.
- Mobile drop-off vs. desktop: mobile sessions were also 40% shorter on average.
- Time-to-first-CTA-visibility on mobile: the registration button wasn’t in view on first load (instrumentation sketch below).
- Click-through on the “See full agenda” and “About the speaker” expandable sections: consistent with the scroll data, few users ever opened them.
- Scroll engagement on bounce sessions: users who left weren’t searching; they left quickly.
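For concreteness, this is roughly how a metric like time-to-first-CTA-visibility can be captured client-side. The element id and the `track` helper are assumptions for illustration, not the platform's actual instrumentation; any analytics client with a custom-event API would slot in the same way.

```ts
// Record how long after load the registration button first enters
// the viewport. "register-cta" is a hypothetical element id.
const pageLoadedAt = performance.now();
const cta = document.getElementById("register-cta");

if (cta) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        // On mobile this value was never zero: the CTA sat below
        // the fold on first paint.
        track("cta_first_visible", {
          msSinceLoad: performance.now() - pageLoadedAt,
        });
        observer.disconnect();
      }
    }
  });
  observer.observe(cta);
}

// Hypothetical analytics shim; swap in the real client.
function track(event: string, props: Record<string, number>) {
  console.log(event, props);
}
```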
Interviews said: users want more detail.
Data said: users aren’t reading the detail they already have.
Taken at face value, the two data sources seemed to point in opposite directions. Users said they needed more information before they felt ready to commit. Analytics showed they were leaving before they ever reached the information already available.
The resolution isn’t that one source was wrong — both were accurate. They were just describing different things. Interviews capture what users want from an ideal experience. Behavioral data captures what users do in the actual experience, including when it fails to meet them early enough.
Users who said they wanted more details weren’t asking to scroll further. They were describing an unmet need that the page wasn’t surfacing in time. They left before the content that could have answered their questions came into view.
The problem wasn’t a content gap. It was a hierarchy problem.
The core finding: users weren’t asking for more content. They were asking for faster clarity.
The event detail page was structured as a document to be read. Users were approaching it as a decision to be made. Those are different things, and they call for different information architectures.
The content that could have converted users already existed on the page — it was just positioned as if users had already decided to invest time, when in reality they were still deciding whether to invest attention. Relevance, credibility, value, and a clear call to action needed to arrive before users had a reason to leave, not after.
Don’t add more content. Reorganize what already exists.
The decision was to redesign the event detail page to be summary-first and trust-first — leading with the elements that enable fast decision-making rather than assuming users would scroll to find them.
This was a deliberate tradeoff. The instinctive response to “users want more information” is to add more sections. The data-informed response was to surface the highest-confidence-building content above the fold, before users had a reason to leave. Adding more content would have extended the decision path and increased cognitive load. Reorganizing gives users a shorter, cleaner path that doesn’t require them to do the work of surfacing what’s relevant themselves.
Redesigning the information hierarchy
The redesigned page structure treats two sections of the page as serving two different user states: users still deciding whether this is worth their time, and users who’ve already decided yes and want to go deeper. A rough component sketch follows the lists below.
Above the fold, for users still deciding:
- Event title + one-sentence value prop — not what it is, but why it matters to the target audience
- “Who this is for” — a plain-language audience signal, not a long paragraph
- Date / time / format / duration — in a scannable metadata row
- 3–5 key takeaways — written as outcomes, not session titles
- Speaker trust signal — photo, name, title, one-line credential
- Social proof indicator — registrant count or peer interest
- Sticky registration card — one CTA, always visible on scroll
Below the fold, for users who’ve decided yes:
- Full agenda — for users who want session-level detail
- Complete speaker biography — for users who want full background
- FAQ — for users with specific logistics questions
- Additional event context — format notes, recording policy, prerequisites
Three principles held the reorganization together:
- Every piece of existing content stays on the page — nothing removed, only reordered
- The page earns deeper engagement by satisfying the surface decision first
- Mobile receives its own treatment, not a responsive copy of the desktop layout
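As a sketch only: the hierarchy above, expressed as component order in a React page. Every component here is a stub and every name is illustrative; the point is the sequence and the always-visible CTA, not any particular implementation.

```tsx
import React from "react";

// Placeholder for a real section component; labels below are illustrative.
const Stub = ({ label }: { label: string }) => <section>{label}</section>;

export function EventDetailPage() {
  return (
    <main>
      {/* Decision stage: answers "is this for me?" before anything else */}
      <Stub label="Event title + one-sentence value prop" />
      <Stub label="Who this is for" />
      <Stub label="Date / time / format / duration" />
      <Stub label="3-5 key takeaways, written as outcomes" />
      <Stub label="Speaker trust signal: photo, name, one-line credential" />
      <Stub label="Social proof: registrant count or peer interest" />

      {/* One CTA, kept in view on scroll (e.g. CSS position: sticky) */}
      <Stub label="Sticky registration card" />

      {/* Depth stage: for users who have already decided yes */}
      <Stub label="Full agenda" />
      <Stub label="Complete speaker biography" />
      <Stub label="FAQ" />
      <Stub label="Additional event context" />
    </main>
  );
}
```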
What we’d measure — and why clicks alone would mislead
- Registration conversion rate on the event detail page
- CTA click-through rate (visibility vs. intent)
- Scroll depth relative to CTA position
- Event save and share rate
- No-show rate (are we attracting the right attendees?)
- Registration completion quality
- Post-event satisfaction signal
Why clicks alone would mislead:
- A fast registration also means a short session — bounce rate conflates two different behaviors
- CTA visibility is a prerequisite for a click; without scroll context, causality is unclear (the sketch below makes the distinction concrete)
- More signups only matter if they produce genuine attendance, not just registrations
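One way to keep visibility and intent separable is to compute click-through twice, once raw and once conditioned on the CTA having actually entered the viewport. A minimal sketch, assuming a hypothetical session schema:

```ts
type Session = {
  ctaVisible: boolean; // did the registration button ever enter the viewport?
  ctaClicked: boolean;
};

function clickThrough(sessions: Session[]) {
  const rate = (subset: Session[]) =>
    subset.length === 0
      ? 0
      : subset.filter((s) => s.ctaClicked).length / subset.length;

  return {
    // Naive CTR: penalizes the page for sessions where the CTA
    // was never on screen at all.
    raw: rate(sessions),
    // Visibility-adjusted CTR: intent among users who actually
    // saw the button, so the two effects stay separable.
    givenVisible: rate(sessions.filter((s) => s.ctaVisible)),
  };
}
```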
The tensions worth naming explicitly
None of these tradeoffs resolve themselves: reorganizing instead of enriching, optimizing for conversion volume versus attendance quality, serving fast deciders without starving users who want depth. Naming them clearly is part of the product work; they shape whether the redesign ships as the default template or as an opt-in format, and they inform what the A/B test is actually designed to learn.
The insight only visible when you use both sources
The most durable lesson from this project is that user interviews and behavioral data are not competing sources of truth. They’re complementary — and you need both to understand a product problem accurately.
Interviews told me what users were trying to accomplish and where they felt uncertain. They explained the motivation behind behavior. But interviews can’t tell you what users actually do when left alone with the product. People describe idealized versions of their own decision-making, and the gap between what they say and what they do is often exactly where the design problem lives.
Behavioral data told me what users actually did: how far they scrolled, where they dropped off, which elements they interacted with. It revealed that the information users said they were missing was already on the page — they just weren’t reaching it. Without the quantitative signal, I might have built exactly the wrong thing: more content, more sections, more of what wasn’t working.
Relying on one source alone would have produced a confident but wrong decision. Interviews alone send you toward a longer, richer page. Analytics alone might suggest the problem is the event type or the page length — without explaining why users were leaving or what they were actually missing.
The insight that users want faster clarity, not more content, only becomes visible when both sources are read together and the gap between them is taken seriously. That gap isn’t a contradiction to explain away. It’s where the actual product problem lives — and it’s the thing worth building toward.