Reporting That Gets Used: Start From the Decision

The difference between reporting that gets adopted and reporting that gets ignored is usually upstream of the dashboard itself.

Last year we inherited campaign reporting from a prior project, covering paid media, email, onsite, and events. Four channels, several ad platforms, a CDP in the middle, and a marketing team that needed to see campaign performance in one place. The first version answered most of the questions people were asking. What we wanted next was reporting that tied those answers directly to the decisions the team was trying to make, so the numbers could be acted on without a follow-up query.

Start from the decision, work backwards

Good reporting starts from the questions stakeholders are asking and the actions they’re trying to take, then works backwards to find the information gaps and surface what’s needed to close them.

When we rebuilt that reporting, we anchored on specific decisions the marketing team was making. Which channels to invest more in for the next ABM campaign. Which audiences were converting. When to switch creative. We consolidated the multi-channel data into one place, but more importantly we organized it around those decisions rather than around the data sources. Adoption followed, because the reporting sat close enough to the action that people could act on it without asking anyone for more.

A sales team making weekly pipeline decisions needs something different from a marketing team doing quarterly budget planning. The data might overlap, since both care about revenue and conversion rates, but the grain, the filters, the time horizon, and the frequency of use are different. One dashboard trying to serve both usually serves neither well.

Data quality is upstream of adoption

Dashboards built on top of inconsistent source data don’t get used, regardless of how well they’re designed.

On that same project, a chunk of the sales pipeline data had SKUs that didn’t match the source catalog. Between twenty-five and thirty percent of orders were getting entered with SKUs that didn’t exist, because the order entry workflow was hard to use. The reporting downstream was technically working, but the numbers looked wrong to anyone who knew the catalog, and trust went with them.
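A check like that is cheap to run before any dashboard work starts. Here’s a minimal sketch in pandas, assuming a flat orders extract and a canonical catalog that share a sku column; the file names and columns are hypothetical:

```python
import pandas as pd

# Hypothetical extracts: one row per order line, plus the canonical catalog.
orders = pd.read_csv("orders.csv")      # expects a "sku" column
catalog = pd.read_csv("catalog.csv")    # expects a "sku" column

valid_skus = set(catalog["sku"])
orders["sku_valid"] = orders["sku"].isin(valid_skus)

# Share of orders referencing SKUs that don't exist in the catalog.
orphan_rate = 1 - orders["sku_valid"].mean()
print(f"Orders with unknown SKUs: {orphan_rate:.1%}")

# The most frequent bad values usually point at one specific workflow problem.
print(orders.loc[~orders["sku_valid"], "sku"].value_counts().head(10))
```

The frequency count at the end is the useful part: a handful of repeated bad SKUs usually means one broken step in the entry workflow, not dozens of independent typos.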

The fix wasn’t in the dashboard layer. It was making the order entry workflow easier so the source data came in clean the first time. Once that upstream work was done, the reporting started getting used, because the numbers matched what people expected to see.

This pattern shows up across a lot of engagements. Native reporting in a CDP is good for quick checks, but teams end up exporting to BigQuery for the real analysis. An advertiser-facing portal has the right data but lives in a tool nobody wants to open. Most of the time the dashboard itself is fine. The data quality, the tool choice, and the workflow upstream of it are what determine whether anyone uses it.

The best reporting isn’t always a dashboard

Some of the highest-adoption reporting I’ve helped build wasn’t a dashboard at all. A weekly CSV export delivered to a Slack channel. A summary email that lands on Monday morning. An office hours session where someone with context can answer audience questions live.
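The CSV-to-Slack version is a small amount of glue. A minimal sketch, assuming a weekly metrics extract and a Slack incoming webhook; the webhook URL, file names, and columns are all hypothetical:

```python
import pandas as pd
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."  # hypothetical webhook

# Hypothetical weekly extract; in practice this comes from the warehouse.
weekly = pd.read_csv("weekly_campaign_metrics.csv")
weekly.to_csv("campaign_report.csv", index=False)  # the file people actually open

# Post a short summary so the numbers land where the conversation already is.
top = weekly.sort_values("conversions", ascending=False).head(3)
lines = [f"{row.campaign}: {row.conversions} conversions" for row in top.itertuples()]
requests.post(
    SLACK_WEBHOOK_URL,
    json={"text": "Weekly campaign report:\n" + "\n".join(lines)},
    timeout=10,
)
```

Schedule it with cron or whatever orchestrator is already running; the point is the delivery channel, not the tooling.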

We did that on a different engagement. Instead of building one more dashboard, we moved the weekly audience review into an office hours model where someone from the team facilitated and answered questions in the moment. People got their answers faster, they understood the reasoning behind the numbers, and the knowledge got distributed across the team instead of sitting with one analyst.

What helps reporting land

The reporting that gets sustained usage tends to share a few qualities.

It answers a specific question a specific team is asking on a specific cadence. Not “here is all the data about our email campaigns,” but “here’s whether this week’s email performance is tracking with last week, and what changed.” (A sketch of that check follows these qualities.)

It’s tied to a real meeting or decision point. If a report is part of how a team runs their Monday standup or their weekly pipeline review, it gets used every week. If it’s something people can optionally look at, it tends to get used once and forgotten.

It’s built with the heaviest user in mind. Senior stakeholders request reporting but often aren’t the ones using it daily. The person who will live in the data is usually lower on the org chart, and building for them first produces something more durable.

And it sits on top of data people trust. When the source numbers are clean and the definitions are agreed, people use the reporting. When they aren’t, polish on the dashboard doesn’t change much.
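That week-over-week email check from the first quality is only a few lines. A minimal sketch, assuming a daily metrics extract; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical daily email metrics: date, sends, opens, clicks, conversions.
df = pd.read_csv("email_daily.csv", parse_dates=["date"])
df["week"] = df["date"].dt.to_period("W")

weekly = df.groupby("week")[["sends", "opens", "clicks", "conversions"]].sum()
weekly["open_rate"] = weekly["opens"] / weekly["sends"]

# The question the team is actually asking: is this week tracking with last week?
this_week, last_week = weekly.iloc[-1], weekly.iloc[-2]
print(weekly.tail(2))
print("\nWeek-over-week change:")
print(this_week - last_week)
```

The output is deliberately narrow: two rows and a delta, not every metric the pipeline can produce.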

Most of the reporting work I’ve done has been closer to a product design problem than a data modeling one. Figure out what someone is trying to decide, build something that fits exactly that, clean up whatever upstream data issue is blocking trust, and deliver it where and when the decision gets made. Do that, and the adoption tends to take care of itself.