They'll always find a way to dress it up. The ROAS is up, the cost per click is down, or the reach is impressive. And yet something doesn't add up. You don't notice it immediately in your revenue, but your gut tells you something different from what the agency's report says. The growth they promised isn't materialising. When you ask questions, you wait a while for an answer. And when you finally get one, it comes wrapped in buzzwords and good vibes. So that's fine then, right?
I wrote this article as a reference for people who need to scrutinise digital agencies but don't always understand every detail themselves. But also as a foot in the door for when you need a second opinion on your marketing reports, whether they come from an agency or in-house.
This article goes through the most commonly used metrics in digital marketing, KPI by KPI. What gets measured, how a digital marketer can put a positive spin on it, and how to question each specific KPI. Some theory, but mostly what I've encountered in practice.
It's easier to show what seems to be working than to hide what isn't. That's also the shortest route to success for the marketing agency, but not necessarily for your business.
"Almost every metric in a marketing report is manipulable. The system has never required anyone to ask the difficult questions."
ROAS: Return On Advertising Spend
What it measures: the ratio between what you spend and the revenue that comes out of it.
How it gets manipulated:
Branded campaigns are included in the numbers. Branded campaigns — campaigns that bid on search queries containing your own brand name — almost always perform brilliantly. The people searching for your brand name are already interested. The chance they'll convert is high, costs are low, and the ROAS looks spectacular. But the question that rarely gets asked: would those people have found you anyway through organic search, without that campaign? Probably yes. What you're measuring is largely revenue you would have had anyway. When branded and non-branded campaigns are reported together without a split, the actual performance of your acquisition campaigns becomes invisible.
The attribution window is set too short or too long. Google Ads measures conversions within a window you set yourself. An agency that sets a 7-day window for a product with a 30-day purchase process is reporting a fraction of the actual impact. Conversely, a window that's too long can attribute conversions to campaigns that had little to do with them.
The attribution model in external reports differs from what the platform uses. Google Ads now uses data-driven attribution as the default for accounts with sufficient data. But many agencies report via Looker Studio, their own dashboards or GA4 reports where last-click is still the most common setting. Last-click gives full conversion credit to the last channel before the purchase, meaning organic or direct traffic sometimes steals credit from email campaigns, SEA and especially Social ads. Always ask which model is used in the report you receive, because that determines what you see.
Platform data isn't checked against actual revenue. Google Ads looks at what happens after the date of a click. You usually can't directly compare those numbers to what happened in your till that same week. On top of that, the concept of a 'conversion' is sometimes different from a purchase. It might be a lead, or someone who started a payment. If that doesn't translate into revenue, it's not worth much to you. An agency that forwards only platform numbers without comparing them to your actual revenue is showing you a reflection of reality, not reality itself.
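The branded-campaign effect from the points above fits in a few lines of arithmetic. The campaign names and numbers below are invented for illustration; the mechanism is what matters: a blended ROAS can look healthy while the acquisition campaigns underneath it do not.

```python
# Hypothetical campaign export (assumed numbers, not real benchmarks):
# (name, ad spend in EUR, attributed revenue in EUR, is_branded)
campaigns = [
    ("brand-exact",       500,  9_000, True),
    ("non-brand-search", 4_000, 10_000, False),
    ("shopping-cold",    2_500,  4_500, False),
]

def roas(rows):
    # ROAS = attributed revenue / ad spend
    spend = sum(r[1] for r in rows)
    revenue = sum(r[2] for r in rows)
    return revenue / spend

blended = roas(campaigns)
non_branded = roas([r for r in campaigns if not r[3]])

print(f"blended ROAS:  {blended:.2f}")      # looks healthy
print(f"excl. branded: {non_branded:.2f}")  # the acquisition reality
```

In this sketch the blended ROAS is roughly 3.4, but strip out the branded campaign and the acquisition campaigns come in around 2.2. That gap is exactly what an unsplit report hides.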
The question you ask: "Can you show the ROAS excluding branded campaigns? Which attribution model are you using in this report? Which conversions did you measure exactly? Is every conversion worth the same?"
Conversion rate
What it measures: the percentage of visitors who complete a desired action — a purchase, an enquiry, a sign-up.
How it gets manipulated:
Benchmarks used as a sales pitch. "Your conversion rate is 1.2% and the benchmark for your sector is 2.5%" sounds like a diagnosis but isn't an analysis. A benchmark says nothing about your traffic mix, your price point, your purchase process or your target audience. A webshop selling expensive, complex products to a niche audience inherently has a lower conversion rate than a webshop with impulse purchases at twenty euros. That benchmark justifies a CRO project, but it solves a problem that might not exist at all.
Traffic mix is ignored. When campaigns reach new, cold audiences — people who don't know your brand yet, top-of-funnel visitors who are comparing — the conversion rate drops. That's not a problem, that's logic. More cold visitors on your site automatically means a lower percentage that converts immediately. An agency that leaves out that context and presents a falling conversion rate as proof that the website needs fixing is turning the reasoning on its head.
Micro-conversions are used to hide macro-results. An agency that can't report purchases reports newsletter sign-ups, page visits or video duration. These can be legitimate intermediate steps, but when they become the primary metric while the actual objective — revenue, leads, bookings — falls short, they serve as a smokescreen.
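The traffic-mix point is easy to check with a blended-rate calculation. The segment sizes and rates below are assumptions, chosen only to show the mechanism: each segment converts exactly as before, yet the overall rate falls because the mix shifted toward cold traffic.

```python
# (visitors, conversion rate) per traffic segment -- illustrative numbers
def blended_rate(segments):
    visitors = sum(v for v, _ in segments)
    conversions = sum(v * r for v, r in segments)
    return conversions / visitors

before = [(8_000, 0.030), (2_000, 0.005)]   # mostly warm traffic
after  = [(8_000, 0.030), (12_000, 0.005)]  # campaigns added cold traffic

print(f"before: {blended_rate(before):.2%}")
print(f"after:  {blended_rate(after):.2%}")
```

In this sketch the rate falls from 2.5% to 1.5% while total conversions rise from 250 to 300: more business, lower percentage. A report that shows only the percentage reads this as decline.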
The question you ask: "Based on what data do you conclude our conversion rate is too low? What's the traffic mix of visitors who convert versus those who don't?"
CTR
What it measures: the percentage of people who click on an ad after seeing it.
How it gets manipulated:
High CTR without intent. A CTR of 8% looks impressive. But if those clicks come from a broad audience that clicks out of curiosity but buys nothing, a high CTR is worthless. CTR measures engagement with the ad, not the quality of the traffic behind it. A provocative ad attracts clicks, but that says nothing about whether those people are buyers.
CTR is used to compensate for poor conversion numbers. When conversions disappoint but CTR is high, the focus shifts to "the ads are performing well, the problem is with the landing page." Maybe that's true. But it can also be a way of placing responsibility for the result outside the agency.
Average CTR hides what's actually happening. A campaign with ten ad groups can report a perfectly decent average CTR while two groups sit at 15% and eight groups at 1%. The two strong groups pull the average up. The question is which groups are actually converting.
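A quick calculation with made-up ad-group numbers (pure assumptions, chosen to make the spread visible) shows how the account-level average masks the distribution underneath:

```python
# Hypothetical ad groups: (impressions, clicks); CTR = clicks / impressions
groups = [(20_000, 3_000)] * 2 + [(10_000, 100)] * 8

impressions = sum(i for i, _ in groups)
clicks = sum(c for _, c in groups)
account_ctr = clicks / impressions          # what the report shows
per_group = [c / i for i, c in groups]      # what it hides

print(f"account CTR: {account_ctr:.1%}")
print("group CTRs: ", [f"{r:.0%}" for r in per_group])
```

Here the account CTR lands around 5.7% while eight of the ten groups sit at 1%. The average is not wrong, it just answers a different question than "where does the performance come from?".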
The question you ask: "What's the correlation between CTR and conversion rate per ad group? Which clicks actually lead to results?"
Cost per click and CPM
What it measures: CPC is what you pay per click. CPM is what you pay per thousand impressions.
How it gets manipulated:
Low costs are sold as efficiency. "Our CPC has dropped from €1.80 to €0.90" sounds like good news. But if those cheaper clicks come from a broad, less relevant audience that doesn't convert, a lower CPC is a deterioration, not an improvement. Costs say nothing without quality.
Cheap reach is easy to buy. Broad audiences, low bids, little filtering. An agency that measures its success by a low CPM reaches many people for little money, but those people might be completely irrelevant to your business.
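Both points above come down to the same calculation: divide spend by conversions, per segment, instead of admiring the per-click price. The segment names and numbers below are invented for illustration:

```python
# Illustrative segments: (name, clicks, CPC in EUR, conversions)
segments = [
    ("narrow audience", 1_000, 1.80, 40),
    ("broad audience",  4_000, 0.90,  8),
]

for name, clicks, cpc, conv in segments:
    spend = clicks * cpc
    cpa = spend / conv if conv else float("inf")  # cost per conversion
    print(f"{name}: CPC EUR {cpc:.2f}, cost/conversion EUR {cpa:.2f}")
```

In this sketch the "expensive" clicks cost EUR 45 per conversion and the "cheap" ones EUR 450. The halved CPC is a tenfold deterioration once you divide by results.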
The question you ask: "What's the cost per actual conversion, broken down by audience segment? Which segments have a low CPC but no conversions?"
Impressions and reach
What it measures: how many people saw your ad, how many unique people you reached.
How it gets manipulated:
Impressions and reach are the easiest metrics to produce and the hardest to challenge. "We reached 2.3 million people" sounds impressive. But reach without context — who are those people, did they act, are they your potential customers — is a number without meaning.
When a campaign generates no measurable conversions, reporting shifts to reach and brand awareness. These can be legitimate objectives, but only if that was agreed upfront. When reach becomes the new primary metric after conversion numbers disappoint, that's a signal.
The question you ask: "What's our objective for this campaign, reach or conversion? If it's reach: how do we measure whether that reach is actually contributing to the business?"
Quality score and campaign health
What it measures: platform-specific scores indicating how well a campaign is set up according to the platform's own criteria.
How it gets manipulated:
An agency that measures its own performance against platform scores is using a ruler the platform made for them. It makes me uneasy every time I look into an account and see they've done exactly what Google suggests — put everything on broad match and let Google do the work.
"All our campaigns have a quality score of 100%" only proves they're technically set up correctly. I've seen campaigns with a quality score of 50% that perform ten times better than campaigns with a score of 100.
The question you ask: "Aside from the quality score: what have these campaigns contributed to our revenue or leads in the past period?"
| KPI | What an agency says | What good reporting looks like |
|---|---|---|
| ROAS | "Our ROAS is 800%" | ROAS excl. branded, checked against actual revenue |
| Conversion rate | "The benchmark is 2.5%, you're at 1.2%" | Conversion rate per traffic source and funnel stage |
| CTR | "Our CTR is 8%, the ads are performing great" | CTR linked to conversion rate per ad group |
| CPC / CPM | "CPC dropped from €1.80 to €0.90" | Cost per actual conversion per segment |
| Reach | "We reached 2.3 million people" | Reach with context: who, and what did they do? |
| Quality score | "All campaigns are green" | Contribution to revenue or leads, regardless of platform scores |
| Guarantees | "We'll double your conversion rate" | Transparent approach, honest about what's possible and what isn't |
What your agency should never promise you
There's another pattern I regularly encounter that bothers me more than bad reporting: agencies making promises they can't possibly keep.
A concrete example I've seen more than once. An agency promises to raise the conversion rate from 0.5% to 1% and presents that as doubling revenue. That sounds logical, but it's wrong in two ways.
First, the reasoning itself is too simplistic. Conversion rate isn't determined only by what's on your website, but also by what type of people you attract. You can double the rate simply by adjusting your marketing mix to keep only the most purchase-ready visitors, while your potential at the top of the funnel shrinks. I wrote about this in more detail in the piece on conversion rate and what it does and doesn't tell you.
Second, the most common need is often to work at the top of the marketing funnel. Where are the potential customers who don't yet know your business or solution? To reach them you need to target non-branded keywords, address new audiences, attract colder traffic. And cold traffic converts less quickly. What actually happens in practice when the agency does its job well: the conversion rate drops. Then the agency is stuck with a promise they've undermined by working effectively.
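The flaw in the "doubling" promise is easy to put into numbers. The order value, traffic volumes and rates below are assumptions, chosen only to show the mechanism: a doubled conversion rate achieved by cutting away cold traffic can leave revenue flat or worse.

```python
# Back-of-the-envelope check on "0.5% -> 1% conversion rate doubles revenue".
order_value = 120  # EUR, assumed average order value

# today: broad traffic, low rate
visitors, rate = 40_000, 0.005
revenue_now = visitors * rate * order_value       # 200 conversions

# after "optimising": rate doubled, but cold traffic cut away
visitors2, rate2 = 18_000, 0.010
revenue_after = visitors2 * rate2 * order_value   # 180 conversions

print(f"now:   EUR {revenue_now:,.0f}")
print(f"after: EUR {revenue_after:,.0f}")
```

Revenue only doubles if traffic holds steady while the rate doubles, and that is precisely the assumption the promise quietly makes and the work quietly breaks.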
I always say: marketing isn't an exact science, but it's a kind of philosophy. And that applies to every aspect. The focus here is on SEA, but it applies equally to SEO and Paid Social ads. An agency that guarantees results is lying, consciously or not. What an agency can promise: an approach, transparent reporting and honesty about what works and what doesn't.
An agency that never brings bad news, that explains away every disappointing result with external factors, that every month finds some metric that's gone up — that agency is covering for itself, not for you.
"Every report that contains only good news was written to reassure you, not to inform you."
Something else I think matters: the onboarding
How a marketing agency presents itself during onboarding matters. How an agency behaves before the assignment says more than all the reports afterwards.
An agency that at the first meeting immediately starts talking about campaign structures, budgets and platforms, but hasn't asked what makes your business unique, who your ideal customer is, why people buy from you and not a competitor — that agency is going to sell generic solutions.
Marketing that works is marketing that starts from what distinguishes a business. Make sure during the onboarding process to ask how they're going to leverage your USP in your campaigns, and how they'll show that reflected in the results.
Also look in the mirror
So far this article has been about what an agency does wrong. But being honest also means I address the other side, because a collaboration works both ways.
Only in very rare cases can an agency make enormous differences within a few days. Not because they're not good, but because marketing doesn't work that way. If the time between a first point of contact and an actual purchase at your business averages six weeks, a two-week campaign simply hasn't had that time. I wrote about this in more detail in the piece on attribution windows and campaign evaluation.
The same applies to the context you provide as a client. If you hire an agency to grow non-branded traffic, but you don't tell them which search terms are relevant to your audience, which products or services you want to push, or what type of customer you want to attract — they'll fill that in themselves. And that input will be generic, because they don't know your business as well as you do. I understand that as a client it's sometimes hard to give more direction on this, which is why I think it's essential that whoever does the work properly understands the business, the objectives and the product.
When your gut tells you the approach is too generic
Sometimes you just know. It feels like the agency is telling the same story to every client, making the same recommendations, using the same structure. That feeling is right more often than you'd think. But gut feeling is hard to defend in a meeting. Here are a few concrete signals.
They talk about their approach before they understand you. An agency that in the first meeting already starts talking about campaign structures, platforms and budgets, but hasn't yet asked what distinguishes your business, is working from a template.
The report could have been from another client. If you replaced the logo on the report with a competitor's, would the report still make sense? If the answer is yes, the report says nothing about your specific situation.
They cite benchmarks without context. Benchmarks are useful as a reference, but an agency that measures its own performance against sector averages rather than your historical data or your specific objectives is thinking in categories, not in your business.
Watch how they answer when you ask a specific question. Ask one of the questions from this article. An agency that knows your situation gives an answer that refers to your data, your segments, your campaigns. An agency with a generic approach gives an answer that could have come from a blog post.
You don't need to be an analytics expert to ask these questions. You just need to expect that the numbers you receive are the numbers that move your business forward.
The checklist for your next meeting
ROAS
"Can you show the ROAS excluding branded campaigns? Which attribution model are you using in this report? Which conversions did you measure exactly? Is every conversion worth the same?"
Conversion rate
"Based on what data do you conclude our conversion rate is too low? What's the traffic mix of visitors who convert versus those who don't?"
CTR
"What's the correlation between CTR and conversion rate per ad group? Which clicks actually lead to results?"
CPC / CPM
"What's the cost per actual conversion per audience segment? Which segments have a low CPC but no conversions?"
Reach
"How do we measure whether that reach is actually contributing to the business?"
Campaign health
"Aside from the quality score: what have these campaigns contributed to our revenue or leads?"
Guarantees
"What can you guarantee, and what can't you?"
Before you sign
"What do you want to know about our business before you propose an approach?"
Do you have doubts about your marketing report?
After reading this article, do you have doubts about your marketing report? Whether it comes from an agency or in-house, I can serve as a second opinion.
Send me your question about your marketing report without any obligation, or let me take a look myself if your doubts are more general. I don't charge anything for that observation phase, so if I find nothing wrong, it costs you nothing. If I do find something, it might be the start of a good collaboration and better marketing results for your business.