TL;DR — Google Ads reports all conversions after a click, including those that would have happened without the ad. Of 1000 reported conversions, perhaps only 500 are incremental. Branded search is the clearest example. Test it yourself: turn off the campaign for two weeks.
Reading time: about 8 minutes.
Imagine your marketing agency tells you their latest Google Ads campaign delivered 1000 conversions. Good news. The campaign costs 40,000 euros, so your cost per acquisition is 40 euros. At an average customer value of 200 euros, this looks like a profitable campaign. But does this match reality?
I can already hear you thinking "yes, of course not every conversion is a sale". Sometimes a contact request or checkout start also counts as a conversion. That is a valid remark, but it is not what I am talking about here. This is not about the wrong type of conversion, or about miscounting. What I want to point out is that "1000 conversions attributed to this campaign" means something different from "1000 people bought solely because of this campaign." There is an important distinction between those two sentences.
This article is about that difference. About why it exists, what it means for your decisions, and why it is rare for someone to tell you about it explicitly. As I wrote in my previous article, most agencies work at level one: reporting what platforms say, without adding context.
What a platform actually counts
When Google Ads reports that a campaign had 1000 conversions, it means this: 1000 people made a conversion within the attribution window after clicking an ad. There is nothing inherently wrong with this measurement, and it is one of the things I look at to see whether my campaigns are running well. But it is not everything you want to know.
Because within those 1000 converting users there are three different groups that Google Ads cannot distinguish:
| User type | What happened | Incremental? |
|---|---|---|
| Would buy anyway | Searched for brand name, clicked ad instead of organic result below. Bought either way. | ❌ No |
| Convinced by ad | Still hesitating, ad pushed them over the line. Without ad: no purchase. | ✅ Yes |
| Accelerated/shifted | Would buy later or via other channel, now came in faster. Timing shifted, not necessarily new. | 🟡 Partially |
These three groups are reported in Google Ads as the same "conversion." They sit next to each other in the same number. The platform does not know which group someone belongs to, because it simply does not have the data to distinguish.
Of the 1000 conversions you get reported, perhaps only 500 are truly incremental. Then your real cost per acquisition is not 40 euros but 80 euros. At a customer value of 200 euros there is still margin, but much less than the report suggests. If the ratio is closer to 300 out of 1000, the CPA becomes 133 euros and the campaign is loss-making. How skewed the ratio actually is, you cannot tell from a standard report. What you can tell is that the chance it is skewed is high.
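To make the arithmetic above concrete, here is a minimal sketch. The incrementality rates of 50% and 30% are the illustrative assumptions from the paragraph, not measured values:

```python
def incremental_cpa(spend, reported_conversions, incrementality_rate):
    """Cost per truly incremental acquisition, given an assumed incrementality rate."""
    incremental_conversions = reported_conversions * incrementality_rate
    return spend / incremental_conversions

spend = 40_000  # euros
reported = 1000

print(round(incremental_cpa(spend, reported, 1.0)))  # naive CPA: 40
print(round(incremental_cpa(spend, reported, 0.5)))  # 80
print(round(incremental_cpa(spend, reported, 0.3)))  # 133
```

The report only ever shows you the first number. The other two depend on an incrementality rate the platform cannot give you.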
The campaign I routinely advise against
There is one type of campaign where the difference between reported and incremental can be the largest: paid branded search. Ads that run on searches for your own brand name. These campaigns look spectacular in reports. Low cost per click, high conversion rate, impressive ROAS. On paper it is the best channel in your entire marketing mix. But in practice this campaign also gives the most distorted picture, and it is the easiest one for an agency to look good with.
The reason is simple. People who search for your brand name have actively chosen your business. They type in your name, see your ad appear above the organic results, click on the top link, buy. The alternative was that they would have clicked one centimeter lower on the organic result and done the same thing.
There are scenarios where paid branded search is legitimate. If you have strong competitors bidding on your brand name, you want to be in a relevant position. If you want control over the message or the landing page. If you run specific promotions that the organic result cannot accommodate. For those cases, paid branded search has a function. But that function comes with a measurement problem: the report does not show whether the ad actually caused something new, or only intercepted traffic that was going to come anyway.
Every branded search conversion is reported in Google Ads as if the ad caused the purchase, without distinguishing between defensive bidding against a competitor and intercepting someone who was coming to you anyway.
I have had clients who intuitively sensed this before they worked with me. One client put it this way: "Google Ads is rigged, because most visitors just come in on your brand name and then the agency takes the credit." That client had no background in marketing analytics. He just saw it happening in his own numbers.
Branded search is for that reason often the dividing line between an agency that reports honestly and an agency that mainly keeps itself sellable. An agency that reports branded search separately from other Google Ads campaigns, and explains why those numbers should be read differently, deserves your trust. An agency that lumps everything together and is proud of the total ROAS, deserves a follow-up question.
How you can test this on your own campaign
You do not have to take my word for it. For a branded search campaign, you can test incrementality yourself with one simple experiment: turn off the campaign for two weeks. Then measure two things.
First: what happens to your organic traffic on your brand name? Have a quick look in Search Console. If your organic CTR and clicks suddenly go up, that is your first piece of evidence. Do you also see last-click conversions shifting toward your organic channels? Then you have two.
If your revenue and organic conversions largely stay on track, you know what you need to know. The campaign was mainly filling the report, not the business. If your revenue does drop measurably, the campaign does have incremental value. Maybe because competitors bid on your brand name and you lost traffic without the ad. Maybe because your promotion message in the ad won clicks that the organic result did not get. In that case you also know what you need to know: the campaign earns its place.
Results vary, and that is the point. In my own client work I have seen both outcomes. Companies where pausing branded search had barely any effect on actual revenue, and companies where it was significant. The answer is not universal. It is specific to your business, your competitive environment, your brand. Neither outcome is a verdict against your agency. Both give you harder material than a platform report can ever provide.
It is an experiment that requires no PhD. Only the willingness to take a small risk that the numbers might drop slightly for two weeks. Before you start, check Auction Insights to see how much competition there is for your brand name.
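A back-of-the-envelope way to read the pause is sketched below. All figures are hypothetical; in practice you would pull brand-term clicks from Search Console and conversions from your own records for the two weeks before versus during the pause:

```python
# Hypothetical two-week totals: before the pause vs. during the pause.
baseline = {"paid_brand_conversions": 70, "organic_brand_conversions": 30}
paused   = {"paid_brand_conversions": 0,  "organic_brand_conversions": 92}

# How many paid conversions did the organic result simply absorb?
absorbed = paused["organic_brand_conversions"] - baseline["organic_brand_conversions"]

# And how many disappeared entirely, i.e. were truly caused by the ad?
lost = baseline["paid_brand_conversions"] - absorbed

incrementality = lost / baseline["paid_brand_conversions"]
print(f"Estimated incrementality of paid branded search: {incrementality:.0%}")
```

In this made-up scenario, 62 of the 70 paid conversions reappear as organic conversions and only 8 are genuinely lost, which would put the campaign's incrementality around 11 percent. Your own numbers may tell a very different story, which is exactly why the test is worth running.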
One of my clients, by the way, was firmly convinced that brand searches only came from repeat customers, because they already knew the brand. But he was making a mistake more clients make: thinking that brand searchers are by definition existing customers, while awareness campaigns have exactly the goal of creating those brand searchers. New people who learn about your brand will also search for your brand name, without having converted with you yet.
What you can do as a client besides this
Two other things you can do. First: ask your agency to report branded search separately from other Google Ads campaigns. Not because branded search is bad, but because the numbers should be read differently from those of campaigns that are actually trying to spark new interest. If your agency refuses or downplays this split, you have an answer to a question you had not yet asked.
Second: put reported conversions next to your own revenue data over the same period. If the platform says you had 30 percent more conversions, but your total revenue only went up 5 percent, where did those extra conversions come from? Often the answer is that they were already in your pipeline, but are now attributed to a campaign that did not cause them. A sharper question alongside this: how many of those conversions were new customers, and how many were existing customers who would have bought anyway? Your own revenue ledger is the best incrementality signal you have available for free.
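The plausibility check above takes one line of arithmetic. A sketch with the hypothetical figures from the paragraph:

```python
# Hypothetical period-over-period numbers.
conversions_prev, conversions_now = 1000, 1300   # platform reports +30%
revenue_prev, revenue_now = 200_000, 210_000     # your own ledger shows +5%

conv_growth = conversions_now / conversions_prev - 1
rev_growth = revenue_now / revenue_prev - 1

# If the extra conversions were truly incremental, revenue should grow roughly in step.
gap = conv_growth - rev_growth
print(f"Conversion growth {conv_growth:.0%}, revenue growth {rev_growth:.0%}, gap {gap:.0%}")
```

A large gap does not prove attribution theft on its own, but it is exactly the kind of discrepancy that deserves a follow-up question to your agency.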
Bringing all data together in one place
The sharpest way to see incrementality is by connecting all business data. From first ad click to conversion, from first purchase to repeat. Google Ads only sees its own piece, your CRM only sees customers, your accounting only sees revenue. You want to see them all connected rather than guessing.
In my work I build dashboards that bring these sources together using AI, BigQuery, Supabase and other tools — this can now be done without months of development work. You then see for the first time which campaigns bring in new customers versus activate existing ones. Which channels lead to repeat customers. Which investments actually grow the business.
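The core of that split is a simple join between your ad-attributed conversions and your customer list. A minimal sketch with made-up records (the field names and emails are hypothetical; in a real setup this runs against your exports):

```python
# Hypothetical exports: ad-attributed conversions and the CRM's known customers.
ad_conversions = [
    {"email": "a@example.com", "campaign": "branded"},
    {"email": "b@example.com", "campaign": "generic"},
    {"email": "c@example.com", "campaign": "branded"},
]
existing_customers = {"a@example.com", "c@example.com"}  # emails already in the CRM

# Count, per campaign, how many conversions were new vs. existing customers.
summary = {}
for conv in ad_conversions:
    kind = "existing" if conv["email"] in existing_customers else "new"
    key = (conv["campaign"], kind)
    summary[key] = summary.get(key, 0) + 1

print(summary)
```

Here both branded conversions turn out to be existing customers, which is the pattern this article warns about. At real volume you would do this join in BigQuery, but the logic is the same.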
That is the level two I wrote about in my previous article. No Bayesian statistics, but context your agency usually does not have.
Why nobody tells you this
Incrementality is an uncomfortable story. For platforms it undermines their sales pitch. For agencies it makes their reports harder to defend. For clients it means the numbers on which they base budget decisions can be misread. You can measure so much in digital marketing that it is hard to accept that some numbers have a blind spot. And because precisely that makes it complex, it gets ignored as long as nobody asks about it.
An agency that is transparent about what reports do and do not say, delivers something that is not the minimum standard in this industry, but should be the standard. An agency that presents reported ROAS as if it were actual ROAS, does something that is understandable for them, because it keeps the story simple and their position defensible. The client pays the bill without knowing it.
As a client, you cannot validate every campaign with experiments. That is unrealistic. What you can do is look differently at the numbers that land on your desk. With the question in the back of your mind: what part of this is real, and what part is noise presenting itself as result? That has nothing to do with distrust, by the way, or with suspecting manipulation of the numbers. It is about getting a complete picture.
In a follow-up article I go deeper into POAS versus ROAS and what becomes visible when you include profit per acquisition rather than only revenue.
Joey Vangaeveren founded Intzicht and works as an embedded marketing and data analytics partner for B2B and B2C businesses across hospitality, business solutions, e-commerce and SaaS. His work spans strategy, custom analytics dashboards, and applied AI. He writes about what he sees in practice.
Want to see what your numbers really tell you, beyond the platform report? Get in touch.