Introspection
8 min read
Joey Vangaeveren | Intzicht

Why marketing reports stay so simple

Where agencies sit, where I work, and what level exists at international tech companies.

Last week I read a job posting for a Principal Data Analyst, Marketing Analytics at an international tech company. The role described work at a level I cannot deliver. Marketing Mix Modeling in Bayesian frameworks. Geo-holdout experiments to measure causal impact. Incrementality testing with synthetic controls. Yes, these are real terms, and I had not heard of them either. Had you? Then this article is probably not for you.

My first reaction was honestly uncomfortable. I write here regularly about attribution and about what marketing agencies do and do not measure well. Do I even have the right to speak, when there are people out there who understand this field ten levels deeper than I do?

I went to Claude for help. Could it not just build this for me? But apparently this is something even AI cannot fix for you just like that. You cannot expect an LLM to casually handle PhD-level work for you. It is actually the first time I have hit a limit with AI as a tool. That says something about where this field still stands: there is depth that does not fit into a prompt.

So the question lingered. And while it lingered, I started to see something else. Not that I know nothing, not that my earlier articles are nonsense. But something about how far this field reaches, and how far removed daily practice in the Benelux is from that reach.

This article is about three levels at which marketing measurement operates. Honest about where I stand myself, about where most agencies stand, and about where a select set of companies actually work. It is not a competition, but the world turns out to be bigger than you might think. And every time you believe you finally see the bigger picture, you realize you are still missing much more.

Level one: what agencies actually report

A typical monthly report from a Belgian or Dutch marketing agency to a mid-sized client contains a predictable set of elements. Traffic: sessions, users, conversion rate. If paid campaigns are running: Google Ads and Meta Ads reports, with platform-reported conversions and ROAS. Sometimes an overview of published content. Sometimes a paragraph with recommendations.

That is what is there. Nothing more is expected.

It is easy to be cynical about this. But before we do, it is fairer to ask why this level is dominant. And the answer is not that agencies are dumb or that they deceive their clients. The answer is much more nuanced than that.

Most clients do not want to, or cannot, spend their own time interpreting marketing numbers. That is what the agency is for. And the simpler the agency makes it, the better served the client feels. Adding complexity feels to the client like more work. For the agency, complexity means having to explain, defend, and nuance. That costs hours and raises additional questions, while the underlying question clients want answered is: what is marketing's contribution? And, in my view, the question it should really be: what do these numbers mean concretely for your business?

What a standard Google Ads report tells you is not what you think it tells you. The platform-reported conversions count everyone who converts within the attribution window after clicking an ad. That is not the same as the people who converted because of that ad. What about everyone who saw the ad but did not click? What about someone who googles your brand name after seeing your ad? Those differences disappear into a single number, and that number becomes the basis for decisions.
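
To make that concrete, here is a small simulation sketch. The numbers are made up purely for illustration, but the mechanics are exactly what an attribution window does: everyone who converts after a click gets credited to the ad, including the people who would have bought anyway.

```python
import random

random.seed(42)

# Illustrative only: 10,000 people click an ad. Some would have bought
# anyway (existing intent); the ad only causes a purchase for a smaller
# slice on top of that.
N_CLICKERS = 10_000
P_BUY_ANYWAY = 0.08    # converts with or without the ad
P_CAUSED_BY_AD = 0.02  # converts only because of the ad

attributed = 0   # what the platform reports: converted within the window
incremental = 0  # what the ad actually changed

for _ in range(N_CLICKERS):
    buys_anyway = random.random() < P_BUY_ANYWAY
    caused_by_ad = random.random() < P_CAUSED_BY_AD
    if buys_anyway or caused_by_ad:
        attributed += 1    # the platform credits the ad either way
    if caused_by_ad and not buys_anyway:
        incremental += 1   # only these conversions are truly incremental

print(f"Platform-reported conversions: {attributed}")
print(f"Truly incremental conversions: {incremental}")
```

With these made-up rates the platform reports roughly five times more conversions than the ad actually caused. The real ratio differs per business, but the direction of the bias is structural.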

Level two: adding context

What I try to do in my work is not to measure 180 degrees differently from what the platforms measure. It is to interpret differently, by adding context the agency usually does not have.

In concrete terms: bringing in company-owned numbers. From the bookings, from the sales data, from the CRM. Not only looking at what the advertising platforms say, but at what is happening inside the business itself.

That leads to questions that otherwise do not get asked. Are these new customers coming in through this marketing channel, or existing customers who would have bought anyway? Does it make sense to flag a 'too low' conversion rate on a top-of-funnel campaign, if the goal of that campaign is awareness rather than direct conversion? When can you really evaluate a campaign for this product or this company? Sometimes the answer is only after six months, not after two weeks. Are the channels in use even the right ones for what this company sells, and to whom? And if revenue is lower this quarter, is that due to today's marketing mix, or to repeat purchases from an earlier season falling away?

These questions are not particularly deep from an academic perspective. They do not require Bayesian statistics. What they do require is that the person reporting sits close enough to the business to even be able to ask them. An external agency that only sees the advertising platforms cannot answer them, unless it also gets access to, and actually uses, the company's own data.
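
The first of those questions, new versus existing customers, shows what this looks like in practice. A minimal sketch with pandas; the column names and numbers are hypothetical, since every CRM and every export looks different:

```python
import pandas as pd

# Hypothetical data for illustration. "conversions" is what the ad
# platform attributes to a channel; "crm" is the company's own record
# of who was already a customer before the campaign.
conversions = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "channel": ["google_ads", "google_ads", "meta_ads", "google_ads"],
    "order_value": [120.0, 80.0, 200.0, 60.0],
})
crm = pd.DataFrame({
    "customer_id": [101, 103],
    "first_purchase": ["2023-04-01", "2022-11-15"],
})

# Join platform data to company-owned data: who is genuinely new?
merged = conversions.merge(crm, on="customer_id", how="left")
merged["existing_customer"] = merged["first_purchase"].notna()

# Revenue per channel, split into new versus existing customers.
print(merged.groupby(["channel", "existing_customer"])["order_value"].sum())
```

Nothing here is advanced. The point is that this split is invisible to anyone who only has access to the advertising platforms.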

This is the level I try to work at. It is concretely better than level one, but it is not the top of the field. It is adding context to existing measurement methods, not replacing those methods themselves.

Level three: the measurement method itself under scrutiny

The Airalo job posting described a different level. Marketing Mix Modeling, or MMM for short, is a statistical approach that tries to model which part of revenue can be attributed to which marketing channel, taking into account delay effects, saturation, and external factors. Bayesian frameworks like PyMC-Marketing or Google Meridian are used to build these models.
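
Two ideas carry most of that modeling: adstock (spend keeps working after the week you spend it) and saturation (every extra euro buys a little less). Here is a minimal numpy sketch of those building blocks; the decay and saturation parameters are made up for illustration, whereas a Bayesian MMM puts priors on them and estimates them from data:

```python
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry-over: this week's effect = this week's spend + decay * last week's effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

def logistic_saturation(x: np.ndarray, lam: float) -> np.ndarray:
    """Diminishing returns: the response flattens as (adstocked) spend grows."""
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

# Illustrative weekly spend for one channel (made-up numbers).
spend = np.array([100, 100, 0, 0, 50, 200, 0, 0], dtype=float)

effect = logistic_saturation(geometric_adstock(spend, decay=0.6), lam=0.01)
print(np.round(effect, 3))
```

Note how the effect persists in the weeks after spend stops: that is exactly the delay effect a last-click report cannot see.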

Geo-holdouts are experiments where you switch a channel off in one region and not in others, to measure its actual effect. Causal inference, with techniques like synthetic control and difference-in-differences, tries to answer whether marketing actually causes something or merely coincides with it.
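
The difference-in-differences logic behind a geo-holdout fits in a few lines. The numbers below are made up, and a real analysis would use many regions, synthetic controls, and uncertainty intervals, but the core comparison is this:

```python
# Toy geo-holdout (made-up numbers). In the holdout region the channel
# is switched off; in the rest of the country it keeps running.
holdout_pre, holdout_post = 800, 820    # weekly conversions, channel off
rest_pre, rest_post = 1000, 1150        # weekly conversions, channel on

# The holdout captures seasonality and market trend. Classic
# difference-in-differences: the change where the ads kept running,
# minus the change in the holdout.
did = (rest_post - rest_pre) - (holdout_post - holdout_pre)
print(f"Estimated incremental conversions per week: {did}")  # 130
```

The regions with ads grew by 150 conversions per week, but the holdout shows that about 20 of those would have happened anyway. The channel's causal contribution is the difference.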

I had never heard of these terms before I read the posting. I now see what they are for. With more rigorous methods, they try to get around the fundamental problem of attribution: that platforms and most reports do not measure what is truly incremental. It is the difference between saying "our ads were attributed 1000 conversions" and saying "our geo-experiment shows that our ads caused 300 incremental conversions."

This level exists, but only at companies with the scale, the budget, and the analytical capacity to invest in it. Mainly international tech companies, global brands, companies with marketing budgets running into the tens of millions. For most mid-sized companies in the Benelux, this level is not feasible. It would certainly be valuable, but the people who can deliver this work are not available at the budgets these companies have. It still has to pay for itself.

What this means

There is something uncomfortable about describing three levels where you yourself sit on the middle one. The temptation is to position yourself higher than you are. Or, out of false modesty, lower. Neither is honest.

Honest is: I work at level two. I add context to measurement methods, I interpret more sharply than a standard agency report does, I ask questions that otherwise do not get asked. That is valuable for the companies I work with. It is not frontier-level, and I do not pretend that it is. Nor am I claiming to have reinvented the wheel. There are probably other consultants or agencies taking a similar approach; I just have not come across them myself. Most marketers I meet are more 'vibe' and 'atmosphere' people than 'data' and 'numbers' people.

There's always someone operating at a higher level. You cannot know everything. That may not be necessary to deliver good work. But it sits uncomfortably with me. And that discomfort, I think, is a healthy signal. It keeps you honest about what you do and do not offer. It prevents the most dangerous position in this field: delivering level one while presenting yourself as level three.

For clients reading this and wondering where their own agency sits: that is the sharpest question you can ask. Not whether your agency has the most expensive tools. Not whether they make the most impressive dashboards. But whether they are honest about the difference between what they measure and what is truly incremental. Between what they report and what they know. Between the level at which they work and the level that exists.

My sense is that most marketing agencies and digital consultants work at level one. That does not have to be a problem, as long as they are honest that that is the level. The problem arises when level one is sold as something it is not.

What you are looking for, as a client, is not necessarily someone at level three. Few companies need that or can afford it. What you are looking for is someone who is honest about where they stand. That is rarer than expertise itself.


Joey Vangaeveren founded Intzicht and works as an embedded marketing and data analytics partner for B2B and B2C businesses across hospitality, business solutions, e-commerce and SaaS. His work spans strategy, custom analytics dashboards, and applied AI. He writes about what he sees in practice.

Want to explore what this could mean for your business? Get in touch.
