Most businesses measure their marketing efforts wrong. Not because they don't want to know, but because standard tools give them an incomplete picture and no one told them.
The consequence is predictable: budgets shift from channels that do work to channels that look good in a report. Volumes decline. And the people who made the decision don't understand why.
This article is about attribution. What it is, why it's so difficult to measure correctly, and what concretely happens when you do it wrong.
The problem with last-click
Imagine you sell a product that costs €800. A potential customer sees an ad on Meta in January. They don't click. Two weeks later they see a display ad via Google and briefly visit the website. In March they search for the brand name via Google, visit the website again and buy.
Which channel gets credit for that purchase?
In last-click attribution (the default model in GA4, Google Ads and Meta Ads), Google Search gets 100% of the credit. Meta gets nothing. The display ad gets nothing. The two earlier touchpoints that initiated the purchase simply don't exist in the report. And this isn't an edge case: it's the standard way most businesses measure their marketing results.
"Last-click attribution is like rewarding the winner of a relay race and ignoring the rest of the team."
For cheap products with a short decision time, last-click is a reasonable approximation. Impulse purchases, low-threshold services: the time between first contact and purchase is short and touchpoints are limited. But the more expensive the product, the more planning a purchase requires, the larger the window between first introduction and conversion, and the more misleading last-click becomes.
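The journey above can be sketched in a few lines of Python, comparing last-click with a simple linear (even-split) model. This is a toy illustration of the two crediting rules, not any vendor's actual implementation:

```python
# Hypothetical journey from the €800 example above, in chronological order.
journey = ["Meta (impression)", "Display (click)", "Brand search (click)"]
value = 800.0

def last_click(touchpoints, value):
    """100% of the credit goes to the final touchpoint."""
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[-1]] = value
    return credit

def linear(touchpoints, value):
    """Credit split evenly across every touchpoint in the journey."""
    share = value / len(touchpoints)
    return {t: share for t in touchpoints}

print(last_click(journey, value))
# Brand search gets the full €800; Meta and display get €0.
print(linear(journey, value))
# Each of the three touchpoints gets a third of the €800.
```

Neither rule is "the truth", but the contrast shows why the model choice, not the data, decides which channel looks like the winner.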
The blind spot no one discusses
There's something else that has an enormous impact on how businesses allocate their budgets, but is rarely discussed.
When someone sees an ad on Meta, on a display network or on YouTube, and doesn't click immediately but later searches for the brand name on Google, GA4 registers that conversion as organic or direct. Technically that's correct. Practically it's misleading. The paid ad created the intent; the search is the consequence. But in the report it looks like the customer came on their own.
This explains a pattern I've seen for years: businesses that see 60 to 70 percent of their conversions coming in via organic or direct conclude that paid channels are too expensive and lower their advertising budget. After which organic and direct volumes also start declining, and no one understands why.
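The blind spot can be made visible with journey-level data. The sketch below uses made-up journeys and a simple "starts with a paid channel name" check: it counts how many conversions a last-click report labels organic or direct, and how many of those were actually preceded by a paid touchpoint:

```python
# Hypothetical conversion journeys: each list holds the touchpoints that
# preceded one purchase. A last-click report only sees the final element.
journeys = [
    ["Meta (impression)", "Organic brand search"],
    ["Organic blog visit"],
    ["Display (impression)", "Direct visit"],
    ["Organic brand search"],
]

PAID_PREFIXES = ("Meta", "Display", "Google Ads")

def is_paid(touchpoint):
    return touchpoint.startswith(PAID_PREFIXES)

# Last-click view: conversions whose final touch is organic/direct.
last_click_organic = sum(1 for j in journeys if not is_paid(j[-1]))

# Journey view: how many of those were preceded by a paid touchpoint?
paid_assisted = sum(
    1 for j in journeys
    if not is_paid(j[-1]) and any(is_paid(t) for t in j[:-1])
)

print(last_click_organic)  # 4 conversions look organic/direct
print(paid_assisted)       # 2 of them were initiated by a paid ad
```

In this toy dataset, half of the "organic" conversions disappear the moment you look one step earlier in the journey.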
"Organic traffic is not a standalone channel. It's often the endpoint of a journey that started with a paid ad."
What concretely happens when you turn off the tap
I've experienced two situations where businesses decided, based on last-click data, to reduce or completely stop their awareness campaigns.
In both cases, total volume declined noticeably afterwards. Not immediately, because there was still some trailing effect from earlier campaigns, but within one to two seasons the effect was clearly visible in booking and revenue figures. The paradoxical consequence was that ROAS on remaining campaigns increased. Because if you only run conversion-focused campaigns, targeting people who are almost ready to buy, the measured return is indeed higher. But total volume shrinks. Instead of €20,000 revenue on €3,000 spend, you earn €200 on €10. The ratio is better. The result is worse.
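The arithmetic behind "better ratio, worse result" is worth spelling out, using the numbers from the text:

```python
# Two scenarios from the text: full-funnel spend versus conversion-only spend.
full_funnel = {"spend": 3000.0, "revenue": 20000.0}
conversion_only = {"spend": 10.0, "revenue": 200.0}

def roas(c):
    """Return on ad spend as a ratio (revenue / spend)."""
    return c["revenue"] / c["spend"]

def net(c):
    """Absolute result: revenue minus spend."""
    return c["revenue"] - c["spend"]

print(f"Full funnel:     ROAS {roas(full_funnel):.0%}, net €{net(full_funnel):,.0f}")
print(f"Conversion-only: ROAS {roas(conversion_only):.0%}, net €{net(conversion_only):,.0f}")
# ROAS roughly triples (667% -> 2000%) while the net result collapses
# (€17,000 -> €190).
```

Optimizing the ratio while ignoring the absolute number is exactly how a shrinking business can look like a healthy one in a dashboard.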
"A rising ROAS with declining volumes is not success. It's a warning signal."
What the data actually shows
To make this concrete: I compared the same time window for a client with a strongly seasonal product in two attribution models: last-click versus a UMM (Unified Marketing Measurement) model with an unlimited attribution window, via Billy Grace.
In last-click, organic was at the top as the best-performing channel. Meta Ads was virtually invisible. Google Ads showed a ROAS that was respectable but not impressive. Switch to the UMM model and the entire picture shifts. Almost 40 percent of the conversion value attributed to organic turned out to actually be driven by paid channels. Meta Ads suddenly showed an actual ROAS of more than 600 percent. Google Ads rose from a measured ROAS of about 200 percent to more than 340 percent when indirect conversions and a wider time window were included.
Same campaigns. Same period. Completely different story.
| Channel | Last-click | UMM (unlimited window) |
|---|---|---|
| Google Ads | ROAS ~200% | ROAS 340%+ |
| Meta Ads | Nearly invisible | ROAS 600%+ |
| Organic | #1 channel | ~40% of value reattributed to paid |
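The mechanics of such a reattribution can be sketched with illustrative numbers (these are not the client's actual figures, and the proportional-to-spend split is a simplifying assumption; real UMM models weight journeys individually):

```python
# Illustrative last-click figures: conversion value credited per channel.
last_click_value = {"Google Ads": 6000.0, "Meta Ads": 500.0, "Organic": 20000.0}
spend = {"Google Ads": 3000.0, "Meta Ads": 2000.0}

# Assumption for this sketch: 40% of "organic" value was actually driven
# by paid channels, and it shifts back in proportion to spend.
reattributed = 0.40 * last_click_value["Organic"]  # €8,000 moves to paid
total_spend = sum(spend.values())
umm_value = {
    ch: last_click_value[ch] + reattributed * spend[ch] / total_spend
    for ch in spend
}

for ch in umm_value:
    before = last_click_value[ch] / spend[ch]
    after = umm_value[ch] / spend[ch]
    print(f"{ch}: ROAS {before:.0%} -> {after:.0%}")
```

Note that no conversion value is created or destroyed here: the totals stay the same, only the channel it is credited to changes. That is the whole point of the "same campaigns, same period, different story" comparison.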
"The data was always correct. Only the window through which we looked was too small."
In conclusion
The fundamental mistake I've encountered for years is not that businesses want to measure poorly. It's that they rely on tools that give them a simplified picture and make decisions as if that picture is complete.
A consultancy once told me I was looking at too much data. I consider that one of the most dangerous pieces of advice you can give a business. More data means more context. More context means better decisions, as long as you know what questions you're asking.
"There's no such thing as too much data. Only drawing the wrong conclusions from the data you have."
Attribution is not a technical question. It's a strategic question. And the answer doesn't start with looking less. It starts with looking better.