6 min read
Joey Vangaeveren | Intzicht

Is it too soon to evaluate your campaign?

You launched a campaign last week. Why can't you see results yet?

You gave the green light on a campaign last week, but you're not seeing results yet. Why is it taking so long? The short answer: because you haven't been looking long enough.

The tree you planted yesterday

If you plant a tree today, would you judge its performance by the fruit you can pick tomorrow? Yet that's exactly what happens when someone asks what a campaign has delivered after just one week. A campaign doesn't work like a light switch. Someone sees your ad, thinks about it, compares options, hesitates, and eventually buys. Sometimes that takes three days. Sometimes three weeks. Depending on what you sell, it can take even longer. Until that cycle is complete, your data won't tell you what you want to know. Not because something is wrong with the campaign, but because the process is still running.

A bad week in the South of France

I was once in the South of France in late spring. It rained the entire time. Based on that one trip, I could conclude that the South of France always has bad weather. That's obviously not true. I just had bad luck.

The same applies to campaign data. A week can be bad by coincidence — a public holiday, a seasonal dip, or just bad timing. A week can also be unusually good. Neither gives you a reliable picture. The period you look at determines what you see. Choose the wrong window and you draw the wrong conclusions, even if the campaign is working just fine over the longer term.

"A week of data is not a measurement. It's a snapshot."
After 1 week: ~30% of conversions visible (incomplete picture)
After 30 days: ~70% (indicative)
After the full purchase cycle: 100% (reliable)

So how long should you wait? That depends on your business. The question to ask is: how long does it typically take between someone seeing your ad and actually making a purchase? For a webshop selling a twenty-euro product, that might be five days. For a company selling a service worth several thousand euros, it could be two months. There's no universal answer, but there is a way to find out: look at your data, determine how long the purchase cycle takes for the majority of your customers, and use that as your minimum evaluation window.
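One way to put a number on that window is to look at the lag between first click and purchase for past customers and read off the point that covers most of them. A minimal sketch, using made-up lag data (the `lags_days` values and the 80% coverage threshold are illustrative assumptions, not benchmarks):

```python
# Hypothetical click-to-purchase lags (in days) for recent customers.
lags_days = [2, 3, 3, 4, 5, 5, 6, 7, 7, 8, 9, 10, 12, 14, 21, 28, 35, 45]

def evaluation_window(lags, coverage=0.8):
    """Smallest lag (in days) that covers `coverage` of past purchases.

    Sorts the observed lags and reads off the value at the chosen
    coverage point: a crude empirical percentile, no interpolation.
    """
    ordered = sorted(lags)
    idx = min(len(ordered) - 1, int(coverage * len(ordered)))
    return ordered[idx]

print(evaluation_window(lags_days))        # -> 21: window for ~80% of buyers
print(evaluation_window(lags_days, 0.5))   # -> 8: the median buyer is faster
```

With this data, evaluating after a week would catch roughly the fastest half of buyers and miss the long tail entirely.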

I once worked with a client who wanted weekly reporting on campaign performance. The problem: the decision cycle didn't just run longer than seven days, it also involved multiple people. A purchase was never decided by one person alone. The result was predictable: every week looked bad. Instead of questioning whether weekly reporting made sense at all, we shifted what it looked at. What is overall traffic doing? How do costs break down per week? Meanwhile, the euros spent that week were harvesting results over the next two months, not that week alone. We were judging the harvest before the growing season was over.

"Evaluating early feels like control. It's the opposite."

How Google Ads looks at your data

There's another reason why evaluating too early is dangerous, and it has to do with how platforms like Google Ads work. Google Ads optimises based on the attribution window you set. That window determines which conversions Google includes in its calculations and uses to adjust its bidding strategy. If you set a 45-day window, Google only accounts for the full impact of a click after 45 days.

Say you evaluate on 5 April what your March campaign delivered. You'll see the conversions that fall within your reporting period, but some of the buyers from March haven't made their decision yet. Those conversions are coming — they're just not in your report yet. You're closing the books before the harvest is in.
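The gap is easy to simulate. In this sketch, each conversion is a click date plus a decision lag; counting only what has landed by the report date shows how many March conversions a 5 April report misses. All dates and lags here are made up for illustration (including the year):

```python
from datetime import date, timedelta

# Hypothetical March clicks with the days each buyer took to decide.
conversions = [
    (date(2025, 3, 3), 4),    # lands 7 March
    (date(2025, 3, 10), 12),  # lands 22 March
    (date(2025, 3, 18), 25),  # lands 12 April: missing from the report
    (date(2025, 3, 24), 20),  # lands 13 April: missing from the report
    (date(2025, 3, 29), 3),   # lands 1 April
]

report_date = date(2025, 4, 5)

# A conversion is only visible if the buyer has decided by the report date.
visible = sum(1 for click, lag in conversions
              if click + timedelta(days=lag) <= report_date)

print(f"{visible} of {len(conversions)} conversions visible on {report_date}")
```

Two of the five buyers simply haven't decided yet; the report undercounts them through no fault of the campaign.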

After 5 days
Budget: €500
Impressions: 27,512
Clicks: 2,120
CTR: 7.71%
Conversions: 20
Conversion rate: 0.94%
Avg. order value: €73
Revenue: €1,460
ROAS: 292%
CPA: €25

After the full attribution window
Budget: €500
Impressions: 27,512
Clicks: 2,120
CTR: 7.71%
Conversions: 33
Conversion rate: 1.56%
Avg. order value: €79
Revenue: €2,607
ROAS: 521%
CPA: €15.15
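The shift between the two snapshots follows directly from the counts. A small helper (my own sketch, not a Google Ads calculation) derives the KPIs from budget, clicks, conversions and average order value, so you can see exactly which numbers move when the late conversions land:

```python
def kpis(budget, clicks, conversions, avg_order_value):
    """Derive campaign KPIs from the underlying counts."""
    revenue = conversions * avg_order_value
    return {
        "conversion_rate_pct": 100 * conversions / clicks,
        "revenue": revenue,
        "roas_pct": 100 * revenue / budget,  # revenue per euro spent, in %
        "cpa": budget / conversions,          # cost per acquisition
    }

# The two snapshots from the example above.
early = kpis(budget=500, clicks=2120, conversions=20, avg_order_value=73)
full = kpis(budget=500, clicks=2120, conversions=33, avg_order_value=79)

print(early)  # ROAS ~292%, CPA €25
print(full)   # ROAS ~521%, CPA ~€15.15
```

Budget, impressions and clicks never change; only the conversion count and average order value do, and they drag conversion rate, revenue, ROAS and CPA along with them.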

If you cut your budget or stop the campaign based on that incomplete picture, you're doing two things at once. You're drawing the wrong conclusion about performance. And you're depriving Google of the data it needs to refine its bidding strategy. You're penalising the algorithm for work it hasn't had the chance to finish.

"You're penalising the algorithm for work it hasn't had the chance to finish."

But how do you know if things are going well? That's a fair question. Waiting without any signal is uncomfortable. What you can read early depends on the type of campaign and what you're trying to achieve. For a traffic campaign, I look at CTR and bounce rate. Are people leaving immediately, or are they sticking around? For a campaign with a longer consideration period, I look at page visits that signal intent — a pricing page, a contact page, a specific product page that people only visit when they're serious.

And sometimes there are soft conversions available — a request, a download, a sign-up — that sit earlier in the funnel but still tell you something. What those signals won't tell you: whether the campaign will hit its end goal. What they will tell you: whether something is moving in the right direction. That's the difference between waiting blind and waiting informed.

What to agree on before the campaign starts

The real problem doesn't start after launch. It starts before launch, when there are no clear agreements about when and how performance will be evaluated. If you have to explain two weeks in why no conclusions can be drawn yet, it's already too late. Not because the situation can't be explained, but because you're in a defensive position that was entirely avoidable.

What to establish upfront: what you want to achieve, how you will measure it, and when it makes sense to evaluate. Tie that to a concrete timeframe that fits the purchase cycle of the business, so everyone knows what to expect and when. Make expectations clear before launch, and afterwards no one can accuse you of being defensive or massaging the numbers.

"Agreeing on evaluation upfront isn't admin. It's the foundation of honest collaboration."

Conclusion

A week almost never tells you anything. Not because marketers want to avoid accountability, but because the data is structurally incomplete until the purchase cycle has run its course. Sometimes you want to keep a close eye on campaigns that are running — especially if they're expensive or if you're trying something new. But it's important to be patient and evaluate only when the time is right, and to choose the right timeframe when you do.

Joey Vangaeveren is founder of Intzicht and works as a strategic and hands-on marketing and analytics partner for businesses in both B2B and B2C, from e-commerce to hospitality. He writes about what he encounters in practice, unfiltered.

Curious what this could mean for your business? Get in touch.

All cases and results in this article are based on real experience. Companies and specific figures have been anonymised to protect the confidentiality of my clients.
