Hi Simo, we're very happy to welcome you to Friends of Search in Amsterdam this year! Like many online marketing professionals, we are big fans of your work and blog: simoahava.com. I think most marketers with questions about Google Tag Manager or tracking-related issues end up on your blog sooner or later. Thank you for sharing all that knowledge and those guides!
One of the main reasons we asked you to speak at Friends of Search is that several changes are coming in the field of data collection, third-party cookies, and Google Analytics 4. Your talk will be about server-side tracking/tagging and how it helps address the current “problems” in the EU regarding data collection and processing. It’s a highly relevant topic for anyone working in digital marketing.
First of all, what have you been working on lately?
Lately I’ve been completely preoccupied with Simmer (www.teamsimmer.com). It’s an online course platform created by my wife, Mari Ahava, and me mid-pandemic, in late 2020. I’m working on new course material almost daily.
Other than that, I’m trying to figure out the potential of server-side data flows. It’s an interesting paradigm, for sure, because it gives companies more control over upstream data, hopefully with the intent of respecting the rights and privileges of end users and customers.
Do you think many marketers’ concerns about the upcoming changes are understandable? Are third-party cookies so essential for marketers these days?
Well, third-party cookies are dead in the water. Rather than dwell too much on what we’re losing, I think we should switch the mindset to what we’re getting in return.
I don’t think they’re essential to marketers at all. Display advertising itself has always been flaky because of the disconnect between storing “views” associated with third-party cookies and correlating that with what happens on the advertiser’s site or app. The chain is just too brittle.
Many ad tech vendors are offering new first-party options to replace third-party cookies, such as sending the user’s personal data (hashed) directly to the vendor, so that they can then correlate this with whatever the user did on the vendor’s other properties, or on other sites the vendor sends traffic to. This isn’t a vast improvement, in my opinion, but because it’s first-party it does give more control to the site, the user, and the web stack over what actually gets sent.
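The “hashed personal data” approach Simo mentions is usually just a normalized one-way hash of an identifier like an email address, computed before anything leaves the first-party context. A minimal sketch (the function name is illustrative; SHA-256 with lowercasing/trimming is the common convention, but check your vendor’s exact spec):

```python
import hashlib

def hash_identifier(email: str) -> str:
    """Normalize and hash an email before sharing it with a vendor.

    Trimming whitespace and lowercasing first is the usual convention,
    so the same address always produces the same hash and the vendor
    can match it against hashes from its other properties.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Differently formatted versions of the same address hash identically:
hashed = hash_identifier("  Jane.Doe@Example.com ")
```

The hash is stable across sites, which is exactly why it enables matching, and also why it still counts as personal data under GDPR: it identifies a person, even if it can’t be trivially reversed.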
The fact is that it’s been pretty ugly in ad tech for the past 30 years. Users’ data has been harvested without any limitations and with very few repercussions. With the emergence of privacy legislation, tracking protections, and a rising understanding among the general public of just how much of this harvesting is going on, things are now changing, hopefully for the better.
Yes, it will mean that marketers, analysts, and advertisers will have “less” data or “worse” data to work with, but that’s the compromise we need to live with. There’s a wealth of technological opportunities to explore in this dynamic, too.
Do you think the use of first-party cookies and server-side tracking will make the internet more privacy-friendly?
No, technology doesn’t change the status quo. It’s how that technology is used. You could argue that by removing third-party cookies from the mix, users’ data is now more “in the open” and thus can be better protected, but it has been shown time and time again that “first-party” isn’t a magical safe haven for user data.
If you look at privacy legislation, for example, it’s often decidedly technology-agnostic. This is because whatever tech is being used, the same opportunities for misusing and misrepresenting users’ trust are always available.
However, moving data streams 1P and utilizing server-side proxies gives a wealth of tools to potentially vastly improve the conditions for respecting users’ right to privacy. It’s just that these same tools can be used to subvert these rights, too, so again it’s really up to the companies collecting this data to decide how they want to orient in this world.
The Google Tag Manager (GTM) setup that is now widely used keeps GTM on the client side, which forwards data via GA4 or other tooling to a server-side GTM container on a subdomain, and the tagging happens from there. This means that in many cases we still depend on client-side tracking in Google Tag Manager. Do you see this setup as server-side tagging?
Server-side tagging is not reliant on a client-side GTM. Server-side can work with any incoming data stream – it’s just really convenient to use client-side GTM for it, because of some built-in mechanisms that make the setup really smooth.
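Conceptually, the server-side container is just an HTTP endpoint on a first-party (sub)domain that receives incoming hits, lets a “client” claim them, and then decides what to forward to vendors. A simplified sketch of that claiming step, not the actual GTM implementation (all names here are illustrative; `en` is the GA4 event-name parameter):

```python
from urllib.parse import urlparse, parse_qs

def route_incoming_hit(request_url: str) -> dict:
    """Decide how a server container might handle an incoming request.

    The browser sends its hit to our own domain instead of the vendor's;
    a matching "client" claims the request, and the server then controls
    what, if anything, goes out to third parties.
    """
    parsed = urlparse(request_url)
    params = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    if parsed.path == "/g/collect":  # GA4-style hit from a client-side tag
        return {"claimed_by": "ga4_client",
                "event": params.get("en"),
                "params": params}
    # Unrecognized paths are simply not claimed -- nothing is forwarded.
    return {"claimed_by": None, "event": None, "params": params}

hit = route_incoming_hit("https://data.example.com/g/collect?v=2&en=page_view")
```

The point of the sketch is the architectural one Simo makes: the incoming stream can be anything (a GA4 tag is just convenient), because the server owns the routing decision.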
How future-proof do you see this setup?
If you’re asking whether this setup is future-proof against privacy legislation or tracking protections, then the answer is “no”. First of all, I don’t believe either of those needs to be “proofed” against. Instead, companies need to understand why privacy regulation exists, and what browsers are trying to do with tracking protections. That way they can figure out how to work with server-side in a way that helps achieve these goals rather than circumvent them.
Is this GTM solution the right solution for most companies? Or would you do this differently for SMEs for example?
I don’t think it’s THE solution for every company out there. First of all, it comes with a cost. Although that cost isn’t very big, in my opinion, it’s still more than zero and thus an investment.
It also requires some understanding of cloud / server technologies and some effort to engineer it in a way that actually delivers benefits rather than just offers an alternative data flow from the user’s browser to the vendor servers.
Having said that, I’ve often said that I think every company should LOOK at server-side tagging as an option for their marketing stack. You can set it up at zero cost as a test, and see if you have the resources and capability to work with it.
The biggest problem I see with server-side tagging is one of maturity. Even if you understand the tech perfectly, you might still be tempted to use server-side as a way to circumvent privacy laws or users’ wishes to block invasive data collection with browser extensions. So it’s more about understanding what server-side ACTUALLY tries to solve for the end user, rather than what companies might want to use it for.
Lately we see companies using server-side tagging to, for example, set up the Facebook Conversions API from the server, without the visitor being able to know it. GDPR is regularly circumvented, because there seems to be no oversight. Any thoughts about that?
As I said, privacy laws aren’t predicated on a certain stack. There’s nothing in server-side tagging that forces you to do unlawful things or to circumvent the user’s wishes. That’s a decision YOU make when you deploy server-side tagging.
If you want to do illegal things, you can of course do that, but you need to be aware of the risks associated with these tactics. Moving to server-side doesn’t absolve you of these responsibilities.
So with the Conversions API, for example, if you have a legal basis for collecting personal data to Facebook, you can of course do that. The user doesn’t have to actually see the data stream in the browser. But you do need the legal basis. If you collect to Facebook without a legal basis, you are in violation of GDPR just as much as if the collection happened client-side.
I haven’t personally seen GDPR being circumvented any more with server-side technologies than before. If it is, then it’s a shame and also a gross misunderstanding of how GDPR works. If the user sends a Data Subject Request to the vendor, they’ll easily be able to see that their data has been collected from the site, and the site then needs to explain what its legal basis was for doing this.
How do you expect regulation, transparency, and enforcement to develop?
I think and hope that regulation will continue to be technology-agnostic, because that’s the only way for it to be durable. However, I’m sure that interpretations of the regulation, legal precedents, and case studies will outline server-side technologies more and more in the future, because they are such a cheap and scalable way of processing data these days.
Is there, or will there be, good tooling that can check this, or will transparency come about in some other way?
I think it’s vital that web technologies are developed to shine a light on server-side flows, too. Because server-side happens, well, server-side, it’s behind a veil that can’t be breached from the browser. I always quote the Vegas rule: “What happens in the server, stays in the server”.
This is a technological conundrum that needs to be figured out. It might require a rethinking of the current client-server architecture of the internet, or at the very least it requires that those building server-side tools bake in these transparency controls.
I hope that companies deploying server-side setups will introduce voluntary transparency into privacy policies and in their communication to their customers.
Anything we need to know about Google Analytics 4 (GA4)?
Lots! 🙂 GA4 certainly has a lot of synergy with server-side tagging, so if you’re thinking of setting up a simple data flow from the client to the server, deploying GA4 either through gtag.js or Google Tag Manager is a good idea. You don’t actually have to send the data to Google Analytics servers from your server-side tagging container, though. You can use this incoming GA4 stream to collect to Facebook, or even to a completely different analytics vendor like Amplitude, Mixpanel, or Snowplow!
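Reusing the incoming GA4 stream for other vendors boils down to remapping its query parameters into whatever shape the other tool expects. A hedged sketch of that remapping (the GA4 conventions used here are `en` for event name, `cid` for client ID, and `ep.*` for custom event parameters; the generic output shape is illustrative, not any specific vendor’s API):

```python
from urllib.parse import parse_qs

def ga4_hit_to_generic_event(query_string: str) -> dict:
    """Remap a GA4-style /g/collect query string into a generic event.

    A server-side container tag could serialize this dict into the
    payload of any other analytics vendor, so one client-side GA4 tag
    can feed several destinations.
    """
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    return {
        "event_name": params.get("en"),
        "client_id": params.get("cid"),
        # Strip the "ep." prefix from custom event parameters.
        "properties": {k[len("ep."):]: v
                       for k, v in params.items()
                       if k.startswith("ep.")},
    }

event = ga4_hit_to_generic_event("v=2&en=purchase&cid=123.456&ep.currency=EUR")
```

This is the practical upside of the setup: the browser sends one stream, and the fan-out to different vendors is a server-side decision you fully control.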
Thank you for this interview — we are looking forward to seeing you in Amsterdam!
Looking forward to it, too! See you in Amsterdam.
Simo Ahava is partner and co-founder at 8-bit-sheep. On June 14 he will be speaking at the Dutch edition of Friends of Search in the Kromhouthal in Amsterdam.