Challenging the Feasibility of Digital Agency; A Bygone Era of Online Privacy or Stepping Stone to a New Understanding?

It might be safe to say that anyone who works in digital marketing, like I do, uses AI on a daily basis. Whether it is helping with writing code, creating visual content, or even articles like these, AI is an incredibly useful tool; no one can deny that. Knowing you will always have someone (or something) to bounce your ideas off of is reassuring, even given the occasionally hallucinating nature of such tools – which, of course, is a great way to stay sharp and to underline the importance of a human brain. Still, it is easy to cross the line between AI being a useful extension of your work and letting it take over your work entirely without giving it much thought. And now that AI is being integrated into almost everything digital, often without giving us much of a choice, it raises a number of questions: how aware are we of the inner workings of AI solutions? Who actually uses the data they collect, and for what? How is that data stored? Is it as safe as promised?

Let’s face it: the (business) models are fundamentally vague, and maybe deliberately so, but their users and consumers are not completely off the hook either. Should we even use these tools if we do not fully comprehend what we are ‘signing up for’? Or are the benefits simply too good to pass up? We claim to highly value our privacy, yet our behaviour suggests we readily trade it for convenience. So, is online privacy and security still an attainable goal in an era of pervasive AI, or have we passed the point of no return?

AI race or craze?

Let us take a step back and look at the current situation: in the past three years alone, we have gone from writing about developments such as the cookieless era, server-side tracking, the importance of data governance, and the shift from third-party to first-party data, to, of course, the rise of AI. Looking at it from this angle, and setting personal perspectives and feelings aside, the rate of technological development in this area is quite astounding – especially considering such a short timespan. Where AI once felt out of reach for the general population, Large Language Models (LLMs) alone have completely flipped that narrative on its head, to the point where they have become part of our daily routine in many cases. These tools have seen an impressive progression, but yet again, it raises plenty of eyebrows. LLMs such as the giants ChatGPT (OpenAI) and Gemini (Google), but also smaller or more specialized alternatives like Claude (Anthropic), DeepSeek, and Grok (xAI), have forced their way into the mainstream, where they are used not only by professionals but by everyone. From simple dinner recipe ideas to actual lifestyle tips – which sometimes leads to disastrous consequences – it seems people are adopting this new technology without giving it much thought.

Something that has bugged me personally in this AI craze is how easily people make use of AI tools without really putting any thought into it. Photos are uploaded to create funny, parodic (and sometimes scarily accurate) pictures, random user data is entered into LLMs such as Gemini and ChatGPT, confidential work documents are pasted in for a quick summary, and highly sensitive medical symptoms are shared with AI chatbots like it is nothing. In other words, we are entrusting our most private data to platforms we know little about – neither their inner workings nor, necessarily, the people behind them. Moreover, we are almost forced to make use of AI, whether we like it or not. Take your smartphone, for example: I think we all know by now that your smartphone is “always listening”. Targeted advertisements seem to pop up almost immediately after discussing a new brand of shoes out loud with it nearby, and Google can follow your travels to a scary level of detail, just to name two examples. And now, due to the relentless push for AI, our everyday apps are being invaded by it without us even being given a choice. Is an AI assistant for WhatsApp really necessary, for instance? Of course there is no way to disable it, and despite Meta swearing it does not circulate data from your personal accounts, can we ever be fully certain? Regardless of what happens with said data, Meta’s and Google’s track records of collecting fines for violating user privacy regulations in the US and EU tell us everything we need to know in order to be, in my personal opinion, rightfully sceptical of this ordeal: data is valuable, and personal data is even more valuable.

The rise of digital awareness

Besides enthusiasts who only see the positives of this new development, there are also plenty of sceptics who are not so keen on the constant stream of new technological integrations into our everyday lives. It is an interesting trend to see increasing awareness of our digital footprint and the (sub)conscious decisions we take in regard to it. It raises the question of how much control we, the general public, really have over our data, but also how businesses can ensure transparency and privacy for their consumers. Usercentrics has released an insightful report on this, based on a survey of 10,000 frequent internet users across Europe and the US, which in turn sparked my curiosity. The goal of the study was to create a better understanding of the current state of data privacy and digital trust in today’s world. More than half of the respondents signalled trust issues in regard to the use of AI. To give a better picture of the statistics mentioned in the report, its most crucial findings are summarized below:

  1. Consumers nowadays are more aware of their personal data and its value. When transparency is lacking, trust inevitably declines as the sense of control fades away. Around 60% feel that their data has become an object or product and are uncomfortable with it being used as ‘fuel’ to train AI, which in most scenarios does not even fundamentally benefit them.
  2. Awareness of cookie banners has increased considerably, to the point where consumers now actually read them. Additionally, clicking “Accept all” has become less of the obvious reflex it was a couple of years ago, as it now reflects a more permanent decision (for almost 50% of respondents!).
  3. Consumers lose trust in brands that are not fully transparent about what happens to their data. Transparency on data collection and usage has become the number one trust factor for over 40% of the participants in this study.
  4. Lastly, and maybe most importantly, the report shows that consumers actually care about their privacy, but feel unsure of what it means and how it works. A staggering 77% of the respondents do not understand how their data is collected and used by businesses.

While some of these conclusions are worrying, especially knowing that the core issue is a lack of understanding, they also point to a clear solution: transparency. To most users, as this research shows, AI still feels like a “black box” – an incredibly poignant comparison, as this is exactly the analogy we use for the consumer’s thought process in (digital) marketing and consumer psychology (Usercentrics, 2025).

This becomes even more poignant when AI is used to break open this “black box”. Look at analytics tooling like the various Google Analytics editions, for example: it used to provide insight into consumer journeys that we could then analyse in more depth if need be, but its latest iteration, Google Analytics 4, can now actually predict and pre-emptively analyse a consumer’s on-site behaviour based on collected data fed to its underlying algorithm. Obviously this is not the only platform that does this, and we see it happen across all different types of media. These datasets are also used to create top-notch hyperpersonalized marketing. Where a handful of personas used to determine a specific marketing strategy, targeted marketing nowadays needs to be as specific as possible, giving way to the “they’re always listening” sentiment. Some examples include things we might not even be aware of, like Netflix’s personalized recommendations, Spotify’s annual Wrapped, Starbucks’ app, and so on. None of this would be possible without collecting and using user data to train the AI models that enable these hyperpersonalized experiences. Despite the feeling of control slowly slipping away, it is not impossible to reclaim agency to an extent, even in a landscape where online privacy seems unattainable.
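To illustrate what such behavioural prediction boils down to, here is a deliberately toy sketch in JavaScript. The features, weights, and function name are invented for this example and have nothing to do with GA4’s actual models – real tools train far richer models on aggregated behavioural data, but the core idea is the same: session signals go in, a probability comes out.

```javascript
// Toy illustration of predictive analytics on behavioural data.
// Features and weights are made up for this example.
function purchaseLikelihood(session) {
  // Hand-picked weights: more engagement → higher predicted intent.
  const score =
    0.8 * (session.pagesViewed / 10) +
    1.2 * (session.addedToCart ? 1 : 0) +
    0.5 * (session.returningVisitor ? 1 : 0) -
    1.5; // bias term keeps casual visits near zero
  return 1 / (1 + Math.exp(-score)); // squash to a 0..1 probability
}

const casual = purchaseLikelihood({
  pagesViewed: 1, addedToCart: false, returningVisitor: false,
});
const engaged = purchaseLikelihood({
  pagesViewed: 8, addedToCart: true, returningVisitor: true,
});
console.log(casual < engaged); // true
```

The point of the sketch is not the arithmetic but the input: every one of those features is collected user data, which is exactly why transparency about that collection matters.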

Reclaiming a sense of agency

The bottom line is that it is a two-way street: consumers should be aware of what they can do to safeguard their own privacy, but it is also the responsibility of businesses to be transparent and educate their consumers. So how can we reclaim this agency when uncertainty and a lack of understanding are so prevalent? From a business perspective, as mentioned at the beginning, it should be the responsibility of a business to provide the information consumers need to make well-informed decisions. This does not have to take the form of webinars or whitepapers, for example – though an interesting idea – but rather a comprehensive consent banner, clear and concise cookie/privacy notices, and above all else, honesty and transparency about the data you collect, why you collect it, and how it is used in business practices – especially when it comes to AI. More importantly, give people a choice. As mentioned in the first section, integrating an AI feature like Meta AI into WhatsApp is fine, but allow people who are uncomfortable with it to disable it, regardless of whether it actually collects user data – especially when their only reassurance is a corporation’s ‘word’.
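To make the idea of a consent-first setup concrete, here is a minimal sketch in plain JavaScript of how tracking calls can be held back until a visitor explicitly opts in. All function and category names are illustrative – this is not any real consent management platform’s API, just the basic mechanic such platforms implement:

```javascript
// Minimal sketch of consent-gated tracking. Names are illustrative,
// not taken from a real consent management platform's API.
function createConsentGate() {
  const granted = new Set(); // categories the visitor has opted into
  const queue = [];          // events held back until consent arrives

  return {
    // Record a consent decision, e.g. from a banner's "Accept" button,
    // then flush any queued events this category was waiting on.
    grant(category) {
      granted.add(category);
      for (const item of queue.splice(0)) {
        if (granted.has(item.category)) item.send();
        else queue.push(item); // still waiting on another category
      }
    },
    // Attempt to send an event; hold it back if consent is missing.
    track(category, send) {
      if (granted.has(category)) send();
      else queue.push({ category, send });
    },
    hasConsent(category) {
      return granted.has(category);
    },
  };
}

// Usage: nothing fires before the visitor accepts the banner.
const gate = createConsentGate();
const sent = [];
gate.track("analytics", () => sent.push("page_view")); // queued
gate.grant("analytics");                               // banner accepted
gate.track("analytics", () => sent.push("click"));     // fires immediately
console.log(sent); // ["page_view", "click"]
```

A real consent setup layers category definitions, persistence, and legal requirements on top of this basic gate, but the principle is the same: collection follows consent, not the other way around.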

Now, from a consumer perspective, it might be more difficult to fully immerse yourself in this sphere, due to its sheer size and complexity. Oftentimes it is a deliberate choice by businesses or other organisations to experiment with the grey areas and boundaries of what is allowed. Nonetheless, simple things such as reading the consent banner and privacy/cookie notices when entering a website are a great first step towards understanding what data is collected. Additionally, try to stay up to date with an app’s release and update cycles, so you know whether your favourite social or shopping apps will start collecting additional data. It is common practice to notify users of a big update, mainly because laws and regulations often require it, so this is a great cue to read the patch notes, for example. And lastly, maybe the most drastic step of them all: if you do not feel comfortable with your data being collected or used for AI training or marketing purposes, consider no longer using that specific app, website or service. This is the most surefire way to prevent your personal data from being used.

Conclusion

In short, the solution to the problem is not as difficult as we might think. I personally do think we need to face reality and accept that true online privacy has been a myth for quite some time now – and largely of our own accord – but we can still make active choices, from both a consumer and a business perspective. Feasibility is a spectrum, after all, and may differ from person to person. Regardless of personal sentiments and emotions, the importance of understanding our role in this landscape cannot be overstated if we want to work towards a fair and balanced digital future, and that understanding starts with us, the individual.

SDIM is a sponsor of the DDMA Digital Analytics Summit 2025 on October 9th. Get your tickets here.

Dave Swart

Data Specialist & Lead Strategy | SDIM
