
Be warned: Generative AI and Large Language Models come with a few disadvantages

If you have ever experimented with a Large Language Model like ChatGPT, you know that it can generate stunning results. However, you have to be careful because there are also a number of substantial disadvantages to the wondrous world of generative AI and LLMs. Therefore, we always have to critically examine the generated results. Be aware of the following: 

  1. Misinformation 
    LLMs can “hallucinate” facts and generate texts that simply do not match reality. Due to their ease of use and the superficial fluency of the generated text, they can be used to quickly create large amounts of text that contain errors and misinformation. 

    Because these models write so fluently while still producing errors, experts have expressed concern about the mass production of false information using these models. Readers can wrongly believe that a human produced the output. Essays, tweets and news articles can be produced falsely or misleadingly using models like GPT-3. Even OpenAI’s CEO, Sam Altman, warned about this: 

    “It’s a mistake to be relying on [ChatGPT] for anything important right now. We believe in shipping early & often, with the hope of learning how to make a really useful and reliable AI through real-world experience and feedback.” 
  2. Bias 
    LLMs generate text in a style similar to the text they were trained on. Even when LLMs are trained on a wide variety of internet text, the model output still strongly depends on that training data, even if this is not immediately apparent in the text they generate. 

    This is especially problematic when the training data contains controversial or offensive opinions and views. There are many examples of LLMs generating offensive text. It is not feasible to construct a neutral training set. 
  3. Intellectual property 
    Generative AI has the potential to infringe on intellectual property and places a burden on IP holders to monitor and enforce their IP rights. The technology requires companies that produce and manage AI to establish appropriate standards and policies to protect themselves from IP infringement by generative AI. 

    A recent lawsuit against GitHub Copilot, a tool that automatically writes working code as a programmer types, could change the future of generative AI and copyright. In this lawsuit, a programmer claims that GitHub, together with Microsoft and OpenAI, violates open-source licenses. The final decision could have a broad impact on the world of AI. GitHub is being sued for copyright infringement because Copilot does not provide attribution (the mention of the copyright holder) when it reproduces open-source code covered by a license that requires it. The plaintiff’s lawyer stated: 

    “This is the first step in what will be a long journey. As far as we know, this is the first class-action case in the US challenging the training and output of AI systems. It will not be the last. AI systems are not exempt from the law. Those who create and operate these systems must remain accountable.” 
  4. Training costs 
    Training AI systems entails significant environmental and financial costs. The use of energy-intensive chips for machine learning training is already linked to an increase in CO2 emissions. 

In addition to these disadvantages, it is important to know that new European rules for AI are currently being developed. In April 2021, the European Commission published a proposal for the AI Act, which aims to minimize any “new risks or negative consequences for individuals or society” that may arise. The AI Act covers not only the use of AI systems (such as generative AI) by citizens, but also their use within critical infrastructure (such as hospitals and power plants). In its proposal, the European Commission aims for a balance, as innovation and development must also be encouraged. As the European Commission puts it: 

“It is in the EU’s interest to maintain its technological leadership position and to ensure that Europeans can benefit from new technologies that are developed and function in accordance with the values, fundamental rights, and principles of the Union.” 

In practice, the AI Act will introduce new rules and obligations for providers, importers, distributors, and users (not end users) of an AI system. Depending on how the generative AI in question is classified (high-risk or not), these rules and obligations may apply to you. So if generative AI is used within your organization, it is important to follow the developments around the AI Act closely. 

Take a look at our dossier page with all the legal developments surrounding the AI Act. 

If you process personal data of Europeans or within Europe, you must comply with the General Data Protection Regulation (GDPR). In some cases, the data you enter into generative AI, or the data it generates, may qualify as personal data under the GDPR: for example, when you enter someone’s contact details to draft an email, or when a generated response contains personal information. 

If this is the case, you must comply with the rules set out in the GDPR and meet three requirements: 

  1. You must have a “legal basis” for the processing (such as obtaining consent). 
  2. You must have a clear purpose for why you process the personal data. 
  3. You must provide information to the person whose personal data you are processing (the “data subject”). 

However, in practice, these points are rarely addressed when using generative AI. 
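
One practical way to reduce this risk is to strip obvious personal data from a prompt before it is sent to an external generative-AI provider. Below is a minimal, hypothetical Python sketch of such a pre-processing step; the regular expressions and placeholder names are our own illustration, not a complete PII filter, and filtering does not replace the three GDPR requirements above.

    import re

    # Illustrative patterns for two common kinds of personal data.
    # A real deployment would need a far more thorough PII-detection step.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d \-]{7,}\d")

    def redact(prompt: str) -> str:
        """Replace e-mail addresses and phone numbers with placeholders."""
        prompt = EMAIL.sub("[EMAIL]", prompt)
        prompt = PHONE.sub("[PHONE]", prompt)
        return prompt

    # Example: the redacted prompt no longer exposes contact details.
    print(redact("Draft a reply to jan.jansen@example.com, tel. +31 6 12345678."))
    # -> Draft a reply to [EMAIL], tel. [PHONE].

Even with such filtering in place, you remain responsible for any personal data that does reach the provider.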

Additionally, the GDPR imposes certain responsibilities. If you enter personal data into generative AI, you are likely to be seen as a “data controller.” This means that you are responsible for the processing carried out by the provider of the generative AI that receives the personal data (the “data processor”). In the event of a data breach, you will be liable for the damage caused to the individuals concerned, but only for the part of the processing that the generative AI performs on your behalf. For the processing that the provider carries out for its own purposes (such as training the model), the provider is considered a data controller in its own right. 

The GDPR also requires you, as the data controller, to conclude a specific set of agreements with the data processor about the processing (a “data processing agreement”). So, when using generative AI, you need to check whether such agreements about the processing have been made. These agreements are even more important when the generative AI runs on servers located outside the EU. In that case, additional safeguards (such as the European Commission’s standard contractual clauses) are necessary to ensure that the level of protection remains the same.

In conclusion

At the end of the day, it is clear that there is a lot to consider if you want to implement generative AI within your business practices. It can yield significant benefits, and the results can be astonishing, but LLMs also come with serious drawbacks. How this plays out varies across marketing disciplines: one discipline may use LLMs experimentally, while another incorporates them more systematically. In any case, a truly mature implementation of this new technology is still lacking.

Want to learn more about Generative AI and Large Language Models? Then tune in to the first episode of the second season of the DDMA Podcast: Shaping The Future, where they delve into the positive and negative implications of this technology for the field of marketing. 

Lee Boonstra

Applied AI Engineer and Developer Advocate at Google

Marike van de Klomp

Lead Product Owner Digital Channels & Conversational AI at ABN AMRO

Robin Hogenkamp

Senior Business Consultant CX | VodafoneZiggo (committee chair)

Romar van der Leij

Former Legal Counsel | DDMA
