
Ethical and legal considerations for generative AI in revenue marketing


Let's face it - artificial intelligence is changing marketing in huge ways. With AI tools that can analyze customer data and deliver hyper-targeted ads, marketers have never had more power to understand and sway consumers. 

But with great power comes great responsibility. 🕷️

As AI takes on a growing role, marketers need to get serious about tricky ethical issues around privacy, transparency, and algorithmic bias. This article dives into some of the core ethical and legal dilemmas raised by AI marketing, from bias, privacy, and transparency to intellectual property.


Key considerations marketers should keep in mind while implementing or using AI tools 

The question of ethics and morals in AI makes me consider a broader existential question: what kind of world do we want to live in? For me, delving into AI's ethical dimensions means establishing the rules and taxonomy that govern our interaction with AI and its interaction with us.

We've seen attempts at regulation across different disciplines and major companies, and navigating these questions can get complicated quickly.

Although I'm not an ethicist, I believe that the fields of liberal arts, theology, and ethics have an important role to play in our approach toward AI. People studying these disciplines are often required to focus on existential questions like our interaction with nature and our understanding of good and bad. These conversations provide a pedagogical basis for progressing with AI.

Chatbots like GPT exemplify this interaction. They can refuse to tell certain jokes, adhering to a set of rules programmed into them. But this adherence is not perfect, and we’re in a massive testing phase with countless people using the same tool and getting different answers. It underscores the need for a governing body to decide what responses AI can and can't give.

In the same vein, advertisers and marketers must set rules for their organizations on how to operate with these emerging technologies and what they should accept from them with or without review. We're in the midst of an evolution as we establish these rules for ourselves.

While we're seeing some movement, like AI cooperatives, we're not quite there yet. Even some of the best-known figures in AI ethics, like the researchers fired from Google a few years ago, were engineers rather than ethicists by training.

I foresee significant movement in academia that will bleed over into these technologies. There's work being done, but a lot is still left to accomplish.

Gain all the knowledge you need to leverage generative AI to optimize your marketing efforts and drive better results in your campaigns with our generative AI eBook.  

From Data to Dollars: How Generative AI Transforms Revenue, Digital, and Growth Marketing [eBook]
Boost conversions, optimize campaigns, and maximize your revenue growth with the ultimate eBook on leveraging generative AI in marketing.

Legal implications become especially complex when we talk about generative AI. Consider a case I recall in which an AI was trained on the artwork of an artist from California.

The artist felt that the company that trained the model on his work was exploiting his intellectual property for future profit without compensating him.

This kind of scenario poses a significant challenge: first, identifying whether the work generated by the AI is in fact based on someone else's intellectual property, and second, proving that in a court of law.

We also have to consider when such AI-generated content crosses the line. For instance, copyright protection generally expires a set period after the creator's death (in the US, 70 years), after which the work can be used freely.

How do these rules apply in the context of AI? These are not easy questions to answer, but they definitely need to be considered as technology advances.

Is generative AI a worthy investment or a costly experiment? Hear more from Ryan in his episode of the Let’s Talk Revenue Marketing podcast. 

Evaluating the ROI of generative AI
Are we on the brink of a revolution in content creation? Join us as we delve deeper into the advantages of machine-generated content and confront the burning question: Is generative AI a worthy investment or just a costly experiment?

Bias

The ethical landscape surrounding AI is complex and multifaceted. Let's consider bias first. When an AI algorithm generates something, it relies on the data fed to it. For instance, if we feed the algorithm only Monet's artwork and ask it to depict a ship sinking in the ocean, it will do so in the style of Monet because that's the only reference it has. 

This can create a sort of "information bias" that leads to downstream problems, because the output is only as good as the sample. Therefore, it's crucial to understand that bias in AI doesn't necessarily mean prejudice; more often it reflects the limitations of the input data and the rules guiding that data.
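
To make this concrete, here's a minimal Python sketch (with invented data) of how a limited sample constrains what a generator can produce: if the only style it has ever seen is Monet, Monet is the only style it can output.

```python
# Minimal sketch of "information bias": a toy generator can only
# reproduce styles that are present in its training sample.
from collections import Counter
import random

# Hypothetical training set: every reference image is labeled "monet",
# so the sample contains no other styles at all.
training_styles = ["monet"] * 500

style_counts = Counter(training_styles)

def generate_style(counts: Counter) -> str:
    """Pick an output style in proportion to what the model has seen."""
    styles, weights = zip(*counts.items())
    return random.choices(styles, weights=weights, k=1)[0]

# Ask for "a ship sinking in the ocean" - the subject can vary,
# but the style is predetermined by the sample, not by the request.
print(generate_style(style_counts))  # always "monet"
```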

Privacy

Now, regarding privacy, we have to recognize that generative AI systems are not entirely different from our existing interactions with social media or search engines like Google. They collect and process data to improve their services, but they also potentially reveal a lot about us. 

Certain AI companies have controls in place to prevent company-specific information from being input into the systems, but even these systems are not immune to vulnerabilities.

In essence, users should be aware of what they're putting into these systems and understand how these inputs could potentially influence the algorithm.
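
As one practical illustration, here's a minimal sketch of screening prompts for obvious identifiers before they leave the organization. The patterns are simplistic and purely illustrative; a real privacy control would need far more than this.

```python
# A minimal sketch of screening prompts for obvious identifiers before
# sending them to an external AI service. The patterns are illustrative
# only and would miss many forms of sensitive data.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before submission."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

raw = "Draft a renewal reminder for jane.doe@example.com, phone 555-010-2345."
print(redact(raw))
# "Draft a renewal reminder for [email removed], phone [phone removed]."
```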

Transparency

Lastly, let's discuss transparency. There's been an ongoing conversation about whether we should have "algorithmic transparency", which essentially means understanding how an AI arrived at a particular conclusion.

This can be quite challenging, especially when the AI's decision-making process doesn't leave a clear trail. I personally believe that in the future, platforms should provide algorithmic transparency so users understand why they're seeing what they're seeing.
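
As a rough illustration of what that could look like, here's a minimal sketch of recording the signals behind a targeting decision so the reasoning can be surfaced back to the user. The signal names and weights are invented.

```python
# A minimal sketch of "why am I seeing this?" transparency: record the
# signals that drove a targeting decision so the reasoning can be shown
# to the user. Signal names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    show_ad: bool
    reasons: list[str]

WEIGHTS = {
    "visited_pricing_page": 0.6,
    "opened_last_email": 0.3,
    "new_visitor": -0.4,
}

def decide(signals: dict[str, bool], threshold: float = 0.5) -> Decision:
    score = sum(WEIGHTS[s] for s, active in signals.items() if active)
    reasons = [s for s, active in signals.items() if active and WEIGHTS[s] > 0]
    return Decision(show_ad=score >= threshold, reasons=reasons)

d = decide({"visited_pricing_page": True, "opened_last_email": True, "new_visitor": False})
print(d.show_ad, d.reasons)  # True ['visited_pricing_page', 'opened_last_email']
```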

Beyond these ethical concerns, there are more serious issues, such as the potential weaponization of AI for fraud or manipulation. One widely reported incident involved a Belgian man who ended his own life after prolonged conversations with a chatbot. These are serious concerns we need to address as we work out how to interact responsibly with these emerging technologies.

Specific ethical dilemma case studies - and how they were addressed

An ethical example I've encountered relates to advertising. The question is whether advertisers want to sponsor content dealing with topics like death and dismemberment. If the algorithm misinterprets the advertiser's intentions, the advertiser might believe they're buying one kind of placement to reach customers, but they end up with a very different result.

That's a real challenge, and we're seeing it play out in real time. For instance, I might ask ChatGPT to write a bio for someone. Sometimes the bio comes out great; other times, not so much. With billions of web pages to handle, ensuring quality is challenging.
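
To make the content-adjacency risk from the advertising example concrete, here's a minimal sketch of a pre-bid brand-safety check. The topic labels and the blocklist are hypothetical.

```python
# A minimal sketch of a pre-bid brand-safety check: skip placements
# whose page topics match categories the advertiser has excluded.
# Topic labels and the blocklist are hypothetical.
BLOCKED_TOPICS = {"death", "dismemberment", "graphic violence"}

def is_brand_safe(page_topics: set[str]) -> bool:
    """Return True only if none of the page's topics are excluded."""
    return page_topics.isdisjoint(BLOCKED_TOPICS)

print(is_brand_safe({"travel", "adventure"}))    # True  - safe to bid
print(is_brand_safe({"crime report", "death"}))  # False - skip this placement
```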

The issue of algorithmic transparency is key. The Wall Street Journal did a deep dive into TikTok a year or two ago, which I highly recommend. It provides a great exposé on the ethics and morality of social media operations.

This becomes more complex when we look at broader issues, such as the influence of deep fakes on voter behavior in election cycles. While deep fakes may not be generative AI in the traditional sense, forms of generative AI can create content targeted at potential voters.

This was well depicted in Brexit: The Uncivil War, the film with Benedict Cumberbatch. From a marketing perspective, if this content is gaining traction with voters, it means the campaign is effective.

However, the veracity of the content generated can be questionable. Balancing these factors is extremely challenging at scale, whether it's an election cycle or targeting web pages where an advertiser may think they're buying one thing, but they get something else. These are areas where we must tread carefully.

I believe bioethics provides a good model for this situation. There's certainly a growing number of AI evangelists and researchers, some of whom are even losing their jobs over their work, who might be able to lobby policymakers. I think organizations like the Center for Humane Technology will play a pivotal role here.

Just as GDPR necessitated privacy officers, we might see a new C-suite role that can liaise with scientists, ethicists, and policymakers outside the organization. Not everyone is equipped to analyze the implications and potential downstream outcomes of AI technologies. We might even see the rise of 'ethics as a service' technologies.

The traditions and understanding of ethics vary depending on cultural and intellectual backgrounds, so I don't think there will be a one-size-fits-all solution.

Similar to the medical field, where a Catholic hospital's ethical guidelines may differ from a secular one, we need to offer consumers the choice to opt for AI systems that align with their ethical traditions.

Currently, we don't have such frameworks in place. We're lacking in options and transparency for consumers. I believe we'll get there in the next ten years, and generative AI might even speed up the process, though it won't work alone. A lot of peer review will be necessary.

Written by:

Ryan Boh

Ryan is the Head of Identity at Lockr
