The Nuances of Voice AI Ethics and What Companies Should Do

In early 2016, Microsoft announced Tay, an AI chatbot designed to converse with and learn from random users on the internet. Within 24 hours, the bot began spouting racist and misogynistic statements, seemingly without provocation. The team unplugged Tay, realizing that the ethics of unleashing a chatbot on the internet were, at best, unexplored.

The real questions are whether AI designed for random human interaction is ethical and whether the AI can be coded to stay within bounds. This becomes even more critical with voice AI, which companies use to communicate automatically and directly with customers.

Let's take a moment to discuss what makes AI ethical or unethical and how companies can integrate AI into their customer-facing roles in an ethical way.

What Makes AI Unethical?

AI is supposed to be neutral. Information goes into a black box (the model) and comes back with some degree of processing. In Tay's case, the researchers created their model by feeding the AI a massive amount of conversational data influenced by human interaction. The result? An unethical model that hurt rather than helped.

What happens when an AI receives CCTV data? Personal information? Photographs and art? What comes out the other side?

The top three contributing factors to ethical dilemmas in AI are unethical use, data privacy concerns, and system biases.

As technology advances, new AI models and methods emerge daily, and their use keeps growing. Researchers and companies deploy them almost haphazardly; many are not well understood or regulated. This often produces unethical outcomes, even when the underlying systems have minimized bias.

Data privacy issues arise because AI models are built and trained on data coming directly from users. In many cases, customers unwittingly become test subjects in one of the greatest unregulated AI experiments in history. Your words, images, biometrics and even social media are fair game. But should they be?

Finally, thanks to Tay and other examples, we know that AI systems are biased. As with any creation, what you put in is what you get out.

One of the most salient examples of bias traces back to 2003, when a slew of Enron emails was made public during the investigation of the company; researchers have used that corpus to train conversational AI for the two decades since. The trained models saw the world from the perspective of a failed Houston energy trader. How many of us would say those emails represent our point of view?
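The mechanics are easy to demonstrate. Below is a minimal sketch of a toy bigram language model, with a tiny corpus invented for illustration; real conversational models are vastly larger, but the principle is the same: the model can only recombine the worldview of its training text.

```python
import random
from collections import defaultdict

# Toy corpus, invented for illustration: a deliberately skewed worldview.
corpus = (
    "the market is rigged . traders always win . "
    "regulators never act . the market never lies ."
).split()

# Count bigram transitions: each word maps to the next words observed after it.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Sample a short sequence by walking the observed bigrams."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
# Whatever this prints, it can only echo and recombine the training
# text. A model fed Enron emails is no different, just bigger.
```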

Ethics in Voice AI

Voice AI shares the same fundamental ethical concerns as AI in general, but because voice closely mimics human speech and experience, there is a higher risk of manipulation and misrepresentation. Plus, we tend to trust things with a voice, including user-friendly interfaces like Alexa and Siri.

Voice AI is also very likely to interact with a real customer in real time. In other words, voice AIs are the representatives of your business. And just like your human representatives, you want to make sure your AI is trained and acts in accordance with company values and a professional code of conduct.

Human agents (and AI systems) should not treat callers differently for reasons unrelated to their service tier. But depending on the dataset, the system may not provide a consistent experience. For example, if far more men than women call a center, a gender classifier trained on those calls may skew toward male voices and misclassify women more often.
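As a hypothetical sketch of that failure mode (all numbers below are invented for illustration), consider a naive classifier that labels a caller by voice pitch and is fit on call data that skews 9-to-1 male. Minimizing total training error pulls the cutoff toward protecting the majority group, so the minority group ends up with a worse error rate:

```python
import random

random.seed(0)

# Simulated fundamental frequencies in Hz (typical male voices cluster
# near 120 Hz, female near 210 Hz). The 9:1 split mimics a call center
# where far more men call in. All figures are illustrative.
train_male   = [random.gauss(120, 25) for _ in range(900)]
train_female = [random.gauss(210, 25) for _ in range(100)]

def fit_cutoff(male_hz, female_hz):
    """Choose the pitch cutoff that minimizes total training errors.
    Below the cutoff we predict 'male'; at or above it, 'female'."""
    best_cutoff, best_errors = None, float("inf")
    for cutoff in range(80, 280):
        errors = sum(m >= cutoff for m in male_hz) \
               + sum(f < cutoff for f in female_hz)
        if errors < best_errors:
            best_cutoff, best_errors = cutoff, errors
    return best_cutoff

cutoff = fit_cutoff(train_male, train_female)

# Evaluate each group separately on fresh samples.
test_male   = [random.gauss(120, 25) for _ in range(1000)]
test_female = [random.gauss(210, 25) for _ in range(1000)]
male_err   = sum(m >= cutoff for m in test_male) / 1000
female_err = sum(f < cutoff for f in test_female) / 1000

print(f"cutoff={cutoff} Hz, male error={male_err:.1%}, female error={female_err:.1%}")
# Because male errors are weighted 9x in training, the fitted cutoff
# drifts upward, and female callers are misclassified far more often:
# the same system delivers a measurably worse experience to one group.
```

The fix is not more data of the same shape; it is rebalancing or reweighting the training set so that each group's errors count equally.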
