Responsible AI must be a priority — now

Join leaders July 26-28 for Transform AI and Edge Week. Hear high-level leaders discuss topics around AI/ML technology, conversational AI, IVA, NLP, Edge, and more. Book your free pass now!

Responsible artificial intelligence (AI) must be embedded in a company's DNA.

"Why is bias in AI something we all need to think about today? It's because AI powers everything we do today," said Miriam Vogel, President and CEO of EqualAI, to a live audience at this week's Transform 2022 event.

Vogel went deep into the topics of AI bias and responsible AI in a fireside chat led by Victoria Espinel of The Software Alliance trade group.

Vogel has extensive technology and policy experience, including at the White House, the U.S. Department of Justice (DOJ), and the nonprofit EqualAI, which is dedicated to reducing unconscious biases in the development and use of AI. She also chairs the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.

Event

Transform 2022

Sign up now to get your free virtual pass to Transform AI Week, July 26-28. Hear from the AI and data leaders of Visa, Lowe's, eBay, Credit Karma, Kaiser, Honeywell, Google, Nissan, Toyota, John Deere, and more.

Register here.

As she noted, AI is becoming increasingly important in our daily lives and dramatically improving them, but at the same time we need to understand its many inherent risks. Everyone, from builders and creators to users, must make AI "our partner," as well as efficient, effective and trustworthy.

"You can't build trust with your app if you're not sure it's safe for you, that it's designed for you," Vogel said.

We need to address responsible AI now, Vogel said, because we're still setting "the rules of the road." What constitutes AI remains a kind of "grey area".

What if it's not resolved? The consequences could be disastrous. People may not get the right healthcare or job opportunities because of AI bias, and “litigation will come, regulation will come,” Vogel warned.

When this happens, "we can't unpack the AI systems that we've become so dependent on and which have become intertwined," she said. "Right now, today, it's time for us to be very careful about what we're building and deploying, making sure we're assessing the risks, making sure we're mitigating those risks."

Good "AI Hygiene"

Companies need to address responsible AI now by setting strong governance practices and policies and by building a culture that is safe, collaborative, and visible. It needs to be "going through the levers" and managed mindfully and intentionally, Vogel said.

For example, when hiring, companies can start simply by asking if platforms have been tested for discrimination.

"This core question is extremely powerful," Vogel said.

An organization's HR team should be supported by AI that is inclusive and does not disqualify the best candidates from employment or advancement.

It's a matter of "good AI hygiene," Vogel said, and it starts with the C-suite.

"Why the C-suite? Because at the end of the day, if you don't get buy-in at the highest levels, you can't get the governance framework in place, you can't get investment in the governance framework and you can't get buy-in to make sure you're doing it the right way," Vogel said.

Furthermore, bias detection is an ongoing process: once a framework has been established, a long-term process should be in place to continuously assess whether bias is hindering the systems.
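One common way such an ongoing assessment is operationalized is a periodic disparate-impact check on a system's decisions. The sketch below is illustrative only (the talk did not describe a specific method): it applies the widely used "four-fifths rule" heuristic, in which a group whose selection rate falls below 80% of the most favored group's rate is flagged for closer review. The function names and sample data are assumptions for the example.

```python
# Hypothetical sketch of a recurring bias check on hiring decisions,
# using the four-fifths (80%) adverse-impact heuristic.
# All names and data here are illustrative, not from the talk.

def selection_rates(outcomes):
    """Selection rate (selected / total) per group.

    `outcomes` maps group name -> list of 0/1 decisions
    (1 = candidate advanced, 0 = rejected).
    """
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def adverse_impact_ratios(outcomes, reference_group):
    """Each group's selection rate divided by the reference group's.

    Under the four-fifths heuristic, a ratio below 0.8 flags
    the system for closer review.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Example run on one review period's decisions
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 = 0.375 selected
}
ratios = adverse_impact_ratios(outcomes, reference_group="group_a")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b: 0.375 / 0.75 = 0.5, below the 0.8 threshold
```

Run on a schedule against each period's decisions, a check like this gives the long-term monitoring loop a concrete trigger; a flag is a signal to investigate, not proof of discrimination on its own.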

"Bias can become part of every human contact...
