Google is taking sign-ups to talk to its supposedly sentient chatbot

At the I/O 2022 conference last May, Google CEO Sundar Pichai announced that the company would gradually roll out its experimental LaMDA 2 conversational AI model to select beta users in the coming months. Those months have arrived. On Thursday, researchers from Google's AI division announced that interested users can sign up to explore the model as access becomes more widely available.

Regular readers will recognize LaMDA as the supposedly sentient natural language processing (NLP) model that got a Google researcher fired. NLPs are a class of AI models designed to parse human speech into actionable commands; they power the functionality of digital assistants and chatbots like Siri or Alexa, and do the heavy lifting for applications like real-time translation and captioning. Basically, whenever you talk to a computer, it's using NLP technology to listen.
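For readers new to the term, here is a minimal, purely illustrative sketch of the "speech to actionable command" idea described above. It is a toy keyword matcher, not how LaMDA or any production assistant actually works; every name and keyword in it is hypothetical.

```python
# Toy illustration of turning a transcribed utterance into a coarse "intent."
# Real assistants and models like LaMDA use large neural networks, not rules;
# this only sketches the input -> intent -> action flow described above.

INTENT_KEYWORDS = {
    "set_timer": ["timer", "remind me"],
    "weather": ["weather", "forecast", "rain"],
    "translate": ["translate", "in spanish", "in french"],
}

def parse_intent(utterance: str) -> str:
    """Map a transcribed utterance to a coarse intent label."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"  # a real system would ask a clarifying question here

if __name__ == "__main__":
    for phrase in ["Set a timer for ten minutes",
                   "Will it rain tomorrow?",
                   "How do you say 'dog' in Spanish?"]:
        print(f"{phrase!r} -> {parse_intent(phrase)}")
```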

"I'm sorry, I didn't quite understand" is a phrase that still haunts the dreams of many early Siri users, although over the past decade NLP technology has advanced to a fast pace. Today's models are trained on hundreds of billions of parameters, can translate hundreds of languages ​​in real time, and even carry lessons learned in one conversation to later discussions.

Google's AI Test Kitchen will allow beta users to experiment with and explore NLP interactions in a controlled, presumably supervised, sandbox. Access begins rolling out to small groups of US Android users today before coming to iOS devices in the coming weeks. The program offers a set of guided demos that show users the capabilities of LaMDA.

"The first demo, 'Imagine It,' lets you name a place and offers avenues for exploring your imagination," Tris Warkentin, Group Product Manager at Google Research, and Josh Woodward, Senior Director of Product Management for Labs at Google, wrote in a Google AI blog on Thursday. "With the 'List It' demo, you can share a goal or topic, and LaMDA breaks it down into a list of helpful subtasks. And in the 'Talk About It (Dogs Edition)' demo, you can have fun, open conversation about dogs and only dogs, which explores LaMDA's ability to stay on topic even if you try to stray off topic."

The focus on safe and responsible interactions is par for the course in an industry where there's already a name for AI chatbots going full Nazi, and that name is Tay. Fortunately, that extremely embarrassing incident was a lesson that Microsoft and much of the rest of the AI field took to heart, which is why we see such strict restrictions on what users can conjure up with Midjourney or DALL-E 2, and on what topics Facebook's BlenderBot 3 can discuss.

This does not mean the system is infallible. "We conducted a series of dedicated adversarial tests to find additional flaws in the model," Warkentin and Woodward wrote. "We enlisted expert members of the red team...who discovered other harmful, yet subtle, outputs." These include the model failing to produce a response because it "has difficulty differentiating between benign and adversarial prompts," and producing "harmful or toxic responses based on biases in its training data," as many AI models these days are prone to do.


