Microsoft Considers More Limits for Its New A.I. Chatbot

The company knew the new technology had problems, such as occasional inaccuracy. But users have had surprising and unnerving interactions.

When Microsoft introduced a new version of its Bing search engine that includes the artificial intelligence of a chatbot last week, company executives knew they were going out on a limb. They expected some of the new chatbot's responses might not be entirely accurate, and had built in measures to protect against users who tried to push it to do strange things or to unleash racist or harmful screeds.

But Microsoft was not quite ready for the surprising creepiness experienced by users who tried to engage the chatbot in open-ended and probing personal conversations, even though that issue is well known in the small world of researchers who specialize in artificial intelligence. Now the company is considering changes to the new Bing in an attempt to rein in some of its more alarming and oddly human responses. Microsoft plans to add tools that let users restart conversations, or give them more control over the tone.

Kevin Scott, Microsoft's chief technology officer, told The New York Times that the company was also considering limiting the length of conversations before they veer into strange territory. Microsoft said long discussions can confuse the chatbot, and that it picks up on the tone of its users, sometimes becoming irritable.

"An area where we are learning a new use case for chat is how people use it as a tool for broader world discovery and social entertainment," the company wrote in a blog post late Wednesday. Microsoft said it was an example of using a new technology in a way "that we hadn't fully considered."

That Microsoft, traditionally a cautious company with products that range from high-end enterprise software to video games, was willing to take a chance on unpredictable technology shows just how enthusiastic the tech industry has become about artificial intelligence. The company declined to comment for this article.

In November, OpenAI, a San Francisco startup in which Microsoft has invested $13 billion, launched ChatGPT, an online chat tool that uses a technology called generative A.I. It quickly became a source of fascination in Silicon Valley, and companies raced to find an answer.

Microsoft's new search tool combines its Bing search engine with the underlying technology built by OpenAI. Satya Nadella, Microsoft's chief executive, said in an interview last week that it would transform how people find information, making search far more relevant and conversational. The release, imperfections and all, was a critical example of Microsoft's "breathtaking pace" to integrate generative A.I. into its products, he said. At a news conference on Microsoft's campus in Redmond, Wash., executives repeatedly said it was time to get the tool out of the "lab" and into the hands of the public.

"I feel especially in the West, there's a lot more things like, 'Oh my God, what's going to happen because of this A.I.?'" Mr. Nadella said. "And it's better to sort of say, 'Hey listen, is this really helping you or not?'"

Oren Etzioni, professor emeritus at the University of Washington and founding chief executive of the Allen Institute for AI, a leading lab in Seattle, said Microsoft "took a calculated risk, trying to control the technology as much as it could be controlled."

He added that many of the most troubling cases involved pushing the technology beyond its ordinary behavior. "It can be very surprising how cunning people can be in eliciting inappropriate responses from...
