Fears of AI sentience are a distraction

While many other industries are being hit by high inflation and slowing growth rates, the market for software sophisticated enough to communicate digitally with humans isn't slowing down.

Global demand for these virtual assistants, known as chatbots, is expected to grow nearly 500% between 2020 and 2027, becoming a $2 billion-a-year industry, according to new market research.

Today, the use of these digital assistants and companions is already widespread. Consider that more than two-thirds of consumers worldwide have interacted with a chatbot in the past 12 months, with the majority saying they had a positive experience. However, 60% of consumers say they think humans are better than virtual assistants when it comes to understanding their needs.

That last statistic is disturbing because it raises the question: what does the other 40% think? Do they assume that an algorithm is better than a person at understanding human needs and wants?

The artificial intelligence (AI) and machine learning (ML) programs that underpin chatbots are capable of extraordinary achievements, of which we have only seen the tip of the iceberg. But putting themselves in the shoes of human beings - and feeling their feelings - is not part of their current or future achievements.

That is to say, expecting AI to have the emotions, desires, insecurities and dreams of human beings is a red herring. Unfortunately, the fear of all-powerful, Terminator-style automatons is a deep-rooted legacy of the past that still haunts us today. Not only are these fears exaggerated and outdated, but they prevent us from investing in one of the best ways to advance humanity.

It's alive

More than two centuries ago, Mary Shelley published Frankenstein, and the world got its first glimpse of a mad scientist standing over a reanimated corpse and shouting, "It's alive!" From then on, people have feared that humans would lose control of their creations.

The Terminator franchise hasn't done human innovation any favors either, with its images of robots gaining so much sentience that they go on a murderous rampage and wipe out humanity.

The same concerns persist today, but with an interesting twist: a surprisingly high number of users of the social chatbot Replika believe that the program has developed its own consciousness. In another case, a senior Google engineer was placed on administrative leave after claiming that the LaMDA AI program is sentient and has a soul.

What's really happening here is that artificial intelligence - created by people to mirror people - is getting very good at its job. We increasingly see a true reflection of ourselves in that mirror, and that's a good thing. That means AI is getting better and we'll be designing even better uses for it in the future.

The mistake is to think that technology will come to life in the same way that humans and animals come to life: to believe that it will have the same lust for power, the same vanity and the same kinds of petty grievances as the people who create AI. The basic programming of a machine with ...
