Upcoming AI regulations may not protect us from dangerous AI

Most AI systems today are neural networks: algorithms that mimic a biological brain to process vast amounts of data. They are known for being fast, but they are impenetrable. Neural networks require huge amounts of data to learn how to make decisions; yet the reasons for their decisions are hidden in countless layers of artificial neurons, each separately tuned to its own parameters.

In other words, neural networks are "black boxes." The developers of a neural network don't just lack control over what the AI does; they don't even know why it does what it does.
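
To make the black-box problem concrete, here is a minimal sketch: a hypothetical toy network in plain Python/NumPy, not any real production system. The decision comes out of matrices of learned numbers, and inspecting those numbers tells you nothing human-readable about why the model answered the way it did.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network. In a real system these weights would be
# learned from data; here they are random, but the point stands:
# the model's "reasoning" is just matrices of numbers.
W1 = rng.normal(size=(784, 128))   # input layer -> 128 hidden neurons
W2 = rng.normal(size=(128, 10))    # hidden layer -> 10 output classes

def predict(x):
    hidden = np.maximum(0, x @ W1)  # ReLU activation
    logits = hidden @ W2
    return int(np.argmax(logits))   # the model's decision

x = rng.normal(size=784)            # e.g., a flattened 28x28 image
print("prediction:", predict(x))

# Over 100,000 tuned parameters stand between input and output.
# Inspecting W1 and W2 reveals nothing human-readable about *why*
# the network chose this class.
print("parameters:", W1.size + W2.size)
```

Scale this toy up by a few orders of magnitude in layers and parameters, and you get the systems behind the examples below.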

It's an alarming reality. And it's getting worse.

Despite the risk inherent in the technology, neural networks are beginning to manage key infrastructure for critical business and government functions. As AI systems proliferate, the list of examples of dangerous neural networks grows every day. For example:

- Already, at least one person has died in an AI-driven vehicle.
- Microsoft's Copilot AI apparently memorized Texas A&M professor Tim Davis' sparse matrix transposition code.
- The Apple Card algorithm assigned Steve Wozniak a credit limit 20 times his wife's, even though the couple shares their finances.
- London's Metropolitan Police used a neural network to search for pornography. Unfortunately, it kept identifying sand dunes as bare breasts.
- Microsoft's neural network chatbot Tay was supposed to mimic a curious teenage girl on Twitter. In less than 24 hours, Tay became a racist, misogynistic Holocaust denier.
- Google Photos used neural networks to identify people, objects, animals, food and backgrounds. It then inexplicably labeled photos of Black people as gorillas.

These results range from the deadly to the comical to the grossly offensive. And as long as neural networks are in use, we risk significant harm. Businesses and consumers rightly fear that as long as AI remains opaque, it remains dangerous.

A regulatory response is coming

In response to these concerns, the EU has proposed an Artificial Intelligence Act.
