How to manage risk as AI spreads through your organization

Register now for your free virtual pass to the November 9 Low-Code/No-Code Summit. Hear from the leaders of Service Now, Credit Karma, Stitch Fix, Appian, and more. Learn more.

As AI permeates the enterprise, organizations are struggling to balance benefits and risks. AI is already embedded in a range of tools, from IT infrastructure management to DevOps software to CRM suites, but most of these tools have been adopted without an AI-specific risk mitigation strategy in place.

Of course, it's important to remember that the list of potential benefits of AI is just as long as the list of risks, which is partly why so many organizations skimp on risk assessments in the first place.

Many organizations have already achieved major breakthroughs that would not have been possible without AI. For example, AI is being deployed across the healthcare industry for everything from robot-assisted surgery to reducing medication dosing errors to streamlining administrative workflows. GE Aviation relies on AI to create digital models that better predict when parts will fail, and of course there are many ways to use AI to save money, such as conversational AI that takes orders at restaurant drive-thrus.

That's the good side of AI.


Now let's look at the bad and the ugly.

The risks of AI are as varied as the many use cases its proponents hype, but three areas have proven particularly worrisome: bias, security, and warfare. Let's look at each of these issues separately.

Bias

While HR departments originally believed that AI could be used to eliminate bias in recruitment, the opposite has happened. Models with implicit biases baked into their algorithms and training data end up actively discriminating against women and minorities.

For example, Amazon had to abandon its AI-powered automated resume parser because it excluded female applicants. Similarly, when Microsoft used tweets to train a chatbot to interact with Twitter users, they created a monster. As a CBS News headline put it, “Microsoft Shuts Down AI Chatbot After It Turns Nazi.”

These problems may seem inevitable in hindsight, but if market leaders like Microsoft and Google can make these mistakes, so can your business. With Amazon, the AI had been trained on resumes that came mostly from male candidates. With Microsoft's chatbot, the only positive thing you can say about the experiment is that at least they didn't use 8chan to train the AI. Spend five minutes swimming in the toxicity of Twitter and you'll realize what a terrible idea it was to use that dataset to train anything.
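The mechanism behind Amazon's failure is easy to reproduce in miniature. The toy sketch below (entirely invented data and a deliberately naive scoring model, not Amazon's actual system) scores resume words by how often they appear in historical "hired" versus "rejected" examples. Because the historical hires skew male, a word like "women's" ends up penalized even though it says nothing about qualifications:

```python
from collections import Counter

# Toy historical hiring data as bags of words. The "hired" set skews
# heavily male, mirroring the skewed training data described above.
# All resumes here are invented for illustration.
hired = [
    "captain chess club software engineering",
    "led backend team software engineering",
    "captain debate team software engineering",
]
rejected = [
    "women's chess club captain software engineering",
    "women's college software engineering",
]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears in hired resumes."""
    pos = Counter(w for r in hired for w in r.split())
    neg = Counter(w for r in rejected for w in r.split())
    return {w: pos[w] / len(hired) - neg[w] / len(rejected)
            for w in set(pos) | set(neg)}

def rank(resume, scores):
    """Rank a candidate by summing the learned word scores."""
    return sum(scores.get(w, 0) for w in resume.split())

scores = word_scores(hired, rejected)

# Two resumes identical except one mentions a women's organization:
print(rank("chess club captain software engineering", scores))
print(rank("women's chess club captain software engineering", scores))
```

The second resume scores lower purely because "women's" never appeared among the historical hires, so the model learns to treat it as a negative signal. Real systems are far more complex, but the failure mode is the same: a model trained on biased outcomes reproduces the bias.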

Security issues

Uber, Toyota, GM, Google, and Tesla, among others, have been working to make self-driving fleets a reality. Unfortunately, the more researchers experiment with self-driving cars, the more distant full autonomy appears.

In 2016, the first death involving a self-driving system occurred in Florida. According to the National Highway Traffic Safety Administration, a Tesla in Autopilot mode failed to stop for a tractor-trailer making a left turn at an intersection. The Tesla crashed into the big rig, f...
