How Diveplane uses Explainable AI to drive AI adoption


"Black box" artificial intelligence (AI) systems are designed to automate decision-making, mapping a user's characteristics into a class that predicts individual behavioral traits such as credit risk, state of health, etc., without revealing why. This is problematic, not only because of the lack of transparency, but also because of the potential biases inherited by the algorithms from human biases or anything hidden in the training data that can lead to unfair or incorrect decisions.

As AI continues to proliferate, technology companies increasingly need to demonstrate that they can trace the decision-making process, a capability called explainable AI. In practice, this means understanding why a particular prediction or decision was made, which factors were most important in making it, and how confident the model is in the result.
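For a simple, fully transparent model, those three questions can be read off directly. The Python sketch below (using scikit-learn, with invented feature names and toy data) illustrates the kind of output explainability aims for in a linear credit-risk classifier; it is a generic example, not a description of Diveplane's technology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-risk features and toy data, invented purely for illustration.
feature_names = ["income", "debt_ratio", "late_payments"]
X = np.array([[60.0, 0.2, 0], [25.0, 0.7, 3], [48.0, 0.4, 1], [30.0, 0.9, 5]])
y = np.array([0, 1, 0, 1])  # 0 = low risk, 1 = high risk

model = LogisticRegression().fit(X, y)

applicant = np.array([[35.0, 0.8, 2]])
proba = model.predict_proba(applicant)[0]        # "how confident is the model?"
contributions = model.coef_[0] * applicant[0]    # "which factors drove the decision?"

print(f"predicted class: {model.predict(applicant)[0]} (confidence {proba.max():.2f})")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: contribution {c:+.2f}")    # a rough 'why' for a linear model
```

A black-box system produces only the first line of that output; explainable AI tools aim to recover the rest even when the underlying model is far more complex than this one.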

To help users ensure that operational decisions are fair and transparent, Diveplane says its products are designed around three principles: predict, explain and show.

Explosive growth of the AI software market

Raleigh, North Carolina-based Diveplane today announced it has raised $25 million in series A funding to strengthen its position in the AI software market and further invest in its explainable AI solutions, which are designed to provide fair and transparent decision-making and data privacy.


Gartner estimates that the AI software market will reach $62 billion in 2022 and will continue to grow at a rate of more than 30% through 2027. Diveplane says it is well positioned to capitalize on these opportunities through its support for multiple real-world uses — prediction, anomaly detection, anonymization, and synthetic data creation — all from a single model on a single platform.
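As a rough illustration of the "single model, multiple uses" idea, consider an instance-based model, where the same stored cases can answer both a prediction query and an anomaly query. The sketch below is a minimal, assumption-level example in Python using scikit-learn; it is not how Diveplane's Reactor is implemented, and the data are invented.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))                    # hypothetical stored cases
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# The single fitted "model": a searchable index over the training cases themselves.
index = NearestNeighbors(n_neighbors=5).fit(X_train)

def predict(x):
    """Prediction: majority label among the nearest stored cases."""
    _, idx = index.kneighbors([x])
    return int(round(y_train[idx[0]].mean()))

def anomaly_score(x):
    """Anomaly detection: mean distance to the nearest cases (larger = more unusual)."""
    dist, _ = index.kneighbors([x])
    return float(dist[0].mean())

query = [0.4, 0.2, -0.1]
print("prediction:", predict(query))
print("anomaly score (typical point):", round(anomaly_score(query), 3))
print("anomaly score (outlier):", round(anomaly_score([8.0, 8.0, 8.0]), 3))
```

Because the prediction and anomaly queries share the same underlying index, there is only one artifact to maintain, audit and explain, which is the practical appeal of the single-model approach.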

Diveplane's leap from gaming to explainable AI

The company is led by Mike Capps, the former president of Epic Games, and Hazardous Software co-founders Chris Hazard and Mike Resnick.

In fact, Diveplane's AI technology originated at Hazardous Software, which Hazard and Resnick founded in 2007.

The company's game caught the attention of the general public, and also of audiences the founders did not expect, including the United States military. Although they had to go back to the drawing board to reconfigure the technology for military use, they were able to create AI software for decision support, visualization and simulation of difficult strategy problems.

Building on that work at Hazardous Software, they eventually created Diveplane to develop an explainable AI system that could support multiple use cases: prediction, anomaly detection, anonymization, and synthetic data creation. And the rest, as they say, is history.

"Today, we're delivering practical, ethical, and efficient machine learning. And you'll only need one model to do it all," Hazard told VentureBeat.

Explainable, verifiable and modifiable

At the heart of Diveplane's offerings is Reactor, a cloud-based machine learning (ML) technology that creates the AI...
