TruEra joins the Intel Disruptor program to improve the quality of AI models

Organizations building artificial intelligence (AI) models have no shortage of quality challenges, including the need for explainable AI that minimizes the risk of bias.

For Redwood City, Calif.-based startup TruEra, the path to explainable AI is paved with technology that ensures AI model quality. Founded in 2019, TruEra has raised over $45 million in funding, including a recent round with participation from Hewlett Packard Enterprise (HPE).

This week, TruEra announced the latest stage of its growth, revealing that it has been selected to join the Intel Disruptor Initiative, which brings technical partnership and go-to-market support to participants.

"The big picture here is that as machine learning becomes more widely adopted in the enterprise, there is a growing need to explain, test, and monitor these models, because they're used in higher-stakes use cases," Will Uppington, co-founder. and CEO of TruEra, told VentureBeat.

TruEra addresses the challenges of explainable AI

As the use of AI matures, regulations are emerging around the world for its responsible use.

Responsible use of AI has many facets, including prioritizing data privacy and providing mechanisms to explain the methods used in models, in order to encourage fairness and avoid bias.

Uppington noted that, regulations aside, the performance of AI systems, which require both speed and accuracy, must be monitored and measured. According to Uppington, each time software undergoes a paradigm shift, new monitoring infrastructure is required. He argued that the monitoring infrastructure machine learning needs is different from what already exists for other types of software systems.
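
One concrete example of what ML-specific monitoring can look like is checking for data drift. The sketch below is purely illustrative and is not TruEra's software: it compares a feature's production distribution against its training distribution using the population stability index (PSI), a common drift metric. The bucketing and the 0.2 rule-of-thumb threshold mentioned in the comment are generic conventions, not anything attributed to TruEra or Intel.

```python
import numpy as np

def population_stability_index(expected, actual, buckets=10):
    """PSI between a training-time sample (expected) and a production sample (actual)."""
    # Bucket edges come from the training distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    # Widen the outer edges so production values outside the training range
    # still land in the first/last bucket.
    edges[0] = min(edges[0], np.min(actual)) - 1e-9
    edges[-1] = max(edges[-1], np.max(actual)) + 1e-9
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) and division by zero.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(42)
training_feature = rng.normal(0.0, 1.0, size=10_000)    # what the model was trained on
production_feature = rng.normal(0.4, 1.0, size=2_000)   # what the model now sees (drifted)

psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.3f}")  # values above ~0.2 are a common rule of thumb for meaningful drift
```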

Machine learning systems are fundamentally data-driven analytical entities, where models are iterated at a much faster rate than other types of software, he explained.

"The data you see in production becomes the training data for your next iteration," he said. "So today's operational data is tomorrow's training data that is used to directly improve your product."

As such, Uppington argues that to deliver explainable AI, organizations must first put the right AI model oversight in place. The things a data scientist does to explain and analyze a model during development should continue to be monitored throughout the model's lifecycle. With this approach, Uppington said, an organization can learn from its operational data and feed it back into the next iteration of the model.
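
To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch (not TruEra's product or API, and not the cnvrg.io integration): a deployed model's predictions on production data are monitored for accuracy, and that operational data is then folded back into the training set for the next model iteration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Initial training data and model (synthetic, for illustration only).
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# --- Production phase: monitor the deployed model and log what it sees ---
X_prod = rng.normal(loc=0.3, size=(200, 4))             # mildly drifted inputs
y_prod = (X_prod[:, 0] + X_prod[:, 1] > 0).astype(int)  # ground truth arriving later
prod_accuracy = accuracy_score(y_prod, model.predict(X_prod))
print(f"production accuracy: {prod_accuracy:.3f}")

# --- Feedback phase: today's operational data becomes tomorrow's training data ---
X_next = np.vstack([X_train, X_prod])
y_next = np.concatenate([y_train, y_prod])
model_v2 = LogisticRegression().fit(X_next, y_next)
```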

Disrupting the AI market with Intel

AI quality, or the lack thereof, is often seen as a barrier to adoption.

“The quality and explainability of AI have become huge hurdles for businesses, often preventing them from getting a return on their AI investments,” said Arijit Bandyopadhyay, CTO of business analytics and AI at Intel Corporation, in a media advisory. "By partnering with TruEra, Intel is helping to remove these barriers by providing enterprises with access to AI assessment, testing, and monitoring capabilities that can help them leverage AI for measurable business impact."

Uppington noted that as part of his company's engagement with Intel, TruEra is integrating with cnvrg.io, an Intel company that develops machine learning training services and software. The goal of the integration is to make it easier for organizations using the cnvrg.io platform to build, deploy, and monitor the quality of AI.

Intel is not the first, nor the only, silicon vendor that TruEra has...

