How Google is accelerating ML development


Accelerating the development of machine learning (ML) and artificial intelligence (AI) with optimized performance and cost is a key goal for Google.

Google kicked off its Next 2022 conference this week with a series of announcements about new AI capabilities on its platform, including computer vision as a service with Vertex AI Vision and the new open-source ML compiler ecosystem OpenXLA. During a session at the event, Mikhail Chrestkha, outbound product manager at Google Cloud, discussed additional incremental AI improvements, including support for the Nvidia Merlin recommender-system framework, AlphaFold batch inference, and TabNet.

[Follow VentureBeat's ongoing coverage of Google Cloud Next 2022]

Users of the new technology detailed their use cases and experiences during the session.


"Having access to a solid AI infrastructure becomes a competitive advantage for getting the most out of AI," Chrestkha said.

Uber uses TabNet to improve food delivery

TabNet is a deep learning approach for tabular data that uses transformer-style attention to select which features to reason over at each decision step, improving both predictive performance and interpretability.

Chrestkha explained that TabNet is now available on the Google Vertex AI platform, making it easier for users to create large-scale explainable models. He noted that Google's implementation of TabNet will automatically select appropriate feature transformations based on input data, data size, and prediction type to achieve the best results.
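The session did not go into TabNet's internals, but the core idea behind its interpretable feature selection is a learned sparse mask over the input columns, typically produced by the sparsemax projection, which, unlike softmax, drives low-scoring features to exactly zero. A minimal, self-contained numpy sketch of that selection step (the feature logits here are invented purely for illustration):

```python
import numpy as np

def sparsemax(z):
    # Sparsemax projection (Martins & Astudillo, 2016): like softmax,
    # the output sums to 1, but low-scoring entries become exactly 0.
    z_sorted = np.sort(z)[::-1]                 # descending
    k = np.arange(1, len(z) + 1)
    cum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cum            # entries kept in the support
    k_max = k[support][-1]
    tau = (cum[support][-1] - 1.0) / k_max      # threshold to subtract
    return np.maximum(z - tau, 0.0)

# Toy attention logits for 5 tabular columns (invented numbers).
logits = np.array([1.2, 1.0, 0.1, -1.0, -2.0])
mask = sparsemax(logits)
# The mask sums to 1 and zeroes out the three weakest columns, which is
# what makes TabNet's per-step feature selection easy to inspect.
```

The hard zeros are the point: reading the mask tells you exactly which columns a decision step used, rather than a softmax's diffuse weighting over all of them.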

TabNet is not merely a theoretical approach to improving AI predictions; it is already yielding positive results in real-world use cases. Among its first users was Uber.

Kai Wang, senior product manager at Uber, explained that a platform created by his company, called Michelangelo, now handles 100% of Uber's ML use cases. These include estimated time of arrival (ETA) for rides, estimated time of delivery (ETD) for Uber Eats orders, and matching riders with drivers.

The basic idea behind Michelangelo is to provide Uber's ML developers with an infrastructure on which models can be deployed. Wang said Uber is constantly evaluating and integrating third-party components, while selectively investing in key platform areas to build in-house. One of the fundamental third-party tools Uber relies on is Vertex AI, which helps support ML training.

Wang noted that Uber evaluated TabNet on actual Uber use cases. One example is Uber Eats' prep-time model, which estimates how long it takes a restaurant to prepare food after an order is received. Wang pointed out that the prep-time model is one of the most critical models used at Uber Eats today.

"We compared the results of TabNet with the reference model, and the TabNet model demonstrated considerable improvement in model performance," Wang said.
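Wang did not share the evaluation details, so the following is only a hypothetical sketch of how a candidate model might be scored against a reference model on a prep-time regression task. The data is synthetic and every feature name and number is invented; it simply illustrates the kind of head-to-head comparison described:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for order features: order size, hour of day, backlog.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([8.0, 3.0, 12.0])               # minutes per unit feature (invented)
y = X @ true_w + 5.0 + rng.normal(0.0, 1.0, 200)  # 5-minute base time plus noise

def mae(pred, actual):
    # Mean absolute error in minutes, a common regression metric.
    return float(np.mean(np.abs(pred - actual)))

# Reference model: always predict the historical average prep time.
ref_mae = mae(np.full_like(y, y.mean()), y)

# Candidate model: ordinary least squares on the same features.
A = np.hstack([X, np.ones((len(y), 1))])          # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
cand_mae = mae(A @ w, y)

improvement = (ref_mae - cand_mae) / ref_mae      # fractional MAE reduction
```

In practice the reference would be the incumbent production model rather than a naive mean predictor, but the scoring logic, comparing the same error metric on the same held-out orders, is the same.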

Just the FAX for Cohere

