How cutting-edge data trains AI for accurate real-time response

Autonomous driving is seen as the future of mobility, thanks to companies like Tesla developing advanced AI-based driver-assistance systems (ADAS) that help users get from point to point under certain conditions.

The progress has been amazing to many, but the fact remains: we are still a long way from truly autonomous vehicles. To achieve true autonomy, self-driving vehicles must outperform human drivers in all conditions, be it a densely populated urban area, a village, or an unexpected scenario on the road.

“Most of the time, autonomous driving is pretty easy. Sometimes it’s as simple as driving down a deserted road or following a lead vehicle. The hard part is the wide variety of ‘edge cases’ that can occur,” Kai Wang, director of prediction at Amazon-owned mobility company Zoox, said at VentureBeat's Transform 2022 conference.

These edge cases create problems for the algorithms. Imagine a group of people entering the street from a blind spot, or a pile of rubble lying in the road.

Zoox's training effort

Humans are pretty good at recognizing and responding to almost any type of edge case, but machines find it difficult because there are so many possibilities of what can happen down the road. To solve this problem, Zoox, which develops fully autonomous driving software and a purpose-built autonomous robotaxi, took a multi-tiered approach.

“There really isn't a one-size-fits-all solution that will solve all of these cases, so we try to integrate different types of mitigations across our entire system, at each layer to give us the best chance to handle these things,” Wang said.

First, as the executive explained, Zoox handles perception of different conditions and objects by bringing in data from sensor pods located at all four corners of its vehicle.

Each pod includes several sensor modalities (RGB cameras, lidar, radar and thermal sensors) that complement each other. For example, RGB cameras capture fine image detail but cannot measure depth, which is handled by lidar.
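Wang did not go into implementation detail, but a minimal sketch of this kind of camera-lidar complementarity might look like the following, assuming NumPy and a hypothetical estimate_box_depth helper (an illustrative pattern, not Zoox's actual pipeline): a camera detector supplies a 2D bounding box, and lidar points projected into the image supply the depth the camera cannot measure.

```python
# Illustrative sketch only, not Zoox's pipeline: a camera detection gets its
# missing depth from lidar points projected into the image plane.
import numpy as np

def estimate_box_depth(bbox, lidar_points, K):
    """Estimate depth for a 2D camera detection using lidar.

    bbox:         (x_min, y_min, x_max, y_max) from a camera detector
    lidar_points: (N, 3) points already transformed into the camera frame
                  (x right, y down, z forward)
    K:            3x3 camera intrinsics matrix
    """
    # Keep only points in front of the camera.
    pts = lidar_points[lidar_points[:, 2] > 0.0]

    # Pinhole projection to pixels: u = fx*x/z + cx, v = fy*y/z + cy.
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Select the points that land inside the detection box.
    x_min, y_min, x_max, y_max = bbox
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    if not inside.any():
        return None  # no lidar support for this detection

    # Median range is robust to stray background points inside the box.
    return float(np.median(pts[inside, 2]))
```

Real stacks fuse far more than depth (velocity from radar, heat signatures from thermal cameras), but the pattern is the same: each modality fills in what another cannot see.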

“The job of our perception system is to use all of these sensors together and merge them to produce a single representation of all the objects that surround us,” Wang said.

Once surrounding agents are recognized, the system models where they will end up in the next few seconds. This is done with data-driven deep learning algorithms that produce a distribution of potential future trajectories. The system then considers all dynamic entities and their predicted trajectories and decides how to navigate safely through the current scenario to the target destination.
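To make that predict-then-plan flow concrete, here is a hedged toy sketch: a stand-in "model" that returns a distribution over future trajectories (Zoox uses learned deep networks for this, as Wang describes; the extrapolation below is only a placeholder), and a planner that scores candidate ego trajectories by probability-weighted collision risk. All function names are illustrative assumptions.

```python
# Toy sketch of predict-then-plan, not Zoox's system.
import numpy as np

def predict_trajectories(agent_history, num_modes=3, horizon=30):
    """Stand-in for a learned predictor: returns a distribution over futures
    as (trajectories, probabilities), trajectories shaped (num_modes, horizon, 2)."""
    pos = agent_history[-1]
    vel = agent_history[-1] - agent_history[-2]
    steps = np.arange(1, horizon + 1)[:, None]          # (horizon, 1)
    # Extrapolate the last velocity with small per-mode noise as a placeholder.
    modes = np.stack([pos + steps * (vel + 0.1 * np.random.randn(2))
                      for _ in range(num_modes)])       # (num_modes, horizon, 2)
    probs = np.full(num_modes, 1.0 / num_modes)
    return modes, probs

def expected_collision_risk(ego_plan, modes, probs, radius=2.0):
    """Probability-weighted fraction of timesteps where the ego plan comes
    within `radius` meters of a predicted agent trajectory."""
    risk = 0.0
    for traj, p in zip(modes, probs):
        dists = np.linalg.norm(ego_plan - traj, axis=1)  # (horizon,)
        risk += p * np.mean(dists < radius)
    return risk

# The planner evaluates candidate ego trajectories and keeps the safest one.
history = np.array([[0.0, 0.0], [0.5, 0.0]])
modes, probs = predict_trajectories(history)
candidates = [np.cumsum(np.tile(v, (30, 1)), axis=0)    # straight vs. veering path
              for v in ([1.0, 0.0], [1.0, 0.3])]
best = min(candidates, key=lambda plan: expected_collision_risk(plan, modes, probs))
```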

Remote guidance

While the system effectively models and handles edge cases, it may still run into never-before-seen situations on the road. In these cases, the vehicle stops and uses its remote-guidance capability to call on a human expert for assistance (while continuing to check for collisions and obstacles involving other agents).
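The flow Wang describes might be sketched roughly as below. The teleoperations interface here (remote_ops, request_guidance, poll) is entirely hypothetical, not Zoox's API; the point is the shape of the fallback: stop safely, ask a human for a route, and keep validating that route against live perception before following it.

```python
# Hedged sketch of the fallback behavior; every object here is a hypothetical
# duck-typed interface, used only to show the control flow.
import time

def handle_novel_scene(planner, remote_ops, perception):
    plan = planner.plan()
    if plan is not None:
        return plan                    # normal autonomous operation

    planner.stop_vehicle()             # come to a safe stop first
    request = remote_ops.request_guidance(perception.snapshot())

    while True:
        suggestion = request.poll()    # human operator proposes a route
        # The vehicle never follows a suggestion blindly: it is re-validated
        # against live perception for collisions and obstacles.
        if suggestion is not None and planner.is_collision_free(suggestion):
            return suggestion
        time.sleep(0.1)
```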

"We have a human operator connected to the situation to suggest a route through the blockage. So far, we have received remote guidance for less than 1% of our total mission time in complex environments. And at As our system gains momentum...
