Google's Project Astra hands-on: full of potential, but it will take time

At I/O 2024, Google's teaser for Project Astra gave us a preview of where AI assistants are going in the future. It's a multimodal feature that combines the intelligence of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there's a long way to go before something like Astra lands on your phone. So here are three takeaways from our first experience with Google's next-gen AI.

Sam's take:

Currently, most people interact with digital assistants using their voice, so right off the bat, Astra's multimodality (i.e. using sight and sound in addition to text/speech) to communicate with an AI is relatively novel. In theory, it allows computerized entities to work and behave more like a real assistant or agent – which was one of Google's big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.

Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its wand trying to escape a sinister red light. It was amusing and the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly omniscient assistant we saw in Google's teaser. And aside from perhaps entertaining a child with an original bedtime story, it didn't feel like Astra was doing as much with the information as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one's location. It was even clever enough to recognize my clothing and where I had hidden my sunglasses, even though those objects weren't originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

An AI-generated story about a dinosaur and a wand created by Google's Project Astra

Photo by Sam Rutherford/Engadget

Unlike a lot of Google's recent AI features, Astra (which is described by Google as a "research preview") still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those "memories" only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are factors like storage and latency to consider, because for every object Astra remembers, you risk slowing the AI down, resulting in a more stilted experience. So while it's clear Astra has a lot of potential, my excitement was weighed down by the knowledge that it will be some time before we get more fully featured functionality.

Karissa's take:

Of all the generative AI advances, multimodal AI is the one I've been most intrigued by. As powerful as the latest models are, I have a hard time getting excited about iterative updates to text-based chatbots. But the idea of an AI that can recognize and respond to queries about your surroundings in real time feels like something out of a science fiction movie. It also gives a much clearer...
