Google DeepMind proposes 'self-discover' framework for LLMs, improves GPT-4 performance

In a bid to improve the reasoning capabilities of large language models (LLMs), researchers from Google DeepMind and the University of Southern California have proposed a new 'self-discover' prompting framework.

Published on arXiv and Hugging Face this morning, the approach goes beyond existing prompting techniques used by LLMs and has been found capable of improving the performance of well-known models, including OpenAI's GPT-4 and Google's PaLM 2.

"Self-Discover substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning and MATH by as much as 32% compared to Chain of Thought (CoT)," the researchers write in the paper.

The framework revolves around LLMs self-discovering task-intrinsic reasoning structures to solve a problem. The models look at several atomic reasoning modules, such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for the LLM to follow during decoding.
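The composition step can be pictured as a prompt that lists candidate modules and asks the model to assemble them into an explicit plan. Below is a minimal, hypothetical sketch; the module wordings and the `build_structure_prompt` helper are illustrative, not taken from the paper.

```python
# Hypothetical sketch: compose named atomic reasoning modules into an
# explicit structure prompt that a model is asked to follow.
# Module descriptions are illustrative, not quoted from the paper.

REASONING_MODULES = [
    "Critical thinking: question assumptions and evaluate evidence.",
    "Step-by-step thinking: work through the problem one step at a time.",
    "Decomposition: break the problem into smaller sub-problems.",
]

def build_structure_prompt(task_description: str) -> str:
    """Ask the model to pick and combine modules into a reasoning plan."""
    modules = "\n".join(f"- {m}" for m in REASONING_MODULES)
    return (
        f"Task: {task_description}\n"
        f"Available reasoning modules:\n{modules}\n"
        "Compose an explicit step-by-step reasoning structure, using the "
        "modules that best fit this task, for solving instances of it."
    )

print(build_structure_prompt("Solve multi-step word problems."))
```

The resulting prompt would be sent to the model once per task; the returned structure is then reused across all instances of that task.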

More interestingly, the approach works with 10 to 40 times less inference compute, something that can be great for enterprises.

Self-discovering unique structures

LLMs have evolved to handle numerous tasks, thanks to their ability to follow instructions, reason and generate coherent responses. To make this happen, the models, powered by the transformer architecture, use various prompting techniques inspired by cognitive theories of how humans reason and solve problems. These include few-shot and zero-shot chain-of-thought prompting, inspired by how we solve a problem step by step; decomposition prompting, drawing on how we break a problem into several sub-problems; and step-back prompting, modeled on how we reflect on the nature of a task to establish general principles.

While all these methods, most notably chain-of-thought, do the job, they all make an implicit prior assumption about how to tackle a given task. This approach, the researchers argue, may not be the best, since every task has a unique intrinsic structure and one particular technique may be better suited to solving it than the others.

With the latest research, the DeepMind and USC researchers have proposed a general prompting framework that self-discovers this unique underlying structure, picking the right reasoning technique for the task while also being efficient.

"Self-Discover is inspired by how humans internally devise a reasoning program for problem-solving. Given a set of atomic reasoning modules described in natural language, such as 'break down into subtasks' and 'critical thinking', an LLM and task examples without labels, it composes a coherent reasoning structure intrinsic to the task (Stage 1) and then solves instances of the task using the discovered structure (Stage 2). Stage 1 operates at the task level and uses three actions to guide the LLM to generate a reasoning structure for the task. In Stage 2, during the final decoding, the LLM simply follows the self-discovered structure to arrive at the final answer," the researchers explain.
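The two stages described in the quote can be sketched as a simple pipeline. In the paper, Stage 1's three guiding actions are named SELECT, ADAPT and IMPLEMENT; the `llm` stub below is a placeholder for any real model call, so the control flow, not the outputs, is the point here.

```python
# Sketch of the two-stage flow described in the quote. llm() is a
# stand-in for a real model call; it echoes its input so the pipeline
# is runnable for demonstration purposes only.

def llm(prompt: str) -> str:
    """Stub for a real chat-completion call; echoes for demonstration."""
    return f"[model output for: {prompt[:40]}...]"

def stage1_discover_structure(task_examples: list[str],
                              modules: list[str]) -> str:
    # SELECT: pick the modules relevant to this task.
    selected = llm("Select the reasoning modules relevant to these examples:\n"
                   + "\n".join(task_examples)
                   + "\nModules:\n" + "\n".join(modules))
    # ADAPT: rephrase the selected modules to be task-specific.
    adapted = llm(f"Adapt these modules to the task:\n{selected}")
    # IMPLEMENT: turn them into an explicit, structured reasoning plan.
    return llm(f"Implement an explicit step-by-step reasoning structure:\n"
               f"{adapted}")

def stage2_solve(structure: str, instance: str) -> str:
    # Final decoding simply follows the discovered structure.
    return llm(f"Follow this reasoning structure:\n{structure}\n"
               f"to solve:\n{instance}")
```

Note that Stage 1 runs once per task, while Stage 2 runs once per instance, which is consistent with the compute savings the article mentions.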
