Instead of AI sentience, focus on the current risks of large language models


Recently, a Google engineer made international headlines when he claimed that LaMDA, the company's system for building chatbots, was sentient. Since his initial post, public debate has raged over whether artificial intelligence (AI) is conscious and experiences feelings as acutely as humans.

While the topic is undoubtedly fascinating, it also overshadows other, more pressing risks posed by large language models (LLMs), such as unfairness and loss of privacy, especially for companies striving to incorporate these models into their products and services. These risks are further amplified by the fact that companies deploying these models often lack knowledge of the specific data and methods used to create them, which can lead to problems of bias, hate speech and stereotyping.

What are LLMs?

LLMs are massive neural networks that learn from huge corpora of free text (think books, Wikipedia, Reddit and so on). Although they are designed for generating text, such as summarizing long documents or answering questions, they have proven to excel at a variety of other tasks, from building websites to prescribing drugs to basic arithmetic.
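To make that off-the-shelf usage concrete, here is a minimal sketch of summarizing a passage with a publicly available pretrained model. The library (Hugging Face transformers) and the specific model name are assumptions chosen for illustration, not details from the article.

```python
# Minimal sketch: using a pretrained model off the shelf to summarize text.
# The model name is an assumption (a commonly used public summarization model).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Large language models are neural networks trained on huge corpora of free "
    "text such as books, Wikipedia and web forums. Although built for generating "
    "text, they turn out to generalize to many tasks they were never designed for."
)

# The pipeline returns a list of dicts with a "summary_text" field.
print(summarizer(document, max_length=40, min_length=10)[0]["summary_text"])
```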

It is this ability to generalize to tasks they were not originally designed for that has propelled LLMs into a major area of research. Commercialization is happening across industries by adapting base models created and trained by others (e.g., OpenAI, Google, Microsoft and other technology companies) to specific tasks.
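As a hedged illustration of what "adapting a base model to a specific task" can look like in practice, the sketch below fine-tunes a small, publicly available pretrained model on a toy sentiment-classification task. The library, the model name and the two-example dataset are assumptions made for this example; a real adaptation would use a task-specific corpus and proper evaluation.

```python
# Sketch: adapting a pretrained base model to a downstream task
# (binary sentiment classification). Model name and toy data are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # assumed publicly available base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny illustrative dataset; labels: 1 = positive, 0 = negative.
texts = ["The product works great", "Support never answered my ticket"]
labels = torch.tensor([1, 0])
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):                 # a few passes over the toy data
    optimizer.zero_grad()
    out = model(**enc, labels=labels)  # forward pass returns the loss
    out.loss.backward()                # backpropagate
    optimizer.step()

print("final loss:", out.loss.item())
```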


Stanford researchers coined the term "foundation models" to capture the fact that these pretrained models underpin countless other applications. Unfortunately, these massive models also come with substantial risks.

The downside of LLMs

Chief among these risks is the environmental cost, which can be enormous. A much-cited 2019 paper found that training a single large model can emit as much carbon as five cars over their lifetimes, and models have only grown larger since then. This environmental footprint has direct implications for a company's ability to meet its sustainability commitments and, more broadly, its ESG goals. Even when companies rely on models trained by others, the carbon footprint of training those models cannot be ignored, according to...
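As a rough sanity check on the "five cars" comparison, the figures commonly cited from that 2019 paper can be divided directly. Both numbers below are assumptions drawn from public summaries of the paper, not from this article, and should be treated as order-of-magnitude estimates.

```python
# Back-of-envelope check of the "five cars" comparison. Figures are assumptions
# commonly cited from the 2019 study: ~626,000 lbs CO2e for one large training
# run (with neural architecture search) and ~126,000 lbs CO2e for an average
# car's lifetime, fuel included.
TRAINING_RUN_LBS_CO2E = 626_000
CAR_LIFETIME_LBS_CO2E = 126_000

cars_equivalent = TRAINING_RUN_LBS_CO2E / CAR_LIFETIME_LBS_CO2E
print(f"One training run is roughly {cars_equivalent:.1f} car lifetimes of CO2e")  # ~5.0
```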
