ML Conference discusses using ChatGPT in articles (and why it matters)


A machine learning conference debating the use of machine learning? However meta that might sound, in its call for paper submissions on Monday, the International Conference on Machine Learning (ICML) did indeed state that "papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis."

It didn't take long for a heated social media debate to brew, in what could be a perfect example of what businesses, organizations and institutions of all shapes and sizes, in all verticals, will face in the future: How will humans cope with the rise of large language models that can help communicate (or borrow, expand or plagiarize, depending on your perspective) ideas?

Arguments for and against using ChatGPT

As a debate on Twitter has intensified over the past two days, a variety of arguments for and against the use of LLMs in ML article submissions have emerged.

"So small and medium scale language models are fine, right?" tweeted Yann LeCun, Chief AI Scientist at Meta, adding "I'm just asking the question because, you know... spellcheckers and predictive keyboards are language models."


And Sébastien Bubeck, who leads the Machine Learning Foundations team at Microsoft Research, called the rule "myopic," tweeting that "ChatGPT and its variants are part of the future. Banning is definitely not the solution."

And Ethan Perez, a researcher at Anthropic, tweeted that "this rule has a disproportionate impact on my collaborators who are not native English speakers."

Silvia Sellan, a PhD candidate in computer graphics and geometry processing at the University of Toronto, agreed, tweeting: "I'm trying to give the conference chairs the benefit of the doubt, but I really don't understand this blanket prohibition. As I understand it, LLMs, like Photoshop or GitHub Copilot, are a tool that can have both legitimate (e.g. I use it as a non-native English speaker) and nefarious uses…"

ICML clarifies its LLM ethics policy

Finally, yesterday the ICML clarified its LLM ethics policy:

"We (the program chairs) have included the following statement in the call for papers for ICML 2023:

Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis.

This statement has raised a number of questions from potential authors and led some to proactively reach out to us. We welcome this feedback and would like to clarify further the intent behind the statement and how we plan to implement this policy for ICML 2023.

TL;DR:

● The Large Language Model (LLM) policy for ICML 2023 prohibits text produced entirely by LLMs (i.e., "generated"). It does not prohibit authors from using LLMs to edit or polish text they have written themselves.

● The LLM policy is largely predicated on the principle of being conservative with respect to guarding against potential issues of using LLMs, including plagiarism.

● The LLM policy applies to ICML 2023. We are...
