Experts call for legal 'safe harbor' for researchers, journalists and artists to evaluate AI tools


According to a new paper published by 23 AI researchers, academics and creatives, 'safe harbor' legal and technical protections are essential to allow researchers, journalists and artists to perform "good faith" evaluations of AI products and services.

Despite the need for independent evaluation, the paper said, conducting research related to these vulnerabilities is often legally prohibited by the terms of service of popular AI models, including those of OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney. The paper's authors called on tech companies to indemnify public interest AI research and protect it from account suspensions or legal reprisal.

"While these terms are destined as A deterrent against malicious actors, they Also inadvertently restrict AI security And reliability research; companies to forbid THE research And can impose their Strategies with account suspensions, » said A Blog job accompanying THE paper.

Two of the paper's co-authors, Shayne Longpre of the MIT Media Lab and Sayash Kapoor of Princeton University, explained to VentureBeat that this is particularly important when, for example, in a recent effort to dismiss parts of the New York Times' lawsuit, OpenAI characterized the Times' evaluation of ChatGPT as "hacking." The Times' lead counsel responded by saying, "What OpenAI bizarrely mischaracterizes as 'hacking' is simply using OpenAI's products to look for evidence that they stole and reproduced the Times's copyrighted works."


Longpre said that the idea of a 'safe harbor' was first proposed by the Knight First Amendment Institute for social media platform research in 2022. "They asked social media platforms not to ban journalists from trying to investigate the harms of social media, and then similarly for researcher protections as well," he said, pointing out that there had been a history of academics and journalists being sued, or even spending time in prison, as they fought to expose weaknesses in platforms.

"We try has learn as a lot as We could Since This pass effort has to propose A on port For AI research," he said. "With AI, We basically to have No information about how people are using these systems, What kinds of night are event, And A of THE only tools We to have East research to access has these platforms. »

Independent evaluation and red teaming are 'critical'

The paper, A Safe Harbor for AI Evaluation and Red Teaming, said that to the authors' knowledge, "account suspensions in the course of public interest research" have taken place at companies including OpenAI, Anthropic, Inflection, and Midjourney, with "Midjourney being the most prolific." They cited artist Reid Southen, who is listed as one of the paper's co-authors a...
