Is ChatGPT a "virus that has been released in the wild"?

More than three years ago, this editor sat down with Sam Altman for a small event in San Francisco, shortly after he left his post as president of Y Combinator to become CEO of OpenAI, the AI company he co-founded in 2015 with Elon Musk and others.

At the time, Altman described OpenAI's potential in language that struck some as strange. He said, for example, that the opportunity of artificial general intelligence - AI that can solve problems as well as a human - is so great that if OpenAI managed to crack it, the outfit could "maybe capture the light cone of all future value in the universe." He said the company "was going to have to not release research" because it was so powerful. When asked whether OpenAI was guilty of fear-mongering - Musk has repeatedly called for all organizations developing AI to be regulated - Altman spoke of the dangers of not thinking through the "societal consequences" when "you're building something on an exponential curve."


The audience laughed at various points in the conversation, unsure how seriously to take Altman. No one is laughing now, though. While machines are not yet as intelligent as people, the technology OpenAI has since released has taken many by surprise (including Musk), with some critics fearing it could be our undoing, especially with more sophisticated versions expected to arrive soon.

Indeed, although power users insist it's not that smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to grapple with the implications. Educators, for example, wonder how they will distinguish original writing from the algorithmically generated essays they are bound to receive - essays that can also evade anti-plagiarism software.

Paul Kedrosky is not an educator per se. He's an economist, venture capitalist and MIT fellow who calls himself a "frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems." But he is among those suddenly worried about our collective future, tweeting yesterday: "Shame on OpenAI for dropping this pocket nuke without restrictions into an unprepared society." Kedrosky wrote, "Obviously I think ChatGPT (and its ilk) should be withdrawn immediately. And, if ever reintroduced, only with strict restrictions."

We spoke with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he considers "the most disruptive change the U.S. economy has seen in 100 years" - and not in a good way.

Our chat has been edited for length and clarity.

TC: ChatGPT was released last Wednesday. What triggered your reaction on Twitter?

PK: I've played around with these conversational UIs and AI services in the past, and this is obviously a huge leap forward. And what particularly disturbed me here is the flippant casualness of it, with massive consequences for a host of different activities. It's not just the obvious ones, like high school essay writing, but across pretty much any domain where there's a grammar - [meaning] an organized way of expressing yourself. That could be software engineering, high school term papers, legal documents. All of them are easily eaten by this voracious beast and spat out again, without compensation for whatever was used to train it.

I heard from a colleague at UCLA who told me he has no idea what to do with essays at the end of the current term - where they receive hundreds per course and thousands per department - because they no longer have any way of knowing what's genuine and what's not. So to do this so casually - as someone said to me earlier today - is reminiscent of the proverbial [ethical] hacker who finds a bug in a widely used product, then informs the developer before the broader public finds out...

