An A.I. Pioneer on What We Should Really Fear

Artificial intelligence fuels our highest ambitions and deepest fears like few other technologies. It's as if every gleaming, Promethean promise of machines capable of performing tasks at speeds, and with skills, of which we can only dream comes with a countervailing nightmare of human displacement and obsolescence. But despite recent A.I. breakthroughs in the previously human-dominated fields of language and visual art - the prose compositions of the GPT-3 language model and the visual creations of the DALL-E 2 system have drawn intense interest - our most serious concerns should probably be tempered. At least that's the opinion of the computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur "genius" fellowship, whose groundbreaking research focuses on developing common sense and ethical reasoning in A.I. "There's a bit of hype around A.I. potential, as well as A.I. fear," says Choi, who is 45. Which isn't to say the story of humans and A.I. will be without surprises. "There's a sense of adventure," Choi says of her work. "You explore this uncharted territory. You see something unexpected, and then you feel like, I want to find out what's there!"

What are the biggest misconceptions people still have about A.I.? They make hasty generalizations. "Oh, GPT-3 can write this wonderful blog post. Maybe GPT-4 will be an editor for The New York Times Magazine." [Laughs.] I don't think it could replace anybody there, because it doesn't have a real understanding of political context and therefore can't write anything truly relevant for readers. Then there are concerns about A.I. sentience. There are always people who believe in something that doesn't make sense. People believe in tarot cards. People believe in conspiracy theories. So of course there will be people who believe A.I. is sentient.

I know this might be the most clichéd question to ask you, but I'm going to ask it anyway: Will humans ever create sentient artificial intelligence? I could change my mind, but currently I'm skeptical. I can see how some people might have that impression, but when you work so closely with A.I., you see a lot of limitations. That's the thing. From a distance, it looks like, oh, my God! Up close, I see all the flaws. Whenever there are a lot of patterns, a lot of data, A.I. is very good at processing that - certain things like playing Go or chess. But humans have this tendency to believe that if A.I. can do something smart like translation or chess, then it must be really good at all the easy stuff too. The truth is, what's easy for machines can be hard for humans, and vice versa. You'd be surprised how A.I. struggles with basic common sense. It's crazy.

Can you explain what "common sense" means in the context of teaching it to A.I.? One way to describe it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We long thought that's all there is in the physical world - and that's it. It turns out that's only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, which is invisible and not directly measurable. We know it exists, because without it, normal matter doesn't behave the way it does. So we know it's there, and we know there's a lot of it. We're coming to that realization with common sense. It's the tacit, implicit knowledge that you and I share. It's so obvious that we rarely talk about it. For example, how many eyes does a horse have? Two. We don't talk about it, but everyone knows it. We don't know the exact fraction of the knowledge that you and I have that we never talk about - but still know - but my guess is, a lot. Let me give you another example: you and I know birds can fly, and we know penguins generally cannot. So A.I. researchers thought, we can code this up: birds usually fly, except for penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn baby birds cannot fly, birds covered in oil cannot fly, injured birds cannot fly, caged birds cannot fly. The point is, exceptions are not exceptional, and you and I can think of them even though nobody told us. It's a fascinating capability, and it's not so easy for A.I.

You referred to GPT-3 somewhat skeptically earlier. Do you find it unimpressive? I'm a big fan of GPT-3, but at the same time I feel like some people are b...
