Get ready to meet the ChatGPT clones

Edward Olive/Getty Images

ChatGPT may be the most famous and potentially valuable algorithm around, but the artificial intelligence techniques that OpenAI uses to deliver its intelligence are neither unique nor secret. Competing projects and open source clones may soon make ChatGPT-like bots available for anyone to copy and reuse.

Stability AI, a startup that has already developed advanced, open-source image generation technology, is working on an open competitor to ChatGPT. "We're months away from release," says Emad Mostaque, CEO of Stability. A number of competing startups, including Anthropic, Cohere, and AI21, are working on proprietary chatbots similar to OpenAI's bot.

The imminent flood of sophisticated chatbots will make the technology more abundant and visible to consumers, as well as more accessible to AI companies, developers, and researchers. It could accelerate the rush to make money from AI tools that generate images, code, and text.

Established companies such as Microsoft and Slack are integrating ChatGPT into their products, and many startups are building on OpenAI's new ChatGPT API for developers. But wider availability of the technology can also complicate efforts to predict and mitigate the accompanying risks.

ChatGPT's seductive ability to provide compelling answers to a wide range of queries also leads it to invent facts or adopt problematic personas. And it can assist with malicious tasks such as producing malware or running spam and disinformation campaigns.

As a result, some researchers have called for slowing the deployment of ChatGPT-like systems while their risks are assessed. "There's no need to stop research, but we could certainly regulate large-scale deployment," says Gary Marcus, an AI expert who has sought to draw attention to risks such as AI-generated misinformation. "We could, for example, require studies on 100,000 people before releasing these technologies to 100 million people."

The wider availability of ChatGPT-like systems, and the release of open-source versions, would make it harder to limit either research or broader deployment. And the race among companies large and small to adopt or match ChatGPT suggests little appetite for slowing down; if anything, it is encouraging the technology's proliferation.

Last week, LLaMA, an AI model developed by Meta and similar to the one at the heart of ChatGPT, leaked online after being shared with university researchers. The system could be used as a building block for a chatbot, and its release has sparked concern among those who fear that AI systems known as large language models, and chatbots built on them such as ChatGPT, will be used to generate misinformation or automate cybersecurity breaches. Some experts say these risks may be exaggerated, while others suggest that making the technology more transparent will actually help guard against abuse.

Meta declined to answer questions about the leak, but company spokeswoman Ashley Gabriel provided a statement saying, "While the model is not available to everyone and some have attempted to circumvent the approval process, we believe the current release strategy allows us to balance accountability and openness."

ChatGPT is built...
