SAN FRANCISCO, United States—The company behind the ChatGPT app that churns out essays, poems or computer code on command on Tuesday released a long-awaited update of its artificial intelligence (AI) technology, which it said would be safer and more accurate than its predecessor.
GPT-4 has been widely anticipated ever since ChatGPT burst onto the scene in late November, wowing users with capabilities based on an older version of OpenAI’s technology, known as a large language model.
“We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning,” a company blog said, adding that the AI technology “exhibits human-level performance” on some professional and academic tasks.
The company said the model is “more creative and collaborative than ever before” and would “solve difficult problems with greater accuracy” than its earlier versions.
With the update, GPT-4’s text responses will be more accurate and, in the future, will be generated from both image and text inputs, a major leap forward for the technology, though the image capability has not yet been released.
For example, if a user sends a picture of the inside of a refrigerator, GPT-4 will not only correctly identify the ingredients but also suggest dishes that could be prepared with them.
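For illustration only, the sketch below shows how such an image-plus-text request might look through OpenAI’s chat completions interface; the model name, prompt and image URL are assumptions, since the image-input feature had not been released at the time of writing.

```python
# Illustrative sketch only: image input was not publicly available when this
# article was published. The model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed/illustrative model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What could I cook with the ingredients in this fridge?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)

# The model's text reply describing possible dishes
print(response.choices[0].message.content)
```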
OpenAI said it was working with a partner company, Be My Eyes, to prepare the next advance.
Much of the new model’s firepower is already available to the general public via ChatGPT Plus, OpenAI’s paid subscription plan, and through an AI-powered version of Microsoft’s Bing search engine that is currently being tested.
OpenAI is backed by Microsoft, which earlier this year said it would finance the research company to the tune of billions of dollars.
The Windows maker then swiftly integrated the technology into its Bing search engine, Edge browser and other products.
Microsoft’s aggressive adoption of ChatGPT has sparked a race with Google, which has announced its own versions of the AI technology, with Amazon, Baidu and Meta also wading in, eager to avoid being left behind.
Fewer ‘hallucinations’
OpenAI said the new version was far less likely to go off the rails than its earlier chatbot, following widely reported interactions with ChatGPT and Bing’s chatbot in which users were presented with lies, insults or other so-called “hallucinations.”
“We spent six months making GPT-4 safer and more aligned. GPT-4 is 82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses,” OpenAI said.
OpenAI co-founder and chief executive Sam Altman acknowledged that despite the anticipation, GPT-4 “is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”
“The power of the algorithm will increase, but it’s not a second revolution,” said Robert Vesoul, CEO of Illuin Technology, a French AI start-up.
Vesoul questioned the safety measures taken by OpenAI, which has already drawn criticism from billionaire Elon Musk, who has accused the company of overly restricting what its AI can say in order to avoid embarrassing responses.
“I am not sure if I want an AI to block responses on unknown topics… Should an AI decide if I smoke or not?” Vesoul told AFP.
Other companies partnering in the rollout of GPT-4 include Morgan Stanley, which will use the AI to help guide its bankers and their clients.
“You essentially have the knowledge of the most knowledgeable person in Wealth Management—instantly,” said Morgan Stanley’s Jeff McMillan in a statement.
Other partners include Khan Academy, the online tutoring giant, and Stripe, a payments company that will use GPT-4 to fight fraud, among other uses.