OpenAI’s latest AI language model, GPT-4, has been officially unveiled after months of discussion and speculation. It already powers applications such as ChatGPT and Microsoft’s new Bing.

The company says the model is “more creative and collaborative than ever before” and “can solve difficult problems with greater accuracy.” GPT-4 can parse both text and image input, but it responds only in text. OpenAI also cautions that the system still has many of the same problems as earlier language models, including a tendency to make up information (to “hallucinate”) and the capacity to generate harmful or offensive text.

OpenAI has partnered with a number of companies to integrate GPT-4 into their products, including Duolingo, Stripe, and Khan Academy. The public can access the model through ChatGPT Plus, OpenAI’s $20 monthly ChatGPT subscription. GPT-4 also powers Microsoft’s Bing chatbot, as confirmed in a post on the Bing blog. The model will also be available through an API for developers to build on; OpenAI has opened a waitlist for API access and says it will begin admitting users today.
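For developers who are admitted from the waitlist, GPT-4 is exposed through the same Chat Completions interface already used for GPT-3.5. Below is a minimal sketch, assuming the `openai` Python package (as it existed around launch) is installed and an API key is available in the `OPENAI_API_KEY` environment variable; the prompt itself is purely illustrative.

```python
# Minimal sketch of a GPT-4 call via OpenAI's Chat Completions API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # model name as listed in OpenAI's launch materials
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key changes in GPT-4."},
    ],
    temperature=0.7,
)

# The reply text lives in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```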

In a research blog post, OpenAI noted that the difference between GPT-4 and its predecessor, GPT-3.5 (the model that powers ChatGPT), is subtle in casual conversation. OpenAI CEO Sam Altman tweeted that GPT-4 is “still flawed” and “still limited,” and that it “seems more impressive on first use than it does after you spend more time with it.”

OpenAI reports that GPT-4 achieved strong results on a range of tests and benchmarks, including the Uniform Bar Exam, LSAT, SAT Math, and SAT Evidence-Based Reading & Writing, scoring in the 88th percentile and above on these exams. The company has published a full list of exams and the corresponding scores.

Over the past year, there has been widespread speculation about GPT-4 and how big a leap it would represent over earlier systems. OpenAI’s announcement, however, suggests the improvement is more iterative, as the company had previously cautioned.

In an interview in January, Altman said that “people are begging to be disappointed and they will be,” adding: “The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.”

The hype intensified last week, when a Microsoft executive let slip in an interview with the German press that the system would launch this week and would be multimodal, meaning it could handle not just text but other types of media as well. AI researchers believe multimodal systems that integrate text, audio, and video will pave the way for more capable AI systems.

OpenAI confirms that GPT-4 is indeed multimodal, though in fewer modalities than some predicted: it accepts both text and images as input but produces only text as output. The company says the model’s ability to parse text and images together allows it to interpret more complex inputs.
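Image input was not broadly available to the public at launch, so any code is necessarily speculative. The sketch below follows the message format OpenAI later documented for vision-capable models, where a user message carries a list of text and image parts; both the model name and the content structure here should be read as illustrative assumptions rather than the launch-day API.

```python
# Rough sketch of mixed text-and-image input to a vision-capable GPT-4 model.
# NOTE: image input was not publicly exposed at launch; the model name and
# content-parts format below follow later OpenAI documentation and are
# included here only as an illustration.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4-vision-preview",  # illustrative vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response["choices"][0]["message"]["content"])
```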

AI language models from OpenAI and others have been gaining traction for years, but only recently have they become ubiquitous. GPT-4 is the latest step in that long progression.

In its announcement, OpenAI emphasized that GPT-4 went through six months of safety training, and that in internal tests it was 82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses than GPT-3.5.
