What is GPT-4o? OpenAI unveils powerful AI model with vision, text, and voice, free for all users

In what may be the most significant and exciting update to ChatGPT for free users yet, OpenAI has brought GPT-4-level capabilities to its latest model, GPT-4o. Mira Murati, OpenAI’s CTO and host of the OpenAI Spring Update event, said that GPT-4o is faster and delivers OpenAI’s most sophisticated features to everyone.

The live broadcast event featured the launch of ChatGPT’s desktop client, a web UI upgrade, free access to GPT-4o, and a live demonstration of GPT-4o’s capabilities. During the session, Murati reiterated OpenAI’s mission: “to ensure that artificial intelligence benefits all humanity.”

The ChatGPT desktop app demonstrated at the event is designed to make ChatGPT easy to access and to integrate into one’s workflow. Murati added that the revamped UI makes interactions feel more natural, letting users focus on collaborating with ChatGPT rather than on the interface. As AI models become more capable and complex, she said, OpenAI wants users to be able to interact with them more naturally and engagingly.

What is GPT-4o?

GPT-4o delivers GPT-4-level intelligence while being significantly faster, with improved capabilities across text, vision, and audio. Murati described the new model as OpenAI’s first big step forward in ease of use, making human-machine interaction considerably more natural.

Voice mode on GPT-4o is efficient and can recognize the speaker’s voice, even in conversations with multiple speakers. Until now, voice mode relied on three separate models working together: one for transcription, one for intelligence, and one for text-to-speech. Passing data between these models introduced latency; with GPT-4o, all of this happens natively in a single model.
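To make the latency point concrete, here is a minimal sketch of what such a three-stage pipeline looks like when assembled from OpenAI’s existing API endpoints (Whisper for transcription, a chat model for the reply, and a TTS voice for output). The model names, voice, and file paths are illustrative, and this is not GPT-4o’s internal implementation, which processes audio natively:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1: transcription (speech -> text)
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Stage 2: intelligence (text -> text)
reply = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": transcript.text}],
)

# Stage 3: text-to-speech (text -> speech)
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
speech.stream_to_file("answer.mp3")  # save the synthesized reply
```

Each stage is a separate network round trip, which is where the old voice mode’s latency came from; GPT-4o collapses the three hops into a single natively multimodal call.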

More than 100 million people currently use ChatGPT to learn, create, and collaborate, but until now its most advanced tools were available only to paying customers. With GPT-4o and its increased efficiency, those capabilities are opening up to all users. Starting today, free users will also be able to use GPTs from the GPT Store, gaining access to more than a million GPTs and giving developers a much larger audience.

GPT-4o also has vision, allowing users to upload photographs and documents and start conversations about them. During conversations, users can draw on the Memory feature, which gives ChatGPT continuity across sessions, as well as search for real-time information. OpenAI has also improved ChatGPT’s quality and speed in 50 different languages.
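As a rough illustration of the vision capability on the developer side, here is a minimal sketch of asking GPT-4o about an image through the API; the image URL and prompt are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Mix text and image parts in a single user message so the model
# can answer a question about the picture.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this photo?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```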

GPT-4o capabilities

Mark Chen and Barret Zoph, OpenAI research leads, ran several live demos at the event. They began with real-time voice conversations with ChatGPT, showing how the model can pick up on a user’s emotions and respond in a range of expressive conversational styles. Showcasing the vision capabilities, Zoph wrote out a linear equation and asked ChatGPT to solve it step by step, which the chatbot did with ease.

The ChatGPT desktop application was also presented. The duo asked ChatGPT to help with some coding questions; the bot assisted them and generated a one-sentence summary of a complex chart. It can even read a user’s emotions by looking at their face in real time. Murati and Chen also showed off ChatGPT’s live, real-time translation capabilities.

GPT-4o will be rolled out iteratively over the next two weeks. It is also available via the API, where it is twice as fast and 50% cheaper than GPT-4 Turbo, with five times higher rate limits.
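For developers, a minimal sketch of calling GPT-4o through the official Python SDK might look like the following; the prompt is a placeholder, and streaming is used simply to make the model’s speed visible as tokens arrive:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream the response token by token instead of waiting for the
# full completion.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain GPT-4o in one sentence."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```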
