OpenAI Spring Update and GPT-4o

May 14, 2024

OpenAI announced a few very nice updates during its Spring Update yesterday, most notably GPT-4o:

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
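Since the announcement mentions GPT-4o being 50% cheaper in the API, here’s a rough sketch of what a request looks like through the OpenAI Python SDK, combining text and an image in one message. The `gpt-4o` model identifier matches OpenAI’s API docs, and the image URL is just a placeholder:

```python
# Minimal sketch: a text-plus-image request to GPT-4o via the OpenAI Python SDK.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                # A single message can mix text and image parts.
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # Placeholder URL for illustration only.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Worth noting: the audio conversations demoed on stage weren’t part of the initial API rollout, so the sketch sticks to text and image input.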

The video demo of GPT-4o’s conversational style was especially interesting. We’ve had text-to-speech capabilities for a long time, but this feels far more conversational and ‘real’. If the product is as good as the demo, it’s going to be really cool. The conversation felt very relaxed and, for lack of a better term, human.

OpenAI also announced a desktop app for macOS, available to Plus users starting today. Very interesting to see a macOS app arrive before a Windows one. Excited to give it a spin.