Artificial Intelligence

ChatGPT-4 Turbo explained: What makes it the biggest update since launch

OpenAI just dropped some major news at its inaugural developer conference – the latest upgrades to its large language models (LLMs). The highlight of the show is GPT-4 Turbo, now available in preview. GPT-4 Turbo is a souped-up version of the already impressive GPT-4, boasting a significantly expanded context window and much fresher training data.

OpenAI says the new model is not only more capable but also cheaper than the older versions. Unlike its predecessors, it has been trained on information up to April 2023, a significant leap from the previous cutoff of September 2021. I gave it a spin, and it's true: with GPT-4 selected, ChatGPT can pull in info from events up to April 2023. That update is already live.


GPT-4 Turbo goes big with its context window

GPT-4 Turbo has a far bigger context window than the older versions. The context window is the chunk of text or code the model reads before spitting out a reply, and it now stretches to a whopping 128,000 tokens. OpenAI notes in its blog post that that works out to around 300 pages of text.

You could practically toss a whole novel at ChatGPT in a single chat thanks to that massive context window – a far roomier playground than the 8,000- or 32,000-token limits of the older versions.
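To get a feel for what 128,000 tokens actually holds, here is a minimal sketch that counts a document's tokens with tiktoken, OpenAI's open-source tokenizer (the file name is just a placeholder):

# Check whether a document fits in GPT-4 Turbo's 128,000-token window.
import tiktoken

CONTEXT_WINDOW = 128_000  # GPT-4 Turbo's advertised limit

def fits_in_context(text: str, model: str = "gpt-4") -> bool:
    """Return True if `text` tokenizes to at most CONTEXT_WINDOW tokens."""
    encoding = tiktoken.encoding_for_model(model)
    n_tokens = len(encoding.encode(text))
    print(f"{n_tokens:,} tokens (limit {CONTEXT_WINDOW:,})")
    return n_tokens <= CONTEXT_WINDOW

with open("novel.txt") as f:  # hypothetical 300-page manuscript
    print(fits_in_context(f.read()))

As a rule of thumb, a token is roughly three-quarters of an English word, which is where the 750-words-per-1,000-tokens figure below comes from.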

Context windows play a big role for LLMs because they determine how much of the conversation the model can actually keep in view. If you've chatted with large language models, you've probably seen them veer off course once the conversation outgrows that window. That can lead to some wild and creepy responses, like the time Bing Chat told us it dreamed of being human. With GPT-4 Turbo, the hope is that it will stay coherent far longer than the current model.

Developers will be catching a break with GPT-4 Turbo's pricing: $0.01 for every 1,000 input tokens (about 750 words) and $0.03 for every 1,000 output tokens. OpenAI says that makes input tokens three times cheaper, and output tokens twice as cheap, compared to GPT-4.
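To make those rates concrete, here is a quick back-of-the-envelope calculator; the only inputs are the preview prices quoted above:

# Dollar cost of one GPT-4 Turbo call at the preview pricing.
INPUT_PRICE_PER_1K = 0.01   # USD per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.03  # USD per 1,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Feeding in ~300 pages (about 128,000 tokens) and getting a
# 1,000-token summary back costs roughly $1.31.
print(f"${request_cost(128_000, 1_000):.2f}")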

GPT-4 Turbo diligently follows instructions

OpenAI claims that GPT-4 Turbo steps up its game in following instructions diligently. You can now tell it to respond in a specific structured format, such as XML or JSON, when generating results. GPT-4 Turbo is also on board with images and text-to-speech, and it's keeping the integration with DALL-E 3.
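For developers, the JSON side of this is exposed through the API's response_format parameter. A minimal sketch with the openai Python package (v1 or later) might look like the following; the prompt is invented for illustration, and gpt-4-1106-preview is the GPT-4 Turbo preview model:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # forces valid JSON output
    messages=[
        {"role": "system",
         "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user",
         "content": "List three uses for a 128k-token context window."},
    ],
)
print(response.choices[0].message.content)  # a JSON string

Note that JSON mode expects the word "JSON" to appear somewhere in the messages, which is why the system prompt spells it out.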

OpenAI didn't stop at GPT-4 Turbo – it also dropped another bomb with GPTs. These are basically personalized versions of ChatGPT that anyone, coding know-how or not, can whip up for their specific needs. You can build these GPTs for personal or business use and even share them around. OpenAI says GPTs are available right now to ChatGPT Plus subscribers and enterprise users.

Finally, with copyright worries ongoing, OpenAI is joining Google and Microsoft in pledging to take the legal heat if any of its customers get hit with a copyright infringement lawsuit.

So, with the massive context window, the fresh copyright protection, and a knack for following instructions, GPT-4 Turbo could be a double-edged sword. ChatGPT is pretty decent at steering clear of no-go zones, but let's face it, there's a shadowy side. This souped-up version, while super capable, might still carry the usual pitfalls of other language models, just cranked up to eleven this time.

New features coming to ChatGPT-4

Previously, OpenAI dropped the news that they’re rolling out these new goodies to ChatGPT Plus and Enterprise users over the next two weeks. The voice feature is up for grabs on iOS and Android, but you gotta opt in. Meanwhile, the images feature is on deck for all ChatGPT platforms. OpenAI’s game plan is to spread the love for images and voice features to more than just the paying users after the initial rollout.

The voice chat feature is like having a spoken conversation with ChatGPT. You hit the button, ask your question out loud, and instead of getting a text reply, the chatbot talks back to you. It's a bit like dealing with virtual assistants such as Alexa or Google Assistant, and it might be a sign of a major shake-up in how virtual assistants work. OpenAI's news dropped right on the heels of Amazon announcing a similar feature heading to Alexa.

To make voice and audio chats happen with ChatGPT, OpenAI relies on a fresh text-to-speech model. This bad boy can whip up “human-like audio from just text and a few seconds of sample speech.” And if you’re going the other way – from speech to text – their Whisper model’s got you covered, transcribing your spoken words into text.
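In API terms, those two directions map onto OpenAI's audio endpoints. Here is a hedged sketch using the openai Python package (v1 or later); the voice choice, sample text, and file names are placeholders:

from openai import OpenAI

client = OpenAI()

# Text -> speech with the new TTS model.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo can read about 300 pages of text in one go.",
)
speech.stream_to_file("reply.mp3")

# Speech -> text with Whisper.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
print(transcript.text)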


OpenAI roping in new partners

With the image feature, you can snap a pic and throw it into the ChatGPT mix along with your question. There’s even a drawing tool in the app to make things crystal clear. You can go back and forth with the chatbot until you sort out your problem, kinda like how Microsoft’s fresh Copilot feature in Windows works, and that one’s powered by OpenAI’s model too.
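The same back-and-forth over an image is available to developers through the vision-enabled preview model. A minimal sketch, with an invented image URL and question:

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Why won't this bike seat lower? What should I check?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/bike-seat.jpg"}},
        ],
    }],
    max_tokens=300,  # the vision preview defaults to a small output cap
)
print(response.choices[0].message.content)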

ChatGPT started out as a text-only chatbot not too long ago, around the end of last year. But OpenAI didn't stop there. The underlying model has been amped up from GPT-3.5 at launch to the latest and greatest GPT-4, the model that's getting all these cool new features.

Back in March, when GPT-4 made its debut, OpenAI announced some notable partnerships. Duolingo hopped on board, using the AI model to fine-tune listening and speech lessons in the language learning app. OpenAI also teamed up with Be My Eyes, a mobile app assisting blind and low-vision folks. More recently, a Spotify collaboration is making podcast translations happen without losing the podcaster's voice vibe. A bunch of these apps and services were already in action even before the images and voice upgrade.

Vishal Kawadkar

With over 8 years of experience in tech journalism, Vishal is someone with an innate passion for exploring and delivering fresh takes. Embracing curiosity and innovation, he strives to provide an informed and unique outlook on the ever-evolving world of technology.
