GPT-4 Turbo extends the context length to 128,000 tokens

The extended context length is comparable to a 300-page book.





OpenAI just announced GPT-4 Turbo at its first-ever DevDay event. The new model extends the context length to 128,000 tokens.

The company says 128,000 tokens are comparable to a 300-page book, so users will now have much more room to interact with ChatGPT in a single conversation.
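A quick back-of-envelope calculation shows where the "300-page book" comparison comes from. The conversion factors below are rough rules of thumb (an assumption, not from the announcement): about 0.75 English words per token, and 300-350 words per printed page.

```python
# Back-of-envelope check of the "300-page book" claim.
# Assumptions: ~0.75 English words per token, 300-350 words per page.
context_tokens = 128_000
words = context_tokens * 0.75        # ~96,000 words
pages_dense = words / 350            # densely set pages
pages_light = words / 300            # lighter pages
print(f"~{words:,.0f} words, roughly {pages_dense:.0f}-{pages_light:.0f} pages")
```

That lands right around the 300-page figure OpenAI cites.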

Unlimited context length is still in development (as we covered earlier this year, Microsoft is working toward it), but extending GPT-4's context length to this degree will be more than useful in the meantime.

GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.

OpenAI

GPT-4 Turbo is currently available in preview, but OpenAI promises to release a stable version in the coming weeks.

GPT-4 Turbo: Here’s everything you need to know about it

GPT-4 Turbo will have improved instruction following, and according to OpenAI, this GPT version will also bring new improvements to function calling.

We’re releasing several improvements today, including the ability to call multiple functions in a single message: users can send one message requesting multiple actions, such as “open the car window and turn off the A/C”, which would previously require multiple roundtrips with the model (learn more). We are also improving function calling accuracy: GPT-4 Turbo is more likely to return the right function parameters.

OpenAI
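The "open the car window and turn off the A/C" example boils down to the model returning several tool calls in one assistant message, which the developer then dispatches. Here is a minimal sketch of that dispatch loop; the two functions and the hard-coded `tool_calls` list are hypothetical illustrations, not a real API reply, though their shape mirrors the chat completions `tool_calls` field.

```python
import json

# Hypothetical tools the model can call (illustrative names).
def open_car_window(side: str) -> str:
    return f"window {side} opened"

def turn_off_ac() -> str:
    return "A/C off"

TOOLS = {"open_car_window": open_car_window, "turn_off_ac": turn_off_ac}

# With parallel function calling, a single assistant message can carry
# several tool calls. This list stands in for that response (assumption:
# same shape as the API's tool_calls entries).
assistant_tool_calls = [
    {"id": "call_1", "function": {"name": "open_car_window",
                                  "arguments": '{"side": "driver"}'}},
    {"id": "call_2", "function": {"name": "turn_off_ac",
                                  "arguments": "{}"}},
]

# One pass over the message handles every requested action -- previously
# each action needed its own round trip with the model.
results = []
for call in assistant_tool_calls:
    fn = TOOLS[call["function"]["name"]]
    kwargs = json.loads(call["function"]["arguments"])
    results.append({"tool_call_id": call["id"], "content": fn(**kwargs)})

for r in results:
    print(r["tool_call_id"], "->", r["content"])
```

In a real integration, each result would be sent back to the model as a `tool` message keyed by its `tool_call_id`.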

The model will also be capable of reproducible outputs, which, in OpenAI's own words, is invaluable to developers.

This beta feature is useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over the model behavior. We at OpenAI have been using this feature internally for our own unit tests and have found it invaluable. 

OpenAI
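Reproducible outputs work through a new beta `seed` request parameter: send the same seed with otherwise identical parameters and the model will (mostly) return the same text. The sketch below only builds the request payload a client would send, to show where `seed` fits; no API call is made, and the determinism caveats are assumptions based on the announcement.

```python
# Sketch: where the beta `seed` parameter sits in a chat completions
# request. We build the payload only -- no network call is made.

def build_request(prompt: str, seed: int) -> dict:
    return {
        "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview model name
        "seed": seed,                   # same seed + same params -> (mostly) same output
        "temperature": 0,               # determinism also wants low temperature
        "messages": [{"role": "user", "content": prompt}],
    }

a = build_request("Summarise this article.", seed=42)
b = build_request("Summarise this article.", seed=42)

# Two identical requests: a real reproducibility check would compare the
# responses' text (and system_fingerprint) across the two calls.
print(a == b)
```

This is what makes replaying a request for debugging or pinning a unit test feasible.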

Along with GPT-4 Turbo, a new GPT-3.5 Turbo is also coming, and it will ship with multiple new features.

The new 3.5 Turbo supports improved instruction following, JSON mode, and parallel function calling. For instance, our internal evals show a 38% improvement on format following tasks such as generating JSON, XML and YAML. Developers can access this new model by calling gpt-3.5-turbo-1106 in the API. Applications using the gpt-3.5-turbo name will automatically be upgraded to the new model on December 11. Older models will continue to be accessible by passing gpt-3.5-turbo-0613 in the API until June 13, 2024.

OpenAI
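The JSON mode mentioned in the quote is switched on via a `response_format` field, and the new model is selected by its `gpt-3.5-turbo-1106` name. The sketch below shows both, again without calling the API; the model reply at the end is a hypothetical string, included only to show that in JSON mode the output is guaranteed to parse as JSON.

```python
import json

# Sketch: request payload for gpt-3.5-turbo-1106 with JSON mode on.
payload = {
    "model": "gpt-3.5-turbo-1106",
    "response_format": {"type": "json_object"},  # JSON mode
    "messages": [
        # JSON mode requires the word "JSON" to appear in the prompt.
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List two colours."},
    ],
}

# Hypothetical model reply: with JSON mode it must parse cleanly.
reply = '{"colours": ["red", "blue"]}'
parsed = json.loads(reply)
print(parsed["colours"])
```

Switching back to the older snapshot is just a matter of passing `gpt-3.5-turbo-0613` as the `model` value instead, which OpenAI says remains available until June 13, 2024.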

What are your thoughts on the new GPT-4 Turbo?
