Mistral AI's Codestral LLM can handle over 80 programming languages

Codestral outperformed rivals such as CodeLlama 70B and DeepSeek Coder 33B in most benchmarks

An AI-generated image of Codestral from Mistral

French AI company Mistral AI has developed Codestral, its first coding large language model (LLM). The model is fluent in over 80 programming languages, including Python, C, C++, Java, and JavaScript, and it handles English prompts just as well. So, if you are a developer, you can prompt it in English to build AI applications and work with code.

How is Codestral different from other coding LLMs?

Codestral is a 22B-parameter model that is accessible through a dedicated API endpoint. Developers can use it to write and interact with code: it can generate code from scratch, complete functions, write tests, and fill in partial code with its fill-in-the-middle (FIM) feature.
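To give a feel for the fill-in-the-middle route, here is a rough, unofficial sketch. The endpoint path, the codestral-latest model name, the payload fields, and the response shape are assumptions based on Mistral AI's public API documentation, so check the official docs before relying on it.

```python
import os
import requests

# Hedged sketch of a fill-in-the-middle (FIM) request to Codestral.
# Endpoint path, model name, and payload fields are assumptions based on
# Mistral AI's public API docs, not details taken from this article.
API_URL = "https://api.mistral.ai/v1/fim/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # your Mistral API key

payload = {
    "model": "codestral-latest",
    "prompt": "def fibonacci(n: int) -> int:\n    ",  # code before the gap
    "suffix": "\n\nprint(fibonacci(10))",             # code after the gap
    "max_tokens": 128,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# The generated middle section is expected under "choices" in the JSON reply.
print(resp.json()["choices"][0]["message"]["content"])
```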

In addition, developers get code suggestions based on their queries, and the wide language coverage helps them work across different environments. Mistral AI also says the model reduces the chances of bugs and errors creeping into code.

According to benchmarks, Codestral beats earlier coding LLMs such as CodeLlama 70B, Llama 3 70B, and DeepSeek Coder 33B, setting a new standard in the performance/latency space. It scored higher than the others in most Python benchmarks. However, DeepSeek Coder 33B beat Codestral in MBPP, and Llama 3 70B outperformed it by 3.6% in the Spider benchmark for SQL.

Codestral also fell slightly behind Llama 3 70B in the HumanEval benchmark for C++, C#, and TypeScript. However, it beat the other LLMs in Bash, Java, and PHP. On top of that, Codestral's average HumanEval score came in 0.3% above Llama 3 70B's 61.2%.

The coding LLM from Mistral AI also works with popular developer productivity and AI application tools. For example, LlamaIndex and LangChain have integrated the model to help you build agentic applications. In addition, Continue.dev and Tabnine let you use Codestral in VS Code and JetBrains environments to develop code.
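As a rough illustration of that kind of integration, the snippet below assumes the langchain-mistralai package, its ChatMistralAI class, a MISTRAL_API_KEY environment variable, and the codestral-latest model name; treat it as a sketch rather than official usage.

```python
# Hedged sketch: calling Codestral through LangChain.
# Assumes `pip install langchain-mistralai`; the model name "codestral-latest"
# is an assumption, not something confirmed by this article.
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="codestral-latest", temperature=0)

# Ask the model to write a small function; the reply is an AIMessage.
reply = llm.invoke("Write a Python function that reverses a string.")
print(reply.content)
```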

How can you start using the Mistral AI coding LLM?

To start using Codestral for non-commercial research and testing purposes, download it from Hugging Face. You can also access the coding LLM through two API endpoints: codestral.mistral.ai and api.mistral.ai.
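If you go the download route, something like the sketch below should work. The repository name used here is assumed, and the weights are gated, so you first need to accept the license on the model page and authenticate with a Hugging Face token.

```python
# Hedged sketch: downloading the Codestral weights from Hugging Face.
# The repo id below is an assumption; the model is gated, so accept the
# license on the model page and run `huggingface-cli login` first.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="mistralai/Codestral-22B-v0.1",  # assumed repository name
    local_dir="./codestral-22b",
)
print(f"Model files downloaded to {local_dir}")
```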

The codestral.mistral.ai API endpoint is for users who want to call the Instruct or Fill-In-the-Middle routes from inside their IDE. It comes with an API key managed at the personal level, it is free during an eight-week beta period, and Mistral AI has gated access behind a waitlist.

The api.mistral.ai endpoint is a bit different: queries are billed per token and it uses Mistral AI's usual API keys, which makes it better suited for research, batch queries, or third-party application development. You can also try Codestral on Le Chat, Mistral AI's free conversational interface.
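To make the difference concrete, the rough sketch below sends the same chat-style request to either base URL. The request path, payload shape, and codestral-latest model name are assumptions based on Mistral AI's public chat completions API, not details from this article.

```python
import os
import requests

# Hedged sketch: the same chat-style request can target either endpoint.
# Base URLs come from the article; the path, payload shape, and model name
# are assumptions based on Mistral AI's public chat completions API.
DEDICATED = "https://codestral.mistral.ai/v1/chat/completions"  # personal key, IDE use
GENERAL = "https://api.mistral.ai/v1/chat/completions"          # billed per token

payload = {
    "model": "codestral-latest",
    "messages": [
        {"role": "user", "content": "Write a SQL query that lists duplicate emails."}
    ],
}

resp = requests.post(
    DEDICATED,  # swap in GENERAL if you are on the pay-per-token endpoint
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```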

Ultimately, Codestral from Mistral AI has great potential to become one of the best coding LLMs on the market. However, the competition is fierce: many developers already rely on ChatGPT and GitHub Copilot, and other small coding models are available as well.

Will you try Codestral? Let us know in the comments.
