Orca-Math outperforms LLMs 10 times its size while being more efficient

A new super AI for math is in the works, and it shows great results






Orca-Math, a new model fine-tuned from Mistral 7B, is everything you need for math word problems. After all, researchers trained it using an impressive method. In addition, according to the research, the AI is capable of overshadowing much larger models, such as Gemini Pro or GPT-3.5.

Furthermore, Orca-Math practices on the math problems it is given and learns from feedback on its own solutions to improve further. The team behind it hopes to soon introduce more capable small language models similar to Orca.

Is there an AI that can do math?

Various AIs can do math; take Orca-Math as an example. However, some are better than others, mainly depending on how difficult the problems are. Also, you are not the only one taking math tests: AI models sit them too. Through these benchmarks, researchers can determine which model is faster, more efficient, and overall better than the others. One common bar is scoring at least 80% on GSM8K, a benchmark of grade-school math word problems. Fortunately, according to Arindam Mitra’s post on X, Orca-Math scored 86.81%.
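To put that number in context, GSM8K-style accuracy is usually just the share of test problems whose final numeric answer matches the reference, so 86.81% corresponds to roughly 1,145 of GSM8K's 1,319 test problems. Below is a minimal, hypothetical sketch of such a scorer; the helper names and the answer-extraction rule are assumptions for illustration, not the evaluation code used for Orca-Math.

```python
# Hypothetical sketch of a GSM8K-style scorer: compare the last number in the
# model's solution with the last number in the reference answer.
import re

def final_number(text: str) -> str | None:
    """Return the last number that appears in a solution string."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def gsm8k_accuracy(model_solutions: list[str], gold_answers: list[str]) -> float:
    """Percentage of problems whose final number matches the reference."""
    correct = sum(
        final_number(pred) == final_number(gold)
        for pred, gold in zip(model_solutions, gold_answers)
    )
    return 100.0 * correct / len(gold_answers)

print(gsm8k_accuracy(["So the total is 18 apples."], ["#### 18"]))  # 100.0
```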

There are multiple training methods for AI. For example, some researchers use a large number of different math problems, while others use fewer problems but increase their difficulty each time the AI clears them. To train Orca-Math, researchers combined several tools: one to generate problems, GPT-4 to answer them, and another to check the solutions. In the end, they used the results to fine-tune Orca-Math.
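To make that generate-answer-check loop concrete, here is a toy, runnable sketch. A random template stands in for the problem generator, a trivial solver stands in for GPT-4, and the checker recomputes the expected answer; none of these functions is Microsoft's actual tooling, they only mirror the pipeline's shape.

```python
# Toy sketch of the pipeline: generate a problem, have a "model" answer it,
# keep the pair only if the checker accepts the solution.
import random

def generate_problem() -> tuple[str, int]:
    """Stand-in for the problem-generating tool: a templated word problem."""
    a, b = random.randint(2, 20), random.randint(2, 20)
    return f"Sam has {a} apples and buys {b} more. How many apples does Sam have?", a + b

def answer_problem(problem: str) -> int:
    """Stand-in for GPT-4 writing a solution: here, just add the numbers."""
    return sum(int(word) for word in problem.split() if word.isdigit())

def build_training_set(n: int) -> list[dict]:
    """Keep only the problems whose generated solution passes the checker."""
    dataset = []
    for _ in range(n):
        problem, expected = generate_problem()
        solution = answer_problem(problem)
        if solution == expected:  # stand-in for the answer-checking tool
            dataset.append({"question": problem, "answer": solution})
    return dataset

print(build_training_set(3))
```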

Afterward, researchers used Suggester and Editor agents to increase the difficulty and complexity of the problems: the Suggester proposes ways to make a problem harder, and the Editor rewrites the problem accordingly. As a result, they created a system that increases the capabilities and effectiveness of the AI.
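As an illustration of that two-role idea (not Microsoft's prompts or agents, which are GPT-4-based), a sketch of the loop might look like this: the Suggester returns a hardening suggestion and the Editor rewrites the problem to follow it.

```python
# Toy Suggester/Editor loop; the string rules below are made up for illustration.
def suggester(problem: str) -> str:
    """Propose one way to make the problem harder (toy rule, not a GPT-4 agent)."""
    return "introduce a second transaction the solver must subtract"

def editor(problem: str, suggestion: str) -> str:
    """Rewrite the problem; in the real system the Editor conditions on the suggestion."""
    return problem.replace(
        "How many apples does Sam have?",
        "Sam then gives 5 apples away. How many apples does Sam have left?",
    )

problem = "Sam has 8 apples and buys 4 more. How many apples does Sam have?"
harder = editor(problem, suggester(problem))
print(harder)
```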

In a nutshell, Orca-Math is a small LLM with excellent capabilities that can surpass much larger LLMs. This math AI is a step toward even better ones. Furthermore, the project proves that proper training can make smaller LLMs more efficient. Also, if you are interested, know that Microsoft has published its dataset of 200K Orca-Math word problems.

What are your thoughts? Are you going to use Orca-Math when it becomes available? Let us know in the comments.
