Microsoft’s new rStar-Math technique upgrades small models to outperform OpenAI’s o1-preview at math problems


Microsoft is doubling down on the potential of small language models (SLMs) with the unveiling of rStar-Math, a new reasoning technique that can be applied to small models to boost their performance on math problems to a level similar to, and in some cases exceeding, that of OpenAI’s o1-preview model.

While still in a research phase, as outlined in a paper published on the preprint site arXiv.org and credited to eight authors at Microsoft, Peking University and Tsinghua University in China, the technique was applied to several smaller open-source models, including Microsoft’s own Phi-3 mini, Alibaba’s Qwen-1.5B (a 1.5-billion-parameter model) and Qwen-7B (a 7-billion-parameter model). It improved performance on all of them, even exceeding that of OpenAI’s previously most advanced model on the third-party MATH benchmark of 12,500 word problems spanning branches such as geometry and algebra at all levels of difficulty.

Ultimately, according to a post on Hugging Face, the researchers plan to make their code and data available on GitHub at https://github.com/microsoft/rStar, though one of the paper’s authors, Li Lyna Zhang, wrote in the comments on the Hugging Face post that the team is “still undergoing the internal review process for open-source release.” As such, “the repository remains private for now. Please stay tuned!”

Community members expressed enthusiasm, calling the innovations “impressive” and praising the blend of Monte Carlo Tree Search (MCTS) with step-by-step reasoning. One commenter highlighted the simplicity and utility of using Q-values for step scoring, while others speculated on future applications in geometric proofs and symbolic reasoning.

This news follows closely on the heels of the open-sourcing of Microsoft’s Phi-4 model, a smaller 14-billion-parameter AI system now available on Hugging Face under the permissive MIT license.

While the Phi-4 release has expanded access to high-performance small models, rStar-Math showcases a specialized approach: using smaller AI systems to achieve state-of-the-art results in mathematical reasoning.

rStar-Math works by using several different models and components to help a target small model ‘self-evolve’

The key to rStar-Math is that it leverages Monte Carlo Tree Search (MCTS), a method that mimics human “deep thinking” by iteratively refining step-by-step solutions to mathematical problems.

The researchers used MCTS because it “breaks down complex math problems into simpler single-step generation tasks, reducing the difficulty” for smaller models.
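The general MCTS loop the researchers build on can be sketched on a toy problem. The following is a minimal illustration of the four classic phases (selection, expansion, simulation, backpropagation), not Microsoft’s implementation; the “reach a target sum in single steps” task and all names are invented for this example.

```python
import math
import random

# Toy "problem": reach TARGET exactly by choosing step sizes. Each step is a
# stand-in for one single-step generation task in a partial solution.
TARGET = 10
STEPS = (1, 2, 3)

class Node:
    def __init__(self, total=0, parent=None):
        self.total = total      # state: running sum so far
        self.parent = parent
        self.children = {}      # step size -> child Node
        self.visits = 0
        self.value = 0.0        # accumulated reward from playouts

    def ucb(self, c=1.4):
        """Upper-confidence bound used to decide which child to explore next."""
        if self.visits == 0:
            return float("inf")
        exploit = self.value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def rollout(total):
    """Random playout: take random steps; reward 1 only for an exact hit."""
    while total < TARGET:
        total += random.choice(STEPS)
    return 1.0 if total == TARGET else 0.0

def mcts(iterations=500):
    root = Node()
    for _ in range(iterations):
        node = root
        while node.children and node.total < TARGET:     # selection
            node = max(node.children.values(), key=Node.ucb)
        if node.total < TARGET:                          # expansion
            for step in STEPS:
                node.children.setdefault(step, Node(node.total + step, node))
            node = random.choice(list(node.children.values()))
        reward = rollout(node.total)                     # simulation
        while node is not None:                          # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return root
```

In rStar-Math the playouts and rewards come from model generations and step verification rather than random sums, but the iterative refine-and-backpropagate structure is the same: each iteration makes the statistics at promising intermediate steps a little more reliable.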

However, they didn’t just apply MCTS as other researchers have done. Instead, they also asked the model they trained to always output its “chain-of-thought” reasoning steps as both natural-language descriptions and Python code.

They mandated that the model include the natural-language responses as Python code comments, and only outputs containing Python code were used to train the model.
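That filtering idea can be sketched in a few lines: assuming candidate steps arrive as Python snippets whose natural-language rationale lives in comments, keep only those that actually execute. The function name and the example steps below are hypothetical, invented for illustration.

```python
def filter_executable_steps(steps):
    """Keep only candidate reasoning steps whose Python code runs without raising."""
    namespace = {}  # shared namespace so later steps can use earlier variables
    kept = []
    for code in steps:
        try:
            exec(code, namespace)
        except Exception:
            continue  # discard steps whose code fails to execute
        kept.append(code)
    return kept

candidates = [
    "# Step 1: let x be the number of apples\nx = 12",
    "# Step 2 (buggy): divides by a variable that was never defined\ny = x / z",
    "# Step 3: half of the apples are eaten\nremaining = x // 2",
]
print(filter_executable_steps(candidates))  # the buggy step is dropped
```

Executing the code gives a cheap, automatic check on each step, so training data isn’t polluted by reasoning whose arithmetic quietly fails.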

The researchers also trained a “policy model” to generate math reasoning steps and a process preference model (PPM) to select the most promising steps toward solving the problems, and improved both over four rounds of “self-evolution,” with each model improving the other.
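The commenters’ point about using Q-values for step scoring suggests one simple way a preference model’s training pairs could be built: rank the candidate steps at a given position by their Q-values (e.g. from MCTS statistics) and prefer higher-scoring steps over lower-scoring ones. This is a hedged sketch, not the paper’s exact recipe; `preference_pairs` and the example steps are invented for illustration.

```python
def preference_pairs(candidates):
    """candidates: list of (step_text, q_value) for one reasoning position.
    Returns (preferred, rejected) pairs: each higher-Q step is preferred
    over each lower-Q step."""
    ranked = sorted(candidates, key=lambda sq: sq[1], reverse=True)
    pairs = []
    for i, (good, q_good) in enumerate(ranked):
        for bad, q_bad in ranked[i + 1:]:
            if q_good > q_bad:  # skip ties: no preference signal
                pairs.append((good, bad))
    return pairs

steps = [("factor the quadratic", 0.8),
         ("guess x = 3", 0.2),
         ("expand both sides", 0.5)]
print(preference_pairs(steps))
```

Pairwise preferences like these are often easier to learn from than raw scores, since the model only has to judge which of two steps is better rather than predict an absolute value.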

For their starting data, the researchers said they used “747,000 math word problems from publicly available sources,” along with their solutions, but generated new steps for solving them with the two models described above.

Record-breaking results

After four rounds of self-evolution, rStar-Math achieved significant milestones:

• On the MATH benchmark, the accuracy of the Qwen2.5-Math-7B model jumped from 58.8% to 90.0%, outperforming OpenAI o1-preview.

• On the American Invitational Mathematics Examination (AIME), it solved 53.3% of problems, placing among the top 20% of high school competitors.

These results highlight the power of SLMs in handling complex mathematical reasoning, traditionally dominated by larger systems.

Smaller is better?

In recent years, AI innovation has largely been driven by scaling up language models, with increasing parameters seen as a way to improve performance. Yet, the high costs associated with these massive models, from computational resources to energy consumption, have raised questions about scalability.

Microsoft is offering an alternative path, focusing on efficiency. The release of rStar-Math further underscores this commitment by demonstrating how SLMs can rival — and in some cases exceed — the capabilities of their larger counterparts.

Microsoft’s dual releases of Phi-4 and the rStar-Math paper suggest that compact, specialized models can provide powerful alternatives to the industry’s largest systems.

Moreover, by outperforming larger competitors in key benchmarks, these models challenge the notion that bigger is always better. They open doors for mid-sized organizations and academic researchers to access cutting-edge capabilities without the financial or environmental burden of massive models.
