OpenScholar: The open-source AI that’s outperforming GPT-4o in scientific research

Scientists are drowning in data. With millions of research papers published every year, even the most dedicated experts struggle to stay updated on the latest findings in their fields.

A new artificial intelligence system, called OpenScholar, is promising to rewrite the rules for how researchers access, evaluate, and synthesize scientific literature. Built by the Allen Institute for AI (Ai2) and the University of Washington, OpenScholar combines cutting-edge retrieval systems with a fine-tuned language model to deliver citation-backed, comprehensive answers to complex research questions.

“Scientific progress depends on researchers’ ability to synthesize the growing body of literature,” the OpenScholar researchers wrote in their paper. But that ability is increasingly constrained by the sheer volume of information. OpenScholar, they argue, offers a path forward—one that not only helps researchers navigate the deluge of papers but also challenges the dominance of proprietary AI systems like OpenAI’s GPT-4o.

How OpenScholar’s AI brain processes 45 million research papers in seconds

At OpenScholar’s core is a retrieval-augmented language model that taps into a datastore of more than 45 million open-access academic papers. When a researcher asks a question, OpenScholar doesn’t merely generate a response from pre-trained knowledge, as models like GPT-4o often do. Instead, it actively retrieves relevant papers, synthesizes their findings, and generates an answer grounded in those sources.
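In rough outline, that retrieval-augmented step resembles the Python sketch below: embed the question, rank stored passages against it, and hand the top matches to a language model so the answer stays tied to real sources. The embedding, ranking, and generation functions here are simplified stand-ins for illustration, not OpenScholar’s released code.

```python
# Illustrative sketch of retrieval-augmented answering (not OpenScholar's actual code).
from dataclasses import dataclass

@dataclass
class Passage:
    paper_id: str
    text: str

def embed(text: str) -> list[float]:
    # Placeholder embedding: a real system would use a trained dense retriever here.
    return [float(ord(c)) for c in text.lower()[:16]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[Passage], k: int = 5) -> list[Passage]:
    """Rank stored passages by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(index, key=lambda p: cosine(q, embed(p.text)), reverse=True)[:k]

def answer(query: str, index: list[Passage], generate) -> str:
    """Build a prompt from retrieved passages so the answer cites real sources."""
    passages = retrieve(query, index)
    context = "\n".join(f"[{p.paper_id}] {p.text}" for p in passages)
    prompt = (
        "Answer the question using only the sources below, citing their IDs.\n"
        f"{context}\n\nQuestion: {query}"
    )
    return generate(prompt)  # stand-in for the fine-tuned language model

# Toy usage with a stub generator, just to show the flow:
index = [
    Passage("doe2023", "Retrieval-augmented generation reduces hallucinated citations."),
    Passage("lee2022", "Dense retrievers rank passages by embedding similarity."),
]
print(answer("How does retrieval reduce hallucination?", index, lambda prompt: prompt[:80]))
```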

This ability to stay “grounded” in real literature is a major differentiator. In tests using a new benchmark called ScholarQABench, designed specifically to evaluate AI systems on open-ended scientific questions, OpenScholar excelled. The system demonstrated superior performance on factuality and citation accuracy, even outperforming much larger proprietary models like GPT-4o.

One particularly damning finding involved GPT-4o’s tendency to generate fabricated citations—hallucinations, in AI parlance. When tasked with answering biomedical research questions, GPT-4o cited nonexistent papers in more than 90% of cases. OpenScholar, by contrast, remained firmly anchored in verifiable sources.

This grounding in real, retrieved papers is fundamental. The system relies on what the researchers describe as a “self-feedback inference loop,” in which it “iteratively refines its outputs through natural language feedback, which improves quality and adaptively incorporates supplementary information.”
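Conceptually, that loop can be sketched as follows. The generate, critique, and retrieve_more callables are hypothetical stand-ins under stated assumptions, not the system’s actual interfaces; the point is the control flow of drafting, critiquing, and revising.

```python
# Minimal, self-contained sketch of a self-feedback inference loop (illustrative only).
from typing import Callable

def self_feedback_loop(
    query: str,
    passages: list[str],
    generate: Callable[[str, list[str]], str],
    critique: Callable[[str, str], str],
    retrieve_more: Callable[[str], list[str]],
    max_rounds: int = 3,
) -> str:
    draft = generate(query, passages)
    for _ in range(max_rounds):
        feedback = critique(query, draft)              # natural-language feedback on the draft
        if feedback == "no issues":
            break                                      # nothing left to fix
        passages = passages + retrieve_more(feedback)  # adaptively add supplementary evidence
        draft = generate(query + "\nAddress: " + feedback, passages)
    return draft

# Toy usage with stub callables, just to show the control flow:
final = self_feedback_loop(
    "How do retrieval-augmented LMs reduce hallucination?",
    ["[paper-1] Retrieval grounds generations in cited sources."],
    generate=lambda q, ps: f"Draft answer citing {len(ps)} passage(s).",
    critique=lambda q, d: "no issues",
    retrieve_more=lambda fb: [],
)
print(final)
```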

The implications for researchers, policy-makers, and business leaders are significant. OpenScholar could become an essential tool for accelerating scientific discovery, enabling experts to synthesize knowledge faster and with greater confidence.

How OpenScholar works: The system begins by searching 45 million research papers (left), uses AI to retrieve and rank relevant passages, generates an initial response, and then refines it through an iterative feedback loop before verifying citations. This process allows OpenScholar to provide accurate, citation-backed answers to complex scientific questions. | Source: Allen Institute for AI and University of Washington
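The final verification step shown in the figure can be approximated by a simple check that every citation in the answer points back to a passage that was actually retrieved. The bracketed [paper-id] format below is an assumption made for illustration, not the system’s real citation syntax.

```python
# Hedged sketch of a citation-verification pass (illustrative, not the authors' code).
import re

def verify_citations(answer: str, retrieved_ids: set[str]) -> list[str]:
    """Return cited IDs that were never retrieved, i.e. citations that cannot be verified."""
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    return sorted(cited - retrieved_ids)

# Example: one citation checks out, one does not.
missing = verify_citations(
    "Dense retrieval improves factuality [smith2023]; see also [made-up-2024].",
    retrieved_ids={"smith2023", "jones2022"},
)
print(missing)  # ['made-up-2024'] would be flagged for removal or re-retrieval
```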

Inside the David vs. Goliath battle: Can open source AI compete with Big Tech?

OpenScholar’s debut comes at a time when the AI ecosystem is increasingly dominated by closed, proprietary systems. Models like OpenAI’s GPT-4o and Anthropic’s Claude offer impressive capabilities, but they are expensive, opaque, and inaccessible to many researchers. OpenScholar flips this model on its head by being fully open-source.

The OpenScholar team has released not only the code for the language model but also the entire retrieval pipeline, a specialized 8-billion-parameter model fine-tuned for scientific tasks, and a datastore of scientific papers. “To our knowledge, this is the first open release of a complete pipeline for a scientific assistant LM—from data to training recipes to model checkpoints,” the researchers wrote in their blog post announcing the system.

This openness is not just a philosophical stance; it’s also a practical advantage. OpenScholar’s smaller size and streamlined architecture make it far more cost-efficient than proprietary systems. For example, the researchers estimate that OpenScholar-8B is 100 times cheaper to operate than PaperQA2, a concurrent system built on GPT-4o.

This cost-efficiency could democratize access to powerful AI tools for smaller institutions, underfunded labs, and researchers in developing countries.

Still, OpenScholar is not without limitations. Its datastore is restricted to open-access papers, leaving out paywalled research that dominates some fields. This constraint, while legally necessary, means the system might miss critical findings in areas like medicine or engineering. The researchers acknowledge this gap and hope future iterations can responsibly incorporate closed-access content.

How OpenScholar performs: Expert evaluations show OpenScholar (OS-GPT4o and OS-8B) competing favorably with both human experts and GPT-4o across four key metrics: organization, coverage, relevance and usefulness. Notably, both OpenScholar versions were rated as more “useful” than human-written responses. | Source: Allen Institute for AI and University of Washington

The new scientific method: When AI becomes your research partner

The OpenScholar project raises important questions about the role of AI in science. While the system’s ability to synthesize literature is impressive, it is not infallible. In expert evaluations, OpenScholar’s answers were preferred over human-written responses 70% of the time, but the remaining 30% highlighted areas where the model fell short—such as failing to cite foundational papers or selecting less representative studies.

These limitations underscore a broader truth: AI tools like OpenScholar are meant to augment, not replace, human expertise. The system is designed to assist researchers by handling the time-consuming task of literature synthesis, allowing them to focus on interpretation and advancing knowledge.

Critics may point out that OpenScholar’s reliance on open-access papers limits its immediate utility in high-stakes fields like pharmaceuticals, where much of the research is locked behind paywalls. Others argue that the system’s performance, while strong, still depends heavily on the quality of the retrieved data. If the retrieval step fails, the entire pipeline risks producing suboptimal results.

But even with its limitations, OpenScholar represents a watershed moment in scientific computing. While earlier AI models impressed with their ability to engage in conversation, OpenScholar demonstrates something more fundamental: the capacity to process, understand, and synthesize scientific literature with near-human accuracy.

The numbers tell a compelling story. OpenScholar’s 8-billion-parameter model outperforms GPT-4o while being orders of magnitude smaller. It matches human experts in citation accuracy, a task where other AI systems fail more than 90% of the time. And perhaps most tellingly, experts prefer its answers to those written by their peers.

These achievements suggest we’re entering a new era of AI-assisted research, where the bottleneck in scientific progress may no longer be our ability to process existing knowledge, but rather our capacity to ask the right questions.

The researchers have released everything—code, models, data, and tools—betting that openness will accelerate progress more than keeping their breakthroughs behind closed doors.

In doing so, they’ve answered one of the most pressing questions in AI development: Can open-source solutions compete with Big Tech’s black boxes?

The answer, it seems, is hiding in plain sight among 45 million papers.
