Cochrane has confirmed the two artificial intelligence (AI) tools selected to take part in our innovative platform study evaluating how AI could support and enhance key stages of evidence synthesis.
About the selected tools
The selected tools are Laser AI and Nested Knowledge. They were selected from a pool of 48 submissions received in late 2025, following a selection process aligned with the Responsible use of AI in Evidence Synthesis (RAISE) principles.
“We’re thrilled that Laser AI has been selected as one of the tools for this important study - the timing couldn’t be better, and Cochrane is the right partner to lead such an evaluation. At a moment when AI agents are rapidly reshaping workflows, we’re especially excited to benefit from Cochrane’s decades of methodological leadership and contribute to achieving a better understanding of how these technologies can support evidence synthesis.” – Artur Nowak, Co-founder of Evidence Prime
"We are inspired by Cochrane's mission of supporting trusted, timely, evidence-driven decisions. The Cochrane Library is the leading collection of exactly the types of high-quality evidence syntheses that Nested Knowledge was built to support. We look forward to learning about the best practices Cochrane experts deploy when using AI-enabled systems, as well as their vision for the future of evidence synthesis." – Kevin Kallmes, Chief Executive Officer and Cofounder, Nested Knowledge
Five additional tools remain on a reserve list and may be incorporated later as the platform study evolves.
As part of entering into formal agreements with the developers, Cochrane has provided a minimum contribution for their participation in the study. This enables us to run the study under a legally binding contractual agreement to ensure our expectations and standards are met on aspects including data protection and intellectual property.
What does this mean for Cochrane authors?
The selection of these tools is not a formal Cochrane endorsement. Our position, as outlined in our published position statement, is that Cochrane authors can use AI tools as long as they can demonstrate that doing so will not compromise the methodological rigor or integrity of their synthesis. The study-within-a-review protocol in this innovative platform study is our approach to evaluating this for the reviews included in the study.
For Cochrane authors who wish to use AI tools, we advise that they follow the RAISE recommendations and guidance, in particular, the third paper in the collection (RAISE 3) that offers guidance on selecting and using AI evidence synthesis tools. Cochrane authors could apply this to Laser AI, to Nested Knowledge, or any other AI tool.
To help systematic reviewers navigate this, new guidance released in March 2026 includes an overview of how AI is being used in different types of tools at different stages of the review process, alongside recommendations on their use.
Categories of recommendation for AI use, dependent on the systematic review task and AI tool class. Reproduced from Table 1 of 'Responsible use of AI in Evidence Synthesis (RAISE 2026) 3: selecting and using AI evidence synthesis tools'
| Recommendation | Description |
| --- | --- |
| Acceptable for use | AI outputs may be used directly within the review workflow, if any limitations or potential biases are acknowledged and accounted for. |
| Human verification required | AI outputs may be used to support review tasks but must be carefully checked by humans before use. The degree of checking required may vary, but typically this will require a human to read and possibly make amendments to the entirety of the output. |
| Requires validation within the review | AI outputs may be used if their performance is explicitly evaluated within the context of the review itself and deemed adequate (e.g. comparable to human performance). |
| Exploratory and supplementary use | AI outputs may be used for developing ideas or as a starting point to support understanding. All outputs should be extensively refined by human reviewers prior to use for a review task. Alternatively, outputs may be appropriate for use as an additional, supplementary approach, but without replacing established processes. |
| Not acceptable for use | The current state of technology means that these AI outputs have such serious limitations that they should not be relied upon. |
Current state of AI tools for evidence synthesis (February 2026). Reproduced excerpt from Table 2 of 'Responsible use of AI in Evidence Synthesis (RAISE 2026) 3: selecting and using AI evidence synthesis tools'
| Task | Tool class | Details and considerations | Example tools | Recommendation |
| --- | --- | --- | --- | --- |
| Writing a protocol | | | | |
| Question formulation | Generative LLMs | Asking LLMs to provide novel questions for synthesis may support early question development. However, suggestions may be incomplete, irrelevant, subject to bias (based on its sources), or overlap with past reviews. | ChatGPT, CoPilot, Claude, Gemini, DeepSeek | Human verification required |
| Drafting | Generative LLMs | Pre-trained LLMs can provide an outline using well-established protocol formats. Users may also provide a format / direct the LLM to resources to support this. | ChatGPT, CoPilot, Claude, Gemini, DeepSeek | Human verification required |
| The search | | | | |
| Exploring the literature | Unsupervised | Topic modelling tools aid in identifying clusters of evidence quickly to get a sense of key themes/areas of interest. | Carrot2 | Acceptable for use |
| | Agentic AI | AI agents develop, refine, and perform searches based on natural language queries. Highly dependent on the data sources the tool has access to, and requires human input at each stage to guide the agent. May be helpful to gain a sense of the literature at an early stage but should not be used as part of any formal evidence retrieval. | Undermind, Elicit, Asta Find Papers | Acceptable for use |
| Search strategy development | Rule-based | Tools analyze frequency of keywords and/or controlled vocabularies in search results. Specialized tools are required to cover indexing from different bibliographic databases. May provide additional keywords to inform search strategy but should be used in combination with other search development methods. | Yale MeSH Analyzer, TERA WordFreq, PubReMiner, SearchBuildR | Acceptable for use |
In addition, we’ve distilled the key considerations from the responsible handover of AI framework in RAISE 3 into a much simpler list that systematic reviewers can apply to critically assess AI tools and decide whether to use them. This includes how to assess and select a tool, with attention to ethical, legal, and regulatory considerations.
Looking ahead
Within Cochrane’s AI tool platform study, the research team and two Cochrane review author teams have started training on Laser AI and Nested Knowledge. Once these teams are onboarded, we will pilot the protocol with them before bringing further Cochrane review author teams into the study. We are aiming for interim analyses mid-year and results in the latter part of 2026.
Alongside this, we are focusing on improving AI literacy and promoting best practice for responsible AI use across the Cochrane community by collaborating with the joint AI Methods Group and on partner projects like Destiny (which is focused on Digital Evidence Synthesis Tool INnovation for Yielding Improvements in Climate & Health).
We are investing in developing guidance, training and resources for systematic reviewers, editors and the wider evidence synthesis community, which will be shared as they become available.
Part of the AI tool platform study was supported by Wellcome Trust grant number 323143/Z/24/Z.
Find out more
Read more about the selection process and how Cochrane values were applied: Cochrane launches innovative study to assess AI tools for evidence synthesis