Setting the standards for responsible AI use in evidence synthesis

Leading organizations in evidence synthesis have united to produce a joint statement on artificial intelligence use in evidence synthesis



Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence have published a joint position statement on the responsible use of artificial intelligence (AI) in evidence synthesis. This collaborative effort is an important step in shaping how AI is integrated into the production of high-quality, trustworthy research.

Evidence syntheses, including systematic reviews, are built on the principles of research integrity. There is wide recognition that AI and automation have the potential to transform the way we produce evidence syntheses. However, this technology is also potentially disruptive. To safeguard evidence synthesis as the cornerstone of trusted, evidence-informed decision making, Cochrane has come together with other organizations to collaborate on a responsible and pragmatic approach to AI use in evidence synthesis.

The statement supports the Responsible use of AI in evidence SynthEsis (RAISE) recommendations, a framework designed to guide the ethical and transparent use of AI across the evidence synthesis ecosystem. The statement also sets out clear expectations for evidence synthesists, including transparent reporting, assuming responsibility, and ensuring that AI will not compromise the methodological rigour or integrity of their synthesis.

“This joint position statement marks a pivotal moment for the evidence synthesis community,” said Ella Flemyng, Cochrane’s Head of Editorial Policy and Research Integrity and co-convenor of the joint AI Methods Group that authored the position statement.

“By aligning Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence around the RAISE recommendations, we’re setting a clear, shared standard for the responsible use of AI in evidence synthesis. It’s a proactive step to safeguard research integrity while embracing innovation – ensuring that AI enhances, rather than undermines, the evidence we produce. This guidance empowers evidence synthesists to make informed, transparent decisions, and supports them in navigating the evolving AI landscape with more confidence and accountability.”

The statement acknowledges the opportunities and risks posed by AI, particularly large language models, and calls for human oversight, transparency, and justification when AI is used in evidence synthesis. It also urges AI tool developers to proactively align with RAISE principles, providing clear documentation and transparency around limitations and potential biases.

Published simultaneously in Cochrane Database of Systematic Reviews, Campbell Systematic Reviews, JBI Evidence Synthesis, and Environmental Evidence, the statement reflects a unified commitment to responsible innovation across the field.

Read the full statement here.

Learn about how Cochrane is advancing AI in evidence synthesis.

 
