
Use of generative AI in funding applications

This policy sets out our position on the use of generative AI tools in funding applications and peer review of funding.


Summary of changes

This policy was updated on 26/11/2025 with the following change:

  • Section 4.2. Confirmed the revision of our confidentiality agreements for peer reviewers.

1. Purpose

This policy sets out our position on the use of generative artificial intelligence (AI) tools in funding applications to us.

2. Scope

This policy applies to everyone involved in funding applications to us or in the review of our funding, including researchers and their teams, host institutions and our peer reviewers of funding applications.

3. Definitions


Generative AI

A type of artificial intelligence system that identifies patterns and structures in data and generates novel content, such as text, images and other media, in response to instructions (‘prompts’).

Host institution

The university, institution or other organisation at which some or all of the research funded under a funding application to us will be carried out.

Research integrity

As a signatory to the UK’s Concordat to Support Research Integrity, we use the definition and description of research integrity set out in that document, with its core elements of honesty, rigour, transparency and open communication, care and respect, and accountability.

4. Key points

We aim to ensure the researchers we fund can engage with, and benefit from, the opportunities of generative AI tools such as ChatGPT, for example to support the generation of computer code, while protecting against potential ethical, legal and integrity issues so as to maintain the high standards of the research we fund.

For further guidance, researchers should refer to our guidance for researchers on the use of generative AI (PDF, 386 KB).

4.1. Requirements for our funding applicants

We advise researchers and their teams to exercise caution when using generative AI tools to develop their funding applications, and to stay up to date with the policies and guidance of their institutions, funders and the sector.

Our funding applicants must do the following:

  • Support the highest levels of research integrity as set out in our Research Integrity: Guidelines for Research Conduct.

  • Ensure generative AI tools are used in accordance with relevant legal and ethical standards, including data privacy standards, where those standards exist or as they develop.

  • Use generative AI tools responsibly to ensure the originality, validity, reliability and integrity of outputs created or modified by generative AI tools. This includes ensuring funding applications contain accurate information and do not contain false or misleading information.

  • Correctly and explicitly attribute outputs from generative AI tools in funding applications or research by listing the generative AI source where practicable, naming the specific model(s) and software used, and specifying how content was generated (for example, by listing the prompt used).

  • Adhere to host institution policies on the use of generative AI tools, particularly those concerning plagiarism and fabrication.

When approving a funding application submission to us, host institutions must take responsibility for ensuring the funding application content is not in breach of our Research Integrity: Guidelines for Research Conduct.

4.2. Requirements for our peer reviewers

To ensure we fund the best quality science and researchers, we operate a robust, rigorous and confidential peer review process to make funding decisions.

As set out in our Code of Practice for Funding Panels and Committees (PDF, 213 KB), our funding committee and panel discussions, papers and correspondence relating to applications for funding are strictly confidential.

Our peer reviewers must not do the following:

  • Input any content from a funding application to us into generative AI tools. This content is provided to you by us in your capacity as a peer reviewer, and sharing it with a generative AI tool constitutes a breach of our peer review confidentiality and research integrity requirements.

  • Use generative AI tools to formulate or edit your peer review critiques for funding applications or reviews. We have revised our confidentiality agreements for peer reviewers to clarify this prohibition, which reflects confidentiality and intellectual property concerns around the sharing, viewing and use of data entered into generative AI tools.

Our peer reviewers are required to sign and submit a revised confidentiality agreement confirming compliance with the confidential nature of the review process, including that they will not upload or share content or original concepts from a funding application to us, or from a peer review critique, with generative AI tools.

4.3. Actions we may take if our requirements are breached

Where there has been a potential breach of this policy, the following applies:

  • We must be informed as soon as possible of the issues identified, via policies@cancer.org.uk. We recognise this is a fast-evolving field and wish to work with and support our researchers and reviewers, and to learn from the issues raised.

  • We retain the right to apply sanctions under our Research Integrity: Guidelines for Research Conduct, which may include discontinuing a funding application or funding activities, restricting participation in future peer review, or applying other sanctions at our discretion.

4.4. Policy updates

Given the speed at which generative AI tools are developing, we intend, in collaboration with other research funders, to monitor developments and update this policy from time to time as appropriate.

5. Support and advice

For any queries about this policy, please contact policies@cancer.org.uk.

For more information, please see the following linked documents: