Cambridge tackles AI research ethics at higher ed summit

Cambridge's Managing Director of Academic Publishing joined education leaders to debate integrity and artificial intelligence


Cambridge University Press & Assessment is a leader in creating AI research ethics policies, with the organisation's first policy going live in March 2023 - and work in this space has not stopped since.

At the Times Higher Education (THE) Campus Live on 6 December 2023, Mandy Hill, Managing Director of Academic at Cambridge, detailed the development of the policy and outlined the challenges and opportunities of AI for the research community.

The Times Higher Education event brought together higher education leaders with industry partners and policy experts to debate and plan how to achieve institutional success.

photo of THE Campus banner
We saw a need, so we went for it.
Mandy Hill, Managing Director, Academic

Hill spoke on a hybrid panel exploring how university leaders can adapt institutional policy to make constructive use of generative AI tools, such as ChatGPT, while responding to issues that arise.

Mandy Hill, Managing Director of Academic, Cambridge, joined the panel discussion online

With Jill Matheson of the UK Committee on Research Integrity and Michael Webb of Jisc, the UK digital, data and technology agency, Hill discussed:

  • How to build the freedom to use AI into policy
  • Creating effective policies and guidelines to maintain integrity
  • Addressing reliability and bias in AI

Sara Custer, moderator of the discussion and editor of THE Campus, asked Mandy Hill: "Cambridge has very much been a first mover in the AI policy development space. How were you able to act so quickly and what gave you the gumption to put this policy out there?"

Mandy Hill said: "We had a proactive team that identified the need for this policy - and authors were asking for guidance on the question of ethical AI use in research."

"We saw a need, so we went for it. "

Proliferating policies

One challenge for researchers is where to look for accepted policy, when institutions, publishers and professional bodies, in the UK and internationally, all publish guidelines. Matheson called for policy clarity and alignment, and Webb argued for the development of universal principles for AI use rather than detailed policy.

Mandy Hill set out Cambridge's stance on researchers' responsibilities. She explained: "Our attitude is: where the AI tool can help a researcher articulate their research better, why not let them use it? Researchers have been using tools to help them for years. Our policy accepts the use of AI tools and recognises this may be helping them."

"But the ethical use of AI is conditional. It has to be declared that AI is being used, and while it can be used, it cannot be named as an author. The named author needs to take full accountability for what they publish."

"Most of us don't really know what's training these tools and models. Researchers using generative AI tools have to really ask themselves: can they back up the data being generated?"

Quality matters so much to us. That's why the peer-review process is still so important.
Mandy Hill, Managing Director, Academic

Importance of regulation

The panel discussed the need for regulation in response to the worsening problem of 'paper mills' in academic publishing - organisations that sell fake academic research papers.

The panellists also noted a risk that regulatory approaches to AI in the UK and Europe could diverge, creating potential difficulties for researchers collaborating on international projects.

Enforcing biases

One of Mandy Hill's comments that drew the most agreement from the audience concerned the limitations of AI. The datasets that generative AI models are trained on reflect the material available, and because more English-language data has been fed into these models, generative AI continues to perform more strongly in English than in any other language.

It's important that people understand AI is not an all-encompassing tool. It's enforcing western biases.
Mandy Hill