
What do charity trustees need to know about AI?

Trustees have to keep an eye on both short-term and long-term risks and opportunities. So it’s natural they would take an interest in AI, as the technology is being hailed as both the opportunity of the century and an existential risk.

Amid all the hype, what do trustees need to know? Thanks to the support of the Clothworkers’ Company, we recently held a discussion on this question with four speakers: Tris, Kieran (Mishcon de Reya), Yasmin (JRF) and Gabriela.

Here is a quick summary of their perspectives. (Please note that this is based on notes taken at the event – any errors are mine, not the speakers’.)

Tris: Think about strategy to decide where to focus first

When we assess new digital technologies, we tend to overestimate the short-term benefits and underestimate the long-term risks.

There are many potential benefits to AI:

  • freeing up staff time
  • growing skills
  • pattern recognition in data
  • productivity gains

But there are bigger-picture risks. Namely, who owns the data behind AI? And are there biases embedded in that data? These risks must be weighed against the benefits.

Many organisations are looking at AI because they don’t want to ‘miss the boat’. But many lack the capacity, in both staff and governance teams, to manage it properly.

AI, particularly generative AI, is just one of many digital technologies to emerge in recent decades – and charities are still in the process of adopting those earlier technologies.

Trustees need to think about where technology fits into their strategy more generally in order to decide where to focus first.

Kieran: Don’t look at AI in a vacuum

The key reason AI is daunting for trustees is the pace of change. Every day brings a ground-breaking development – it can be a lot to keep up with if it’s not your day job. And it could still turn out to be a fad.

But the starting point for trustees now has to be their legal duties. The guiding principle is promoting the charity’s purpose – does AI do this, either directly or by creating efficiencies? There are no hard and fast rules here.

There are four key risks to be aware of:

  • Data collection laws and GDPR compliance
  • Infringement risks from AI-generated work
  • Discrimination in outputs
  • Hallucinations or inaccuracies – a particular risk when charities rely on AI to analyse impact data.

These risks can be managed. Take a use-case approach to adopting AI. Check who owns the data. Keep policies under review.

But it’s important not to look at AI in a vacuum. It interacts with other aspects of your organisation.

Some practical steps you can take include:

  • Carry out a skills audit of the organisation. If the team doesn’t have the AI skills, can they be upskilled?
  • Nominate a dedicated board member to focus on AI
  • Seek external advice

Remember, using external tools doesn’t remove trustees’ responsibility for decision-making.

Read more from Kieran and Mishcon de Reya – Navigating the AI landscape: A guide for charities on opportunities, risks and compliance.

Yasmin: Don’t focus just on the tech experts

At the moment, the narrative being pushed by AI companies and government is adoption – everyone needs to adopt AI. But the idea of chasing ‘low-hanging fruit’ for quick gains is really short-sighted.

Crucially, it misses the essential truth that technology is situated in the broader systems of society, politics, environment, and power.

If you’re a trustee of a charity with an environmental or social mission, you need to think about how these tools really affect that mission and whether they align with it.

At JRF, we have been working on the ‘AI for public good’ programme. This includes looking at AI in the public sector.

What has been important about this work is that it doesn’t just focus on technical experts: it brings together people who care about social and environmental justice but may not have thought about the technology before. These sorts of conversations can really cut through the hype.

But one major challenge for real debate is that the language around AI is very imprecise. To start with, there’s no simple agreed definition of AI, and no shared understanding of the tools and their functions. So making sure everyone is on the same page can be difficult.

Gabriela: Think of AI as a trainee

We were able to use AI to increase donations and reduce the amount of mail we were sending supporters. This was based on machine learning models: starting in 2020, we analysed our donor base to improve targeting.
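
As a rough illustration of what that kind of donor-targeting model can look like, here is a minimal sketch in Python. It assumes a simple tabular donor dataset; the file, column names, features and the 0.2 response threshold are all hypothetical, not the model described at the event.

    # Minimal sketch of a donor-propensity model for mail targeting.
    # The file, column names and the 0.2 threshold are illustrative
    # assumptions, not the model described in the talk.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    donors = pd.read_csv("donors.csv")  # one row per supporter (hypothetical file)
    features = donors[["months_since_last_gift", "gifts_last_2y", "avg_gift_amount"]]
    responded = donors["responded_to_last_mailing"]  # 1 = donated after mailing

    X_train, X_test, y_train, y_test = train_test_split(
        features, responded, test_size=0.2, random_state=0
    )

    # Fit the model and check it generalises to held-out supporters.
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

    # Mail only supporters with a reasonable chance of responding,
    # cutting mail volume without (in principle) cutting donations.
    donors["p_respond"] = model.predict_proba(features)[:, 1]
    mailing_list = donors[donors["p_respond"] > 0.2]

The point Gabriela makes still applies: a model like this only narrows who you contact. Setting the threshold – and the red lines around its use – remains a human judgment.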

In October last year, we joined an AI collaboration with 9 other charities. It’s important to keep in touch with other charities grappling with the same problems.

When we were working on implementation, we had two main deliverables:

  • A roadmap that embraces our mission and values
  • A policy reviewed by our data board of leaders and trustees.

Sitting down with trustees and leaders was really important in clarifying red lines – why are we using AI? What won’t we be using AI for? What’s worked really well is engaging with people throughout the organisation.

When we use AI internally, we think of it as a trainee. You’re the subject matter expert, and you need to train it up. But it can help with simple tasks: summarising emails, booking leave, and so on.

Trustees need to stay informed and ask difficult questions – things may have been missed. Be aware of the risks, but don’t be afraid of AI. You may already have systems with AI plug-ins, so you don’t need to start from scratch.


Our next event for trustees asks: how can trustees be more adaptive and systemic? Join us online for free on 8 May at 12.30pm. Find out more and register now.
