How to build trust and fairness in public sector AI
The rapid adoption of artificial intelligence (AI) has raised awareness of the ethical implications of data-driven technologies around privacy, fairness, and transparent decision-making. How can the public sector balance individual and collective benefits, and what role does ethics play in establishing trust between governments and citizens in the use of AI?
Four leaders from the Food Standards Agency (FSA), NHS Transformation, the Centre for Data Ethics and Innovation (CDEI), and UK Research and Innovation (UKRI) joined Government Transformation’s GM for Government, David Wilde, on a Government Data Show panel to discuss what public sector organisations can do to ensure that AI solutions are safe and ethical.
Striking the right balance between innovation and caution
Balancing the individual and ecosystem benefits of AI is at the core of CDEI’s role to promote responsible innovation of data-driven technologies. The centre, which is part of the Department for Digital, Culture, Media & Sport (DCMS), works with both private and public bodies that are building AI or data-driven products, or are seeking to regulate or promote innovation in those areas.
Mark Durkee, Team Lead of Public Sector Innovation at CDEI, said that organisations should not be scared of using these tools but instead “be really selective and really careful in choosing the right way of using them.”
Durkee mentioned the case of recruitment, where AI has started to help companies find the right candidates for jobs. Although the use of AI in this area presents many opportunities, he said that due consideration should be given to the varied data protection regulations of different countries, as well as ensuring that fairness is embedded in the system.
“I think lots of recruiters in the UK, with the best of intentions, are really struggling to get their head around how they deploy this in the UK context, respecting UK equality or UK data protection law, and at the same time just treating candidates fairly and appropriately,” added Durkee.
Understanding the benefits and risks of AI
AI also has great potential in healthcare. Brhmie Balaram, Head of AI Research & Ethics in the NHS AI Lab at NHS Transformation, explained that use cases include improving the patient experience, supporting the healthcare workforce and helping to run NHS systems more efficiently.
“We can use natural language processing to help read unstructured doctors' notes or deploy computer vision to support the diagnosis of diseases and conditions using images, such as X-rays and CT scans,” Balaram said. “And AI can also be used for forecasting purposes to help us make the best use of capacity and resources.”
Balaram said that, ultimately, the aim of deploying AI within the healthcare infrastructure is to ease the pressure on the NHS, the social care system and staff without compromising on quality of care. However, she added, the NHS has not yet reached the point where it can deploy AI widely: “We're still learning about the balance of benefits through evaluating mature technologies that we hope to roll out in future.”
In the case of the FSA, the agency is using AI to help inspectors in abattoirs and cutting plants to predict which businesses are most at risk of breaches and to predict food hygiene ratings. “We’re about risk, understanding it, predicting it and then minimising it,” said Julie Pierce, Director of Openness, Data & Digital, Science at the FSA. AI tools here are used “in support of the human - the human is still sitting in the room, ultimately making those decisions.”
Like her peers on the panel, Esra Kasapoglu, Director of AI and Data Economy at UKRI, agreed that there is still a need to better understand how AI is transforming society, including who is most affected by it and what the consequences of deploying any AI solution are.
“We're still scratching the surface of what AI means to us,” Kasapoglu said. “The knowledge and approach of social and behavioural sciences will be crucial in this process, because we are not fundamentally looking at AI as a technology but as a way of working; it’s a new ideology transforming our lives.”
Both the public and private sectors need to navigate the disruption brought by AI and other data-driven technologies. Kasapoglu added that it is also crucial that those developing and deploying AI have a better understanding of the dangers and unintended consequences of AI and prepare accordingly.
“We don't need everyone to become a data scientist, but we just need a little bit more understanding of the consequences and impacts, and how we can better use these technologies,” she told delegates.
Ensuring trust and confidence
While ethics should have a central role in the development and deployment of AI at any organisation, for government it is essential in order to build trust with citizens and ensure that public services are delivered fairly.
Durkee said that any AI technology used by the government should be trustworthy and trusted - which he stressed are not the same. To achieve trust, communication is paramount, he said: “You can have technology where you thought really carefully about the ethical trade-offs, carefully balanced privacy and fairness and value to the consumer, and still not have managed to explain that to anybody.”
In the public sector context, this failure to communicate can create a barrier between the government and citizens, who might be concerned about the implications of AI in their lives. In addition, a technology should be trustworthy, providing assurances around its handling of ethical issues and its mitigation of bias.
Balaram added a third notion of confidence when using AI solutions, which is of particular relevance in the healthcare context. “We are actually starting to distinguish between concepts of trust, trustworthiness and confidence when it comes to AI,” she said.
Whereas trust is a binary concept based on belief in a product or system (a product is either trusted or not), trustworthiness is the quality of a product or system that makes it deserving of trust. Although confidence is similar to trust in that it conveys a belief in the product or system, it differs in that it is not binary: confidence can vary depending on certain factors.
Balaram said: “I think [this distinction] is really important for understanding and when considering the use of AI in health and care because higher confidence is not always a desirable objective; we advocate instead for what we're calling appropriate confidence among clinicians, which is a level of confidence in an AI system that is justified depending on the technology and the clinical context.”
“For example, it may be entirely reasonable to consider a specific AI technology as trustworthy, but for the appropriate confidence and a particular prediction from that technology to be low because it contradicts strong clinical evidence, or because the AI is being used in an unusual clinical situation.”
While Balaram thinks it is important for the NHS to assure the public that it is accounting for ethical principles such as fairness and transparency, she said a significant factor in establishing trustworthiness in the use of AI within the NHS will be equipping healthcare professionals with the skills and knowledge to critically appraise AI during clinical use, as opposed to using it unquestioningly.
Uncovering human bias through AI
Pierce shared that the use of AI at FSA has helped the department shine a light on the biases that were ingrained in their systems but went unnoticed: “It became a really valuable diversion, we understood the quality of the data that we were collecting within the systems, we understood the human bias that we were applying… I think we moved from AI being a strange, magic world that we were fearful of, to viewing it as an equal in the room - it's just a slightly different approach.”
At UKRI, the public body focuses on ensuring that AI deployments are trusted, inclusive and transparent, Kasapoglu said.
“We are quickly learning that AI doesn't only scale solutions, but also risks,” added Kasapoglu. “AI systems learn from humans and so they inevitably reflect our biases and values. The questions about ethics are very uncomfortable because they require us to reflect on our own values, as well as those of the people who make the systems."
Kasapoglu said that UKRI is instigating conversations between the public and private sector, as collaboration between the two will be essential to help deliver the benefits of AI at individual and ecosystem levels. “Collaboration is the name of the game moving forward,” she said.