Government AI to be more ethical, transparent & accountable

The Ethics, Transparency and Accountability Framework for Automated Decision-Making has been released to improve the general literacy within government around the use of automated or algorithmic decision-making. It builds on A Guide to Using AI in the Public Sector published by the Government Digital Service (GDS) and the Office for Artificial Intelligence last year.

The seven-point framework is intended to help government departments with the "safe, sustainable and ethical" use of AI-based decision-making systems:

  1. Test to avoid any unintended outcomes or consequences
  2. Deliver fair services for all of our users and citizens
  3. Be clear who is responsible
  4. Handle data safely and protect citizens’ interests
  5. Help users and citizens understand how it impacts them
  6. Ensure that you are compliant with the law
  7. Build something that is future proof

“Under data protection law, for fully automated processes, you are required to give individuals specific information about the process. Process owners need to introduce simple ways for the impacted person(s) to request human intervention or challenge a decision,” the framework says.

“When automated or algorithmic systems assist a decision made by an accountable officer, you should be able to explain how the system reached that decision or suggested decision in plain English.”

The framework also explicitly acknowledges that “algorithms are not the solution to every policy problem”, and that public authorities should consider whether using an automated system is appropriate in their specific contexts before pressing forward with their deployment.

“Scrutiny should be applied to all automated and algorithmic decision-making. They should not be the go-to solution to resolve the most complex and difficult issues because of the high risk associated with them,” says the framework, adding that the risks associated with automated decision-making systems are highly dependent on the policy areas and context they are being used in.

“Senior owners should conduct a thorough risk assessment, exploring all options. You should be confident that the policy intent, specification or outcome will be best achieved through an automated or algorithmic decision-making system.”


The framework must be adhered to when public sector bodies are working with third parties, requiring early engagement to ensure it is embedded into any commercial arrangements.

Although the framework includes examples of how both solely automated and partly automated decision-making systems are used in workplaces (for example, to decide how much an employee is paid), the principles themselves do not directly address the effects such systems can have on workplace dynamics.

The new framework has been developed in line with guidance from government (such as the Data Ethics Framework) and industry, as well as relevant legislation. It supports the priorities of the Central Digital and Data Office, and aligns with wider cross-government strategies in the digital, data and technology space.

Departments are advised to "use the framework with existing organisational guidance and processes". Last year, Natalia Domagala, Head of Data Ethics at GDS, noted that there was "little awareness" of existing data guidelines across government.
