UK government failing to list use of AI on mandatory register | Artificial intelligence (AI)

by Pelican Press

Not a single Whitehall department has registered the use of artificial intelligence systems since the government said it would become mandatory, prompting warnings that the public sector is “flying blind” about the deployment of algorithmic technology affecting millions of lives.

AI is already being used by government to inform decisions on everything from benefit payments to immigration enforcement, and records show public bodies have awarded dozens of contracts for AI and algorithmic services. A contract for facial recognition software, worth up to £20m, was put up for grabs last week by a police procurement body set up by the Home Office, reigniting concerns about “mass biometric surveillance”.

But details of only nine algorithmic systems have so far been submitted to a public register, with none of a growing number of AI programs used in the welfare system, by the Home Office or by the police among them. The dearth of information comes despite the government announcing in February this year that the use of the AI register would now be “a requirement for all government departments”.

Experts have warned that if adopted uncritically, AI brings potential for harms, with recent prominent examples of IT systems not working as intended including the Post Office’s Horizon software. AI in use within Whitehall ranges from Microsoft’s Copilot system, which is being widely trialled, to automated fraud and error checks in the benefits system. One recent AI contract notice issued by the Department for Work and Pensions (DWP) described “a mushrooming of interest within DWP, which mirrors that of wider government and society”.

Peter Kyle, the secretary of state for science and technology, has admitted the public sector “hasn’t taken seriously enough the need to be transparent in the way that the government uses algorithms”.

Asked about the lack of transparency, Kyle told the Guardian: “I accept that if the government is using algorithms on behalf of the public, the public have a right to know. The public needs to feel that algorithms are there to serve them and not the other way around. The only way to do that is to be transparent about their use.”

Big Brother Watch, a privacy rights campaign group, said the emergence of the police facial recognition contract, despite MPs warning of a lack of legislation to regulate its use, was “yet another example of the lack of transparency from government over the use of AI tech.”

“The secretive use of AI and algorithms to impact people’s lives puts everyone’s data rights at risk. Government departments must be open and honest about how they use this tech,” said Madeleine Stone, the group’s chief advocacy officer.

The Home Office declined to comment.

The Ada Lovelace Institute recently warned that AI systems might appear to reduce administrative burdens, “but can severely damage public trust and reduce public benefit if the predictions or outcomes they produce are discriminatory, harmful or simply ineffective”.

Imogen Parker, an associate director at the data and AI research body, said: “Lack of transparency isn’t just keeping the public in the dark, it also means the public sector is flying blind in its adoption of AI. Failing to publish algorithmic transparency records is limiting the public sector’s ability to determine whether these tools work, learn from what doesn’t, and monitor the different social impacts of these tools.”

Only three algorithms have been recorded on the national register since the end of 2022. They are a system used by the Cabinet Office to identify digital records of long-term historical value, an AI-powered camera being used to analyse pedestrian crossings in Cambridge, and a system to analyse patient reviews of NHS services.

But since February there have been 164 contracts with public bodies that mention AI, according to Tussell, a firm that monitors public contracts. Tech companies including Microsoft and Meta are vigorously promoting their AI systems across government. Google Cloud funded a recent report that claimed greater deployment of generative AI could free up to £38bn across the public sector by 2030. Kyle called it “a powerful reminder of how generative AI can be revolutionary for government services”.

Not all the latest public sector AI involves data about members of the public. One £7m contract with Derby city council is described as “Transforming the Council Using AI Technology”, and a £4.5m contract with the Department for Education is to “improve the performance of AI for education”.

A spokesperson for the Department for Science, Innovation and Technology confirmed the transparency standard “is now mandatory for all departments” and said “a number of records [are] due to be published shortly”.

Where is the government already using AI?

  • The Department for Work and Pensions has been using generative AI to read more than 20,000 documents a day to “understand and summarise correspondence”, after which the full correspondence is shared with officials for decision-making. It has automated systems for detecting fraud and error in universal credit claims, and AI assists agents working on personal independence payment claims by summarising evidence. This autumn, DWP started deploying basic AI tools in jobcentres, allowing work coaches to ask questions about universal credit guidance in an attempt to improve the effectiveness of conversations with jobseekers.

  • The Home Office deploys an AI-powered immigration enforcement system, which critics call a “robo-caseworker”. An algorithm is involved in shaping decisions, including returning people to their home countries. The government describes it as a “rules-based” rather than AI system, as it does not involve machine-learning from data. It says it brings efficiencies by prioritising work, but that a human remains responsible for each decision. The system is being used amid a rising caseload of asylum seekers who are subject to removal action, now at about 41,000 people.

  • Several police forces use facial recognition software to track down suspected criminals with the help of artificial intelligence. These have included the Metropolitan police, South Wales police and Essex police. Critics have warned that such software “will transform the streets of Britain into hi-tech police line-ups”, but supporters say it catches criminal suspects and the data of innocent passersby is not stored.

  • NHS England has a £330m contract with Palantir to create a huge new data platform. The deal with the US company, which builds AI-enabled digital infrastructure and is led by Donald Trump backer Peter Thiel, has sparked concerns about patient privacy, although Palantir says its customers retain full control of the data.

  • An AI chatbot is being trialled to help people navigate the sprawling gov.uk government website. It has been built by the government’s digital service using OpenAI’s ChatGPT technology. Redbox, another AI chatbot for use by civil servants in Downing Street and other government departments, has also been deployed to allow officials to quickly delve into secure government papers and get rapid summaries and tailored briefings.


