We need to be much more precise about what AI can and can’t do in public services


We need to be much more precise about what AI can and can’t do in public services. A lot of the hype assumes that if ‘Big Tech’ can make powerful predictions, government should be able to do the same.

But the environments aren’t comparable.

Big Tech platforms operate in deeply quantified spaces. Clicks. Dwell time. Conversion. Churn. And their goals are brutally simple: revenue, growth, capital. They optimise for what works and discard what doesn’t, including customers who don’t convert. And they aren’t subject to democratic control.

Public services are in a different world.

Outcomes? Complex, contested, often only partially measurable. Child protection. Planning. Community safety. Social care. These are not optimisation problems with a single score.

They’re ethical, legal and political judgements made under scrutiny.

AI as ‘decision support’ can:

– cluster consultation themes.

– surface comparable past cases.

– reduce fragmentation across silos.

Useful — maybe very powerful.

But all of that relies on legibility.

To synthesise, compare and predict, you need captured data. Transcripts. Case notes. Structured records. Formal artefacts. Much of the real work in public services happens in the partially illegible space: tacit judgement, political temperature, informal negotiation, contextual interpretation.

And professional human judgement is contestable — but defensible, in a recognised legal and ethical frame. However imperfect, it has legitimacy.

AI introduces a different kind of black box.

If a planning authority used AI to summarise 60,000 consultation responses on a controversial development, and that summary materially shaped the decision, how confident would you feel at Judicial Review?

The private sector can tolerate statistical improvement with occasional visible failure. Public services can’t.

‘Can AI help?’ Of course — but where does it fit?

Cognitive scaffolding in the formal layer of organisations? Yes.

Reducing fragmentation and surfacing patterns in large volumes of text? Yes.

Making trade-offs explicit? Possibly.

Replacing or materially shaping contested, high-stakes judgement under democratic constraint? I doubt it.

If you handed a tech platform responsibility for the 1,111 services of a typical unitary authority (bins, libraries, child protection, licensing) and wrapped that in statutory duties, ethical obligations, constrained budgets, legacy estates, pensions, local press and political oversight… they would certainly create a more data-rich environment. But would the promise to ‘embrace complexity’ survive contact with reality?

There is learning in both directions. Big Tech uses instrumentation and feedback at a depth most public bodies cannot match. Public services understand legitimacy, contestability and moral responsibility in a way most platforms never have to.

If we are serious about AI in government, we need to stop importing metaphors from adtech and start taking the structural differences seriously. Or we’ll keep asking the wrong questions.
