I was chatting recently with Jeni Tennison about public sector digital work, and we were lamenting how much people still underplay the challenge of adoption.[1]
So much of the conversation about AI is a conversation about use cases. But one lesson from digital transformation over the last 20+ years is that new technologies spread achingly slowly and unevenly in the public sector, even once we find compelling ways to use them.
We also know that technologies sometimes spread counterproductively, leading to missteps that corrode trust, or waste money, or cause harm. So the challenge is not just adoption, but judicious adoption.
This isn’t only true in the public sector — diffusion in the private sector is also often slow or of poor quality. Many companies go bust before adopting new technologies that could have saved them, or they use new technologies badly. This is why, in the private sector, the cycle of creative destruction is ultimately such a powerful driver of change.
There are lots of reasons old organisations don’t adopt new technologies, or do so badly:
Low leadership capabilities
Broken marketplaces (vendor lock-in, market power, weak procurement)
Cultures that are inhospitable to innovation, or that don’t fail fast enough
Habitual behaviour/a lack of bandwidth to think about doing things differently
But Jeni and I were also reflecting that a lot has been learned from public sector digital work about how to support adoption.
So, in no particular order, here are seven lessons from work to spread internet-era technologies in the public sector, and some takeaways for AI.
This is really just a prompt for discussion. And I also recommend reading this piece from Jeni that looks at AI adoption more through a user and accessibility lens.
Seven lessons on adoption from last time
1. Some services (but not many) will improve quickly
Some services seemed to lend themselves well to internet-era technologies, and transforming these services was relatively easy. With the internet, this applied to services that were highly transactional and already centralised, like applying for a driving licence or a new passport.
Moving these services to run well on the internet didn’t take all that long (roughly 1–2 years) and dramatically improved outcomes for users while also saving money. However, most services weren’t this simple, requiring knock-on changes to back-end processes or cross-government coordination. So while these early use cases got people excited about digital transformation — and indeed in the UK they were used explicitly as exemplars — later work on other services was more difficult.
Note also Richard Pope’s point that government services often need to be more than transactional, so it’s important to go beyond utilitarian measures to meet deeper needs, e.g. for transparency, accountability, and democracy.
What is the lesson for AI?
There will probably be some services (maybe conversational or information-processing tasks that are already centralised?) that will lend themselves well to AI. These services could get a lot better quickly, which will generate excitement, but they will not be typical. When measuring the impact of this work, it will be important to factor in not just speed and convenience but also qualities like trustworthiness. Each interaction with the state is a chance not just to meet a presenting user need but to enhance (or corrode) deeper things of value, like confidence in democracy.
2. “Bibles” of good practice were very effective
GDS documented and distilled good practices in their design principles and service standards, making them easier to adopt. These “bibles” also made it easier to justify good practices in business cases, and made it possible to integrate good practice as a requirement in procurement.
The design system built by GDS proved to be one of the best ways to help the public sector use the internet better. The styles, components and patterns themselves were widely copied, saving hours of effort and making it easier to build better websites. This quickly improved the consistency, intuitiveness, accessibility, and aesthetic experience of interacting with government. These design systems were later forked and adapted by organisations in and beyond the public sector, so they also helped to set a new standard for web-based services beyond government.[2]
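To make the reuse point concrete, here is a minimal sketch of what a shared component buys you. The markup and class names follow the published GOV.UK Design System button, but the helper function itself is hypothetical, purely to illustrate the pattern:

```typescript
// A shared component encodes the accessible, consistent markup once,
// so individual service teams don't have to re-solve the same problem.
interface ButtonOptions {
  text: string;
  disabled?: boolean;
}

// Hypothetical helper: renders the standard GOV.UK-style button markup.
// The classes and data attribute match the published design system;
// the function itself is illustrative, not part of any real library.
function govukButton({ text, disabled = false }: ButtonOptions): string {
  const classes = disabled
    ? "govuk-button govuk-button--disabled"
    : "govuk-button";
  const disabledAttrs = disabled ? ' disabled aria-disabled="true"' : "";
  return (
    `<button type="submit" class="${classes}" ` +
    `data-module="govuk-button"${disabledAttrs}>${text}</button>`
  );
}

// Every service gets the same accessible, tested pattern for free:
console.log(govukButton({ text: "Save and continue" }));
```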
More widely, it was helpful when GDS had the power to enforce certain standards, i.e. by holding approval rights for digital projects. This is a good example of why driving adoption is about quality as well as quantity: stopping crappy/old-fashioned ways of spending money on digital might have increased friction in one sense, but it also tilted spend towards uses of internet-era technologies that were actually, you know, useful.
What is the lesson for AI?
There will be challenges related to AI that are similarly infrastructural, and that could be usefully solved once by an expert team at the centre of government, for example in relation to common user interfaces, accessibility or transparency and explainability. Doing this work well and in the open in an easily replicable way could be very valuable. Also: never underestimate the importance of good UX/design. And don’t forget that hard levers are often indispensable if you’re trying to drive consistent improvements in digital work across government.
3. Leadership is critical
For the first decade or so of decisive public sector digital transformation in the UK — let’s say from 2010 to 2020 — internet-era technologies spread mainly in pockets via leadership. Good public sector digital leaders understood internet-era technologies, but they also understood the ways of working they required. Plus they knew the workings of government and how to be effective inside them. They built good teams and they worked hard to protect these teams from the inhospitable system around them.
These leaders were often mid-ranking officials who were given permission — or who were at least tolerated — by a leader above them. This was a fragile mechanism for adoption because it meant that sometimes the work went backwards, or wobbled, when the leader moved on. In other cases, these pockets have endured/expanded, and many of these digital leaders went on to become advocates for contemporary management practices more generally across government. Each of these layers of leadership — permission, sponsorship, evangelism — proved important.
What is the lesson for AI?
It would be worth investing in a new generation of leaders, perhaps in mid-ranking layers, who really ‘get’ AI, i.e. who understand the things AI can do, and the things it can’t do, and the risks and ethical concerns, and the enabling conditions that good AI work requires. The test for these leaders is not: are you an impressive technologist? The test is more one of mentality: do you know how to lead a team that can use AI responsibly and ethically to deliver outcomes of public value?
Side note: Leadership is one area where we’re still not close to being finished with the last wave of disruption from internet-era technologies. The average leader in the public sector still doesn’t understand how the internet works and isn’t trained in internet-era management practices. (Despite these approaches now being decades old and well-codified, including for public service.) Making progress on this wider problem of leadership capability would probably unlock more value than a narrow push on AI.
4. Professions and communities of practice are powerful
Skills and capabilities are very important to good public sector digital work. In fact, skills and capabilities are so important that, with internet-era technologies, it was often helpful to start by looking at the work through this lens, asking: what new skills and capabilities do we need?
An example is the DDaT professional framework, which was effective at codifying the new skills required in government, from engineering to design, for example in role profiles. This made it quicker and easier for government departments to hire the skills they needed, and it helped people working in these roles to know what good looked like and how to progress.
The formal codification of DDaT professions was complemented by more informal or social mechanisms like communities. This was especially powerful when formal practices combined with informal communities in ‘communities of practice’, e.g. see this from Emily Webber.
Examples include Sarah Winters’ early work to build content design as a discipline, or later work on design driven by Lou Downe, and still later work on data science led by Laura Gilbert. Thriving professions and communities drew new skills into the public sector and also helped create and maintain cultures conducive to good work by those disciplines.
Notice how DDaT communities worked hard to define a civic version of the discipline in question. For example, the public sector design discipline foregrounded accessibility and ethics. These professions later became engines of change in their own right and their influence now runs well beyond digital work — and beyond the public sector — again especially with design and the rise of adjacent disciplines like policy design.
Side note: A big blocker to the diffusion of internet-era technologies has been that ‘non-digital’ disciplines — finance, procurement, programme management, etc — were often slow to understand the new technologies. These wider professions oversaw (and still oversee) processes that were/are incompatible with the management practices that good digital work requires. So upskilling neighbouring disciplines is also vital.
What is the lesson for AI?
It would be useful to ask: what skills and capabilities do we need for responsible civic applications of AI? What new professions do we need? And what new communities? (The government AI community is a promising start.) It would make sense to:
Codify the skills/capabilities required for AI in formal ways, e.g. in role profiles and progression frameworks, making it easy for public bodies to know what good looks like for hiring, development, progression
Use communities of practice, or mechanisms like work shadowing, to forge the human connections that are helpful for developing tacit knowledge, peer-learning, etc. (Including by supporting existing networks like UKGovcamp.)
Upgrade wider civil service professions to include the skills required for civic applications of AI, and don’t forget to make sure the basic requirements of digital work are included while you’re at it
Consider building this work around the notion of ‘Civic AI’ — the discipline of using AI-related technologies responsibly and ethically for public good
5. Make space to co-design technology with practitioners
People adopt new technologies when they’re useful and when the terms of use feel acceptable — whether that’s public sector workers using new technologies in their day jobs, or citizens using them in their lives. (Or at least, that’s true when there aren’t big barriers to adoption, such as not knowing how to use a computer, or not having broadband, or having to integrate the technology with an incompatible legacy system.)
The best way to make technology useful is to build it with deep attention to user needs. This means getting close feedback from, or co-designing with, the people who will use the technology, for example people from the relevant frontline profession.
An example is Oak National Academy, who have created digital materials for teachers. Their materials saw a big uptick in use during the pandemic, when they were often used by parents trying to teach children at home, and are now used by many thousands of teachers.
Oak’s materials are good because they are informed by their deep understanding of pedagogy, and it helps that teachers trust them (probably more than they trust DfE). It also helps that teachers can use the materials relatively easily, i.e. there aren’t big barriers to using a lesson planning tool, as there would be with, for example, a digital patient record in the NHS, which has to integrate with other systems. (Note that this means it’s also worth working hard to remove blockers like these, to ungum systems from sticky dependencies on legacy technology.)
What is the lesson for AI?
It would probably be worth funding a network of centres of expertise, or incubators, where AI technologists can work hand-in-hand with practitioners — teachers, doctors, early years educators — to build useful AI applications. This might work best where adoption can happen organically via decisions made by professionals, and less well when there are major systemic barriers to adoption (e.g. with back-end systems).
6. Cross-government platforms can work if you nail it (but this is very context-dependent)
Lots of lessons were learned from attempts to build cross-government platforms, from successes like Notify and Pay, and from struggles like Verify and data registers.
Or maybe I should say ‘lessons should have been learned’, because I’m not 100% sure we’ve yet distilled these lessons, or found enough time to reflect on them, to inform the approach to AI.
For a flavour of these lessons — and to see the strength of feeling they inspire — see the exchanges in response to this post on Bluesky, which are worth reading in full.
One thing that’s clear is that all platforms are different. The challenge of developing a single verification process, for example, is difficult in a way that sending out notifications isn’t. Or at least it’s a different kind of difficult.
At the risk of being horribly reductive, I guess some high-level lessons from attempts to build cross-government platforms might be:
Good user research is very, very important
Some decisions are ultimately political / policy choices, as opposed to being technology choices — especially when it comes to really core infrastructure like verification. (This will be true of much of the data infrastructure required by AI.)
Data is critical and is often underestimated (especially unsexy aspects of data, like data infrastructure, architecture, standards and metadata, hence the uphill battle that was fought on registers; see the sketch after this list)
Sometimes things that seem obvious on the face of it turn out to be more hassle than they’re worth. An example would be the efforts to create a single shared list of countries.
When work becomes highly political, backing and strategic clarity from Ministers is vital
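To ground the point about registers: the whole value of a canonical register is that every service validates against one authoritative list rather than maintaining its own drifting copy. A minimal sketch follows, with entirely hypothetical data and API, loosely inspired by the old GOV.UK country register:

```typescript
// Hypothetical sketch of a canonical register: one authoritative list,
// one validation path, instead of every service keeping a local copy.
interface RegisterEntry {
  key: string;      // canonical identifier, e.g. an ISO 3166 code
  name: string;     // display name
  endDate?: string; // set when an entry stops being current
}

// Illustrative data only. Maintaining the real list (governance, updates,
// history) is the hard, unsexy part that gets underestimated.
const countryRegister: RegisterEntry[] = [
  { key: "GB", name: "United Kingdom" },
  { key: "FR", name: "France" },
  { key: "CS", name: "Czechoslovakia", endDate: "1992-12-31" },
];

// Services check the shared list, so "country" means the same thing
// in every form, dataset and downstream AI pipeline.
function isCurrentCountry(key: string): boolean {
  return countryRegister.some((e) => e.key === key && e.endDate === undefined);
}

console.log(isCurrentCountry("GB")); // true
console.log(isCurrentCountry("CS")); // false: a historical entry
```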
If you want to read more about the lessons from past attempts to build government platforms, here are some useful links:
A history of Notify (h/t @jenit.bsky.social)
A take on why data registers failed from David Durant (h/t @brendanarnold.bsky.social)
A thread from @bm.wel.by on some lessons from ‘government as a platform’
What is the lesson for AI?
A tricky one! I guess those issues about data quality become especially relevant. We underplay data quality and data infrastructure at our peril. Plus don’t overestimate public sector maturity on data infrastructure — it’s 2024 and there is still no single register for public sector entities, and only patchy, incompatible adoption of such basic infrastructure as electronic patient records (not for want of trying).
A more general lesson, again, is that good upfront user research is very important (and my hunch is that we’re currently under-egging this with respect to AI). If you want to know the most promising use cases for AI, the best way to find out is to spend far more time with users, e.g. public sector workers and citizens trying to carry out a task. Beyond this, the lessons for different applications of AI will be different. Some applications of AI will be very political, with lots of dependencies (as was the case with verification), whereas in other cases (as with Notify) it might be enough to build tools that are useful.
7. Broken markets / big incumbents will screw things up
The last lesson might be the most important: the broken market for digital services in government, and the dominance of incumbents who didn’t know what they were doing, was a huge barrier to adoption of internet-era technologies.
One of the main things that slowed down the adoption of internet-era technologies is the way that government bodies were locked into contracts with incumbents like CGI, Fujitsu, etc. A short list of big firms sold bad products and services, and people making procurement decisions either (a) didn’t know any better, or (b) were locked in by other choices made years before, or (c) felt they didn’t have other options. And these legacy providers are still at it, signing new contracts and extending old ones.
With internet technologies, boring work to improve procurement proved really important. That included approaches to procurement that bring in user testing, so that your staff can tell you how awful that BigTechCorp HR system really is before you sign the contract. It also meant offering smaller contracts that were more accessible to modern digital studios, and of course the Digital Marketplace helped with this. Plus informal networks like UKGovcamp helped to connect new digital providers into the community of DDaT professionals in government.
What is the lesson for AI?
Don’t forget the power of incumbents to slow down the adoption of new technologies, even when they claim to be helping. Don’t underplay the unsexy issue of procurement capabilities, and the legal capabilities required to renegotiate and exit legacy contracts. And make another push on the Digital Marketplace, especially in those areas of government — e.g. local government — that are still tightly locked into legacy technologies.
So, what is the takeaway? I guess: try to focus as much on adoption as you do on use cases (ideally much more). Also: there is no silver bullet for AI adoption, there is no silver bullet for AI adoption, there is no silver bullet for AI adoption, etc.
But more helpfully: we know a lot about how to speed up adoption in sensible ways. It requires bold but patient work using the kinds of mechanisms described above, pushing on many fronts, over many years. And if you do this work properly, it can be transformative over 5–10 years.
And if you want a reminder of our repeatedly breathless optimism on new technologies, I recommend an afternoon reading this lovely archive of old reports on ‘e-government’. The Green Paper published on a CD-ROM is a highlight. h/t Jerry Fishenden for maintaining the collection.
As ever, putting this out there as a provocation. Any mistakes are mine and all the smart observations were inspired by Jeni. Interested if any of it resonates or jars or if anything big is missing.
Footnotes
[1] This post puts to one side the obviously important debate about the value/risks of driving AI adoption — ethics, privacy, trustworthiness, etc — which lots of people are already writing about.
[2] Thanks to Steve.rss for pointing out that the forms guidance in News UK’s design system — which powers The Sun, The Times, Wall Street Journal and other publications — was based on the GOV.UK Design System. This is just one of many examples.
For more on similar themes, here’s a post on a report Nesta commissioned from Public Digital, The Radical How, looking at the application of contemporary management practices in government. And a slightly out-there post on freedom in the age of autonomous machines. You can follow my writing on Bluesky, Medium, or Substack. Oh, and here’s how to leave Twitter if you haven’t already.