I suspect one reason we’re struggling to govern in the 21st century is that we’ve seen a shift in the character of social problems.
If you think about the defining problems of the 20th century, many were technocratic in nature, or lent themselves to technocratic solutions. I’m thinking of challenges like:
Extending life expectancy by tackling acute diseases
Sustaining high employment and mitigating the scars of recessions
Insuring waged labourers against fluctuations in earnings
By contrast, many of today’s biggest social problems feel irreducibly human, or inherently relational or experiential. We lose something vital if we specify them in technical terms. I’m thinking of challenges like:
Healing the mental health of our children
Tackling loneliness and a loss of status and belonging
Caring for very old people and helping people to have good deaths
No doubt I’m being selective. But even if we aim off for that, I think we see the same pattern. In fact I’d go further and say that this shift in the character of problems isn’t an accident; it follows naturally from the way public policy has developed over the last century.
If you spend a long time applying technocratic solutions to societal problems, as we have for the last few generations, then eventually the problems you’ll face will be on average less likely to submit to these types of solutions.[1]
It’s a bit like fishing from a pond for a long time using the same technique. After a while, the remaining fish in the pond will be the ones that don’t bite on that particular technique.
This creates a kind of paradox, in that the more successful your technique for governing, the more you’ll need to change it. Or, to be more precise, the more unevenly successful your technique, the more you’ll need to complement it with another technique — one that you choose precisely because it’s different.
This is why I write a lot these days about the modes of inquiry we bring to public policy. Because I think we need to learn how to switch modes.
Or rather we need to learn how to complement our technocratic mode — one that we’ve spent decades institutionalising into a whole system of government — with a more human way of governing.
This all needs some unpacking. So, a couple of questions:
Why did we get so technocratic in the first place?
In what ways would human government be different?
Along the way I’ll try to clarify the key distinction between technocratic and human modes of government.
Lately I’ve gone pretty far down the rabbit hole of critiques of technocratic government. In particular I’ve been reading books from the mid-1950s to the early-1970s by writers like Theodore Roszak, Hannah Arendt, and Langdon Winner.
I enjoy reading these books because they challenge some of the deepest assumptions that underpin public policy today. The way we treat concepts like efficiency and productivity as first-order goals, and find it awkward even to talk about basic human ideas like care, beauty, or dignity. Or the way we’ve given economics such a prime position in our public policy discourse, making strangely little use of disciplines like history, sociology, philosophy, art, or design.
So I find that this writing acts like smelling salts, waking us up from a technocratic slumber. But I find it less helpful at explaining why we fell into that slumber in the first place.
When you read these critiques — especially the ones from more radical thinkers like Jacques Ellul — you get the impression that the rise of technocratic government was some kind of evil conspiracy. Or, at best, it was all the fault of a cadre of unthinking, soulless bureaucrats, sleepwalking us down a technocratic path.
Having worked in public policy for a long time, this doesn’t quite ring true to me. I’m not saying there are no unthinking bureaucrats. But in my experience even our most technocratic institutions — say a government department like the Treasury or DWP, or a regulator like the FCA — are staffed mainly by thoughtful, caring, rounded human beings. Which is to say that most people who work in public policy know that there’s more to their work than technocratic problems and technocratic solutions. And yet we build these coldly technocratic systems anyway.
So for me it’s this ‘anyway’ that demands an explanation. Why do we do it to ourselves? I suspect a big part of the explanation is the way we’ve come to think about scale.
Technocratic forms of governance are easy to scale. Or at least that’s the story we tell ourselves.
The logic is supposed to flow from the type of problem to the type of solution: if you want to solve a complicated social problem at the scale of a modern society, you have no choice but to reach for a catalogue of technocratic solutions.
What kind of solutions will you find listed in the technocracy catalogue? One section of the catalogue is full of tightly codified, rule-based systems. A good example is the modern benefit system, with its formulaic eligibility criteria. The modern benefit system essentially is these criteria; they’re the code that makes up the software. And most of what we call ‘benefit policy’ is now really just a debate about the best way to optimise these criteria to make the system allocatively efficient and to incentivise behaviour.
Another example of a codified system is the medical guidance issued by a body like NICE, which governs access to treatments on the NHS. In this case, we optimise the rules to maximise the return we get from a pound spent on healthcare, measured in quality-adjusted life years.
Another section in the technocracy catalogue is called ‘policy interventions’. These are custom solutions to particular problems, working on a model of diagnosis and cure. For each diagnosed social problem, there’s an academic literature showing which solutions or interventions work. (Well, really, it’s an economic literature, since we make little use in modern technocracy of disciplines like history or sociology.) Our basic model is to develop an intervention based on this literature, then to test the intervention, and then to decide whether or not to ‘roll it out’.
So technocratic approaches have something in common: they aim for consistency at scale. Rules can be followed across a system like Job Centres or the NHS, even if the system is fragmented and the staff are low-paid and exhausted or demoralised. Policy interventions can be ‘rolled out’, by which we mean replicated with high fidelity to the specified solution; they aspire to the same replicability we expect of a scientific experiment.
In a sense all I’m doing here is describing the application of the scientific method to policy-making, which is part of the paradigm of bureaucracy, which is really just the way we govern at scale.
And indeed if we zoom out, this all fits into the bigger story we tell ourselves about the way modern societies developed. First, humans lived in small hunter-gatherer bands that were simple and egalitarian. Then we discovered agriculture, and this generated surpluses that enabled cities and complex societies. This left us no choice but to develop a bureaucratic, or technocratic, state in order to govern these complex societies at scale.
It’s this bigger story, or teleology, that David Graeber and David Wengrow critique in their best-selling book, The Dawn of Everything. The short version of the argument is that the anthropological evidence doesn’t support this idea of a progression or convergence towards the type of society we have today. Over thousands of years of human history, we experimented with many different ways of governing societies, and we often ran big and complicated societies in ways that were not hierarchical or bureaucratic. We even moved back and forth between different modes of government intentionally, and often with a sense of play. So this whole idea of an inevitable convergence on today’s approach is more myth than reality.
Anyway, let’s leave that debate for another day and come back to the topic of this post. When I talk about the alternatives to technocracy — a more ‘human’ way of responding to problems — what kind of thing do I have in mind?
Imagine a young person who helped out an elderly neighbour during Covid lockdowns. After being shocked at how isolated and lonely their neighbour had been, they set up a charity that organises lunches to help people make friends across generations.
Or consider a person who grew up in a low-income neighbourhood, did well at school, and then moved back home to set up an inspirational children’s centre, raising families’ aspirations.
Or think of a person who is deeply affected when their grandmother dies in a hospital bed, hooked up to beeping machines. Soon after, they see a derelict local building up for sale, and after debating the idea with their family they decide to buy the building and renovate it, turning it into a compassionate hospice to bring comfort to people who are dying.
I’m sure we’d all agree that work like this can have huge social value. But what we’re interested in is what happens when this work scales — or rather when it doesn’t.
For anyone who works in public policy, this story will be familiar, because it plays out again and again. What tends to happen with human work like this is that an inspirational project has an impressive impact in a local area and becomes the flavour of the month. Maybe it features in a zeitgeisty book or the founder is profiled in a Sunday newspaper or the Prime Minister visits and gives a speech, calling out the project as an example of the future of public service delivery. Soon, money flows in from foundations or government departments who want to scale the project to other areas. But almost always what happens next is the same: the original results aren’t repeated elsewhere.
What follows is also quite predictable: we do our best to make the programme more scalable, by which we mean more replicable. We do this by evaluating the project as precisely as we can, asking: what is it about it that works? We approach the problem like scientists working hard to identify, isolate, and synthesise an active ingredient in a plant.
Don’t get me wrong; this is all good and well-intentioned work. I’m not arguing against the scientific method. And let’s not overstate the case; some projects do spread beyond their original applications. So I’m not saying this work is pointless.
But I also think it’s fair to say that the spread of this type of human work — its pace and ‘fidelity’, to use the jargon — almost always disappoints. The later impacts are almost always far smaller than the original impact — and sometimes zero — and it’s all so painfully slow. In fact, when we treat scale as synonymous with fidelity and replicability, this sense that the original impact fades has come to seem natural; how could it not?[2]
An example comes to mind from when I worked in Number 10 in the late 2000s. At the time, parenting ‘interventions’ were all the rage and one programme in particular had us excited — a way to encourage pro-developmental behaviours in new parents. The intervention was promising partly because the evidence on impacts was so robust, but also because it was so tightly specified; it gave practitioners scripts and step-by-step guides on how they should work with expectant mums. This made us hope that it might achieve the holy grail — maybe it would ‘roll out’ with ‘high fidelity’, even across the fragmented, low-paid early years system.
Fifteen years on, that parenting programme has, to be fair, spread a little. But I think it’s also fair to say that this programme and countless others — work that sits in these intensely human spaces like parenting — have disappointed relative to the original excitement.
All of which raises a question: what if we’re just trying to do the wrong thing? Maybe if we’re trying to scale human work we need an approach to scale that’s, well, more human.
In his 1969 critique of technocracy, The Making of a Counter Culture, Theodore Roszak contrasts two ways of seeing the world. We can look at the world through eyes of flesh — objective, rational, and calculating — or we can look at the world through eyes of fire — visionary, artistic, transcendent.
I like Roszak’s metaphor in part because it reminds us how deep technocracy runs. Technocracy isn’t a method, it’s a way of seeing the world. When we look at the world through technocratic eyes, we see social problems as scientific or engineering problems, and so we reach for scientific or engineering approaches to scale — we can’t see any other options.
But Roszak’s metaphor also gets me thinking: isn’t fire a good reminder that we have other ways to talk about scale?
We all know fire spreads. But we don’t tend to say fire scales. If you light a new fire from an old one, you don’t say you’ve replicated the first fire. And if the second fire fails to light, you don’t say the original fire had low external validity.
So what do we say when we talk about fire? We use words like energy, or we talk about qualities like heat. We say a fire ‘catches’ or ‘dies out’. And when a fire really spreads we abandon scientific language altogether: we call it a wildfire.
Most of all, though, when we talk about fire we talk about environmental conditions. We say a heatwave had dried out the brush on the forest floor. Or the house had dodgy wiring. Or shelves of old library books acted like kindling.
In the last 20 years, one of the biggest advances in public policy has come in the field of social innovation. This work is often overlooked in Whitehall, partly because it’s heavy on jargon, but mainly because it jars so fundamentally with a technocratic worldview.
If you immerse yourself in this thinking, what you find is a more human way to think about scale: a way to spread high-impact work that is robust and also better suited to work that is human; work like that lunch club, or children’s centre, or hospice; work that is relational, experiential, or highly context-dependent.
I’m breaking my own rules on jargon. So let’s wrap up with five simple and practical differences between these two modes: technocratic and human.
1. Fidelity vs. flexibility
The idea of fidelity is central to technocracy. We treat the work as a task of identifying and extracting the essence of a solution — the thing about it that works — so that we can replicate it elsewhere.
With human work, like healing loneliness by helping people to make friends, we need to talk less about fidelity and more about flexibility. This is because the work is so inescapably context-dependent, which means it has elements that cannot be abstracted. The fire in the belly of a local leader, or the historical resonance of a building, or the galvanising effect of a tragedy.
This doesn’t mean human work is any less robust; if anything, it’s even more sharply focused on outcomes. But to the extent that we try to perfect something, it’s not the solution to the problem but a way of developing a solution in context. And this means that imposing a pre-set solution can actually hinder scaling; instead, we need to create the right conditions and leave some space open.[3]
2. People over policy
In technocratic governance, policy is separate from and prior to delivery. This fits the mechanical mental model that we inherited from the industrial age; we act like we’re commissioning a factory. First we decide what to make — the policy — and then we send the spec to the factory — the delivery.
This mental model determines how we think about people, in that we pretty much don’t. To the extent that we care who the factory employs, it’s mainly through the lens of efficiency, like Adam Smith’s pin factory. Hence we tend to underpay frontline jobs and make them deskilled and repetitive.
With human work, everything is relational, so the people who work at the frontline are primary, not secondary. In a sense we’re replacing the image of a factory with a craft shop, so before we jump to a spec, we might start by finding skilled craftspeople, and even seek their views on the thing we’d like to make.
One practical consequence of this is that in human work local leaders matter a lot. And what we want is a particular type of leader — someone chosen not for their abstract knowledge and an elite CV but more for their knowhow, or the practical wisdom that can only be learned from doing. We also value situated qualities like legitimacy in the community. What we want is power in context; people who can make change happen here.
3. Risk and reward
In technocratic governance, our mechanical ways of thinking make us approach risk and reward in a particular way.
If you’re making widgets in a factory, you want to keep your failure rate below 1 percent. This works well for a social problem like curing acute disease with medicine; if every tenth patient has bad side effects, or every tenth widget fails, you should stop what you’re doing straightaway.
With context-specific, human work, like lifting kids’ aspirations, there’s a lot that’s not just unknown but unknowable, so we’re bound to ‘fail’ more often. Rather than trying to keep ‘failure’ rates below 1 percent, we try to make sure that, when we fail, we fail well. Again, it’s less about whether our horse won the race and more about whether it was a wise bet to have made. Hence we’re back to the idea of trying to refine a process for developing a solution in context, not the solution itself.
In human work, we think about success differently too. In technocratic work, we govern through that terrible Whitehall phrase: the ‘announceable’. A politician gives a speech and takes the credit for a clever new policy. In human work, the agency sits in the community, so we need recognition to sit locally too. This means human work needs more mature politicians — people who aren’t scared of ‘failure’ and who don’t want to claim credit for every success. This is why human work tends to mean devolving power and investing in local media environments.
4. Stories, or how vs. what
In technocracy, one aspect of the separation of policy and delivery is the separation of policy and communications. This reflects an objective worldview: we start by finding the objectively ‘right’ answer and then we work out later how to communicate it. Sometimes we forget to think about comms entirely and issue policies that read like technical manuals.
When you think about it for more than a few minutes, this is a really weird, robotic way to govern a society. And although it works OK for highly technocratic issues — when the policy is: ‘prescribe 10 milligrams of this drug per day’ — it’s no surprise that it fails for human or relational work.
I suspect this is partly because human work — a nurse caring for a person who is dying, or a parent encouraging a child — is less about what is said and more about how it’s said. Qualities like authenticity and kindness matter a lot. In human work, the thing we’re trying to spread isn’t just a technical dosage or some content or words, administered like medicine; it’s a spirit or feeling. So we need ideas and stories that inspire that spirit in people, and so stories and framings are integral to the work.[4]
5. Social infrastructure
Finally, when we govern as technocrats we talk a lot about initiatives or interventions. We talk less about infrastructure. Or rather we talk a bit about physical infrastructure like roads and railways but almost never about social infrastructure like the quality and capacity of human relationships or community bonds.
Human relationships are the substrate through which human work spreads. As the author and researcher Otto Scharmer said at a workshop I attended recently: “At the end of the day, it’s the quality of our relationships that will determine our level of impact.” So human work is a bit like good gardening — you invest relatively more time tending to the quality of your soil, and relatively less time fussing about each individual plant.
In one sense none of this is new or radical; we’ve known all these insights for years. What is radical is what happens when we add all these approaches together into a human mode of government.
What we get is a way of governing that is different, from top to bottom, to our default technocratic mode. And since this technocratic mode runs really deep — almost all of our policy institutions, from our tools to our language, mental models, and organisations, are built within it — we find doing things in this more human way really, really difficult.
But let’s end where we started: a lot of the biggest problems we face today are inherently human. So my suggestion is that if we want to repeat the strides of progress we made in the 20th century, and break new ground on new problems, we’ll need to learn how to do human work at scale. This will require us to complement the technocratic institutions we built in the 20th century with more human governing institutions.
That’s the thought anyway. As ever, all shared openly and more sharply than I believe it, in order to provoke conversation. I’d be interested in feedback and critiques.
Footnotes
[1] We also find ourselves facing problems that have been actively exacerbated by the technocratic solutions we applied in the past. For example, think of how our technocratic and unfeeling welfare system stigmatises people, which feeds the crisis in esteem and belonging.
[2] One thing that’s telling about trying to replicate a programme in a scientific sense is that the later impacts are never, and in a sense cannot be, bigger than the original impact. Like with photocopying, the copies can only fade. This is different to human approaches, where it’s possible to find something better than the original idea.
[3] An example to bring this to life: a colleague at Nesta, Nadeen Haidar, told me about a local project run by the People Powered Results team to boost school attendance. Rather than fly in with a solution, they ran a ‘100 day challenge’ with local people to come up with ideas. The answer eventually came from a firefighter: kids love fire engines, so let’s reward attendance with a visit to the local fire station. When they tried it, attendance leapt. A technocratic response would be to evaluate this scheme and roll it out. A human response is to repeat the 100 day challenge in other communities to find ideas that work for them.
[4] I suspect this is why a lot of the most successful policy work is really well branded. For example, it makes me think about a lovely scheme called ’11 before 11’ that was run at the REAch2 chain of primary schools. It promised every child at the schools — a big chain, with thousands of kids — that before they turned 11 they’d have 11 life experiences. This list included: ‘You will eat a meal in a restaurant in a foreign capital city’ and ‘You will cook a meal for your family using vegetables you’ve grown yourself’. The idea was powerful and caught on partly because it was so cleverly framed. It spoke to a spirit, showing that every child mattered, and it expanded kids’ horizons.
For more in a similar vein, you can read my two recent posts: How to solve wicked problems and Move fast and fix things. And for the big-picture story of how we govern in the 21st century, there’s my book, End State.