Dashboards or not
I’ve been mulling the question of whether or not we like dashboards, and how we should think about data in the context of missions.
This is prompted by this article (“Keir Starmer’s nerve centre has echoes of Dominic Cummings”) and by Rachel Coldicutt’s thread, which links to two must-reads:
Shannon Mattern’s history of the urban dashboard
Rachel’s takedown of Cummings’ ‘Seeing Room’
I feel torn on this topic because (a) I think data is very important for missions and Labour should go big on improving data capabilities in government (building on some really strong foundations).
But also because (b) when people talk about data and missions, the conversation quickly becomes one about dashboards/control rooms, which are quite unhelpful images.
Skipping past the practical issues with dashboards — over-simplifying, gaming, Goodhart’s Law, etc — I worry about the mentalities that are evoked/encouraged by metaphors like ‘dashboards’ and ‘nerve centres’.
The image of a dashboard is especially problematic for the way it evokes driving a car, or piloting a plane — pulling levers, pushing pedals, pressing buttons on a machine. Which, as Rachel points out, leads to silly ideas like ‘governing via an iPhone’. And which generally gets you into a mechanistic mindset, when of course a machine is not the kind of thing a state is. (And neither is a healthcare service, or a justice system.)
I’m partly just making the obvious point about complexity, i.e. states and healthcare services are more like organisms than they are like machines.
But I’m also making the slightly less obvious point that a state’s decision-making capacity and intelligence are spread widely throughout the system, so it’s less like a machine with a pilot — or an animal with a central intelligence — and more like a distributed intelligence/octopus.
This quality of distributed intelligence is why Nasa’s Mission Control is also a misleading analogy for the role data can play in a government.
In Nasa’s Mission Control you had lots of deeply technical experts in a room controlling a spaceship with only a few people in it. Whereas in public policy you have the opposite — a few people at the centre who think they have more control than they do, and thousands of people, spread across the system — in different places, organisations, sectors, etc. — whose collective decisions emerge as a societal/economic outcome.
(This is, by the way, a good thing, because those people have lots of accumulated knowhow, in part thanks to their daily contact with societal problems. And it is also an inevitable thing because any complex system that tries to centralise decision-making will quickly be overwhelmed.)
This is why, for me, the Bloomberg terminal gets closer to being a good analogy for what’s needed in government. Because the reason Bloomberg terminals are powerful is precisely that lots of people have one, so it’s a mechanism for increasing the intelligence of the system. (Putting aside the obvious catty comeback of whether or not lots of bankers sitting at Bloomberg terminals really constitutes an intelligent system.)
So what metaphors are better than ‘dashboard’ when we’re talking about missions? I guess we might consider metaphors like:
The vital signs of a patient. This works well because although vital signs are important, everyone knows they are only indicators of the patient’s condition. We know vital signs need interpretation and we know this requires professional judgement. We also know that when a medical response is administered, the dosage will be a tricky judgement, and the outcome will be a bit uncertain, and the treatment may need to be recalibrated later, which is why you keep monitoring.
Maps. This works well as a metaphor because maps help you understand the territory you’re in but leave open the question of where you’re going / the best route. Maps are about situating, raising understanding, etc. Maps help us spot patterns. They also help us spot relationships between datasets, when you overlay them. Maps also retain a certain richness and granularity, even though of course they embody political judgements. Maps also tend to be public / open.
Situation rooms, but minus the bravado. A situation room doesn’t try to boil everything down to a number, like a speedometer; it’s more about curating rich intelligence to make sense of a rapidly evolving and complex situation. And actually, conceived properly, a situation room doesn’t centralise decision-making. It’s assumed that most of the key judgements will be made outside the room by people on the ground — they know best, and they can react quicker. But some judgements are reserved for people in the room, e.g. about strategy, resourcing, critical go/no-go decisions. Still, I would avoid situation rooms as a metaphor if possible because of the obvious issue of performative war-games.
So, overall, I’m a fan of a big push to harness intelligence for missions, if we can avoid dashboards being the overall framing, and if we can see things like dashboards as a small part (<5%?) of the work to be done.
The wider project is really about building collective intelligence as a critical capability in contemporary government. And under this broader heading I would include some mix of the following:
Build data capabilities that are useful for decision-makers throughout the system, not just a dashboard for No 10. Try to increase the level of understanding in the system, so that the average decision is more responsive, better aligned to the overall mission goal, etc. See the decisions made at the ‘centre’ of the system as important but particular, and remember they sit in a wider system of thousands of decisions.
Recognise that a lot of the most powerful mechanisms for alignment are ‘soft’ ones. e.g., do people across the system know about the mission goal, and understand it, and do they have a sense of what they can personally contribute, and do they feel motivated by the mission?
Emphasise curating and visualising data in ways that are helpful for decision-makers. As per Rachel’s point: make interaction design a key skill-set in collective intelligence (and, I would add, incorporate a deep understanding of behavioural science).
Work especially hard to join up data across silos, including data of unconventional types from unconventional sources, helping to tackle the ‘seeing only part of the elephant’ problem. Use all the levers you can to support this, e.g. build new institutions like data trusts to house and govern access to joined-up data; use regulatory levers to mandate the opening up of valuable data that should be treated as a public good; use innovation mechanisms (e.g. challenge prizes) to encourage people to make good use of data (here’s an example).
Think about this work generally as harnessing intelligence from the system, because insight/intelligence is widely distributed, e.g. it sits in the heads of nurses, teachers, etc. Invest in platforms and techniques for collective intelligence (see the work of CCID). Don’t just pull data up to inform decisions, share data laterally too, and make this a conversation, e.g. playback what’s being learned. Put more effort than normal into convening people to make sense of data together.
Recognise that the process of doing this work is as important as the product from the work (h/t Terence Eden for this point). Because it’s only when you try to pull the data together that you’ll realise data is missing/can’t be trusted/can’t be linked/etc. Think of this work as progressive and ongoing, i.e. it’s a long-running project to make data and decision-making better, it’s not a project to build a thing. A lot of this work is about practicing / refining better rituals for using data, rather than being about building new data products.
Complement this work by changing the way you set up teams, so that people can adapt their work when they learn something new. This means building teams around outcomes, not outputs, and making sure teams are free to pivot (for a way to do this, see The Radical How, a report commissioned by Nesta from Public Digital).
Those are just some suggestions. My point really is just: ‘yes’ to data for missions, conceived as collective intelligence, but let’s put dashboards in perspective and keep an eye on the behaviours they encourage.
For more on similar themes, here’s a post that puts that report from Nesta and Public Digital, The Radical How, in the wider context of missions. And a report from Nesta and the Institute for Government on the ‘how’ of missions. To stay in touch with my writing, you can follow me on Blue Sky, Medium, or Substack.