I am a planner within Arup’s integrated...
I believe that when it comes to evaluating urban interventions, we are missing a trick. Evaluations of urban interventions (both physical and policy-based) are abundant, but their focus is too narrow. We can, and should, go further.
Are we even asking ourselves the right questions? Take the theory of loop learning. This centres upon the difference between an evaluation which asks “Are we doing things right?” (single-loop learning) and the far broader, and in my view far better, “Are we doing the right things?” (double-loop learning).
Whether we’re designing the next generation of bridges in London or masterplanning swathes of Johannesburg, the desire to benefit local communities is the driving force behind urban intervention. So our ultimate goal is in fact to do the right things.
Despite this, too often evaluation strategies are primarily concerned with doing things right. The OMEGA Centre for Mega-Projects in Transport and Development reports that there is an excessive focus on the ‘iron-triangle’ of budget, timing and specification. This misses the more fundamental issue of whether the intervention was right in the first place. What we really need is a better understanding of the extent to which urban interventions have delivered wider benefits such as higher employment, wage growth or improved quality of life.
For this reason, I believe we need to take a long-term approach to evaluation, one which is integrated into urban interventions from their inception and which continues long after completion. In addition, we should try to establish what impacts are directly caused by each intervention. This task is daunting, but it can be accomplished.
The most accurate way to evaluate the extent to which an intervention causes local benefits is through counterfactual evaluation, which compares observed outcomes with what we would have expected to happen without the intervention.
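To make the idea concrete, here is a toy sketch of one common counterfactual technique, a difference-in-differences comparison: the intervention's effect is estimated as the change in the treated area minus the change in a comparison area, which stands in for the counterfactual trend. This is purely illustrative and not drawn from any project cited here; all names and figures are made up.

```python
def did_estimate(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: the treated area's change minus the
    comparison area's change (the assumed counterfactual trend)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical average weekly wages before/after an intervention.
effect = did_estimate(
    treated_before=480.0, treated_after=540.0,   # treated area rose by 60
    control_before=470.0, control_after=500.0,   # comparison area rose by 30
)
print(effect)  # counterfactual-adjusted effect: 30.0
```

The naive before/after comparison would credit the intervention with the full 60; subtracting the comparison area's trend suggests only 30 is attributable to it. Real evaluations, of course, need carefully chosen comparison areas and many years of data.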
High Speed 1 (HS1) is a good example of a project that is lauded for meeting its schedule and its budget, and it has undoubtedly brought far-reaching benefits. But there have been few (if any) attempts to establish what would have happened without it – as a result of wider economic trends, for example. Who’s to say that adding more lanes to the adjacent M2 motorway, or building superfast broadband wouldn’t have been better alternatives?
Similarly, we can also learn a lot more about iconic projects that are deemed to be successful for reasons other than simply time, budget and specification, for example King’s Cross Central. Until we really get to grips with the causal effects of these types of interventions, how can we be sure that we’re making the right decisions?
Counterfactual evaluation isn’t straightforward, not least because the effects of major projects often take many years to become apparent, and because it can be difficult to get adequate data.
The process is also time-consuming and resource-intensive, so it has largely been confined to academia and larger-scale policy interventions. Indeed, the What Works Centre for Local Economic Growth has repeatedly shown that there is a real shortage of robust counterfactual evaluations at our disposal. We need to convince policymakers and the private sector that conducting the evaluation is worth it, given the fundamental information we gain.
Counterfactual evaluation is an extremely useful tool for evaluating the wider impact of urban interventions, and knowing the magnitude of these impacts allows us to understand which interventions really work. We can then do the right things in our towns and cities, helping us shape even better places.
So let’s do the right thing and start evaluating properly!