We have a problem. A recent report by the Department for Transport into the effectiveness of major transport schemes in the UK has come to a rather depressing conclusion: we have no idea how to effectively measure the impact of what we, as transport planners, do.
The report itself contains much in the way of useful information about the impacts of schemes, specifically that:
Evaluations of 11 of the 13 schemes estimated that they delivered high or very high value for money;
Public transport schemes generally increased passenger numbers and improved passenger satisfaction;
Highway schemes generally reduced congestion and improved journey times.
But that is about as far as it gets. The overall conclusion of this work is that practice in relation to monitoring and evaluation needs to improve significantly, and that in most cases scheme sponsors were unable to determine whether it was their scheme that caused the impacts observed.
If we are what we measure, then we have no idea what we are. Transport planners tend to be very good at measuring processes, but very poor at measuring outcomes and the impact of what we do. And because we do this poorly at the scheme level, we struggle even more to monitor impacts systemically.
Under the standard evaluation methodology, local authorities and the Department for Transport are required to monitor the following impacts for all schemes:
Scheme build
Delivery timescale
Cost
Scheme objectives
Impacts on travel demand
Travel times and reliability of travel times
Impacts on the economy
Carbon impacts
Practice in monitoring these is too often driven by availability bias. Simply put, impacts are monitored according to whichever data authorities already collect routinely. Procuring new data sources just for individual schemes is expensive: to give you an example, a single half-day manual count of pedestrians and cyclists costs around £600 including data analysis, and you often need several such counts at a single location to capture the full range of impacts.

Scheme build, delivery timescale, and costs are simply a part of standard project management. Impacts on travel demand are usually assessed through manual and automatic counts, which local authorities already carry out regularly. Everything else? A mix of trying to localise generalised data sources (usually from the Department for Transport) and expensive one-off data collection.
The result of this? An incomplete understanding of the impacts of schemes, and where the impacts of schemes are assessed, it is not done systemically. This means, fundamentally, that we cannot determine whether the schemes that we deliver are helping us to achieve our objectives.
Over the last several months, I have been working with authorities on establishing evaluation frameworks that seek to understand the impacts of schemes on a more systemic level. In doing so, I have learned several lessons.
The first is that data is not the issue. With data such as the Google Journey Time API, Waze traffic data, Strava data (flawed as it is), and Bus Open Data, we have more data than ever. The ability to process this in a systemic manner is another thing entirely, but the point stands. We have enough data, or at least enough that is useful.
The second is that there is a shocking lack of learning within authorities. Any lessons learned are usually shared with decision makers, but rarely much further than that, beyond reports and the occasional team meeting. Workshops, presentations, and knowledge libraries are rarely maintained, and little effort is made to share best practice and the lessons learned from scheme delivery. This is consistent across authorities. Space needs to be given to learn lessons and to share best practice.
The final lesson is that monitoring and evaluation is seen as an activity rather than a process. Whenever a scheme is proposed, a new monitoring and evaluation process is established from scratch, and often late in the scheme's development. The Monitoring and Evaluation Plan is not necessarily agreed at the Project Initiation Stage, and even when it is, it is based upon whatever data is available and a variable understanding of how central the scheme objectives should be in determining what to measure.
Personally, I am a firm believer that you are what you measure, and good quality monitoring and evaluation can have a disproportionate impact on how successfully change is embedded. So over the coming weeks, I will be sharing some posts about how you can establish good quality monitoring and evaluation as a process. This is based upon the framework below, which can be used to establish a new process of monitoring and evaluation anywhere. The key stages are as follows:

Pre-project stage: where your scheme aims and objectives determine the impact that you want to measure, based on a sound logic
Project initiation: where you agree your plan, and confirm the way that you will test whether or not your scheme has had an impact compared to background changes (see the sketch after this list)
Project delivery: where you collect the data before the scheme is delivered
Project close out: where you set up and agree the process for monitoring the impact of the scheme, so that monitoring actually happens.
Post-scheme monitoring: where you monitor the impact of the scheme.
Share findings: where you plan out how you will share your lessons learned with everyone.
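As flagged at the project initiation stage, here is a minimal sketch of the kind of test I mean: a before/after comparison of a scheme site against a control site, so that background changes are netted out (a simple difference-in-differences). All of the figures are invented for illustration.

```python
# A minimal sketch of netting out background change: compare the change at the
# scheme site with the change at a comparable control site over the same period.
# All counts below are invented for illustration.
import pandas as pd

counts = pd.DataFrame({
    "site":   ["scheme", "scheme", "control", "control"],
    "period": ["before", "after",  "before",  "after"],
    "daily_cycle_count": [420, 560, 380, 400],
})

wide = counts.pivot(index="site", columns="period", values="daily_cycle_count")
change = wide["after"] - wide["before"]

# Net impact = change at the scheme site minus change at the control site
net_impact = change["scheme"] - change["control"]
print(f"Scheme change: {change['scheme']}, background change: {change['control']}")
print(f"Net impact attributable to the scheme: {net_impact} cycles per day")
```

The point is not the arithmetic, which is trivial, but agreeing the control sites, the data sources, and the comparison method at project initiation rather than after the scheme has opened.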
Underpinning all of this is committing sufficient time and resources to effective monitoring and evaluation. Let me make one thing clear right now: this is not free. It requires budget to collect data, to analyse it, and to pay staff to manage the process. But doing it well is surprisingly cheap.
From a personal estimate, assuming 2 FTE staff to co-ordinate monitoring and evaluation across an organisation's programme of schemes, and the use of basic spreadsheets and programme management processes, the basic cost of Monitoring and Evaluation is around £100,000 per annum. The data collection cost, however, is highly variable and scalable according to your programme. In coming posts, I will cover how scalable this cost can be, and opportunities to reduce costs.
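To illustrate how that total might scale, here is a rough sketch of a cost model. The £100,000 staffing figure is the estimate above; the per-scheme data collection figures are invented assumptions purely to show the shape of the calculation, not real prices.

```python
# A rough sketch of how annual M&E cost might scale with a programme of schemes.
# The staffing figure comes from the post; the per-scheme data collection costs
# are invented assumptions for illustration only.
FIXED_STAFF_COST = 100_000          # 2 FTE co-ordinating M&E, per annum
ASSUMED_COST_PER_SCHEME = {
    "minor":  2_000,   # e.g. automatic counter downloads only (assumption)
    "medium": 10_000,  # e.g. several manual counts at ~£600 each plus surveys (assumption)
    "major":  40_000,  # e.g. bespoke before/after data collection (assumption)
}

def annual_me_cost(programme):
    """programme: dict of scheme type -> number of schemes evaluated in the year."""
    data_collection = sum(
        ASSUMED_COST_PER_SCHEME[kind] * n for kind, n in programme.items()
    )
    return FIXED_STAFF_COST + data_collection

print(annual_me_cost({"minor": 20, "medium": 5, "major": 1}))  # -> 230000
```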
I hope that you find this coming series useful. I would love to hear your thoughts on good monitoring and evaluation, and your experiences of it.



