This blog, plus additional exclusive content, was published last Friday on my free Mobility Matters newsletter. You can sign up for the newsletter by clicking here.


This week’s blog is not a record of words I actually said at the TransportAI conference last Friday. It is what I planned to say, but never got the chance to, as we did not get an opening gambit. In the spirit of the event, Google Gemini helped me with ideas, but these are all my own words.

Good morning, everyone.

Our title today involves the word “Supporting.” Specifically, supporting the responsible buying and using of AI in transport.

And I want to start by being honest about how we usually interpret titles like that in the public sector. Usually, when we talk about “responsibility” and “AI” in the same breath, the energy in the room drops. We tense up. We start thinking about risk registers, GDPR compliance, negative headlines, and the endless, comfortable safety of “pilot purgatory.”

We treat “responsibility” as a brake. Something that slows us down to keep us safe.

I want to challenge that immediately. Today, I am going to take an unashamedly pro-AI stance. My argument to you is that we need to stop treating responsibility as a brake, and start treating it as the steering wheel. The steering wheel doesn’t exist to stop the car; it exists so you can get to where you need to go.

We are here to discuss how to “get on and do it.” Because the cost of delay is no longer just “lagging behind.” The cost of delay is a transport network that is slower, dirtier, and less efficient than the technology already allows it to be.

So, let’s look at the prize. Why are we doing this? If we successfully support our organisations to buy and use this tech responsibly, what is the change?

It isn’t about shiny robots or futuristic dashboards. The biggest positive change for normal people is the “Invisible Journey.”

It’s the journey where the friction disappears. It’s a transport network that becomes predictive, rather than reactive.

Currently, our systems are reactive. A bus breaks down, we send a mechanic. A road gets congested, we tweet a warning and flick on the overhead signs.

AI moves us to prediction.

Look at Project Green Light in Manchester. This isn’t science fiction; it’s happening now. Transport for Greater Manchester partnered with Google to use AI to optimise traffic light timing.

The result? A 30% reduction in stop-and-go traffic at those junctions. And up to a 10% reduction in emissions.

For the engineers in the room, that is an efficiency stat. But for the citizen? It means the bus didn’t get stuck at a red light for no reason. It means they got home to their kids five minutes earlier. They don’t know AI did it. They might not even care. They just know the city worked.

That is the vision. But to get there, we have to tackle the two verbs in our title: “Buying” and “Using.”

Let’s be frank: Public sector procurement was designed to buy concrete, tarmac, and finished widgets. It was not designed to buy a capability that learns.

We have a habit of trying to write 100-page specifications for outcomes we can’t yet see. And because we can’t specify it perfectly, we retreat to “trials.” We fund short-term R&D grants. We dip a toe in. We need to stop funding pilots that go nowhere and start funding commercial scalability.

Look at Milton Keynes.

When they introduced Starship Technologies delivery robots, they didn’t try to procure “autonomous delivery” via a standard tender with a fixed safety spec for a robot that hadn’t been invented yet. They would still be writing that tender today.

Instead, they used an agile approach. They granted a “permit to operate.” They focused on the outcome—last-mile delivery—and learned the safety constraints iteratively.

Because they were brave enough to buy the outcome rather than specify the tech, Milton Keynes is now a global leader. Those robots complete 140,000 road crossings a day. They have normalised Level 4 autonomy in the city.

The lesson? Don’t buy features. Buy outcomes.

We are seeing this with National Highways’ Digital Roads strategy. They are moving toward incentivising suppliers on the availability of the road network.

Think about that shift. If you pay a supplier to fix potholes, they want potholes. If you pay a supplier for road availability, they will use AI to predict where potholes will form and fix the surface before it fails, because a closed road costs them money.

That is how you “buy” responsibly. You write the contract so that the AI is incentivised to do the right thing.
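To make that incentive concrete, here is a minimal sketch of the scheduling logic an availability-paid supplier might run. Every road section, probability and cost figure in it is invented purely for illustration; the point is only that the ranking falls out of the contract.

```python
# Sketch: under an availability contract, rank repairs by the closure cost
# they are expected to avoid, not by how many potholes they tick off.
# All figures below are hypothetical.

defects = [
    # (road section, predicted probability of failure this month,
    #  cost per day of a closure, days closed if it fails unplanned)
    ("A34 northbound, km 12", 0.60, 40_000, 3),
    ("B4044 junction",        0.15,  5_000, 1),
    ("A40 slip road",         0.35, 25_000, 2),
]

def expected_closure_cost(p_fail: float, cost_per_day: float, days: float) -> float:
    """Expected money lost to an unplanned closure if we do nothing."""
    return p_fail * cost_per_day * days

# Fix the defects that threaten availability most, first.
ranked = sorted(defects, key=lambda d: expected_closure_cost(*d[1:]), reverse=True)

for section, p, cost, days in ranked:
    print(f"{section}: expected closure cost £{expected_closure_cost(p, cost, days):,.0f}")
```

Pay per pothole and that list has no order worth having. Pay for availability and the order writes itself.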

But buying it is only half the battle. The second challenge is “Using” it.

And the biggest hurdle here isn’t technical. It’s cultural.

It is the “Black Box” problem. How do you convince a veteran traffic manager, who has managed the roads by gut instinct for 30 years, to trust an algorithm?

We see the answer in Network Rail.

They are using Plain Line Pattern Recognition (PLPR). Trains scan the tracks at 125mph, capturing 70,000 images per second, looking for defects.
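I have no sight of Network Rail’s actual pipeline, but the shape of the idea is easy to sketch: score every captured image with a trained model, pass only the confident hits to a human crew, and discard the rest. The model, threshold and data source below are all placeholders, not Network Rail’s.

```python
# Sketch of an image-triage loop in the spirit of PLPR. The model,
# threshold, and image source are placeholders, not the real system.

from dataclasses import dataclass

@dataclass
class Finding:
    frame_id: int
    mileage: float       # where on the line the frame was captured
    score: float         # model's confidence that a defect is present

ALERT_THRESHOLD = 0.85   # hypothetical: only confident hits reach a crew

def triage(frames, model) -> list[Finding]:
    """Run every captured frame through the model; keep only likely defects."""
    findings = []
    for frame_id, mileage, image in frames:
        score = model.predict(image)          # assumed: returns 0.0-1.0
        if score >= ALERT_THRESHOLD:
            findings.append(Finding(frame_id, mileage, score))
    return findings

# A crew then gets a short, ranked work list instead of miles of track to walk.
```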

The challenge wasn’t the software. The challenge was getting the maintenance crews to trust a computer telling them to go out and fix a fault they couldn’t see with the naked eye.

But they bridged that gap. And the result? They have replaced manual inspections on 8,500 miles of track. They moved staff from “walking the track”—which is dangerous and low-value—to “fixing the track.”

That is responsible use. It saves taxpayers millions, but more importantly, it makes the workforce safer and more effective. It didn’t replace the humans – it gave them superpowers.

Now, I know some of you are thinking: “But what about privacy? What about the data risks?”

This is where we need to be very clear. Responsible does not mean “Risk Averse.” Responsible means “Privacy by Design.”

We get paralysed by GDPR because we think we need to hoard data. You don’t.

Look at VivaCity Technology, used by Oxfordshire and many others here.

They use AI sensors to analyse traffic—pedestrians, cyclists, cars. But the “Responsible” part is that the classification happens on the device. The video is processed, the data is counted, and the video is deleted in milliseconds. No personal data ever leaves the pole.
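As a sketch of that pattern (not VivaCity’s actual firmware, which I have not seen): classify on the device, keep the counts, and destroy the frame before anything leaves the pole. The classifier and transport below are placeholders.

```python
# Sketch of privacy-by-design edge counting. Raw video never leaves the
# device; only anonymous aggregate counts do.

from collections import Counter

def process_frame(frame, classifier, counts: Counter) -> None:
    """Classify road users in one video frame, then discard the frame."""
    for label in classifier.detect(frame):   # assumed: yields "car", "cyclist", ...
        counts[label] += 1
    del frame                                # drop the frame; nothing stored or sent

def publish(counts: Counter, send) -> None:
    """Only the aggregates ever leave the pole."""
    send(dict(counts))                       # e.g. {"car": 412, "cyclist": 38}
    counts.clear()
```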

Because of that “Responsible” choice, Oxfordshire County Council bypassed the privacy swamp. They now have granular data to redesign Witney High Street, confident they aren’t guessing. They are reducing tarmac space for cars based on hard proof of pedestrian numbers.

They proved that if you design for privacy from day one, “responsibility” actually speeds you up.

So, how do we support this moving forward? I have three challenges for you today.

First: Sort out your data.

You cannot buy AI if your data is locked in PDFs or legacy servers. AI eats data. If the cupboards are empty, don’t invite it to dinner.

Commit to standards like the Bus Open Data Service (BODS) or GTFS. Before you write an RFP for AI, spend three months standardising what you already have. That is the single most practical step you can take.
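And “sort out your data” can start very small. A GTFS feed, for instance, is just zipped CSV; a few lines of standard-library Python will tell you what you actually hold. The feed path here is a placeholder for your own export.

```python
# Take stock of a GTFS feed using only the standard library.
# "feed.zip" is a placeholder path for your own GTFS export.

import csv
import io
import zipfile

with zipfile.ZipFile("feed.zip") as feed:
    with feed.open("stops.txt") as f:        # stops.txt is a standard GTFS file
        stops = list(csv.DictReader(io.TextIOWrapper(f, "utf-8-sig")))

print(f"{len(stops)} stops in the feed")

# Empty stop names are a data-quality smell worth fixing before any RFP.
unnamed = [s["stop_id"] for s in stops if not s.get("stop_name", "").strip()]
print(f"{len(unnamed)} stops missing a name")
```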

Second: Use the tools we have.

We don’t need to invent new ethical frameworks. The UK Government has the Algorithmic Transparency Recording Standard. Use it. Mandate it.

If a vendor can’t explain to a citizen why the AI made a decision, don’t buy it. Make that transparency part of the “Buy.”

Third: Expect Failure.

We need to kill the fear of the National Audit Office.

Look at the TfWM Future Transport Zone. They trialled “Mobility Credits”—paying people £3,000 to scrap their cars and use public transport.

It was a radical experiment. 82 cars were scrapped.

We need to create an “Innovation Pot” where it is legally acceptable for 30% of projects to fail, as long as we publish the lessons. If we don’t allow failure, we don’t allow innovation.

For some of you, no doubt, what I say here will not be enough. You might not like AI’s carbon footprint, or you might object to the idea of tech bros holding all of that power.

You are entitled to that opinion, and I respect that.

My view is that AI is not defined by ChatGPT or Grok. It is a far more nuanced technology than that. AI is being deployed to help discover new medicines, and to help citizens in oppressive regimes fight back against them. Why shouldn’t transport find out where it can be useful?

After all, I very much doubt that the Google AI which helped Manchester with its traffic lights is going to become Skynet or HAL.

Having said that, I want to leave you with one final thought.

Don’t fall in love with AI.

No citizen wakes up in the morning wishing there was more “Artificial Intelligence” in their town.

They wake up wishing the bus was on time. They wake up wishing the traffic lights worked. They wake up wishing the air was cleaner.

Don’t buy AI. Buy the solution to their problem.

If you focus relentlessly on the problem, you will naturally buy responsibly. You will naturally use it well.

The technology is ready. The case studies—from Manchester to Milton Keynes—prove it works. The only thing waiting is us.

Let’s get on with it.
