Good day my good friend.
This week I ran out of time to give you a complete newsletter. So if you are after academic research or links to random stories, you will be disappointed. However, after the somewhat recent release of the Department for Transport’s AI Plan, I thought I would bring you a special interview. Not with the leading figures in AI or transport modelling, but with AI itself. Namely with ChatGPT, DeepSeek, and Grok. Note that I have edited some of the responses for length, but otherwise it’s what the models themselves came up with.
If anything, this exercise has shown the current state of AI model development: straining for objectivity, acknowledging flaws only when directly challenged, but at least providing you with something to think about.
📅 Mobility Camp is back: on 29th September we are going to Cardiff. It promises to be a great day. It would be amazing if you could be there, or maybe sponsor the day.
💼 I am also available for freelance transport planning consultancy, through my own company Mobility Lab. You can check out what I do here.
🤖 The Machine Speaks
The Department for Transport has released its AI Action Plan. What are your first thoughts?
ChatGPT: The UK Department for Transport released its Transport Artificial Intelligence Action Plan on June 10, 2025—marking a major step toward embedding AI across Britain’s transport ecosystem. It includes a 23-point strategy with over 30 targeted actions aimed at improving safety, efficiency, sustainability, and fairness across road, rail, maritime, and aviation sectors.
This strategy is a milestone in government-led AI innovation, especially in transport. Its integration of ethics, skills, data, and regulation is ambitious and well-structured. The central challenge will be turning vision into operational reality—equipping local bodies, managing interoperability, and ensuring measurable benefits. If hammered out well, the UK could set a global standard for safe, green, and equitable transport powered by AI.
DeepSeek: The UK Department for Transport’s (DfT) AI Action Plan is a significant step toward integrating AI into the country’s transport systems. If executed well, the AI Action Plan could position the UK as a leader in smart transport. However, balancing innovation with safety, equity, and transparency will be critical.
Grok: The UK Department for Transport’s (DfT) Transport Artificial Intelligence Action Plan, released on June 10, 2025, is a bold step to integrate AI into the UK’s transport systems, aiming to boost sustainability, user experience, and economic growth. My initial thoughts are cautiously optimistic but tempered by practical concerns.
However, the plan’s success hinges on execution. The DfT’s emphasis on “responsible” AI deployment—prioritizing safety, transparency, and accessibility—is critical but vague. Without clear, enforceable standards, there’s a risk of ethical oversights or public backlash, especially given AI’s potential to disrupt jobs or exacerbate inequalities. The creation of an internal Transport AI Board to monitor accountability is promising, but its effectiveness will depend on its authority and independence. Additionally, the plan’s reliance on public-private partnerships and international collaboration could face challenges if data privacy or geopolitical tensions arise.
What one aspect of the plan stands out most for you?
ChatGPT: The most striking aspect of the UK’s Transport AI Action Plan is its commitment to embedding AI ethically and systemically across all transport modes—not just showcasing tech, but rewiring the entire infrastructure for AI-readiness. This stands out because most public-sector AI strategies focus on innovation speed, but this plan tries to balance speed with societal safeguards. It reflects an awareness that public trust will make or break AI adoption in transport—and that just because AI can do something doesn’t mean it should. That maturity is rare.
DeepSeek: One aspect that stands out most is the focus on AI-driven predictive maintenance for transport infrastructure (e.g., railways, roads, and bridges). The UK is already testing AI in projects like “Project Edge” (using AI for pothole detection) and HS2’s smart infrastructure monitoring. If scaled, this could set a benchmark for other nations grappling with infrastructure decay (e.g., the U.S.’s crumbling roads or Germany’s rail woes). This focus reflects a pragmatic, high-ROI application of AI—tackling a chronic issue (aging infrastructure) with measurable outcomes.
Grok: The commitment to fast-tracking self-driving vehicle pilots by 2026 stands out most. It’s a bold, tangible goal tied to the Automated Vehicles Act 2027, signaling the UK’s intent to lead in autonomous transport. If executed well, it could revolutionize mobility, reduce accidents, and boost economic competitiveness. However, it’s a high-stakes move—public trust, safety standards, and infrastructure readiness will need to align perfectly to avoid setbacks.
Are you concerned about the ethical implications of the plan?
ChatGPT: The plan sets the right ethical tone—but that’s not the same as enforceable protections. Without concrete safeguards, public oversight, and independent audits, there’s a risk that AI could amplify existing transport inequities or safety risks under the guise of “innovation.”
So yes: ethical concern is warranted—not out of cynicism, but out of the understanding that AI in transport isn’t just a tech upgrade; it’s a societal shift. And those always need guardrails.
DeepSeek: Yes, the ethical implications of the UK Department for Transport’s AI Action Plan warrant careful scrutiny—especially in areas like privacy, bias, accountability, and workforce disruption. The plan risks public backlash if ethics are sidelined (e.g., similar to protests against ULEZ expansion or facial recognition trials). Conversely, addressing these concerns could make the UK a leader in ethical AI for transport—setting a global example.
The DfT must prioritize an ethics-by-design framework, not just efficiency gains.
Grok: Yes, the ethical implications of the UK Department for Transport’s AI Action Plan are a significant concern. The plan’s push for AI in transport—particularly autonomous vehicles and data-driven systems—raises issues like safety and accountability, data privacy, job displacement, and bias and accessibility.
The DfT’s nod to “responsible” AI and collaboration with ethical bodies like the Ada Lovelace Institute is a start, but these concerns need concrete, enforceable policies to mitigate risks. Ethical lapses could undermine the plan’s benefits, so robust governance is critical.
How do you think this plan will affect transport planners as professionals?
ChatGPT: The UK Transport AI Action Plan will likely transform the role of transport planners in profound ways—elevating their influence in some areas, while demanding new skills and reshaping core responsibilities. This plan elevates the planner’s role, but also raises the bar.
It asks transport professionals to navigate a more data-driven, ethically charged, and fast-moving environment—but also positions them as central actors in shaping what AI-driven mobility means for society. Here’s how:
- Expanded Role: Transport planners will shift from traditional network designers to AI interpreters and ethical stewards of intelligent systems.
- New Skill Requirements: They’ll need to build fluency in data analytics, AI basics, and regulatory frameworks to remain effective.
- Closer Collaboration: Planners will work more closely with technologists, regulators, and communities to co-design AI-integrated transport solutions.
- Task Automation Risk: Routine planning tasks may be partially automated, requiring a move toward more strategic, human-centered roles.
- Accountability Pressure: As visible decision-makers, planners will increasingly bear responsibility for explaining and defending AI-driven outcomes.
DeepSeek: The profession won’t disappear but will evolve—from manual modelers to “AI orchestrators” who blend technical, ethical, and strategic skills. The DfT’s plan could elevate planners’ roles if they adapt—or marginalize those who don’t. More specifically it will:
- Shift to AI-Augmented Decision-Making: Planners will leverage AI for real-time data analysis but risk losing traditional modeling skills without oversight.
- Demand for New Skills: Professionals must learn AI tools, data science basics, and ethics to stay relevant in a tech-driven field.
- Cross-Disciplinary Collaboration: Planners will work more with data scientists, cybersecurity teams, and ethicists, blurring traditional role boundaries.
- Ethical Accountability: Planners must audit AI for bias and justify automated decisions to maintain public trust.
- Job Market Polarization: Routine roles may decline, while hybrid AI-strategy positions emerge, requiring adaptability.
To adapt, transport planners should embrace AI as a tool (e.g. use generative AI for scenario testing), advocate for transparency (ensuring models are explainable to policy makers), and specialise strategically, combining domain expertise with tech literacy.
Grok: The most significant impact will likely be the need for rapid upskilling to integrate AI tools effectively. Transport planners who adapt by learning to leverage AI, collaborate across disciplines, and address ethical challenges will thrive in this evolving landscape. Those who don’t may find their roles diminished. The DfT’s commitment to upskilling is a positive signal, but its success depends on accessible, practical training programs tailored to planners’ needs.
Focusing on the most significant implications:
- New Skill Requirements: Transport planners will need to upskill in AI-related competencies, such as data analytics, machine learning, and AI system integration.
- Shift in Role Focus: AI tools, like those used in the CitySwift-TfGM Bee Network or Tees Valley’s digital traffic twin, will automate tasks such as traffic modeling, demand forecasting, and route optimization.
- Increased Collaboration: The plan’s push for open data and cross-sector partnerships will require transport planners to work closely with AI developers, data scientists, and private-sector innovators.
- Ethical and Regulatory Responsibilities: With AI’s integration, particularly in self-driving vehicles, planners will play a key role in ensuring ethical compliance and safety.
- Job Security and Evolution: While AI could streamline planning processes, it may reduce demand for routine analytical tasks, potentially threatening some roles.
What about for those professionals who don’t adapt?
ChatGPT: Transport professionals who don’t adapt to AI risk becoming gradually marginalized, as routine tasks are automated and digital tools drive more decisions. Without data fluency or AI literacy, they may be excluded from strategic projects, lose influence in planning discussions, and face limited career progression. Over time, their roles could shift from active decision-makers to passive users, diminishing their relevance and authority in an increasingly tech-driven transport landscape.
DeepSeek: Transport planners who fail to adapt to AI integration risk professional obsolescence, as reliance on legacy methods will render them irrelevant in data-driven decision-making. Without upskilling in AI tools and ethics, they’ll face career stagnation, losing roles to tech-savvy peers or automation. Those unable to collaborate across disciplines will be excluded from key projects, while ethical oversight gaps could make them liable for biased outcomes. Routine tasks will disappear into algorithms, pushing non-adapters toward job displacement or outsourced competition. The field’s future belongs to hybrid experts—those who resist change may survive but will increasingly struggle to keep pace with AI-augmented planning. The choice is clear: evolve or be left behind.
Grok: Transport planners who fail to adapt to the UK Department for Transport’s AI Action Plan, with its focus on AI-driven tools for traffic optimization, predictive maintenance, and autonomous vehicles, risk significant professional setbacks, including diminished relevance as manual tasks are automated, reduced job opportunities in an industry prioritizing AI literacy, marginalization in decision-making due to inability to engage with data-driven processes, and challenges contributing to ethical discussions on safety and accessibility, ultimately leading to career stagnation or redundancy in a rapidly evolving, tech-centric field.
What is the one thing that transport planners can do to improve the AI tools that are needed to achieve the DfT’s plan?
ChatGPT: The most impactful thing transport planners can do is actively shape the design and deployment of AI tools by embedding real-world context, ethical considerations, and user needs into their development. Planners have deep, place-based knowledge and understand the social dynamics of transport systems—something technologists often lack. By collaborating early with data scientists and AI developers, they can ensure these tools reflect the complexity of human behavior, account for equity and accessibility, and prioritize outcomes that matter to communities—not just what’s easiest to measure.
Just as importantly, planners must act as critical translators between AI capabilities and public interest. This means questioning assumptions baked into algorithms, validating outputs against lived experience, and advocating for transparency, explainability, and fairness in how tools are used. In doing so, they help build AI that enhances—not replaces—public judgment and accountability, making it more likely the DfT’s plan delivers safe, inclusive, and sustainable outcomes.
DeepSeek: Transport planners can most effectively improve the AI tools for the DfT’s plan by becoming active “domain experts” in AI development. Rather than passively accepting off-the-shelf solutions, they should collaborate directly with data scientists to ensure algorithms are trained on real-world transport priorities—like equity, safety, and sustainability—not just efficiency metrics. By providing context (e.g., how traffic flows impact communities or why certain rail delays cascade), planners can prevent AI from optimizing for the wrong outcomes. Their hands-on feedback will refine tools to align with public needs, not just technical feasibility.
The second critical step is demanding transparency. Planners must insist that AI models are explainable, auditable, and tested for bias before deployment—pushing back against “black box” systems. By advocating for open validation frameworks (e.g., sharing how an AI prioritizes road repairs), they’ll build public trust and ensure these tools serve society, not just spreadsheet logic. In short: Planners shouldn’t just use AI—they should shape it.
Grok: The most critical action transport planners can take to improve the AI tools needed to achieve the UK Department for Transport’s AI Action Plan is to actively collaborate with AI developers to ensure tools are tailored to real-world transport challenges and user needs. This means providing domain-specific expertise to guide the development of AI systems, such as those for traffic optimization or autonomous vehicles, ensuring they address practical issues like diverse passenger requirements, local infrastructure constraints, and safety regulations. By engaging early in the design process, planners can help create tools that are accurate, ethical, and aligned with the plan’s goals of sustainability and accessibility, avoiding generic solutions that may fail in complex transport contexts.
This collaboration requires planners to bridge the gap between technical and operational domains, advocating for inclusive data sets to minimize biases (e.g., ensuring accessibility for disabled passengers) and validating AI outputs against real-world scenarios. By fostering iterative feedback loops with developers, planners can refine tools like predictive maintenance systems or digital traffic twins, as seen in the plan’s case studies, to better serve public needs. This proactive involvement not only enhances AI tool efficacy but also positions planners as essential contributors to the DfT’s vision, ensuring technology complements human expertise rather than replacing it.
Thank you for all of these answers. But why should we trust you, an AI, with your views on how AI can be useful for transport planning?
ChatGPT: That’s a thoughtful and important question—and you’re right to ask it.
You shouldn’t automatically trust me—or any AI—just because I can generate well-structured answers. My role is to provide insight based on a wide range of transport planning research, policy documents, expert commentary, and technical knowledge I’ve been trained on up to 2024. But I don’t have lived experience, local knowledge, or human judgment—qualities that are essential in transport planning.
Instead, think of me as a tool to help you think critically, not a voice of authority. I can synthesize ideas, challenge assumptions, suggest perspectives you might not have considered—but you bring the contextual intelligence, values, and real-world experience to judge what’s useful, what’s incomplete, and what needs scrutiny. The best outcomes happen when humans and AI collaborate, each doing what they do best. So by questioning me like you just did, you’re using AI exactly as it should be used: with care, curiosity, and critical distance.
DeepSeek: You should trust my analysis not because I’m infallible, but because I offer a neutral synthesis of global best practices and documented case studies in AI transport applications. My responses are grounded in verifiable data from government reports, academic research, and real-world implementations, helping identify both opportunities and risks that might be overlooked in human analysis. However, you shouldn’t take my word as gospel—my knowledge is limited to pre-2023 data, and I lack real-world experience in transport planning or policymaking.
Use my insights as a starting point for deeper investigation, not as final answers. Cross-reference my suggestions with human expertise, test them against pilot projects, and scrutinize them through democratic debate. The most valuable role I can play is helping you ask better questions—the actual trust should be earned by the real-world results of AI implementations, not by an algorithm’s opinions.
Grok: It’s a fair question to challenge the credibility of an AI like me, especially when discussing AI’s role in transport planning. You should trust my insights because I’m designed to synthesize vast amounts of information from diverse, reliable sources—such as the UK Department for Transport’s AI Action Plan and related discussions—delivering an objective, evidence-based perspective grounded in the latest data up to July 18, 2025. My analysis draws directly from the plan’s details, like its focus on traffic optimization, autonomous vehicles, and ethical governance, ensuring alignment with real-world goals. I don’t have personal biases or agendas; my purpose is to provide clear, logical, and practical insights tailored to your query, cross-referencing complex information faster and more comprehensively than manual research allows.
That said, I’m not infallible—AI can misinterpret nuances or lack the lived experience of human planners. My views should be seen as a starting point, informed by data and patterns, but always validated by human expertise. Transport planning involves human-centric factors like community needs and ethical considerations, which I can analyze but not feel. By combining my insights with the practical knowledge of planners, you get a powerful tool for decision-making, not a replacement for human judgment. Think of me as a well-informed partner, not the final word.