Paula Claytonsmith looks at how trust is now the make-or-break factor for intelligent transport and infrastructure technology. In this article, she sets out what closes the trust gap in practice and what happens when governance and communication lag behind deployment.
In my previous article, I argued that the future of intelligent systems depends not on technical brilliance alone, but on whether people genuinely trust them. That gap is already shaping which innovations succeed and which quietly disappear after the pilot phase. This follow-up explores what closing it looks like in practice, with examples of where mistrust has cost us, and where getting it right has made a difference.
Trust breaks fastest when systems work technically but feel unpredictable, unaccountable, or imposed.
Adaptive traffic signal control systems in urban areas across the UK offer a familiar pattern. Technically, these systems perform well, reducing journey times, smoothing peak flows and cutting emissions.
And yet, in multiple cases (though not all), residents have pushed back hard. Signals change in ways that feel unpredictable, and local knowledge is seemingly overridden without explanation. Where authorities have not communicated what the systems do, or how complaints are handled, residents fill that vacuum with suspicion. Systems have been scaled back not because they failed technically, but because the narrative around them collapsed.
European C-ITS and connected vehicle trials show a similar dynamic. Deployments under the EU-funded SHOW and InterCor programmes have reported safety-focused outcomes and promising results in pilots. But where public communication is treated as secondary, minor incidents can become major trust events; the story shifts rapidly from "innovation" to "risk".
Where organisations publish data, document decisions and communicate limits, scrutiny becomes manageable rather than fatal.
Transport for London’s open data approach offers a counterpoint. TfL publishes extensive operational performance data, including reliability, passenger volumes, road collisions and payment use.
It also sets out transparency expectations for commercial partnerships, including routinely publishing relevant contracts. The framework is not perfect, but TfL’s consistency means it has something to stand on when scrutiny arrives: governance in place before the questions are asked.
Tampere, Finland, has taken a similarly deliberate approach to autonomous shuttles. Since 2020, the city has run successive pilots, integrating automated vehicles as tram feeder services and working openly with its transport operator, Nysse, and Tampere University. In November 2025, Tampere launched Finland’s first commercially operated automated bus service. Operational challenges (harsh winter conditions, kerbside and parking constraints) have been discussed openly rather than buried in vendor communications. That honesty is why confidence has grown.
If authorities cannot explain how automated decisions are made and corrected, public confidence is undermined before benefits are realised.
Intelligent systems are often procured faster than the governance frameworks that should accompany them. The technology moves at commercial pace; accountability moves at institutional pace. In my view, that gap is where trust goes to die.
AI-assisted road monitoring tools are accelerating across the UK and Europe, yet many local authorities have limited visibility into how the algorithms make decisions, or, more importantly, how errors are corrected. When contracts lack transparency requirements and procurement teams cannot interrogate vendor claims, trust failure is baked in.
The EU AI Act’s tiered risk framework, adopted in 2024, is a step forward, but implementation will take years. It also gives the UK much to think about. Decisions made now will shape public confidence long before the regime is fully embedded.
A system that protects only the people with the right device is not a public safety system.
These gaps are not theoretical. They show up in live networks when systems fail, and organisations are forced to explain themselves after the fact.
Many V2X deployments assume users carry compatible devices. A child walking to school does not necessarily carry a V2X device; neither does an elderly pedestrian.
When smart systems serve some people better than others, communities draw reasonable conclusions about whose safety is being prioritised, and those conclusions are hard to reverse. Research through programmes such as SmartDENM has demonstrated prototype infrastructure-side detection approaches that can identify pedestrians without device dependency (including in controlled testing). The challenge now is procurement commitment, not technical feasibility.
When failures happen, the deciding factor is not whether the system was perfect, but whether the operator is transparent and accountable.
The National Highways smart motorway story is instructive (and still painful for many). Between June 2022 and February 2024, Freedom of Information data obtained by BBC Panorama showed nearly 400 incidents in which smart motorway technology lost power, sometimes leaving key safety systems (including stopped-vehicle detection) unavailable for days. In February 2023, National Highways reported a software outage that froze signs and signals and disabled stopped-vehicle detection across multiple motorways. The Office of Rail and Road also flagged stopped-vehicle detection performance shortfalls in December 2022. Much of this detail entered the public domain through FOI and media reporting. When Panorama’s investigation aired in April 2024, National Highways was responding from a largely reactive footing; this is one of the hardest positions from which to rebuild trust, even with mitigations in place that put safety first.
These are practical requirements that can be written into procurement, governance and communications, not optional extras.
The next wave of deployment will succeed only if trust is treated as an engineering and management deliverable.
The UK and Europe do not need to win the technology race; they need to win the trust race. That means treating trust as infrastructure: specifying transparency in contracts, designing for people without devices and communicating failures before headlines do. The systems that will define intelligent transport are being deployed now and public confidence will be won or lost, project by project. We already know what good looks like. The question is whether authorities, suppliers and regulators will make it the default.
Tampere autonomous shuttle programme: CIVITAS (November 2025), ‘Tampere Launches Finland’s First Commercial Automated Bus Service’; Interreg Europe Good Practices (February 2024); metaCCAZE project documentation (SHOW mega-site: Tampere).
Transport for London: TfL Transparency Commitment; The Data Economy Lab (2020), ‘Cities & Data Sharing – Part 1: London’.
Device-independent pedestrian detection: SmartDENM project; MDPI Electronics (March 2025).
National Highways smart motorways: Office of Rail and Road, Smart Motorway Safety Report (December 2022); BBC Panorama investigation (April 2024); Fleet News (February 2023), ‘National Highways investigating smart motorway software failure’.
EU AI Act: European Parliament and Council Regulation (EU) 2024/1689, adopted 13 June 2024; entered into force 1 August 2024.
Paula Claytonsmith, Writer | Innovation & Policy Strategist | Speaker
Paula Claytonsmith is a leading writer, policy and innovation strategist working at the nexus of transport, advanced technology and policy in the UK. She advises public authorities, industry and policymakers on highway maintenance, AI, intelligent systems and the real-world adoption and implications of emerging technologies.