AI - September 22, 2025

Public Trust Deficit in Generative AI Puts Brake on Promised Revolution, Report Reveals

Amidst political enthusiasm surrounding artificial intelligence (AI) as a driver of growth and efficiency, a recent study unveils a significant trust deficit in the technology among the general public. This skepticism poses a substantial challenge to government plans, with many viewing AI with suspicion rather than excitement.

The report, jointly conducted by the Tony Blair Institute for Global Change (TBI) and Ipsos, quantifies this sense of unease, revealing that a lack of trust is the primary reason for the public’s reluctance to engage with generative AI. This apprehension isn’t merely theoretical; it serves as a tangible barrier hindering the AI revolution that policymakers are so eager to promote.

The research highlights an intriguing disparity in public perception of AI. On one hand, over half of the population has experimented with generative AI tools within the past year—an impressive rate of adoption for a technology that was scarcely recognized by the general public just a few years ago.

On the other hand, almost half the country remains untouched by AI, whether at home or at work. This divide produces a wide chasm in public sentiment towards AI and its development. The data suggests that familiarity fosters comfort: those who lack positive first-hand experience with AI are more likely to believe sensationalist headlines about the technology.

The report also reveals a generational split in views on AI, with younger individuals generally expressing optimism and older generations displaying caution. Professionals in tech-related fields seem prepared for AI’s advancements, whereas those in sectors like healthcare and education exhibit less confidence, despite their jobs being potentially more affected by AI growth.

The study also sheds light on the role of context in shaping public opinion about AI. People are open to AI solving problems such as managing traffic congestion or improving cancer detection—issues where they can see tangible benefits—but become wary when considering applications like workplace performance monitoring or political ad targeting. This suggests that public concern centres not on the growth of AI itself, but on its purposes and implications.

Respondents also want assurance that AI will be used ethically and responsibly, with regulations in place to prevent big tech companies from exercising unchecked control over the technology. To address this trust deficit, the TBI report offers a clear roadmap for fostering what it terms “justified trust.”

Firstly, governments need to communicate the benefits of AI in a manner that resonates with individuals—focusing on how AI can streamline healthcare services, expedite appointment scheduling, and simplify public service usage. Emphasizing practical applications rather than abstract economic growth promises is crucial.

Secondly, success stories involving AI implementation in public services must be shared to demonstrate tangible improvements for ordinary people, not just increased efficiency as measured by technical benchmarks.

Lastly, strong regulations and training are vital to ensure that AI is used responsibly and safely. Governments must empower regulators with the necessary authority and expertise, while providing accessible training to help everyone use these new tools effectively. The ultimate goal is to create an environment where AI becomes a collaborative tool rather than a force imposed upon us.

By building trust in the people and institutions managing AI, governments can win public support for its growth. If policymakers can demonstrate a genuine commitment to ensuring that AI benefits everyone, they may yet persuade the public to embark on this technological journey with them.