Thales urges focus on education & security for the success of AI

AI is prominent on almost every business agenda right now, and nearly every industry is working to identify ways to harness its potential. According to Thales, the benefits can be significant, but they can only be realised with a clear sense of how AI will be used and the outcomes it should drive, as well as by adequately securing the AI itself.

When electricity first entered homes in the late 1800s, it reshaped society in ways no one could have fully anticipated. Today, Thales argues that Artificial Intelligence (AI) stands at a similar threshold. No longer an emerging technology, AI is becoming as transformative to our daily lives as electricity once was, powering everything from medical diagnoses to mission-critical defence systems.

Yet, while public discourse often centres on ethics, misinformation, and the future of work, one vital issue remains underexplored, and it is this issue that the company urges us to consider – the security of AI itself.

Earlier this month, at the World Knowledge Forum in Seoul, Thales had the opportunity to contribute to this critical conversation—highlighting not only the challenges but also the opportunities that lie ahead.

The benefits of AI are undeniable. However, the pace of adoption has outstripped the global readiness to secure it. Consider the findings from Gartner’s 2024 Guardians of Trust Survey:
● 83% of banking cyber security executives admit they cannot keep up with AI-powered cyber criminals.
● 74% of leaders are aware of sensitive data being fed into public AI models.
● Yet only 20% of organisations feel very prepared to defend against AI-driven attacks.

This is the readiness gap we must urgently close. The question, according to Thales, is no longer whether AI will shape the future. It already is. The real question is: can it be secured well enough to be trusted?

Governments are moving quickly, but differently. Europe’s AI Act is value-driven and strict, with penalties of up to 7% of global turnover. The U.S. is pursuing a fragmented, sectoral approach. The UK has opted for pragmatism, though with less clarity.

This patchwork reflects geopolitical realities and nuances, but it also creates complexity for global enterprises. One constant remains: security and sovereignty will be non-negotiable.

Thales urges us to consider that without robust protections, AI is a gamble. With them, it becomes a catalyst for innovation. For example:
● Enterprise AI Assistants can boost productivity by reading emails, analysing documents, and generating meeting minutes, provided they are built with strict data classification and encryption controls (a minimal sketch of such controls follows this list).
● Identity Protection is critical as AI-generated morphing attacks on passports become a reality. Detection algorithms must outpace attackers.
● Agentic AI – AI that acts autonomously – raises profound challenges around authentication, trust, and control.
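
To make the first of these points concrete, the sketch below shows, in Python, one way an organisation might apply simple data classification rules to a draft prompt and redact detected sensitive values before they ever reach an enterprise AI assistant. It is a minimal illustration under assumed rules, not a Thales product or recommendation: the pattern names, regular expressions, and helper functions (classify, redact) are hypothetical, and a real deployment would layer encryption, access controls, and far richer detection on top.

```python
# Hypothetical sketch: apply simple data classification rules to a prompt and
# redact sensitive values before it is sent to an enterprise AI assistant.
import re

# Rough detectors for two common sensitive fields; real deployments would use
# organisation-specific classification rules and far richer detection.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set[str]:
    """Return the labels of any sensitive data detected in the text."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = ("Summarise my thread with jane.doe@example.com and note that the "
             "invoice was paid with card 4111 1111 1111 1111.")
    found = classify(draft)
    if found:
        print(f"Sensitive data detected ({', '.join(sorted(found))}): redacting before sending.")
    print(redact(draft))
```

Run on the sample prompt, the script flags the email address and card number and prints a redacted version, which is the only form that would leave the organisation's boundary.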

The company maintains that technology alone cannot secure the future of AI. Human error remains the Achilles’ heel of any cyber security system. That’s why education is indispensable.

Organisations must empower employees to use AI responsibly, recognising sensitive data, applying classification rules, and critically evaluating outputs. Continuous training is essential, as AI evolves too rapidly for static compliance checklists. At a societal level, citizens must be equipped to distinguish between AI-generated content and genuine information. Without this literacy, the resilience of the digital ecosystem is at risk.

Governments and institutions also need to invest in cross-disciplinary expertise. Policies and regulations are only as strong as the understanding behind them. Thales calls this vision Trusted AI, interpreted as intelligence that is powerful yet explainable, innovative yet ethical, and above all, secure by design. Achieving this depends as much on people as it does on algorithms.

AI and cyber security are not separate disciplines; they are deeply interwoven. As the ancient Greek philosopher Anaxagoras said, “everything is in everything.” That is why the future of AI is not simply a technological challenge: it is a human one.

According to Thales, the readiness gap can be closed. The regulatory complexity can be navigated. The technical opportunities are immense. But without education, all progress remains fragile. The real question is not whether AI will shape the future. It will. The question is: will we prepare people, as well as systems, to ensure AI remains a tool for resilience rather than a vector of risk?