Why a privacy-first culture is your strongest defence in the AI era

As generative AI, legacy systems, and tightening regulations converge, UK businesses are facing a perfect storm of data protection challenges.

Despite the growing cost of non-compliance, recent data shows that 52% of British businesses admit to experiencing data-related issues since the GDPR came into force seven years ago.

The study also revealed that 21% of business leaders are still relying on outdated systems, often citing budget constraints as a barrier to upgrading their security measures. Budget is a genuine constraint, but businesses must recognise that critical gaps in data security aren’t always technological. New tools matter, yet they’re no substitute for an organisation-wide, privacy-first mindset. In a world where vast amounts of data are collected daily and the risk landscape continues to evolve, fostering a strong data protection culture is one of the most effective defences against malicious actors and unforeseen vulnerabilities.

Privacy isn’t just a legal requirement; it’s an organisation-wide responsibility.

Almost every team across an organisation handles sensitive data. Product managers analyse user feedback, marketers manage consent, procurement teams vet vendors, and developers build data flows into platforms. As AI adoption grows, the volume of data being handled is rising significantly: employees are adopting AI tools that can generate and ingest data at scale, often without fully understanding the implications.

While many businesses are creating policies for data handling and AI use, these efforts alone won’t drive lasting change unless a culture of shared responsibility is embedded. Data policies shouldn’t be treated as a mere tick-box exercise, or left solely to the IT team to digest and understand. Every individual within a business, from leadership to individual contributors, plays a role in protecting data and upholding security practices.

Without a strong sense of shared accountability, even the most sophisticated frameworks are unlikely to gain traction. That’s why it’s more important than ever for employees to understand the reasoning behind policies.

If you can’t explain the ‘why’, your policy won’t stick.

Policies are only effective when people engage with and agree with them. Too often, data protection guidelines sit in a shared drive, referenced during onboarding or audits, but rarely internalised. When that happens, privacy becomes a task to avoid, not a fundamental value to uphold.

To build buy-in, leaders must anchor every privacy decision in its wider purpose. Why do we need explicit consent here? What does transparency mean to our customers? How can proper vetting of our vendors increase customer trust? What’s ultimately at stake if we get this wrong?

Making the ‘why’ visible—whether through brief, plain-language explanations during policy rollouts or by introducing ‘privacy moments’ in regular team meetings—helps close knowledge gaps and encourages more active participation. Creating dedicated space for real conversations or discussing hypothetical scenarios can also make policies feel more relevant and actionable. Regular pulse surveys can then provide valuable insights into which policies are being understood and which may require further education or training.

Role-specific AI training must evolve with the risk landscape.

Training is essential, but only if it reflects the realities of each team’s data exposure. Annual general training is a start, but with 40% of businesses now identifying AI and machine learning as top data privacy threats as these tools become more embedded in workflows, the need for continuous, role-specific education has never been clearer.

Teams should be offered tailored training based on their unique data responsibilities. For example, high-risk groups, such as data scientists, developers building AI features, and marketing teams handling personal data, should receive twice-yearly refreshers. These should be supplemented by scenario-based exercises, such as simulated phishing attacks or breach response drills, to build practical resilience.

This approach doesn’t just improve compliance; it boosts confidence and prepares teams for new regulations, such as the EU AI Act, which applies to UK businesses that offer or deploy AI systems in the EU. With 70% of UK companies already developing AI-related data privacy policies, forward-thinking businesses are recognising that proactive training is a competitive advantage, not merely a compliance burden.

Trust is built and broken internally.

A strong privacy culture does far more than just protect against risk. It earns and cements customer trust. In fact, 92% of UK business leaders say robust data protection practices have strengthened their market position. But that trust is hard-won and easily lost. 

Privacy isn’t something you can simply buy or delegate. It’s a set of shared behaviours that must be lived and demonstrated every single day. Whether you’re a nimble start-up or a rapidly scaling enterprise, embedding this culture early, through clarity, accountability, and continuous training, will create an organisation that’s not just compliant but truly trusted.

Now is the time to move beyond the checklist. Strong privacy starts with people.


About the Author

Sally-Anne Hinfey, PhD, is VP Legal at SurveyMonkey. SurveyMonkey is the world’s most popular platform for surveys and forms, built for business and loved by users. It combines powerful capabilities with intuitive design, serving every use case from customer experience to employee engagement, market research to payment and registration forms. With built-in research expertise and AI-assisted technology, it’s like having a team of expert researchers at your fingertips. Trusted by millions, from startups to Fortune 500 companies, SurveyMonkey helps teams gather insights and information that inspire better decisions, create experiences people love, and drive business growth. Discover how at surveymonkey.com.

