Zero Trust in Generative AI: An Overview
The smartest AI strategies start with a simple principle: never trust, always verify 💡

Introduction
Generative AI, especially large language models (LLMs), is rapidly being adopted in enterprises globally for boosting productivity and unlocking new capabilities. According to the 2025 Stanford AI Index Report and McKinsey’s State of AI 2025 survey, around 78% of organisations now use AI in at least one business function — up from 55% just a year earlier. Generative AI adoption has similarly surged, with 71% of enterprises reporting regular use in 2024, a dramatic leap from 33% in 2023 Stanford HAI 2025 ; McKinsey 2025
South African businesses mirror these global trends in AI adoption, reflecting no exception to the growing enthusiasm for generative AI to enhance business operations. The rise in adoption is accompanied by strategic investments and operational scaling, which highlights the importance for organisations to develop comprehensive AI strategies to reap benefits while managing associated risks.
These trends are confirmed by leading industry research — the Stanford AI Index Report (2025) and McKinsey’s State of AI (2025) both document a sharp rise in enterprise AI adoption, with most companies now applying AI across multiple functions rather than isolated pilots. CIO budgets for AI have expanded accordingly, reflecting the move from experimentation to full-scale operational integration.
Therefore, enterprises in South Africa and worldwide are embracing generative AI at a fast pace, aiming to leverage its productivity potential and innovative capabilities, with expected continued growth in adoption rates through 2026 and beyond.
Companies are experimenting with tools like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini to gain competitive advantage. But alongside the opportunities come significant security and privacy concerns.
Business leaders are now grappling with critical questions:
- How do we enable generative AI for value creation while safeguarding sensitive data?
- How do we comply with regulations like POPIA and GDPR?
- How do we prevent AI from becoming a source of leaks, compliance violations, or reputational damage?
One answer lies in applying Zero Trust principles to AI. The idea of “never trust, always verify” every user, device, and application becomes crucial when an AI system might access or output confidential information.
This article explores how Zero Trust applies to Generative AI:
- The core principles adapted to AI systems
- Key risks and real-world incidents
- Regulatory pressures in South Africa and globally
- Best practices and safeguards for leaders
Zero Trust Principles for Generative AI
As Check Point Software explains, the Zero Trust model assumes that no user, device, or component is inherently trustworthy. Every access request must be verified, and the principle of least privilege enforced.
In the context of AI, this means treating LLMs as untrusted actors - systems capable of producing valuable outputs but also of introducing risk if not properly isolated and monitored. As Check Point notes, “LLM-powered applications blur the line between user and application.” An AI agent may autonomously fetch data, execute actions, or generate outputs that influence business decisions — all of which demand continuous verification and containment.
Key Zero Trust principles for AI include:
- Continuous Verification
- Validate inputs and outputs.
- Inspect prompts and completions to prevent leakage or misuse.
- Apply filters and human review before outputs are trusted.
- Least Privilege
- Give AI agents only the access they need, no more.
- Configure chatbots with the same limits you’d give to a junior employee.
- Strict Isolation
- Sandbox AI environments.
- Block internet access unless absolutely required.
- Apply URL filtering for browsing-enabled models.
- Assume Breach & Monitor Behaviour
- Log AI activity in real time.
- Flag anomalies (e.g., sudden mass downloads).
- Deploy AI-specific monitoring tools.
💡 Think of your AI as a powerful but naïve intern. You wouldn’t give them full access to everything on day one.
The Risks of Generative AI
Generative AI introduces unique risks across data security, IP, compliance, and integrity. Let’s break them down.
1. Sensitive Data Leakage
- User input risk: Employees may paste confidential code or client data into public AI tools. In 2023, Samsung engineers leaked proprietary code into ChatGPT while troubleshooting (Mashable).
- Training data memorisation: LLMs can regurgitate personal or copyrighted data from training sets (Future of Privacy Forum).
- Provider breaches: In March 2023, a ChatGPT bug exposed users’ chat history and billing info (Help Net Security)
📌 From a POPIA/GDPR perspective, these count as data breaches.
2. Intellectual Property Loss
- Employee misuse: Well-meaning staff can leak trade secrets, as in the Samsung case.
- Model outputs: AI might reproduce copyrighted code or text without attribution. GitHub Copilot has faced criticism for this.
- Industry response: Banks like JPMorgan and Goldman Sachs banned ChatGPT for compliance and IP protection
💡 If you wouldn’t email your IP externally, don’t paste it into a chatbot.
3. Malicious Use and Integrity Attacks
- Prompt injection: Attackers trick AI into revealing secrets or bypassing rules.
- Data poisoning: Attackers seed training data with backdoors.
- Model vulnerabilities: Bugs or unsafe plugins can lead to compromise.
Real-world caution: A lawyer used ChatGPT for legal research, only to submit fabricated case law to court – a reputational and professional disaster.
4. Compliance Pressures
- South Africa: POPIA restricts cross-border data transfers and enforces strict data security.
- Europe: GDPR plus the upcoming EU AI Act, which introduces obligations for foundation models.
- Global: OECD AI Principles, U.S. AI Bill of Rights, and sectoral rules in finance and healthcare.
Managing Risks: Cloud vs On-Premises
Organisations face a choice in deploying AI: trust cloud providers, go private, or take a hybrid approach.
Cloud AI (AWS, Azure, Google)
- Privacy by design: No training on customer data.
- Data residency: Regional hosting for compliance.
- Isolation: Confidential computing and encryption (AWS Nitro, Azure Confidential Computing).
- Shared responsibility: Providers secure the base service; you secure integration.
On-Premises AI
- Maximum control: Data never leaves your environment.
- Customisable: Use open-source LLMs (e.g., LLaMA 2) fine-tuned on proprietary data.
- High cost: Significant infrastructure and expertise needed.
- Hybrid trend: Sensitive workloads stay private; less sensitive tasks go cloud.
Real-World Incidents
- Samsung (2023): Employees leaked source code into ChatGPT (Mashable).
- ChatGPT bug (2023): Exposed chat history + billing info (Help Net Security)
- Italy ban (2023): Privacy regulator blocked ChatGPT until fixes were made (Business Insider).
- Apple & JPMorgan: Banned staff from using ChatGPT for sensitive tasks (COSMICO)
📌 Lesson: Most incidents weren’t hackers – they were insiders misusing tools without guidance.
Safeguards and Best Practices
To responsibly deploy GenAI, organisations need policy, training, and technical controls.
Policy & Governance
- Draft AI Acceptable Use Policies (no personal/financial data in prompts).
- Use data classification to guide what can/can’t be shared.
- Treat AI vendors as critical suppliers – review contracts and security guarantees.
Training & Awareness
- Educate employees: AI ≠ safe space.
- Share real examples (Samsung leak, hallucinating lawyer).
- Encourage incident reporting without fear.
Technical Controls
- DLP to prevent sensitive data leaving via AI.
- Logging & monitoring of prompts/responses.
- Strong IAM (MFA, role-based access).
- Encrypt stored and transmitted data.
- Sandbox AI integrations.
Future Tools
- Differential privacy for training.
- Content watermarking & authenticity markers.
- Confidential computing (AWS Nitro, Azure Confidential).
Conclusion
Generative AI is transformative, but trust in AI must be earned, not assumed.
By applying Zero Trust principles, South African and global organisations can:
- Reduce risks of data leaks and IP loss
- Strengthen compliance with POPIA, GDPR, and beyond
- Confidently harness AI for growth without sacrificing customer trust
Key takeaways for leaders:
- Treat AI as an untrusted actor until verified.
- Combine policy + training + technology for resilience.
- Learn from global incidents and adapt fast.
- Cloud can help, but hybrid is often the sweet spot.
👉 The organisations that thrive in the GenAI era will balance innovation with governance, ensuring security and trust remain at the heart of adoption.