Zama’s Jeremy Bradley discusses why the AI surge is forcing businesses to take privacy more seriously and how new technologies are responding.
Published last month, 2026’s International AI Safety Report made two things clear: while AI deployments were still pilots or limited-scope tools just one to two years ago, AI capability has advanced at lightning speed, and adoption even faster. In fact, more than 700m people now use leading AI systems every week – a rate of uptake that far outpaces earlier technologies such as the personal computer.
For companies witnessing this shift – in both the tech’s capabilities and demand for it – the lure to capitalise on AI is strong. Delivered largely through cloud-based data processing, AI opens the door to everything from automating decisions and extracting value from data at scale to moving far faster than competitors for those that embrace it early. These advantages have already seen many embed it into core workflows – including pricing, decisioning, R&D, legal, healthcare and finance.
The transparency paradox
However, as soon as AI touches core IP and regulated data, businesses hit a roadblock. Because these systems are open by design, businesses struggle to support real-world use cases involving any kind of confidential data (payroll, identity, enterprise finance and so on).
This hasn’t necessarily slowed AI adoption, but it has made it uneven. Specifically, we’ve seen rapid experimentation at the edges, where AI is low-risk, but caution when it comes to AI systems training on or interacting with sensitive, regulated or proprietary data. This has seen many limit production use to narrowly scoped tasks; rely on thinner, sanitised, or synthetic datasets; or keep high-value workloads out of shared cloud environments entirely.
This all, of course, stems from the risk of data exposure, whether via third-party infrastructure, data being reused in ways that are hard to audit, or information being incorporated into models that are difficult to inspect or unwind. Recent high-profile failures that have hit the headlines (data leaks, model-inversion attacks, regulatory enforcement, AI misuse scandals and more) have only strengthened concerns around what happens to sensitive data once it enters an AI system.
Coupled with this, AI governance is moving from abstract policy to fiduciary responsibility. The position of the UK Information Commissioner’s Office (ICO) on AI compliance, for example, is that any organisation using AI to process personal data must comply with data protection law, regardless of how complex or opaque the system is. The European Data Protection Board (EDPB) takes much the same stance.
How confidential AI will become standard
In practice, the above raises huge questions – particularly for regulated industries – around accountability, data residency and compliance with privacy law. But it also leaves businesses balancing pressure to adopt AI quickly against the risk of exposing data they can’t afford to lose or misuse. And it is this gap that privacy-preserving technologies – fully homomorphic encryption (FHE) specifically – are beginning to address.
FHE has long existed as a mathematical theory, promising the ability to compute on encrypted data without ever decrypting it. Until recently, however, its use was limited: implementations were slow, resource-intensive and difficult for developers to integrate into real-world systems.
Several recent breakthroughs, however, have pushed FHE closer to being developer-ready technology fit for large-scale deployment. These include new cryptographic schemes such as CKKS optimisations, which support approximate arithmetic and are more efficient for AI tasks; improved algorithms that have enhanced bootstrapping, drastically reducing the time needed to refresh ciphertexts; and the further optimisation of libraries such as TenSEAL and Concrete, making it easier for developers to deploy FHE at scale. Additionally, hardware acceleration through GPUs and FPGAs has reduced computational demands, while more developer-friendly APIs have made integration into existing workflows more seamless.
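To make that concrete, here is a minimal sketch of what computing on encrypted data looks like with TenSEAL’s CKKS scheme, one of the libraries mentioned above. It assumes TenSEAL is installed (`pip install tenseal`), and the parameter choices and salary figures are illustrative rather than a production-vetted configuration:

```python
# A minimal sketch of encrypted arithmetic with TenSEAL's CKKS scheme.
# Parameters are illustrative, not a vetted production configuration.
import tenseal as ts

# Create a CKKS context. CKKS supports approximate arithmetic on real
# numbers, which is what makes it a good fit for AI-style workloads.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Encrypt a vector of sensitive values (e.g. salaries, in thousands).
plain = [52.0, 61.5, 48.25]
encrypted = ts.ckks_vector(context, plain)

# Compute directly on the ciphertext: apply a 3pc raise and a flat bonus.
# Whoever runs this computation never sees the underlying numbers.
result = encrypted * 1.03 + [2.0, 2.0, 2.0]

# Only the secret-key holder can decrypt the (approximate) result.
print(result.decrypt())  # ≈ [55.56, 65.345, 51.6975]
```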
All of this means that now – for the first time – developers can actually design AI pipelines where confidentiality is guaranteed by the architecture itself, rather than enforced externally. This makes it feasible to extend AI into areas such as payroll, healthcare, finance and other regulated domains, all without compromising privacy – a development that will see confidential AI become standard.
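As a sketch of what such a pipeline can look like, the example below scores an encrypted feature vector against a plaintext linear model using TenSEAL. The weights, features and the notion of a “risk score” are invented for illustration; the point is that the party evaluating the model only ever handles ciphertext:

```python
# A sketch of confidential inference: a linear score computed on an
# encrypted feature vector. Weights and inputs are invented for
# illustration; the CKKS context is set up as in the previous example
# (Galois keys are needed for the dot product).
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Client side: encrypt the features before they leave the device.
features = [0.7, 1.2, 0.05]  # e.g. normalised income, debt ratio, etc.
enc_features = ts.ckks_vector(context, features)

# Server side: evaluate the model on ciphertext. The server learns
# nothing about the inputs it is scoring.
weights = [2.0, -1.5, 4.0]
bias = 0.3
enc_score = enc_features.dot(weights) + bias

# Client side: decrypt the score with the secret key.
print(enc_score.decrypt())  # ≈ [0.1]  (2*0.7 - 1.5*1.2 + 4*0.05 + 0.3)
```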
Who is set to benefit the most?
The companies that succeed in this next phase of digital transformation will not be those that believe they are already doing enough. It also won’t be those that assume privacy can be retrofitted at a later point, and certainly won’t be those that misread today’s relative quiet around privacy as lack of demand (as soon as viable solutions exist, expectations reset very quickly).
Instead, it will be those that treat confidential data as a strategic asset from day one, and who embed privacy by design.
Those in the latter camp will be the first to unlock:
- Access to richer, more sensitive, higher-signal data, simply because customers trust them with it.
- Speed of deployment in sensitive environments, thanks to fewer legal reviews, fewer bespoke controls, fewer internal vetoes. This is when time-to-value becomes a real differentiator.
- Depth of integration and collaboration. Privacy-preserving systems unlock collaboration across organisational boundaries (partners, suppliers, jurisdictions) that was previously impossible. This expands addressable markets rather than just improving margins.
What will happen to privacy by the year’s end?
As of now, the technology is ready and the benefits are clear, but that alone is not enough to flip privacy from a ‘nice to have’ to a board-level requirement for many businesses. For that to happen, a series of pressures will converge.
First, we’ll start to see a few major enterprises and public-sector actors set privacy-preserving architectures as default requirements. This will see the market tip, and the rest will follow quickly through supply chains and platforms.
At the same time, demand for AI will continue to grow, alongside the need for it to operate on sensitive data. In turn, this will see AI regulation continue to mature, and boards won’t ask “is privacy nice to have?” but “can we prove data never leaked?”
Finally, competitive pressure will do the rest. Companies that delay adopting privacy-by-design approaches will begin to see rivals move faster, unlock higher-value data and close deals that remain out of reach.
All of this, I believe, will happen by the end of 2026. By 2027, expectations will have caught up with capability, and the cost of not embedding privacy by design will be visible, measurable and strategic.
Jeremy Bradley is the chief operating officer at Zama. He is a cross-functional and highly tactical leader who has worked with many organisations to shape strategy, drive communications and partnerships, and lead policy and process.