Privacy Breakthrough: New AI Model Processes Encrypted Data at Near-Plaintext Speeds

In a landmark move for data security and artificial intelligence, scalable computing firm Cornami and privacy-enhancing technology startup DESILO have announced the first practical deployment of a large language model (LLM) running under Fully Homomorphic Encryption (FHE). Unveiled at the AI Infra Summit 2025, the solution allows AI to process and analyze highly sensitive data while that data remains fully encrypted, resolving a long-standing conflict between data privacy and performance.

The Achievement: Solving the Security Dilemma

For years, using sensitive data for AI training and analysis required decryption, creating a significant vulnerability. This announcement signals a major shift, proving that robust privacy and high-performance AI can coexist. The breakthrough finally makes FHE, a technology once considered too slow for real-world use, a viable solution for enterprises.

  • Privacy-Preserving AI: The model performs AI inference directly on encrypted data, meaning sensitive information is never exposed (a brief code sketch of this workflow follows the list).
  • Unprecedented Speed: It operates at near-plaintext speeds, overcoming the massive performance overhead that has historically crippled FHE applications.
  • Zero-Trust Ready: This “never decrypt” principle is fundamental to modern zero-trust security architectures and data sovereignty compliance.
  • Post-Quantum Security: The underlying lattice-based cryptography is believed to be secure against attacks from future quantum computers.
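
To make the "never decrypt" workflow concrete, here is a minimal sketch using the open-source TenSEAL library and the CKKS scheme, chosen purely as an illustrative stand-in rather than the stack Cornami and DESILO actually deployed. The data values, variable names, and parameters are hypothetical; the point is only that arithmetic happens on ciphertext, and only the holder of the secret key can read the result.

```python
# Illustrative only: TenSEAL/CKKS as a stand-in for an FHE workflow.
# The Cornami/DESILO deployment uses its own hardware and software stack.
import tenseal as ts

# The data owner creates an encryption context and keeps the secret key.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Sensitive values are encrypted before they leave the owner's environment.
salaries = ts.ckks_vector(context, [52_000.0, 61_500.0, 48_250.0])

# An untrusted server can compute on the ciphertext without seeing the data:
# here, a 3% raise plus a fixed bonus, applied entirely under encryption.
adjusted = salaries * 1.03 + [1_000.0, 1_000.0, 1_000.0]

# Only the holder of the secret key can decrypt the result.
print(adjusted.decrypt())  # ≈ [54560.0, 64345.0, 50697.5]
```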

Technical Details

The core of this innovation lies in the synergy between hardware and software. Cornami’s unique computing fabric provides the necessary processing power to handle the intensive calculations required by FHE.

The key algorithmic innovation is plaintext-ciphertext matrix multiplication (PCMM), a technique pioneered by Cornami’s Chief Scientist, Dr. Craig Gentry, who is widely regarded as the “father of FHE.” This method dramatically accelerates the matrix multiplication operations that constitute over 90% of an LLM’s computational workload, closing the performance gap between encrypted and unencrypted processing.
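
To show what a plaintext-ciphertext product looks like from an application’s point of view, the sketch below again uses TenSEAL and CKKS as a stand-in; the deployed PCMM algorithm and Cornami’s hardware are not reproduced here, and the weight matrix, activation values, and dimensions are hypothetical. The model’s weights stay in plaintext on the server, while the client’s activation vector arrives encrypted and is multiplied without ever being decrypted.

```python
# Illustrative sketch of plaintext-ciphertext matrix multiplication (PCMM)
# using TenSEAL/CKKS; only the general idea of the deployed technique is
# assumed, not its actual algorithm or hardware.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # rotation keys needed for the matmul

# Server side: a plaintext weight matrix, e.g. one linear layer of a model
# (4 input features -> 2 output features). Hypothetical values.
weights = [
    [0.12, -0.40],
    [0.55,  0.10],
    [-0.30, 0.25],
    [0.05,  0.90],
]

# Client side: the activation vector is encrypted before it is sent.
enc_activations = ts.ckks_vector(context, [1.0, 2.0, 3.0, 4.0])

# The server multiplies the ciphertext by its plaintext weights; the input
# and the result remain encrypted throughout.
enc_output = enc_activations.matmul(weights)

# Only the client, holding the secret key, can decrypt the layer output.
print(enc_output.decrypt())  # ≈ [0.52, 4.15]
```

In a full LLM, products of this shape recur in every attention and feed-forward layer, which is why accelerating this single operation addresses the bulk of the encrypted workload.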

Impact and Applications

This technology unlocks the potential of sensitive datasets in industries where privacy is non-negotiable. In healthcare, researchers can analyze confidential patient records and clinical trial data to derive insights without ever compromising individual privacy. Financial institutions can use it to detect fraud across encrypted transaction data, and government agencies can collaborate securely on sensitive intelligence.

Future Outlook

This successful deployment moves privacy-preserving AI from a theoretical concept to a practical tool. DESILO is already planning to integrate this capability into its upcoming HARVEST platform, which is designed to support global healthcare partners. The collaboration proves that enterprises no longer need to choose between extracting value from their data and upholding the highest security standards.

The ability to compute on encrypted data is set to become a foundational element of the next generation of AI infrastructure. This breakthrough not only enhances security but also enables safer data collaboration, paving the way for more powerful and responsible AI systems across every industry.