Note: Cyber Protection Magazine rarely writes about individual products unless we cannot find similar products to compare. In this case, we have not found any company providing similar benefits.
DataKrypto, a California cybersecurity company, may be able to stop AI data theft and LLM poisoning using fully homomorphic encryption (FHE) combined with a trusted execution environment, without FHE's usual processing limitations.
Since the emergence of artificial intelligence, theft of intellectual property (IP) and personally identifiable information (PII) has been a major concern for industry and individuals alike.
The hole in encryption
Data can be protected by encryption in most situations: almost 95% of data is encrypted in transit, and about half of all data is encrypted in storage. But data in use, while it is being created or analyzed, is not. Training AI models is a data-in-use process. Without encryption, both the data and the training model can be stolen or compromised by bad actors during that process, causing multiple problems.
AI learns from proprietary information it is trained on, including copyrighted content, trade secrets, or confidential code. This creates risks of reverse engineering, accelerated design theft, and data poisoning, and allows malicious actors to exploit systems.
FHE protects data in use, so a bad actor monitoring a system cannot read or act on the data without access to the encryption key. Until DataKrypto's breakthrough, that protection came at an enormous cost: FHE operations are 1,000 to 10,000 times slower than unencrypted operations, which is impractical for many real-time applications. FHE also increases data storage needs by orders of magnitude.
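The core idea behind FHE is that arithmetic on ciphertexts maps to arithmetic on the hidden plaintexts, so a server can compute on data it cannot read. As a minimal illustration of that homomorphic property (this is textbook RSA with toy parameters, not DataKrypto's scheme and not a secure construction), multiplying two RSA ciphertexts yields the ciphertext of the product:

```python
# Toy demonstration of a homomorphic property using unpadded "textbook" RSA.
# NOT secure and NOT DataKrypto's FHE -- it only shows that arithmetic on
# ciphertexts can correspond to arithmetic on the underlying plaintexts.

p, q = 61, 53          # tiny primes, illustration only
n = p * q              # 3233, the public modulus
e, d = 17, 2753        # public/private exponents (e*d ≡ 1 mod φ(n))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

c1, c2 = encrypt(7), encrypt(6)
# Multiply the two ciphertexts -- without ever decrypting them.
product_cipher = (c1 * c2) % n
# Decrypting the combined ciphertext reveals the product of the plaintexts.
assert decrypt(product_cipher) == 7 * 6   # 42
```

A fully homomorphic scheme extends this idea to arbitrary computation (both addition and multiplication, repeated without limit), which is what historically made FHE so slow.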
No apparent latency
DataKrypto's FHEnom technology, however, introduces no noticeable increase in computational latency or data storage, according to the company's founder and CTO, Luigi Caramico.
It is difficult to steal intellectual property when it is fragmented across multiple silos, especially when it requires stitching to maintain context. “You might need to provide context to a prompt that includes private or proprietary data,” Caramico said. “Once an AI model learns from the data and it resides in a central location, securing that AI becomes mission-critical. Using our technology, during the training process, if the data is stolen it is useless to them. Our system encrypts that prompt, so the AI does not know what you are asking. Its response is also encrypted, so it doesn’t know what it is saying.”
This is useful in healthcare, a primary market for DataKrypto. When a patient provides context about a particular condition while using an AI-driven diagnostic tool, FHEnom immediately encrypts both the prompt and the response, while still allowing the AI to learn from the interaction.
Quantum resistance
Perhaps more importantly, FHEnom effectively protects encrypted data from the still-hypothetical threat of quantum decryption. The technology encrypts everything as it is created and places it in a single “bucket.” A quantum computer would therefore have to decrypt everything in the bucket, rather than specific documents. Fujitsu researchers estimated that a 10,000-qubit fault-tolerant quantum computer would take 104 days to crack a 2048-bit RSA key protecting a single document; decrypting the contents of an entire data storage unit would take decades. Currently, IBM has the largest quantum computer in the world at 1,121 qubits.
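The arithmetic behind the "decades" claim is easy to sketch. Assuming, hypothetically, that each document in the bucket would cost an attacker a separate 104-day key-recovery run, the attack time scales linearly with the bucket's contents:

```python
# Back-of-envelope check of the "decades" claim, under the hypothetical
# assumption that each item in the bucket costs a separate 104-day attack.
DAYS_PER_KEY = 104       # Fujitsu estimate for one 2048-bit RSA key
DOCS_IN_BUCKET = 100     # hypothetical bucket size, not from the article

years = DAYS_PER_KEY * DOCS_IN_BUCKET / 365
print(f"{years:.1f} years")   # 28.5 years -- i.e., decades
```

Even a modest bucket of a hundred documents pushes the attack well past a quarter century on this estimate; larger buckets scale proportionally.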
That makes DataKrypto's technology “quantum resistant,” which disrupts the entire post-quantum cryptography (PQC) industry niche.
DataKrypto will demonstrate its technology at the RSA Conference Early Stage Expo, booth 32.
Lou Covey is the Chief Editor for Cyber Protection Magazine. In 50 years as a journalist he has covered American politics, education, religious history, women’s fashion, music, marketing technology, renewable energy, semiconductors, and avionics. He is currently focused on cybersecurity and artificial intelligence. He published a book on renewable energy policy in 2020 and is writing a second one on technology aptitude. He hosts the Crucial Tech podcast.