Confidential AI Inference
An implementation of Confidential AI inference with LightGBM on Klave.
Description
By leveraging Klave for your AI use cases, you can provide AI services in the cloud while keeping your intellectual property fully protected. Klave relies on confidential computing to guarantee that your proprietary AI models remain secure and inaccessible, even during inference. It also ensures, in a provable manner, that all user inputs remain entirely confidential, providing a robust, secure environment for AI services in the cloud. Klave allows companies to confidently deploy their AI capabilities, knowing their data and models are protected from leaks and unauthorised access.
This contract implements a single method, `getExposureRisk`, which takes user input, loads the LightGBM model, runs inference on the input, unloads the model and returns the results to the user. As an example, we use the model we created for determining COVID-19 contamination risk - Paper.
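To make the flow concrete, below is a minimal sketch of the `getExposureRisk` load-infer-unload-return sequence in Rust. All type and function names (`load_model`, `predict`, `unload_model`, `ExposureInput`, `ExposureRisk`) are hypothetical placeholders standing in for the real Klave SDK and LightGBM binding calls, which are not shown here.

```rust
// Hedged sketch of the getExposureRisk flow. Every identifier below is a
// placeholder: the real contract uses the Klave SDK and LightGBM bindings.

/// Features supplied by the caller (field names are illustrative).
struct ExposureInput {
    features: Vec<f64>,
}

/// Result returned to the caller.
struct ExposureRisk {
    score: f64,
}

/// Opaque handle to a loaded model (placeholder).
struct ModelHandle {
    _size: usize,
}

/// Placeholder for loading the embedded LightGBM model inside the enclave.
fn load_model(model_bytes: &[u8]) -> Result<ModelHandle, String> {
    Ok(ModelHandle { _size: model_bytes.len() })
}

/// Placeholder for running inference with the loaded model.
fn predict(_model: &ModelHandle, input: &ExposureInput) -> f64 {
    // Stand-in computation; the real code calls the LightGBM predict routine.
    input.features.iter().sum::<f64>() / input.features.len().max(1) as f64
}

/// Placeholder for releasing the model once inference is done.
fn unload_model(_model: ModelHandle) {}

/// Shape of the transaction: load the model, infer, unload, return the result.
fn get_exposure_risk(model_bytes: &[u8], input: ExposureInput) -> Result<ExposureRisk, String> {
    let model = load_model(model_bytes)?;
    let score = predict(&model, &input);
    unload_model(model);
    Ok(ExposureRisk { score })
}

fn main() {
    let input = ExposureInput { features: vec![0.2, 0.8, 0.5] };
    match get_exposure_risk(b"<embedded model bytes>", input) {
        Ok(risk) => println!("exposure risk score: {}", risk.score),
        Err(e) => eprintln!("inference failed: {}", e),
    }
}
```

In the actual contract this sequence runs entirely inside the trusted execution environment, so neither the model nor the user input is ever exposed outside the enclave.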
Features
- Load the AI model
- Run model inference
- Extract and return the results
Authors
This library was created by Klave and Secretarium team members, with contributions from:
- Jeremie Labbe (@jlabbeklavo) - Klave | Secretarium
- Nicolas Marie (@Akhilleus20) - Klave | Secretarium
- Etienne Bosse (@Gosu14) - Klave | Secretarium