secretarium/klave-ai-inference

Confidential AI

Languages: WebAssembly / AssemblyScript
Use Case: Model Inference
Cost to deploy: ~1.4 GBP
Cost to transact: ~0.17 GBP

Community Certifications

Audited by Imperial College London


Confidential AI Inference

An implementation of Confidential AI inference with LightGBM on Klave.

Description

By leveraging Klave for your AI use cases, you ensure that your IP is fully protected while providing AI services in the cloud. Klave relies on confidential computing to guarantee that your proprietary AI models remain secure and inaccessible, even during inference, and to prove that all user inputs remain entirely confidential. This gives companies a robust, secure environment in which to deploy their AI capabilities, knowing their data and models are protected from leaks and unauthorized access.

This contract implements a single method, `getExposureRisk`, which takes user input, loads the LightGBM model, runs inference on the input, unloads the model, and returns the results to the user. As an example, we use the model we created for determining COVID contamination risk (see Paper).
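As an illustration, the handler might look like the AssemblyScript sketch below. Only `Notifier` comes from the public `@klave/sdk`; the `lightgbm` module with its `loadModel` / `predict` / `unloadModel` functions, the `RiskInput` / `RiskOutput` shapes, the `@serializable` decorator usage, and the model name are assumptions made for this sketch and may differ from the actual template code.

```ts
import { Notifier } from '@klave/sdk';
// Hypothetical LightGBM binding: the real template ships its own inference glue.
import { loadModel, predict, unloadModel } from './lightgbm';

// Illustrative input/output shapes; field names and decorator are assumptions.
@serializable
export class RiskInput {
    features!: f64[];   // user-supplied feature vector, kept confidential
}

@serializable
export class RiskOutput {
    success!: boolean;
    risk!: f64;         // predicted exposure risk score
}

/**
 * @query
 */
export function getExposureRisk(input: RiskInput): void {
    // 1. Load the proprietary LightGBM model inside the secure enclave.
    const model = loadModel('exposure-risk-model');

    // 2. Run inference on the user's confidential input.
    const risk = predict(model, input.features);

    // 3. Unload the model so it never persists outside protected memory.
    unloadModel(model);

    // 4. Return the result to the querying user only.
    Notifier.sendJson<RiskOutput>({ success: true, risk: risk });
}
```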

Features

  • Load the AI model (one way to provision it is sketched after this list)
  • Run inference on the user input
  • Extract and return the results
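
The feature list assumes the serialized model is available to the contract at query time. One possible provisioning path, sketched below purely as an assumption, is to store the model in Klave's confidential Ledger via a transaction; the table name, method name, and input shape are illustrative and may not match the actual template, which could just as well embed the model in the application itself.

```ts
import { Ledger, Notifier } from '@klave/sdk';

// Assumed table name for storing model artefacts.
const modelTable = 'models';

// Illustrative input shape for the provisioning transaction.
@serializable
export class SetModelInput {
    name!: string;    // key under which the model is stored
    model!: string;   // serialized LightGBM model (e.g. its text dump)
}

/**
 * @transaction
 */
export function setModel(input: SetModelInput): void {
    // Persist the model inside the confidential Ledger, where it stays within the enclave.
    Ledger.getTable(modelTable).set(input.name, input.model);
    Notifier.sendString(`model '${input.name}' stored`);
}
```

The inference query could then read the stored model back with `Ledger.getTable(modelTable).get(...)` before handing it to the LightGBM runtime.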

Authors

This library was created by Klave and Secretarium team members, with contributions from: