Security
Enterprise-grade security for your agentic applications
Above all, Inferable is committed to the security of your data. We take security seriously and have implemented several measures to protect it.
Fully Open-source and Self-hostable
Inferable is completely open-source and self-hostable. This means you can audit the code and run the Inferable Control Plane on your own infrastructure. Inferable Cloud uses the same control-plane as the open-source version, with high availability and guaranteed SLAs.
No Incoming Connections
The Inferable SDKs use long-polling to receive instructions from the control-plane: your machines establish outbound connections to the control-plane, never the reverse. This means your machines do not open any ports to the outside world, and the control-plane never initiates a connection to your machines.
Shutting down the machine running the SDKs immediately stops the long-polling, leaving the control-plane with no way to send instructions to that machine.
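The long-polling pattern described above can be sketched as follows. This is an illustrative simulation, not the actual Inferable SDK API: `pollControlPlane`, `runWorker`, and the `Instruction` shape are hypothetical names, and the queue stands in for an outbound HTTPS request that blocks until an instruction is available.

```typescript
type Instruction = { fn: string; args: unknown[] };

// Stand-in for an outbound long-poll request to the control-plane
// (hypothetical). Resolves with the next instruction, or null on timeout.
async function pollControlPlane(queue: Instruction[]): Promise<Instruction | null> {
  return queue.shift() ?? null;
}

// The worker only ever makes outbound requests; it listens on no port.
async function runWorker(
  queue: Instruction[],
  handle: (i: Instruction) => void,
  iterations: number
): Promise<void> {
  for (let n = 0; n < iterations; n++) {
    const instruction = await pollControlPlane(queue); // outbound only
    if (instruction) handle(instruction);
    // A real worker loops indefinitely; stopping the process ends polling
    // at once, and the control-plane has no way to reach the machine.
  }
}

// Usage: the "control-plane" enqueues work; the worker pulls it.
const pending: Instruction[] = [{ fn: "getOrder", args: [42] }];
const executed: string[] = [];
runWorker(pending, (i) => executed.push(i.fn), 3).then(() => {
  console.log(executed); // logs the names of the executed instructions
});
```

Because the connection is always initiated from your side, firewalls can stay closed to inbound traffic entirely.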
AI Model and Data Security
Inferable uses commercial AI models to generate instructions for your Runs. The data that you send to Inferable Cloud is never stored by the AI model, and is used only to generate instructions within the context of your workflow.
The AI model and control-plane only have access to what your functions explicitly return to Inferable. They never have access to the data that your functions are processing.
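To make the boundary concrete, here is a minimal sketch (not the actual Inferable SDK API; `lookupCustomer` and the record shape are hypothetical). The function body can touch any data on your machine, but only the value it explicitly returns ever crosses to the control-plane.

```typescript
type CustomerRecord = { id: string; email: string; creditCard: string };

// Hypothetical in-house lookup; this full record never leaves your machine.
function lookupCustomer(id: string): CustomerRecord {
  return { id, email: "a@example.com", creditCard: "4111-1111-1111-1111" };
}

// Only the returned object is serialized and sent upstream, so only the
// email field is ever visible to the model and control-plane.
function getCustomerEmail(id: string): { email: string } {
  const record = lookupCustomer(id);
  return { email: record.email }; // creditCard is processed but never returned
}
```

Whatever your function omits from its return value is, by construction, invisible to Inferable.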
On-Premise Execution
The services and functions that you register with Inferable execute only on your machines. Inferable has no access to your source code or runtime environment, so sensitive data such as secrets stored in environment variables is never visible to Inferable.
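The same principle applies to secrets: they are resolved inside your process at call time and never transmitted. A short sketch, assuming a hypothetical `DB_PASSWORD` variable and `connectToDb` helper (neither is part of Inferable):

```typescript
// Hypothetical connection helper; a real one would open a DB connection.
function connectToDb(password: string | undefined): { connected: boolean } {
  return { connected: Boolean(password) };
}

// Runs entirely on your machine. The secret is read from the local
// environment and used in-process; only the status string is returned.
function healthCheck(): { status: string } {
  const secret = process.env.DB_PASSWORD; // resolved locally, never transmitted
  const db = connectToDb(secret);
  return { status: db.connected ? "ok" : "no-credentials" };
}
```

Since only `healthCheck`'s return value crosses the boundary, the credential itself never appears in anything Inferable receives.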