Security at Inferable
Above all, Inferable is committed to the security of your data. We take security very seriously and have implemented several measures to put your mind at ease.
Zero-Retention of Private Data
Sentinel is an open-source, low-latency tokenization proxy designed to achieve complete data localization when interacting with external servers. It acts as a secure intermediary, transparently masking sensitive data on its way to the destination and restoring the original values on the way back.
With Sentinel, you have the ability to:
- Mask all data sent to the Inferable control-plane
- Guarantee zero-retention of private data
- Audit all traffic to and from your services and the Inferable API
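The masking flow above can be sketched as a simple token vault: sensitive values are swapped for opaque tokens before they leave your network, and restored when the tokens come back. This is an illustrative sketch only, not the actual Sentinel implementation.

```typescript
// Illustrative sketch of Sentinel-style tokenization (not the real Sentinel code).
// Originals never leave the vault; only opaque tokens go to the control-plane.
import { randomUUID } from "crypto";

class Tokenizer {
  private vault = new Map<string, string>();

  // Replace a sensitive value with an opaque token; the original stays local.
  mask(value: string): string {
    const token = `tok_${randomUUID()}`;
    this.vault.set(token, value);
    return token;
  }

  // Restore the original value when the token comes back.
  unmask(token: string): string {
    const original = this.vault.get(token);
    if (original === undefined) throw new Error(`Unknown token: ${token}`);
    return original;
  }
}

const tokenizer = new Tokenizer();
const outbound = tokenizer.mask("jane.doe@example.com"); // control-plane sees only this
const inbound = tokenizer.unmask(outbound);              // restored locally on the way back
```

Because the vault lives entirely on your side of the proxy, the control-plane can route and reason about the tokens without ever holding the private values, which is what makes zero-retention possible.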
Open Source SDKs
The Inferable SDK is open-source. This allows us to accept contributions from the community and ensures that the code is auditable by anyone. Since the SDK is the only part of our system that runs on your infrastructure, you can audit the code to ensure it meets your security standards.
Transparent End-to-End Encryption
Inferable provides a masked mode for sensitive data that you’re not comfortable sending to the control-plane. When you enable this mode, Inferable will encrypt the data on your machine before sending it to the control-plane, and everything outside of your machine will only see the encrypted data.
When the control-plane sends the data back to your machine, Inferable will transparently decrypt the data for you.
For more information, see the masked mode documentation.
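Under the hood, this kind of transparent encryption can be built on standard authenticated encryption. The sketch below uses AES-256-GCM via Node's crypto module purely for illustration; the SDK's actual scheme may differ, and the key management shown here is deliberately simplified.

```typescript
// Hedged sketch of masked-mode encryption (illustrative, not the real SDK).
// The key never leaves your machine, so the control-plane only sees ciphertext.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

const key = randomBytes(32); // generated and kept on your machine only

function encrypt(plaintext: string): { iv: string; data: string; tag: string } {
  const iv = randomBytes(12); // fresh nonce per message, required by GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("hex"),
    data: data.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"), // authenticates the ciphertext
  };
}

function decrypt(payload: { iv: string; data: string; tag: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(payload.iv, "hex"));
  decipher.setAuthTag(Buffer.from(payload.tag, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(payload.data, "hex")),
    decipher.final(),
  ]).toString("utf8");
}

const sent = encrypt("account number 12345"); // control-plane sees only this
const received = decrypt(sent);               // transparently restored locally
```

Because GCM is authenticated, tampering with the ciphertext in transit causes decryption to fail rather than silently return corrupted data.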
No Incoming Connections
Inferable SDKs use long-polling to receive instructions from the control-plane. Your machines do not have to open any ports to the outside world, and the control-plane never establishes a connection to them; every connection is initiated by your machines.
Shutting down the machine running the SDKs will immediately stop the long-polling, and the control-plane will not be able to send any instructions to the machine.
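The polling loop described above can be sketched as follows. The function shapes here are hypothetical, not the actual Inferable SDK API; the point is that the SDK only ever makes outbound requests, and stopping the loop severs the only path by which instructions arrive.

```typescript
// Simplified sketch of SDK-side long-polling (hypothetical shapes, not the real API).
type Instruction = { id: string; input: unknown };
type PollFn = () => Promise<Instruction | null>; // null = long-poll timed out, no work

async function pollLoop(
  poll: PollFn,                                  // makes one outbound request, held open by the server
  handle: (i: Instruction) => Promise<void>,     // runs the instruction locally
  shouldStop: () => boolean,                     // e.g. set on machine shutdown
): Promise<number> {
  let handled = 0;
  while (!shouldStop()) {
    const instruction = await poll();
    if (instruction === null) continue; // timeout with no work; immediately re-poll
    await handle(instruction);
    handled++;
  }
  // Once the loop exits, no inbound path exists for the control-plane to use.
  return handled;
}
```

In a real deployment, `poll` would be an outbound HTTPS request to the control-plane; the server simply holds the request open until an instruction is ready or a timeout elapses.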
AI Model and Data Security
Inferable uses a fine-tuned commercial AI model to generate instructions for your workflows. The data you send to Inferable is never stored by the AI model and is used only to generate instructions within the context of your workflow.
The AI model and control-plane only have access to what your functions explicitly return to Inferable. They never have access to the data that your functions are processing.
On-Premise Execution
The services and functions that you register with Inferable are executed only on your machines. Inferable does not have access to your source code or your runtime environment. This means that sensitive data, such as secrets stored in environment variables, is never visible to Inferable.
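To make the boundary concrete, here is a hedged illustration (hypothetical function and stub database client, not the real SDK API): a registered function can freely use local secrets, but only its explicit return value ever travels back to Inferable.

```typescript
// Hedged illustration of on-premise execution (hypothetical names, not the real SDK).
process.env.DB_PASSWORD = "s3cret"; // lives only in your runtime environment

// Stand-in for a real database client; a real one would authenticate with the secret.
function connect(password: string) {
  return { countOrders: (_customerId: string) => 3 };
}

function getOrderCount(customerId: string): { count: number } {
  // The secret is read and used locally to reach your database...
  const db = connect(process.env.DB_PASSWORD!);
  // ...but only this explicit return value is sent back to Inferable.
  return { count: db.countOrders(customerId) };
}

const result = getOrderCount("cust_123"); // the control-plane sees { count: 3 }, never the secret
```

Since the function body runs entirely on your infrastructure, anything it does not include in its return value, such as the password above, stays on your machine.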