Enterprise readiness
Inferable is designed to be enterprise-ready from the ground up.
Overview
We understand that enterprise customers adopting AI have three key requirements:
- Data localization
- Compute localization
- Reduced network surface area for ingress (private networking)
Inferable is designed to meet all these requirements.
On-Premise Execution and Data Localization
Inferable offers robust on-premise execution, giving you full control over your compute and data:
- Local Runtime Environment: The Inferable SDK sets up a dedicated runtime environment within your own infrastructure.
- Workflow Execution: All function calls are executed entirely within your local environment.
- Secret Management: Sensitive information like API keys and passwords remain within your infrastructure.
This approach addresses concerns about data sovereignty and compliance with regional data protection regulations.
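To illustrate, here is a minimal sketch of registering a locally executed function, assuming the Node SDK's register/start pattern; the option names and the `lookupCustomer` function are illustrative, not a definitive API reference:

```typescript
// A minimal sketch, assuming the Node SDK's register/start pattern;
// option names here are illustrative, not a definitive API reference.
import { Inferable } from "inferable";

const client = new Inferable({
  // Read from your environment; used only to authenticate outbound calls.
  apiSecret: process.env.INFERABLE_API_SECRET,
});

client.default.register({
  name: "lookupCustomer",
  func: async ({ customerId }: { customerId: string }) => {
    // DATABASE_URL is read locally and never leaves your infrastructure;
    // only the function's declared inputs and outputs cross the network.
    const connectionString = process.env.DATABASE_URL;
    // ...query your database with connectionString here...
    return { customerId, status: "active" }; // illustrative result
  },
});

client.default.start(); // begin polling the control plane for work
```

Because the function body runs in your own process, secrets stay in your environment and only the function's results are returned to the control plane.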
Private Networking
Inferable’s architecture allows your compute to run on private networks without requiring any incoming connections:
- Outbound Connections Only: The Inferable SDK initiates all connections from within your infrastructure to the Inferable control plane.
- Long-Polling Mechanism: Your services poll the control plane for new tasks over long-lived outbound requests, eliminating the need for open inbound ports (see the sketch after the benefits list below).
Benefits include:
- Reduced attack surface
- Mitigation of Man-in-the-Middle (MITM) attacks
- Simplified firewall rules
- Ability to deploy in private subnets
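To make the long-polling flow concrete, the conceptual worker loop below shows the pattern. It is not the SDK's internal code, and the endpoint paths, the `waitSeconds` parameter, and the job shape are assumptions made for illustration; the key point is that every request is outbound:

```typescript
// Conceptual long-poll loop (not the SDK's internal code). The endpoint
// paths, `waitSeconds` parameter, and job shape are assumptions made
// for illustration; the key point is that every request is outbound.
const CONTROL_PLANE = "https://api.inferable.ai"; // assumed endpoint

interface Job {
  id: string;
  input: unknown;
}

async function runLocally(job: Job): Promise<unknown> {
  // Placeholder for invoking one of your registered functions.
  return { ok: true, echoed: job.input };
}

async function pollForWork(): Promise<void> {
  while (true) {
    // The server holds this request open until work arrives or the
    // long-poll window elapses, so no inbound ports are ever needed.
    const res = await fetch(`${CONTROL_PLANE}/jobs?waitSeconds=20`, {
      headers: { authorization: `Bearer ${process.env.INFERABLE_API_SECRET}` },
    });

    if (res.status === 204) continue; // window elapsed with no work

    const jobs: Job[] = await res.json();
    for (const job of jobs) {
      const result = await runLocally(job); // executes inside your network
      await fetch(`${CONTROL_PLANE}/jobs/${job.id}/result`, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(result), // results leave over an outbound call too
      });
    }
  }
}

pollForWork().catch(console.error);
```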
Zero-Retention of Private Data with Sentinel
For enterprise customers requiring an additional layer of data protection, our Sentinel feature offers:
- Complete data localization when interacting with external servers
- Masking of all data sent to the Inferable control plane
- Guaranteed zero-retention of private data
- Ability to audit all traffic to and from your services and the Inferable API
Sentinel acts as a secure intermediary, tokenizing sensitive data before it leaves your infrastructure and detokenizing it upon return.
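Conceptually, the round trip looks like the sketch below. This is not Sentinel's actual implementation; the in-memory map and `tok_` prefix are illustrative stand-ins for its token store:

```typescript
// Conceptual sketch of tokenization/detokenization; this is not
// Sentinel's actual implementation. An in-memory Map stands in for
// the token store, and the `tok_` prefix is illustrative.
import { randomUUID } from "node:crypto";

const vault = new Map<string, string>(); // token -> original value, kept locally

function tokenize(value: string): string {
  const token = `tok_${randomUUID()}`;
  vault.set(token, value); // the real value never leaves this process
  return token;
}

function detokenize(token: string): string {
  const value = vault.get(token);
  if (value === undefined) throw new Error(`unknown token: ${token}`);
  return value;
}

// Outbound: only the opaque token is sent to the control plane.
const masked = { email: tokenize("jane@example.com") };

// Inbound: when a response references the token, the original value
// is restored inside your infrastructure.
const restored = detokenize(masked.email); // "jane@example.com"
console.log(masked.email, restored);
```

Because the token-to-value mapping lives only inside your infrastructure, the control plane only ever sees opaque tokens, which is what makes zero-retention of private data possible.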
Bring Your Own Model (BYOM)
With Sentinel, enterprise customers can bring their own models. Inferable is model-agnostic, but it requires a mixture of models (MoM) that is commercially available via AWS Bedrock. (Support for other providers is in the works.)
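As a hypothetical illustration only, a BYOM setup could be expressed as a configuration like the following; every field name here is assumed rather than taken from the SDK, and the model IDs are example AWS Bedrock identifiers:

```typescript
// Hypothetical BYOM configuration: the field names below are assumed
// for illustration, not the SDK's actual options. The model IDs are
// example AWS Bedrock identifiers.
const modelConfig = {
  provider: "bedrock",
  region: process.env.AWS_REGION ?? "us-east-1",
  // A mixture of models: different workloads routed to different models.
  routing: {
    reasoning: "anthropic.claude-3-5-sonnet-20240620-v1:0",
    extraction: "anthropic.claude-3-haiku-20240307-v1:0",
  },
};

console.log(modelConfig);
```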
SOC2 Certification
We are currently undergoing SOC2 Type I and Type II certification processes.