Enterprise readiness
Inferable is designed to be enterprise-ready from the ground up.
Overview
We understand that enterprise customers pursuing AI adoption have three key requirements:
- Data localization
- Compute localization
- Reduced network surface area for ingress (private networking)
Inferable is designed to meet all these requirements.
On-Premise Tool Execution and Data Localization
Inferable offers robust on-premise function execution, giving you full control over your compute and data:
- Local Runtime Environment: The Inferable SDK sets up a dedicated runtime environment within your own infrastructure.
- Workflow Execution: All function calls execute entirely within your local environment.
- Secret Management: Sensitive information such as API keys and passwords never leaves your infrastructure.
This approach addresses concerns about data sovereignty and compliance with regional data protection regulations.
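To make this concrete, here is a minimal sketch of registering a locally executed function. The client and registration names below follow the general shape of the Inferable SDK, but treat them as illustrative assumptions rather than the definitive API; `internalDb` is a stand-in for any internal system that must never be exposed externally.

```typescript
// Illustrative sketch only: exact SDK method names may differ.
import { Inferable } from "inferable";

// Stand-in for an internal system that stays inside your network.
const internalDb = {
  findCustomer: async (id: string) => ({ id, region: "eu-west-1" }),
};

const client = new Inferable({
  // Authenticates outbound calls; the secret stays in your environment.
  apiSecret: process.env.INFERABLE_API_SECRET!,
});

// The handler body executes entirely within your own runtime environment;
// the control plane only sees inputs and outputs, never your credentials.
client.default.register({
  name: "lookupCustomer",
  func: async ({ customerId }: { customerId: string }) =>
    internalDb.findCustomer(customerId),
});

client.default.start(); // begins picking up work via outbound connections
```

Because the handler runs in your own process, database credentials, network access, and data all remain on your side of the boundary.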
Private Networking
Inferable’s architecture allows your functions to run on private networks without requiring any inbound connections:
- Outbound Connections Only: The Inferable SDK initiates every connection from within your infrastructure out to the Inferable control plane.
- Long-Polling Mechanism: Your services periodically check for new tasks (sketched below), eliminating the need for open inbound ports.
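The sketch below shows the shape of such a long-polling worker. The endpoint paths, query parameters, and payload shapes are assumptions for illustration, not Inferable's actual wire protocol; the point is that both fetching work and returning results happen over outbound HTTPS requests.

```typescript
// Simplified long-polling worker. Endpoint and payload shapes are
// hypothetical; only the outbound-only pattern is the point.
const CONTROL_PLANE =
  process.env.INFERABLE_API_ENDPOINT ?? "https://api.inferable.ai";
const AUTH = { Authorization: `Bearer ${process.env.INFERABLE_API_SECRET}` };

// Stand-in for dispatching a job to a locally registered handler.
async function runLocally(job: { id: string; input: unknown }) {
  return { ok: true, input: job.input };
}

async function pollForWork(): Promise<void> {
  while (true) {
    // Outbound request only: no inbound port is ever opened.
    const res = await fetch(`${CONTROL_PLANE}/jobs?waitSecs=20`, {
      headers: AUTH,
    });

    if (res.status === 200) {
      const jobs: Array<{ id: string; input: unknown }> = await res.json();
      for (const job of jobs) {
        const result = await runLocally(job); // runs inside your network
        // Results are also pushed out over an outbound request.
        await fetch(`${CONTROL_PLANE}/jobs/${job.id}/result`, {
          method: "POST",
          headers: { ...AUTH, "Content-Type": "application/json" },
          body: JSON.stringify(result),
        });
      }
    }
    // On timeout or an empty response, simply loop and poll again.
  }
}
```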
Benefits include:
- Reduced attack surface
- Mitigation of Man-in-the-Middle (MITM) attacks
- Simplified firewall rules
- Ability to deploy in private subnets
Self-hosting and Bring Your Own Model (BYOM)
Inferable is designed for self-hosting and for bringing your own model. For complete control over your data and compute, you can run the Inferable Control Plane on your own infrastructure. Enterprise customers receive guaranteed SLAs on support for on-premise deployments.
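When self-hosting, your SDK clients point at your own deployment instead of the managed control plane. The `endpoint` option below is a hypothetical illustration of that wiring; consult the self-hosting documentation for the actual configuration.

```typescript
// Hypothetical configuration: route all SDK traffic to a self-hosted
// control plane rather than the managed service.
import { Inferable } from "inferable";

const client = new Inferable({
  apiSecret: process.env.INFERABLE_API_SECRET!,
  endpoint: "https://inferable.internal.example.com", // your deployment
});
```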
SOC2 Certification
We are currently undergoing SOC2 Type I and Type II certification processes.