Overview
We understand that enterprise customers pursuing AI adoption have three key requirements:

- Data localization
- Compute localization
- Reduced network surface area for ingress (private networking)
On-Premise Execution and Data Localization
Inferable offers robust on-premise execution, giving you full control over your compute and data:

- Local Runtime Environment: The Inferable SDK sets up a dedicated runtime environment within your own infrastructure.
- Workflow Execution: All function calls are executed entirely within your local environment.
- Secret Management: Sensitive information like API keys and passwords remain within your infrastructure.
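As a rough illustration of local execution and secret handling (using hypothetical names such as `registerFunction` and `executeTask`, not the actual Inferable SDK API), a registered function can read credentials from the local environment so they never appear in any payload sent off-host:

```typescript
// Hypothetical sketch: local function registry with local secret handling.
// The names here are illustrative assumptions, not the Inferable SDK API.

type Handler = (input: Record<string, unknown>) => unknown;

const registry = new Map<string, Handler>();

// Registering a function keeps its body, and any secrets it reads, inside
// this process; only the function *name* would be advertised externally.
function registerFunction(name: string, handler: Handler): void {
  registry.set(name, handler);
}

registerFunction("queryDatabase", (_input) => {
  // The credential is resolved from the local environment at call time and
  // is never serialized into the result. ("local-only-secret" is a stand-in
  // default for this sketch.)
  const apiKey = process.env.DB_API_KEY ?? "local-only-secret";
  return { rows: [], usedLocalCredential: apiKey.length > 0 };
});

// Executing a task resolves the handler locally; an external caller would
// only ever see the task name, its input, and the returned result.
function executeTask(name: string, input: Record<string, unknown>): unknown {
  const handler = registry.get(name);
  if (!handler) throw new Error(`Unknown function: ${name}`);
  return handler(input);
}

console.log(JSON.stringify(executeTask("queryDatabase", {})));
```

The point of the sketch is the boundary: the function body and the secret live only in your process, while the outside world interacts with it by name.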
Private Networking
Inferable’s architecture allows your compute to run on private networks without requiring any incoming connections:

- Outbound Connections Only: The Inferable SDK initiates all connections from within your infrastructure to the Inferable control plane.
- Long-Polling Mechanism: Your services periodically check for new tasks, eliminating the need for open inbound ports.
This outbound-only model yields several security benefits:

- Reduced attack surface
- Mitigation of Man-in-the-Middle (MITM) attacks
- Simplified firewall rules
- Ability to deploy in private subnets
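The outbound-only model above can be sketched as a long-poll loop. This is a simplified illustration under stated assumptions: the endpoint, task shape, and function names are hypothetical, not the actual Inferable wire protocol, and the network call is replaced by an in-memory stand-in:

```typescript
// Hypothetical sketch of an outbound-only long-poll worker loop.

interface Task {
  id: string;
  functionName: string;
  input: Record<string, unknown>;
}

// Stand-in for an outbound HTTPS request to the control plane. In a real
// worker this would be a fetch() that the server holds open until a task is
// available or a timeout elapses; no inbound port is ever opened here.
async function pollForTasks(queue: Task[]): Promise<Task[]> {
  return queue.splice(0, queue.length);
}

async function runWorker(queue: Task[], iterations: number): Promise<string[]> {
  const completed: string[] = [];
  for (let i = 0; i < iterations; i++) {
    // Outbound request: worker -> control plane. An empty response simply
    // means "nothing to do yet; poll again".
    const tasks = await pollForTasks(queue);
    for (const task of tasks) {
      // Execute entirely locally; the result would be reported on the next
      // outbound request (omitted in this sketch).
      completed.push(task.id);
    }
  }
  return completed;
}
```

Because every request originates from inside your network, the firewall can deny all inbound traffic to the subnet while the worker still receives tasks.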