On-premise execution is the default mode for all customers.

On-Premise Execution

Through on-premise execution, Inferable gives organizations complete control over their data and workflows. This approach ensures that sensitive information, including data in the compute runtime and secret keys, remains within your organization’s infrastructure at all times.

How Inferable Ensures On-Premise Execution

  1. Local Runtime Environment: The Inferable SDK sets up a dedicated runtime environment within your own compute.

  2. Workflow Execution: When you initiate a workflow, Inferable executes any function calls entirely within your local environment, and only returns the output of your functions.

  3. Secret Management: Sensitive information such as API keys, passwords, and other secrets is never provided to Inferable. Inferable sees only the output of your functions, not the execution context, as the sketch after this list shows.

Connections to External SaaS Services

Inferable does not connect directly to external SaaS services. Instead, Inferable encourages you to create a thin service layer that sits between Inferable and your external services. This service layer is deployed within your on-premise infrastructure and is responsible for connecting to external services.

Inferable provides a proxy and tooling to convert any REST or GraphQL API into a compatible function.

This means that any API keys or secrets used by these services are never provided to Inferable, and Inferable can only execute the functions you have explicitly created.