On-premise execution is the default execution mode for all customers.

On-Premise Execution

Inferable offers a robust pattern for on-premise execution that ensures sensitive information, including data in the compute runtime and secret keys, remains within your organization’s infrastructure at all times.

How Inferable Ensures On-Premise Execution

  1. Local Runtime Environment: The Inferable SDK runs inside your own runtime environment and infrastructure, where it is responsible for listening for new function calls.

  2. Run Execution: When you initiate a Run, any function calls are executed within your local environment, and only their output is returned to Inferable.

  3. Secret Management: Sensitive information such as API keys, passwords, and other secrets is never provided to Inferable. Inferable sees only the output of your functions, not the execution context.
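The three steps above can be sketched in a few lines. This is a simplified illustration of the pattern, not the real Inferable SDK API: the registry, `registerFunction`, and `handleFunctionCall` names are assumptions, and the secret is a stand-in. The point is that the function body and any secrets it touches stay in your process, and only the return value would be sent back.

```typescript
// A registry of functions that live entirely inside your infrastructure.
type LocalFunction = (input: Record<string, unknown>) => unknown;
const registry = new Map<string, LocalFunction>();

function registerFunction(name: string, fn: LocalFunction): void {
  registry.set(name, fn);
}

// Simulates the SDK's listener handling one incoming function call:
// execution happens locally, and only the output leaves this process.
function handleFunctionCall(
  name: string,
  input: Record<string, unknown>,
): unknown {
  const fn = registry.get(name);
  if (!fn) throw new Error(`Unknown function: ${name}`);
  return fn(input); // only this return value goes back to Inferable
}

// Example: a function that uses a local secret but never exposes it.
// DB_PASSWORD is a hypothetical env var for illustration.
const DB_PASSWORD = process.env.DB_PASSWORD ?? "local-secret";

registerFunction("countUsers", () => {
  // Connect to a database with DB_PASSWORD here (elided);
  // the secret stays inside this runtime.
  return { count: 42 };
});
```

Calling `handleFunctionCall("countUsers", {})` returns `{ count: 42 }`; the password never appears in the output that would be transmitted.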

Connections to external SaaS services

Inferable does not connect directly to external SaaS services. Instead, Inferable encourages you to create wrapper functions that call those services.

Inferable provides a proxy and tooling to convert any REST or GraphQL API into a compatible function.

This means that any API keys or secrets used by these services are never provided to Inferable, and Inferable can only execute the functions that you have created.
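The wrapper-function pattern can be sketched as follows. This is a hypothetical example, not Inferable's proxy tooling: the `makeCrmLookup` factory, the `CRM_API_KEY` env var, and the CRM endpoint URL are all invented for illustration. The API key is read from your environment and attached to the request locally; only the response body becomes the function's output.

```typescript
// A minimal HTTP-client shape, injected so the wrapper stays testable.
type HttpGet = (
  url: string,
  headers: Record<string, string>,
) => Promise<{ status: number; body: unknown }>;

// Builds a wrapper function around a hypothetical CRM's REST API.
// The secret never leaves this process; Inferable would only ever
// see the function's return value.
function makeCrmLookup(httpGet: HttpGet) {
  const apiKey = process.env.CRM_API_KEY ?? "test-key"; // stays local

  return async function lookupCustomer(input: { customerId: string }) {
    const res = await httpGet(
      `https://crm.example.com/customers/${input.customerId}`,
      { Authorization: `Bearer ${apiKey}` }, // key attached locally
    );
    if (res.status !== 200) {
      throw new Error(`CRM request failed: ${res.status}`);
    }
    return res.body; // the output, not the key, is what Inferable sees
  };
}
```

Injecting the HTTP client rather than hard-coding `fetch` keeps the wrapper easy to unit-test with a stub, while production code can pass a thin adapter over `fetch`.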