Introduction
Welcome to the Inferable.ai documentation
What is Inferable?
Inferable is a developer platform that allows you to build secure, agentic workflows from your existing code via Functions or REST / GraphQL APIs.
How It Works
Imagine you have a set of internal services that power your product.
You can use Inferable to compose them into multi-step, LLM-driven workflows.
Functions are wrapped using the Inferable SDK
Functions within your code are wrapped using one of the Inferable SDKs.
This provides Inferable with context about the functions and registers them as message queue consumers for use in Runs.
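For example, wrapping an existing function might look like the following minimal sketch with the TypeScript SDK. The registration shape shown here (`client.default.register`, the `schema.input` option) is an assumption based on the SDK's general design and may differ between versions:

```typescript
import { Inferable } from "inferable";
import { z } from "zod";

const client = new Inferable({ apiSecret: process.env.INFERABLE_API_SECRET! });

// An existing internal function, unchanged.
async function getOrderStatus({ orderId }: { orderId: string }) {
  return { orderId, status: "shipped" }; // stand-in for a real service call
}

// Registration gives Inferable the function's name, description, and input
// schema, and enrolls it as a message queue consumer for Runs.
client.default.register({
  name: "getOrderStatus",
  description: "Look up the shipping status of an order",
  func: getOrderStatus,
  schema: { input: z.object({ orderId: z.string() }) },
});

// Start long-polling the control plane for work; nothing is exposed over HTTP.
client.default.start();
```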
Create a Run with a prompt and optional structured output
Runs can be triggered with a prompt using the Inferable web UI, CLI, or API.
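As a sketch, creating a Run over the HTTP API could look roughly like the following. The endpoint path and payload fields here are assumptions for illustration; the Run API reference is authoritative:

```typescript
// Hypothetical endpoint and payload shape, for illustration only.
const response = await fetch(
  `https://api.inferable.ai/clusters/${process.env.INFERABLE_CLUSTER_ID}/runs`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.INFERABLE_API_SECRET}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      initialPrompt: "Find the shipping status of order #42",
    }),
  },
);

const { id } = await response.json(); // the Run ID, used to poll for results
```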
Inferable control plane orchestrates execution of your functions
When a Run occurs, the Inferable Control Plane uses LLMs to perform the request using the available functions.
- Runs are “agentic”. Multiple steps can be performed in order to complete the request, evaluating the results and repeating as necessary.
- The decision-making process is orchestrated by the Inferable Control Plane.
- The control plane will make a plan, and run an evaluation loop recursively until the objective is met.
- All functions run on your own infrastructure and are never exposed via inbound HTTP, since the SDK long-polls the control plane for work.
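Conceptually, the evaluation loop looks something like the sketch below. This is illustrative pseudocode only, not the control plane's actual implementation; `planNextStep` stands in for the LLM-driven planner and `callFunction` for dispatch over the message queue to your SDK consumers:

```typescript
type ToolCall = { functionName: string; input: unknown };
type Step = { calls: ToolCall[]; done: boolean; result?: unknown };

async function agenticRun(
  prompt: string,
  planNextStep: (context: unknown[]) => Promise<Step>,
  callFunction: (call: ToolCall) => Promise<unknown>,
): Promise<unknown> {
  const context: unknown[] = [prompt];
  for (;;) {
    // Plan the next step from the prompt plus all results gathered so far.
    const step = await planNextStep(context);
    if (step.done) return step.result; // objective met
    for (const call of step.calls) {
      // Functions execute on your infrastructure; results feed the next plan.
      context.push(await callFunction(call));
    }
  }
}
```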
Get a structured output back
A Run can be provided a result schema, ensuring that the result conforms to your desired structure from Step 2.
A Run may take some time to complete; the output is returned as a JSON object with a result field and a status field. See the Run API for more details.
If the run completes successfully, Inferable provides a guarantee that either:
- The output conforms to the provided schema.
- The output is null.
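Putting it together, a structured Run with the TypeScript SDK might look like the sketch below. The `client.run` entry point, `resultSchema` option, and `run.poll()` return shape are assumptions based on the SDK's general design; check the Run API reference before relying on them:

```typescript
import { Inferable } from "inferable";
import { z } from "zod";

const client = new Inferable({ apiSecret: process.env.INFERABLE_API_SECRET! });

const run = await client.run({
  initialPrompt: "Find the shipping status of order #42",
  // If a result schema is provided, the final result either conforms to it
  // or is null (the guarantee described above).
  resultSchema: z.object({
    orderId: z.string(),
    status: z.enum(["pending", "shipped", "delivered"]),
  }),
});

// Runs are asynchronous: poll until the run reaches a terminal status.
// (Return shape assumed; see the Run API for the authoritative fields.)
const { status, result } = await run.poll();

if (status === "done" && result) {
  console.log(result); // conforms to the schema above
}
```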
Learn More