Agents
Learn how to use autonomous LLM-based reasoning engines in your workflows.
Agents are autonomous LLM-based reasoning engines that may use zero or more tools to iteratively achieve a pre-defined goal.
Agents exhibit the characteristics of a long-running durable task in a distributed system, except that they use an LLM to model their internal control flow. Inferable runs the agent durably and returns the agent's structured result to the workflow.
Agent Types
The `ctx.agents` object is used to interact with agents. Inferable provides a default agent type called `react` that is a good starting point for most use cases.
React Agent
The `ctx.agents.react` object is used to interact with the React agent. It takes in 5 arguments:
- `name`: The name of the agent.
- `instructions`: The instructions for the agent. (This gets passed to the system prompt of the LLM.)
- `input`: Stringified input for the agent.
- `tools`: The tools that the agent can use. (These are registered with the workflow context.)
- `resultSchema`: The schema of the result that the agent will return.
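A minimal sketch of the call shape these five arguments describe. The `ctx` object here is a local stub (an assumption) so the example is self-contained; in a real workflow the context is provided by Inferable, and `resultSchema` would typically be a real schema object rather than the plain description used here.

```typescript
type ReactAgentArgs = {
  name: string;          // the name of the agent
  instructions: string;  // passed to the LLM's system prompt
  input: string;         // stringified input for the agent
  tools: string[];       // tool names registered with the workflow context
  resultSchema: unknown; // schema of the result the agent returns
};

// Stub context (assumption): the real agent reasons iteratively before resolving.
const ctx = {
  agents: {
    react: async (_args: ReactAgentArgs) => ({ result: { userId: "user_42" } }),
  },
};

const { result } = await ctx.agents.react({
  name: "userSearch",
  instructions: "Find the user ID for the person described in the input.",
  input: JSON.stringify({ query: "Jane from accounting" }),
  tools: ["searchUsers"],
  resultSchema: { type: "object", properties: { userId: { type: "string" } } },
});
```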
Internally, this agent will:
1. Call the `searchUsers` tool with the search query.
2. Inspect the response from the tool to examine the results.
3. If no results are found, call the `searchUsers` tool again with a new search query.
4. Repeat steps 2 and 3 until a result is found or the agent is stopped.
5. Return the user ID.
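The loop above can be sketched in plain TypeScript. This is an illustrative stand-in, not Inferable's implementation; the `searchUsers` stub, the directory data, and the bounded query list (standing in for the LLM proposing new search terms) are all assumptions.

```typescript
type User = { id: string; name: string };

// Hypothetical data behind the searchUsers tool.
const directory: User[] = [{ id: "user_42", name: "Jane Doe" }];

// Stand-in for the searchUsers tool: returns matches for a query.
function searchUsers(query: string): User[] {
  return directory.filter((u) =>
    u.name.toLowerCase().includes(query.toLowerCase()),
  );
}

// The agent's control loop, with a fixed list of candidate queries.
function findUserId(queries: string[]): string | undefined {
  for (const query of queries) {
    const results = searchUsers(query); // step 1: call the tool
    if (results.length > 0) {           // step 2: inspect the results
      return results[0].id;             // step 5: return the user ID
    }
    // steps 3-4: no results, so loop again with a new query
  }
  return undefined; // agent stopped without finding a result
}
```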
Semantics
- The agent will either resolve with a result or throw an error.
- If the agent throws an error, the workflow will stop and the error will be surfaced.
- If the agent returns a result, the workflow will continue to execute the next step.
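The resolve-or-throw contract can be sketched as follows. `runAgent` is a hypothetical stand-in for an agent call; the names and messages are assumptions for illustration.

```typescript
// Stand-in for an agent call that either resolves with a result or throws.
async function runAgent(succeed: boolean): Promise<{ userId: string }> {
  if (!succeed) throw new Error("agent failed to reach its goal");
  return { userId: "user_42" };
}

async function workflowStep(succeed: boolean): Promise<string> {
  try {
    const { userId } = await runAgent(succeed);
    return `continue with ${userId}`; // the next workflow step executes
  } catch (err) {
    return `stopped: ${(err as Error).message}`; // the error is surfaced
  }
}
```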
Tool Usage
Tools are functions used to interact with external systems. They are defined with your workflow context and are called by the agent.
A tool takes in the following arguments:
- `name`: The name of the tool.
- `schema`: The schema of the tool.
- `func`: The function that is called when the tool is used.
- `config`: Optional configuration for the tool.
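A hypothetical `searchUsers` tool definition using the four arguments above. The object shape and the JSON-Schema-style `schema` field are assumptions sketched for illustration, not Inferable's exact API, and the directory lookup stands in for a real external system.

```typescript
const searchUsersTool = {
  name: "searchUsers",
  schema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  // Called by the agent; would normally query an external system.
  func: async ({ query }: { query: string }) => {
    const directory = [{ id: "user_42", name: "Jane Doe" }];
    return directory.filter((u) =>
      u.name.toLowerCase().includes(query.toLowerCase()),
    );
  },
  config: { retryCountOnStall: 2, timeoutSeconds: 30 }, // optional
};
```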
Model Selection
By default, `ctx.llm.agents` will use the Claude 3.5 Sonnet model provided by Inferable.
However, this is intended for testing only. You can specify your own Anthropic API details when calling `ctx.llm.agents`.
Currently, `ctx.agents` only supports the Anthropic API and the following model IDs:
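A hypothetical shape for supplying your own Anthropic API details. The option names here (`provider`, `model`, `key`) are assumptions for illustration, not confirmed field names from the SDK.

```typescript
const agentOptions = {
  provider: {
    model: "claude-3-5-sonnet",          // an Anthropic model ID (assumed)
    key: process.env.ANTHROPIC_API_KEY,  // your own Anthropic API key
  },
};
```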
Configuration
- `retryCountOnStall`: The number of times to retry the tool call if it stalls.
- `timeoutSeconds`: The timeout for the tool call in seconds.
Semantics
- When you call `workflow.listen()`, it will automatically register all the tools defined and make them available to the agent.