While this guide uses Node.js, the same steps and concepts apply to the other SDKs.

Prerequisites

Make sure you have Node.js installed and have set your API secret as an environment variable (INFERABLE_API_SECRET).

You can get your API secret by running:

mkdir inferable-example
cd inferable-example
inf auth keys create example_key --env # this will write the secret to your .env file
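After the key is created, your .env file should contain a line like the following (the value shown is a placeholder, not a real secret):

```shell
# .env — the value below is a placeholder; yours will differ
INFERABLE_API_SECRET=your-api-secret-here
```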

Creating a minimal Inferable application

1. Install dependencies

Install the necessary packages. We use zod to define the schema of our functions, and dotenv to load the environment variables from the .env file.

npm init -y
npm install inferable zod dotenv

2. Initialize the Inferable client

Create a new file called index.ts and add the following code to initialize the Inferable client:

// index.ts
import { Inferable } from "inferable";

const client = new Inferable({
  apiSecret: process.env.INFERABLE_API_SECRET,
});

3. Register a function

Now, let’s register a function that returns system information about the running machine:

// index.ts
import { z } from "zod";
import os from "os";

// Define the systemInformation function
client.default.register({
  name: "systemInformation",
  description: "Returns system information of the machine",
  schema: {
    input: z.object({
      type: z.enum(["cpus", "freemem", "arch"]),
    }),
  },
  func: async (input) => {
    const info = os[input.type](); // this is fully typesafe
    return info; // this will be serialized by the SDK
  },
});
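For reference, each of the three enum values in the schema maps directly onto a zero-argument helper on Node's built-in os module, which is why the indexed call os[input.type]() is typesafe. A quick standalone check:

```typescript
import os from "os";

// Each schema enum value corresponds to a zero-argument os helper:
const arch = os.arch();        // CPU architecture string, e.g. "x64"
const freemem = os.freemem();  // free system memory, in bytes
const cpus = os.cpus();        // one entry per logical CPU

console.log(`${arch}, ${freemem} bytes free, ${cpus.length} CPUs`);
```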

4. Start the service

Finally, let’s start the service:

// index.ts
client.default.start().then(() => {
  console.log("System information service is running...");
});

5. Run the project

To run this example, save the file as index.ts and execute it using tsx. The -r dotenv/config flag loads the environment variables from the .env file.

npx tsx -r dotenv/config index.ts

6. Construct a run

Now that the service is running and your functions are registered, you can interact with it by constructing a run.

Let’s create a run.ts file that:

  1. Takes a question as a command-line argument.
  2. Returns the answer as an object containing an answer field.

// run.ts
import { Inferable } from "inferable";
import { z } from "zod";

const client = new Inferable({
  apiSecret: process.env.INFERABLE_API_SECRET,
});

client
  .run({
    initialPrompt: `Answer the following question: ${process.argv[2]}`,
    resultSchema: z.object({
      answer: z.string(),
    }),
  })
  .then((r) => r.poll())
  .then((r) => console.log(r?.result))
  .catch((e) => console.error(e));

To execute this example, save the file as run.ts and run it in a second terminal. (Make sure the service is still running in the first.)

Note that we didn’t explicitly specify which function to use in the run. Inferable automatically infers the function to use based on the context of the question. You can also specify functions explicitly via the run’s attachedFunctions field.

npx tsx -r dotenv/config run.ts "How many CPUs does this machine have?"
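If you want to pin a run to a particular function rather than rely on inference, you can pass it in the run options. Here is a sketch of what that might look like — note that the exact shape shown for attachedFunctions (service/function pairs) is an assumption for illustration; check the SDK reference for the current API:

```typescript
// Hypothetical run options pinning the run to one registered function.
// The attachedFunctions shape below is an assumption, not confirmed API.
const runOptions = {
  initialPrompt: "How many CPUs does this machine have?",
  attachedFunctions: [{ service: "default", function: "systemInformation" }],
};

console.log(runOptions.attachedFunctions[0].function);
```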

What happened?

The agent that comes out of the box with Inferable uses an LLM to plan, select the best function, execute it, and then use the result to plan again.

This is an illustration of how the Re-Act loop would have worked in this instance:
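As a rough, self-contained sketch (not the SDK's internals), the loop for this particular run can be caricatured as plan, act, observe, answer:

```typescript
import os from "os";

// A toy illustration of the ReAct control flow — not Inferable's implementation.
// The "planner" here is hard-coded; in Inferable, an LLM makes these choices.
function reactLoop(question: string): string {
  // Plan: the agent decides the question calls for the "cpus" lookup.
  const action = { name: "systemInformation", input: { type: "cpus" as const } };
  // Act: execute the selected function.
  const observation = os[action.input.type]();
  // Observe + plan again: the result is enough to produce a final answer.
  return `This machine has ${observation.length} CPUs.`;
}

console.log(reactLoop("How many CPUs does this machine have?"));
```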

Finishing up

That’s it! You’ve created a minimal Inferable application.

The complete working example of this guide is available here.

If you need help at any point, join our Community Slack and ask in the #support channel.