Blobs
Efficiently handle large data payloads in Inferable
Blobs in Inferable provide an efficient way to handle large data payloads without incurring the cost and latency of processing them through the language model. When functions need to return substantial amounts of data directly to users or APIs, blobs ensure this data bypasses model processing while maintaining data integrity.
What are Blobs?
Blobs are base64 encoded strings that can be returned by functions but are not processed by the language model. They are particularly useful when:
- Returning large amounts of data
- Handling binary data (like images)
- Needing to preserve exact data without risk of model hallucination
- Optimizing for cost and latency
Benefits
- Cost Efficiency: By bypassing model processing, blobs reduce token usage and associated costs
- Lower Latency: Direct data transfer without model processing means faster response times
- Data Integrity: Eliminates the risk of model hallucination or data modification
- Binary Data Support: Easily handle non-text data like images or documents
Drawbacks
- Opaque Data: Blobs make the wrapped data completely opaque to the language model. If the data is required at inference time, it should be included in the function’s output and kept out of the blob.
Language Support
Language | Support |
---|---|
Node.js | ✅ |
Golang | ⛔️ |
.NET | ⛔️ |
Using Blobs
Any function result can contain blob data by including a blob()-decorated field in the response. Here’s how to structure blob data:
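A minimal sketch in TypeScript, assuming the Node.js SDK exports a blob() helper roughly as described above (the helper's name, import path, argument shape, and the file path are illustrative assumptions, not the SDK's confirmed API):

```typescript
import { blob } from "inferable"; // assumed export; check the Node.js SDK docs
import fs from "fs";

// A function registered with Inferable that returns an article with an image.
async function getArticle({ id }: { id: string }) {
  const image = fs.readFileSync(`./articles/${id}.png`); // illustrative path

  return {
    // Plain fields are processed by the model and included in the agent context.
    title: "Quarterly metrics",
    description: "Headline numbers with an accompanying chart.",
    // The blob()-decorated field bypasses the model and is returned verbatim.
    image: blob({
      type: "image/png",
      data: image.toString("base64"), // blobs are base64 encoded strings
    }),
  };
}
```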
In the above example, the function returns an article with an image. The image is returned as a blob and is not processed by the language model. However, the title and description are processed by the language model and included in the agent context.
Supported Blob Types
Currently, Inferable supports the following blob types:
- application/json: For JSON data
- image/png: For PNG images
- image/jpeg: For JPEG images
Example Use Cases
1. Image Processing
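For example, a function might fetch a rendered chart and hand the raw PNG straight back to the caller, with only a small reference visible to the model. As above, the blob() helper and the endpoint URL are illustrative assumptions:

```typescript
import { blob } from "inferable"; // assumed export

// Fetch a rendered chart and return it as an image blob (hypothetical endpoint).
async function getChart({ reportId }: { reportId: string }) {
  const res = await fetch(`https://charts.example.com/${reportId}.png`);
  const png = Buffer.from(await res.arrayBuffer());

  return {
    reportId, // visible to the model
    chart: blob({
      type: "image/png",
      data: png.toString("base64"), // delivered verbatim, never tokenized
    }),
  };
}
```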
2. Large JSON Responses
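Similarly, a large query result can be returned as an application/json blob so the model only reasons over a short summary rather than thousands of rows. The file path and field names below are hypothetical:

```typescript
import { blob } from "inferable"; // assumed export
import fs from "fs";

// Return a large dataset verbatim while exposing only a summary to the model.
async function exportOrders({ path }: { path: string }) {
  const rows: unknown[] = JSON.parse(fs.readFileSync(path, "utf8"));

  return {
    rowCount: rows.length, // small summary the model can reason about
    orders: blob({
      type: "application/json",
      data: Buffer.from(JSON.stringify(rows)).toString("base64"),
    }),
  };
}
```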