Documentation Index
Fetch the complete documentation index at: https://docs.lilbots.io/llms.txt
Use this file to discover all available pages before exploring further.
The lil’bots platform provides seamless integration with Large Language Models (LLMs). This functionality lets your bots leverage powerful language models without managing API keys or implementing authentication. When a bot uses an LLM, the usage is billed automatically through the lil’bots platform as credits; the amount depends on the specific model used and the number of tokens consumed (input plus output).
Configuring LLM Access for Your Bot
To enable LLM functionality in your bot, you need to declare it in the bot manifest file:
{
  "runtime": "deno",
  "main": "script.mjs",
  "llm": true,
  // other bot configuration...
}
You can also specify a default model by providing the model ID instead of just true:
{
  "runtime": "deno",
  "main": "script.mjs",
  "llm": "openai/gpt-4o-mini",
  // other bot configuration...
}
When a bot declares LLM functionality, its users can select a model from a dropdown, so your bot benefits from future model releases without code changes.
Using LLMs in Your Bot Code
JavaScript Runtime
In JavaScript-based bots, you can use the LLM class provided by the lilbots library:
import { LLM } from 'lilbots';

export async function main(inputs, params) {
  // Create an LLM instance that connects to the user-selected model
  const llm = new LLM();

  // Use the LLM to generate content
  const response = await llm.chat.completions.create({
    messages: [
      { role: 'developer', content: 'You are a helpful AI that explains technical concepts clearly.' },
      { role: 'user', content: inputs.question }
    ]
  });

  const generatedContent = response.choices[0].message.content;

  return [{
    title: "AI Response",
    message: generatedContent
  }];
}
Python Runtime
In Python-based bots, you can use the LLM class from the lilbots library:
from lilbots import LLM

def main(inputs, params):
    # Create an LLM instance that connects to the user-selected model
    llm = LLM()

    # Use the LLM to generate content
    completion = llm.chat.completions.create(
        messages=[
            {"role": "developer", "content": "You are a helpful AI that explains technical concepts clearly."},
            {"role": "user", "content": inputs.get("question")}
        ]
    )

    response = completion.choices[0].message.content

    return [{
        "title": "AI Response",
        "message": response
    }]
API Compatibility
The LLM class is designed to be compatible with the OpenAI API client, making it easy to integrate into existing code. The interface follows the same patterns and methods as the official OpenAI SDK.
Important Notes
- The model cannot be customized in the code. It is determined by the user’s selection or the default model specified in the bot manifest.
- The developer role can be used in messages to provide system-level instructions to the LLM.
- Credits are consumed based on the model used and the number of tokens processed (both input and output).
Supported Models and Pricing
The following models are supported on the lilbots platform:
| Model ID | Provider | Model | Credits per 100 tokens |
|---|---|---|---|
| openai/gpt-4o | OpenAI | gpt-4o | 5 |
| openai/gpt-4.1 | OpenAI | gpt-4.1 | 4 |
| openai/o3 | OpenAI | o3 | 20 |
| openai/o4-mini | OpenAI | o4-mini | 2 |
| openai/gpt-4.1-mini | OpenAI | gpt-4.1-mini | 1 |
| openai/gpt-4o-mini | OpenAI | gpt-4o-mini | 1 |
Example: Building a Content Generator Bot
Here’s a complete example of a bot that uses an LLM to generate content based on user input:
import { LLM } from 'lilbots';

export async function main(inputs, params) {
  const llm = new LLM();

  // Prepare context based on user input
  const contentType = inputs.type || "blog post";
  const topic = inputs.topic;

  const response = await llm.chat.completions.create({
    messages: [
      {
        role: 'developer',
        content: `You are a professional content creator specializing in ${contentType} creation.
Produce high-quality, engaging content that is informative and well-structured.`
      },
      {
        role: 'user',
        content: `Create a ${contentType} about "${topic}". Keep it concise but comprehensive.`
      }
    ]
  });

  return [{
    title: `Generated ${contentType.charAt(0).toUpperCase() + contentType.slice(1)}`,
    message: response.choices[0].message.content
  }];
}