Integrating Large Language Models in your bots
The lil’bots platform provides seamless integration with Large Language Models (LLMs). This functionality allows your bots to leverage powerful language models without needing to manage API keys or implement authentication. When your bot uses an LLM, the usage is automatically billed through the lilbots platform as credits, which depend on the specific model used and the number of tokens consumed (input + output).
To enable LLM functionality in your bot, you need to declare it in the bot manifest file:
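A minimal manifest might look like the following. This is an illustrative sketch only: the exact manifest schema, file format, and field names (`functionalities`, `llm`) are assumptions, not confirmed by the platform documentation.

```json
{
  "name": "my-bot",
  "functionalities": {
    "llm": true
  }
}
```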
You can also specify a default model by providing the model ID instead of just `true`:
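For example, using one of the model IDs from the table below (again, the surrounding manifest fields are assumptions for illustration):

```json
{
  "name": "my-bot",
  "functionalities": {
    "llm": "openai/gpt-4o-mini"
  }
}
```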
When a bot with LLM functionality is used, bot users will be able to select an LLM from a dropdown, allowing your bot to benefit from future model releases without code changes.
In JavaScript-based bots, you can use the `LLM` class provided by the lilbots library:
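A sketch of what usage might look like, assuming the `LLM` class mirrors the OpenAI SDK's chat completions interface as described below; the import path, constructor, and response shape are assumptions:

```javascript
// Sketch only: import path and constructor options are assumptions.
import { LLM } from "lilbots";

const llm = new LLM();

// The interface is assumed to follow the OpenAI SDK's
// chat.completions.create() pattern.
const response = await llm.chat.completions.create({
  messages: [
    { role: "developer", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize this text: ..." },
  ],
});

console.log(response.choices[0].message.content);
```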
In Python-based bots, you can use the `LLM` class from the lilbots library:
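The Python equivalent might look like this; the import path and method names are assumptions, modeled on the OpenAI Python SDK that the `LLM` class is said to mirror:

```python
# Sketch only: the import path and method names are assumptions,
# modeled on the OpenAI Python SDK.
from lilbots import LLM

llm = LLM()

response = llm.chat.completions.create(
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this text: ..."},
    ],
)

print(response.choices[0].message.content)
```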
The `LLM` class is designed to be compatible with the OpenAI API client, making it easy to integrate into existing code. The interface follows the same patterns and methods as the official OpenAI SDK.
The `developer` role can be used in messages to provide system-level instructions to the LLM.

The following models are supported on the lilbots platform:
| Model ID | Provider | Model | Credits per 100 tokens |
|---|---|---|---|
| openai/gpt-4o | OpenAI | gpt-4o | 5 |
| openai/gpt-4.1 | OpenAI | gpt-4.1 | 4 |
| openai/o3 | OpenAI | o3 | 20 |
| openai/o4-mini | OpenAI | o4-mini | 2 |
| openai/gpt-4.1-mini | OpenAI | gpt-4.1-mini | 1 |
| openai/gpt-4o-mini | OpenAI | gpt-4o-mini | 1 |
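As a worked example of the pricing above: a request to `openai/gpt-4o` (5 credits per 100 tokens) that consumes 1,200 input tokens and 800 output tokens costs (1200 + 800) / 100 × 5 = 100 credits. A small helper to estimate this (note: whether the platform rounds partial 100-token blocks up is an assumption; this sketch bills fractionally):

```python
# Credit rates from the table above (credits per 100 tokens).
CREDITS_PER_100_TOKENS = {
    "openai/gpt-4o": 5,
    "openai/gpt-4.1": 4,
    "openai/o3": 20,
    "openai/o4-mini": 2,
    "openai/gpt-4.1-mini": 1,
    "openai/gpt-4o-mini": 1,
}

def estimate_credits(model_id: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate credit cost; billing counts input + output tokens.

    Assumption: partial 100-token blocks are billed fractionally here;
    the platform's actual rounding behavior may differ.
    """
    rate = CREDITS_PER_100_TOKENS[model_id]
    return (input_tokens + output_tokens) / 100 * rate

print(estimate_credits("openai/gpt-4o", 1200, 800))  # 100.0
```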
Here’s a complete example of a bot that uses an LLM to generate content based on user input:
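The following sketch shows what such a bot might look like in Python. The lilbots import, the `LLM` interface, and the way a bot receives user input and returns output are all assumptions based on the description above, not a confirmed platform API:

```python
# Sketch only: the lilbots import, the LLM class interface, and the
# bot entry-point convention are assumptions.
from lilbots import LLM

def run(user_input: str) -> str:
    # The LLM instance is assumed to use whichever model the bot user
    # selected in the dropdown (or the manifest's default model).
    llm = LLM()
    response = llm.chat.completions.create(
        messages=[
            {"role": "developer", "content": "You write short, punchy product descriptions."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```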