The lilbots platform provides seamless integration with Large Language Models (LLMs). This lets your bots leverage powerful language models without managing API keys or implementing authentication. When your bot uses an LLM, usage is billed automatically through the lilbots platform as credits; the amount depends on the specific model used and the number of tokens consumed (input + output).

Configuring LLM Access for Your Bot

To enable LLM functionality in your bot, you need to declare it in the bot manifest file:

{
  "runtime": "deno",
  "main": "script.mjs",
  "llm": true,
  // other bot configuration...
}

You can also specify a default model by providing the model ID instead of just true:

{
  "runtime": "deno",
  "main": "script.mjs",
  "llm": "openai/gpt-4o-mini",
  // other bot configuration...
}

When a bot declares LLM functionality, its users can select a model from a dropdown at run time, so your bot benefits from future model releases without any code changes.
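Tooling that reads the manifest must handle both forms of the llm field: the boolean form and the model-ID string form. The sketch below shows one way to normalize it; the helper name llmConfig is hypothetical and not part of the lilbots API.

```javascript
// Hypothetical helper: normalize the manifest's "llm" field.
//   true    -> LLM enabled, user selects the model at run time
//   "<id>"  -> LLM enabled, with that model preselected as the default
//   absent  -> LLM disabled
function llmConfig(manifest) {
  const llm = manifest.llm;
  if (llm === true) return { enabled: true, defaultModel: null };
  if (typeof llm === "string") return { enabled: true, defaultModel: llm };
  return { enabled: false, defaultModel: null };
}

// Example: parse a manifest and inspect its LLM configuration.
const manifest = JSON.parse(
  '{"runtime": "deno", "main": "script.mjs", "llm": "openai/gpt-4o-mini"}'
);
const cfg = llmConfig(manifest);
console.log(cfg.enabled, cfg.defaultModel); // true "openai/gpt-4o-mini"
```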

Using LLMs in Your Bot Code

JavaScript Runtime

In JavaScript-based bots, you can use the LLM class provided by the lilbots library:

import { LLM } from 'lilbots';

export async function main(inputs, params) {
  // Create an LLM instance that connects to the user-selected model
  const llm = new LLM();
  
  // Use the LLM to generate content
  const response = await llm.chat.completions.create({
    messages: [
      { role: 'developer', content: 'You are a helpful AI that explains technical concepts clearly.' },
      { role: 'user', content: inputs.question }
    ]
  });
  
  const generatedContent = response.choices[0].message.content;
  
  return [{
    title: "AI Response",
    message: generatedContent
  }];
}

Python Runtime

In Python-based bots, you can use the LLM class from the lilbots library:

from lilbots import LLM

def main(inputs, params):
    # Create an LLM instance that connects to the user-selected model
    llm = LLM()
    
    # Use the LLM to generate content
    completion = llm.chat.completions.create(
        messages=[
            {"role": "developer", "content": "You are a helpful AI that explains technical concepts clearly."},
            {"role": "user", "content": inputs.get("question")}
        ]
    )
    
    response = completion.choices[0].message.content
    
    return [{
        "title": "AI Response",
        "message": response
    }]

API Compatibility

The LLM class is designed to be compatible with the OpenAI API client, making it easy to integrate into existing code. The interface follows the same patterns and methods as the official OpenAI SDK.
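Because the interface mirrors the OpenAI SDK, bot logic can be unit-tested offline against a small stand-in object. The MockLLM class below is purely illustrative (it is not part of the lilbots library); it reproduces only the chat.completions.create call shape used in the examples above.

```javascript
// Illustrative stand-in for the platform-provided LLM class, for offline tests.
// It mirrors only the documented call shape: chat.completions.create({ messages })
// resolving to { choices: [{ message: { content } }] }.
class MockLLM {
  constructor(cannedReply) {
    this.chat = {
      completions: {
        create: async ({ messages }) => ({
          choices: [{ message: { role: "assistant", content: cannedReply } }],
          _receivedMessages: messages, // test hook, not in the real API
        }),
      },
    };
  }
}

// Usage: inject the mock where bot code expects an LLM instance.
const llm = new MockLLM("Hello from the mock");
llm.chat.completions
  .create({ messages: [{ role: "user", content: "hi" }] })
  .then((response) => console.log(response.choices[0].message.content)); // prints "Hello from the mock"
```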

Important Notes

  • The model cannot be customized in the code. It is determined by the user’s selection or the default model specified in the bot manifest.
  • The developer role can be used in messages to provide system-level instructions to the LLM.
  • Credits are consumed based on the model used and the number of tokens processed (both input and output).

Supported Models and Pricing

The following models are supported on the lilbots platform:

Model ID              Provider   Model          Credits per 100 tokens
openai/gpt-4o         OpenAI     gpt-4o         5
openai/gpt-4.1        OpenAI     gpt-4.1        4
openai/o3             OpenAI     o3             20
openai/o4-mini        OpenAI     o4-mini        2
openai/gpt-4.1-mini   OpenAI     gpt-4.1-mini   1
openai/gpt-4o-mini    OpenAI     gpt-4o-mini    1
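As a rough guide, the cost of a request can be estimated from the table above. The sketch below assumes per-request rounding up to a whole credit; the platform's actual billing granularity may differ, and the estimateCredits helper is not part of the lilbots API.

```javascript
// Rates copied from the pricing table above (credits per 100 tokens).
const CREDITS_PER_100_TOKENS = {
  "openai/gpt-4o": 5,
  "openai/gpt-4.1": 4,
  "openai/o3": 20,
  "openai/o4-mini": 2,
  "openai/gpt-4.1-mini": 1,
  "openai/gpt-4o-mini": 1,
};

// Hypothetical estimator: total tokens (input + output) scaled by the model's
// rate, rounded up to a whole credit. Rounding behavior is an assumption.
function estimateCredits(modelId, inputTokens, outputTokens) {
  const rate = CREDITS_PER_100_TOKENS[modelId];
  if (rate === undefined) throw new Error(`Unknown model: ${modelId}`);
  return Math.ceil(((inputTokens + outputTokens) / 100) * rate);
}

console.log(estimateCredits("openai/gpt-4o", 150, 100)); // 250 tokens * 5/100 -> 13
```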

Example: Building a Content Generator Bot

Here’s a complete example of a bot that uses an LLM to generate content based on user input:

import { LLM } from 'lilbots';

export async function main(inputs, params) {
  const llm = new LLM();
  
  // Prepare context based on user input
  const contentType = inputs.type || "blog post";
  const topic = inputs.topic;
  
  const response = await llm.chat.completions.create({
    messages: [
      { 
        role: 'developer', 
        content: `You are a professional content creator specializing in ${contentType} creation.
                  Produce high-quality, engaging content that is informative and well-structured.` 
      },
      { 
        role: 'user', 
        content: `Create a ${contentType} about "${topic}". Keep it concise but comprehensive.` 
      }
    ]
  });
  
  return [{
    title: `Generated ${contentType.charAt(0).toUpperCase() + contentType.slice(1)}`,
    message: response.choices[0].message.content
  }];
}