Custom LLMs and Plugins

This project provides a class-based API that allows you to extend the default LLM and add extra methods to your own LLM, so it better fits the problem you are trying to solve.

Extending the default LLM

This project is built on top of dialog-lib, our library that provides a class-based API for creating LLMs based on LangChain.

If you want to extend the default LLM, you can create a new class that inherits from AbstractLLM (as the default one does; it is available in the file `src/dialog/llm/agents/default.py`) and override the methods whose behavior you want to change or extend.

To use your custom LLM, you just need to add the environment variable LLM_CLASS to your .env file:

LLM_CLASS=plugins.my_llm.MyLLMClass
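For illustration, here is a minimal sketch of what such a class could look like. The import path for AbstractLLM and the overridden hook below are assumptions, not the confirmed dialog-lib API; check dialog-lib for the actual module layout and methods your version exposes.

# plugins/my_llm.py -- a hypothetical custom agent
# NOTE: the import path and the overridden method below are assumptions,
# not the confirmed dialog-lib API
from dialog_lib.agents.abstract import AbstractLLM

class MyLLMClass(AbstractLLM):
    def generate_prompt(self, input_text):
        # tweak the incoming text before delegating to the parent behavior
        return super().generate_prompt(f"Answer concisely: {input_text}")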

Is it possible to use LangChain's LCEL as my agent?

You can use the default LangChain LCEL approach! It's totally up to you, but your LCEL expression must produce a LangChain Runnable object in order to work, and this object must be importable from the path you set in the LLM_CLASS environment variable.

We have an example of an LCEL implementation inside the file src/dialog/llm/agents/lcel.py. You can use it by setting the LLM_CLASS environment variable to dialog.llm.agents.lcel.runnable.
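As a rough sketch of what such a module can look like (the prompt and model below are illustrative, not the contents of the actual file), composing a prompt with a model via | yields the Runnable that LLM_CLASS must point to:

# a hypothetical plugins/my_lcel.py -- use LLM_CLASS=plugins.my_lcel.runnable
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])

# piping a prompt into a chat model produces a Runnable,
# which is what the loader expects to import
runnable = prompt | ChatOpenAI(model="gpt-4o", temperature=0.1)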

What happens if my class doesn't implement the AbstractLLM from Dialog Lib or doesn't return a Runnable object?

If your class doesn't implement the AbstractLLM or doesn't return a Runnable object, the project will raise an error and fall back to the default agent available in the file src/dialog/llm/agents/default.py.

Adding extra routes to the project

Dialog is pretty extensible; being a FastAPI-based project, it allows you to be very creative.

Adding new models to the project through settings

In release v0.1.3, we enabled users to create multiple endpoints using different models with just a simple tweak in the prompt configuration file (the prompt TOML file).

The default model is still configured in the same way as in previous versions: define the environment variable LLM_CLASS to point to a model of your choice that implements the AbstractLLM class (any of the models available in dialog-lib implements this class and is ready to use), and it will be used by the /chats/{chat_id} and /ask endpoints.

To add a new model, you need to add a new [[endpoint]] entry in the TOML file, just as shown below:

[model]
model_name = "gpt-4o"
temperature = 0.1

... some other settings over here ...

[[endpoint]]
path = "/my-awesome-new-model"
model_name = "newmodel"
model_class_path = "the.importable.path.to.your.ModelClass"
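After restarting the server with this configuration, the new model should be reachable at the path you configured. A minimal sketch of a call, assuming a local server on port 8000 and that the endpoint accepts the same JSON body as /ask (both are assumptions):

# hypothetical request to the endpoint defined above
import requests

resp = requests.post(
    "http://localhost:8000/my-awesome-new-model",  # path from the [[endpoint]] entry
    json={"message": "Hello, new model!"},         # body schema assumed to mirror /ask
)
print(resp.json())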

Writing a new plugin without a PyPI Package

To add new endpoints or features, you need to create a package inside the src/plugins folder and, inside the new package folder, add the following file:

  • __init__.py: the default Python package initializer (this file can be empty, but we recommend creating the router here).

Inside the __init__.py file, you need to create a FastAPI router that will be loaded into the app dynamically:

from fastapi import APIRouter

router = APIRouter()

# add your routes here
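# for example, a hypothetical route the plugin could expose:
@router.get("/my-plugin/hello")
async def hello():
    return {"message": "Hello from the plugin"}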

The variable that instantiates APIRouter must be called router.

After creating the plugin, to run it, add the environment variable PLUGINS to your .env file:

PLUGINS=plugins.your_plugin_name # or PLUGINS=plugins.your_plugin_name.file_name if there is another file to be used as entrypoint
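For the second form, the module you point PLUGINS at simply needs to expose the router variable. A hypothetical layout (the file names are placeholders):

src/plugins/your_plugin_name/
├── __init__.py   # can stay empty in this layout
└── file_name.py  # defines the router variable; PLUGINS=plugins.your_plugin_name.file_name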

WhatsApp Text to Audio Synthesis

We already made a WhatsApp plugin that converts the LLM-processed output of a message a user sent into an audio file and sends it back to the user.

To use this plugin, you need to clone the WhatsApp Audio Synth repo inside the plugins folder of this repo and add the following environment variables to your .env file:

WHATSAPP_API_TOKEN=
WHATSAPP_ACCOUNT_NUMBER=
PLUGINS=plugins.whats_audio_synth.main,
