ChatLocalAI

LocalAI Setup

LocalAI is a drop-in replacement REST API that is compatible with the OpenAI API specifications for local inferencing. It lets you run LLMs (and more) locally or on-prem on consumer-grade hardware, supporting multiple model families that are compatible with the ggml format.

To use ChatLocalAI within LangFlux, follow the steps below:

  1. git clone https://github.com/go-skynet/LocalAI
  2. cd LocalAI
  3. Copy your models into the models/ folder:
     cp your-model.bin models/

For example:

Download one of the models from gpt4all.io

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j.bin

You should now see the downloaded model in the models/ folder.
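A quick directory listing is enough to confirm this; the file name below assumes the gpt4all-j example above:

ls models/
# ggml-gpt4all-j.bin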

Refer here for a list of supported models.

  4. docker-compose up -d --pull always
  5. The API is now accessible at localhost:8080

# Test API
curl http://localhost:8080/v1/models
# {"object":"list","data":[{"id":"ggml-gpt4all-j.bin","object":"model"}]}

LangFlux Setup

Drag and drop a new ChatLocalAI component onto the canvas.

Fill in the fields:

  • Base Path: The base URL of your LocalAI instance, such as http://localhost:8080/v1

  • Model Name: The model you want to use. Note that it must be inside the models/ folder of the LocalAI directory, for instance: ggml-gpt4all-j.bin

That's it! For more information, refer to the LocalAI docs.

Watch how you can use LocalAI with LangFlux
