Short Term Memory


Short Term Memory in LangFlux refers to ephemeral memory nodes that store past conversations only in RAM. The conversations are simply kept in an array, so when the LangFlux instance restarts, everything is lost.

There are 3 short term memory nodes in LangFlux:

  • BufferMemory

  • BufferWindowMemory

  • ConversationSummaryMemory

BufferMemory

The simplest of the three. It stores conversations in an array and later passes them to the LLM.
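
As a rough illustration of the idea (not LangFlux's internal implementation), a buffer memory can be sketched as nothing more than an array of messages that is handed back to the LLM on every call:

// Illustrative sketch only; not LangFlux's internal code.
type ChatMessage = { type: "userMessage" | "apiMessage"; message: string };

class SimpleBufferMemory {
  private messages: ChatMessage[] = [];

  add(message: ChatMessage): void {
    this.messages.push(message);
  }

  // The full history is returned as context for the next LLM call.
  getContext(): ChatMessage[] {
    return [...this.messages];
  }
}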

BufferWindowMemory

When conversations get too long, you may hit the token limit because there is simply too much text to fit into the LLM's limited context window.

Instead of storing the entire conversation, BufferWindowMemory keeps only the last K interactions, using a sliding window over the most recent messages.
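
A minimal sketch of that sliding window (illustrative only; the message shape mirrors the history format used by the Prediction API below):

// Keep only the most recent K interactions.
// One interaction = one user message plus one assistant reply,
// so the window spans the last 2 * K messages.
type Message = { type: "userMessage" | "apiMessage"; message: string };

function lastKInteractions(history: Message[], k: number): Message[] {
  return history.slice(-2 * k);
}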

ConversationSummaryMemory

This uses an LLM to create a running summary of the conversation. It is useful for condensing information from the conversation over time.
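
Conceptually, each new exchange is folded into a running summary that replaces the raw history. A rough sketch, where callLLM is a hypothetical placeholder rather than a LangFlux API:

// Sketch of summary memory; `callLLM` is a hypothetical placeholder.
async function updateSummary(
  previousSummary: string,
  userMessage: string,
  assistantMessage: string,
  callLLM: (prompt: string) => Promise<string>
): Promise<string> {
  const prompt =
    `Current summary:\n${previousSummary}\n\n` +
    `New exchange:\nUser: ${userMessage}\nAssistant: ${assistantMessage}\n\n` +
    "Write an updated summary of the conversation.";
  // The returned summary is stored in place of the full message history.
  return callLLM(prompt);
}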

Separate conversations for multiple users

UI & Embedded Chat

By default, the UI and Embedded Chat automatically separate conversations from different users. This is done by passing a history list to the API, and the logic is handled under the hood by LangFlux.

Prediction API

You can separate conversations for multiple users by providing a list of history. In the POST body of the /api/v1/prediction/{your-chatflowid} request, specify the history array:

{
    "question": "hello!",
    "history": [
        {
            "message": "Hello, how can I assist you?",
            "type": "apiMessage"
        },
        {
            "type": "userMessage",
            "message": "Hello I am Bob"
        },
        {
            "type": "apiMessage",
            "message": "Hello Bob! how can I assist you?"
        }
    ]
}
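
For example, a client could send that payload like this (a sketch; the host name is a placeholder for your own deployment):

// Sketch of calling the Prediction API with a per-user history.
const response = await fetch(
  "https://your-langflux-host/api/v1/prediction/your-chatflowid",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      question: "hello!",
      history: [
        { message: "Hello, how can I assist you?", type: "apiMessage" },
        { type: "userMessage", message: "Hello I am Bob" },
        { type: "apiMessage", message: "Hello Bob! how can I assist you?" }
      ]
    })
  }
);
const result = await response.json();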

Message API

  • GET /api/v1/chatmessage/{your-chatflowid}

  • DELETE /api/v1/chatmessage/{your-chatflowid}

Query Param    Type     Value
sort           enum     ASC or DESC
startDate      string
endDate        string
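
For example, fetching messages in ascending order within a date range might look like this (a sketch; the host and date format are assumptions, check your deployment):

// Sketch of listing chat messages with the query parameters above.
const url = new URL(
  "https://your-langflux-host/api/v1/chatmessage/your-chatflowid"
);
url.searchParams.set("sort", "ASC");
url.searchParams.set("startDate", "2024-01-01"); // assumed date format
url.searchParams.set("endDate", "2024-01-31");   // assumed date format

const messages = await (await fetch(url)).json();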

All conversations can also be visualized and managed from the UI.