
Memory

Memory allows you to chat with the AI as if it remembers previous conversations.

Human: hi i am bob

AI: Hello Bob! It's nice to meet you. How can I assist you today?

Human: what's my name?

AI: Your name is Bob, as you mentioned earlier.

Under the hood, these conversations are stored in arrays or databases and provided as context to the LLM. For example:

You are an assistant to a human, powered by a large language model trained by OpenAI.

Whether the human needs help with a specific question or just wants to have a conversation about a particular topic, you are here to assist.

Current conversation:
{history}
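
As a minimal illustration of the idea (not LangFlux's actual implementation), the sketch below keeps the conversation in an in-memory array and substitutes it for the {history} placeholder before each call. The ChatMessage shape and the remember/buildPrompt helpers are hypothetical names introduced for this example:

```typescript
// Hypothetical sketch: an in-memory message buffer injected into a prompt template.
type ChatMessage = { role: "Human" | "AI"; content: string };

const history: ChatMessage[] = [];

const TEMPLATE = `You are an assistant to a human, powered by a large language model trained by OpenAI.

Current conversation:
{history}`;

// Record one turn of the conversation.
function remember(role: ChatMessage["role"], content: string): void {
  history.push({ role, content });
}

// Render the stored turns into the {history} placeholder.
function buildPrompt(): string {
  const rendered = history
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n");
  return TEMPLATE.replace("{history}", rendered);
}

remember("Human", "hi i am bob");
remember("AI", "Hello Bob! It's nice to meet you. How can I assist you today?");
remember("Human", "what's my name?");
console.log(buildPrompt());
// The LLM now sees the earlier turns and can answer "Your name is Bob".
```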

There are two main ways to store conversations:

  • Short Term Memory

  • Long Term Memory

For the OpenAI Assistant, Threads are used to store conversations.
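
For reference, here is a minimal sketch of storing messages in a thread with the official OpenAI Node SDK's beta Assistants API; it only shows the thread-side storage, and the example message content is an assumption, not LangFlux configuration:

```typescript
// Hypothetical sketch using the OpenAI Node SDK's beta Threads API.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // Create a thread; OpenAI persists the conversation server-side.
  const thread = await openai.beta.threads.create();

  // Append a user message to the thread.
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: "hi i am bob",
  });

  // Later turns are added to the same thread, so the assistant
  // sees the full conversation without the client resending history.
  const messages = await openai.beta.threads.messages.list(thread.id);
  console.log(messages.data);
}

main();
```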
