Conversational Retrieval QA Chain

A chain for performing question-answering tasks with a retrieval component.

Definitions

A retrieval-based question-answering chain that integrates a retrieval component and lets you configure input parameters to perform question-answering tasks.

Retrieval-Based Chatbots: chatbots that generate responses by selecting pre-defined responses from a database or a set of possible responses. They "retrieve" the most appropriate response based on the user's input.

QA (Question Answering): systems designed to answer questions posed in natural language. They typically involve understanding the question and then searching for, or generating, an appropriate answer.
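
Conceptually, this node corresponds to a conversational retrieval chain of the kind provided by LangChain. The sketch below is illustrative only, assuming the LangChain Python package with an OpenAI chat model and a Chroma vector store; it is not LangFlux's internal implementation, but it shows how the language model, retriever, and memory fit together.

```python
# Illustrative sketch only (not LangFlux internals): a conversational
# retrieval QA chain assembled with the LangChain Python library.
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores import Chroma
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Vector Store Retriever input: any vector store exposing as_retriever()
vectorstore = Chroma(persist_directory="./db", embedding_function=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Memory input (optional): keeps chat history so follow-up questions can be
# condensed into standalone questions before retrieval
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Language Model input
llm = ChatOpenAI(temperature=0)

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
)

result = chain.invoke({"question": "What does the indexed document say about pricing?"})
print(result["answer"])
```

In LangFlux, the same wiring is done visually: the chat model, vector store retriever, and (optionally) memory nodes connect into the Conversational Retrieval QA Chain node.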

Inputs

  • Language Model
  • Vector Store Retriever
  • Memory (optional)

Parameters

  • Return Source Documents: Return the citations/sources that were used to build up the response.
  • System Message: An instruction for the LLM on how to answer the query.
  • Chain Option: The method used to summarize, answer questions, and extract information from documents. Read more.

Outputs

  • ConversationalRetrievalQAChain: The final node, which returns the chain's response.
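
Once a chatflow ending in this node is deployed, it can be queried over the LangFlux prediction API (see the API page). The endpoint path, payload, and response fields in the following sketch are assumptions for illustration; consult the API documentation for the exact schema. When Return Source Documents is enabled, the response is expected to include the retrieved sources alongside the answer.

```python
# Hypothetical example of calling a chatflow whose final node is
# ConversationalRetrievalQAChain. The URL, chatflow ID, and response fields
# ("text", "sourceDocuments") are assumptions; check the API page for the
# exact endpoint and schema of your LangFlux deployment.
import requests

CHATFLOW_ID = "your-chatflow-id"  # placeholder
API_URL = f"http://localhost:3000/api/v1/prediction/{CHATFLOW_ID}"  # assumed path

payload = {"question": "Summarize the key points of the indexed documents."}
resp = requests.post(API_URL, json=payload)
resp.raise_for_status()
data = resp.json()

print(data.get("text"))  # the chain's answer
for doc in data.get("sourceDocuments", []):  # present if Return Source Documents is on
    print(doc.get("metadata"))
```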