Vectara QA Chain


Last updated 1 year ago


A chain for performing question-answering tasks with Vectara.

Definitions

A retrieval-based question-answering chain that integrates with a Vectara retrieval component, letting you configure input parameters and perform question-answering tasks.
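Like other chatflows, a deployed Vectara QA Chain can be queried over the prediction API (see the API page under Using LangFlux). The sketch below builds the request payload and sends it; the endpoint path follows the usual `POST /api/v1/prediction/{chatflowId}` pattern, and the `overrideConfig` key names for the chain's parameters are assumptions that may differ in your deployment.

```python
import json
from urllib import request

# Hypothetical host and chatflow ID; replace with your own deployment values.
LANGFLUX_HOST = "http://localhost:3000"
CHATFLOW_ID = "your-chatflow-id"


def build_prediction_payload(question,
                             response_language=None,
                             max_summarized_results=None):
    """Build the JSON body for a prediction request.

    The overrideConfig keys mirror the chain's input parameters;
    their exact names here are assumptions.
    """
    payload = {"question": question}
    override = {}
    if response_language is not None:
        override["responseLang"] = response_language
    if max_summarized_results is not None:
        override["maxSummarizedResults"] = max_summarized_results
    if override:
        payload["overrideConfig"] = override
    return payload


def ask(question, **overrides):
    """POST the question to the chatflow and return the parsed JSON reply."""
    body = json.dumps(build_prediction_payload(question, **overrides)).encode()
    req = request.Request(
        f"{LANGFLUX_HOST}/api/v1/prediction/{CHATFLOW_ID}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting a parameter from `overrideConfig` leaves the value configured on the node itself in effect.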

Inputs

| Name | Description |
| --- | --- |
| Vectara Store | Vectara vector store to retrieve documents from |

Parameters

| Name | Description |
| --- | --- |
| Summarizer Prompt Name | Model used to generate the summary |
| Response Language | Desired language for the response |
| Max Summarized Results | Number of top results to use in summarization (defaults to 7) |

Outputs

| Name | Description |
| --- | --- |
| VectaraQAChain | Final node to return the response |