Supabase

Prerequisite

  1. Register an account for Supabase

  2. Click New project

  3. Input the required fields:
     • Name: the name of the project to be created (e.g. LangFlux)
     • Database Password: the password for your Postgres database (e.g. click Generate a password)

  4. Click Create new project and wait for the project to finish setting up

  5. Click SQL Editor

  6. Click New query

  7. Copy & paste the query below and run it with Ctrl + Enter or by clicking RUN. It creates a table named documents and a query function named match_documents; these are the Table Name and Query Name values you will enter during Setup. An example call to the function follows the query.

    -- Enable the pgvector extension to work with embedding vectors
    create extension vector;
    
    -- Create a table to store your documents
    create table documents (
      id bigserial primary key,
      content text, -- corresponds to Document.pageContent
      metadata jsonb, -- corresponds to Document.metadata
      embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
    );
    
    -- Create a function to search for documents
    create function match_documents (
      query_embedding vector(1536),
      match_count int DEFAULT null,
      filter jsonb DEFAULT '{}'
    ) returns table (
      id bigint,
      content text,
      metadata jsonb,
      similarity float
    )
    language plpgsql
    as $$
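    -- Resolve ambiguous names (id, content, metadata) in favour of the table columns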
    #variable_conflict use_column
    begin
      return query
      select
        id,
        content,
        metadata,
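        -- <=> is pgvector's cosine distance; 1 - distance gives a similarity score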
        1 - (documents.embedding <=> query_embedding) as similarity
      from documents
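      -- jsonb containment: keep only rows whose metadata includes the filter object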
      where metadata @> filter
      order by documents.embedding <=> query_embedding
      limit match_count;
    end;
    $$;
    

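The match_documents function created above is the function the Supabase vector store calls at query time to run the similarity search. As an optional sanity check you can call it yourself from the SQL Editor. The snippet below is a hypothetical example: it assumes at least one document has already been upserted and uses a made-up {"source": "web"} metadata filter (pass '{}'::jsonb to skip filtering).

    -- Hypothetical smoke test: reuse the embedding of an existing row as the
    -- query vector and return the 4 closest documents whose metadata contains
    -- {"source": "web"}
    select id, content, similarity
    from match_documents(
      (select embedding from documents limit 1),  -- query_embedding
      4,                                          -- match_count
      '{"source": "web"}'::jsonb                  -- metadata filter
    );
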
Setup

  1. Click Project Settings\

  2. Get your Project URL & API Key

  3. Copy & paste each detail (API Key, URL, Table Name, Query Name) into the Supabase Upsert Document node or the Supabase Load Existing node (an optional verification query follows these steps)

  4. Document can be connected with any node under the Document Loader category

  5. Embeddings can be connected with any node under the Embeddings category
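
Once a few documents have been upserted through the flow, you can optionally confirm from the Supabase SQL Editor that rows were written. This check is not part of the setup; it only reads the documents table created in the Prerequisite section.

    -- Optional: count rows written by the Supabase Upsert Document node
    select count(*) as stored_documents from documents;

    -- Peek at a few rows (first 80 characters of each document's content)
    select id, left(content, 80) as preview, metadata
    from documents
    limit 5;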

Resources

  • LangChain JS Supabase
  • Supabase Blog Post