LangChain And Autogen


🧠 Multi-Agent QA System with RAG

🚀 If you haven't signed up for LanceDB Cloud yet, click here to get started!

This notebook demonstrates an end-to-end question answering system that combines retrieval-augmented generation (RAG) with multi-agent collaboration to achieve accurate, context-aware answers. Built on the SQuAD v2 benchmark, the implementation uses the following key components:

  • LanceDB-powered context retrieval
  • AutoGen-managed agent validation cycles
  • LangChain-optimized document processing

💡 Example Output

	QUESTION: What is the capital of France?
INSTRUCTIONS: Provide verified answer using the context retrieval workflow

--------------------------------------------------------------------------------

Next speaker: QA_Specialist

QA_Specialist (to chat_manager):

1. Context 1: "Paris is the most populous city in France and the capital of the country."
   Context 2: "The capital and the most populous city of France is Paris, often called the City of Light."
   Context 3: "France, officially the French Republic, has its capital in Paris."

2. Answers from Contexts:
    -From Context 1: Paris
    -From Context 2: Paris
    -From Context 3: Paris

3. Final Answer: Paris.

--------------------------------------------------------------------------------

Next speaker: Fact_Checker

Fact_Checker (to chat_manager):

Paris

--------------------------------------------------------------------------------

Next speaker: Coordinator

Coordinator (to chat_manager):

TERMINATE

--------------------------------------------------------------------------------

Question: What is the capital of France?
Answer: paris
Exact Match: 100%
F1: 100%


๐Ÿ› ๏ธ What You'll Build

  • RAG Pipeline with LanceDB vector store (all-MiniLM-L6-v2 embeddings)
  • AutoGen GroupChat with specialized agent roles (retrieve → generate → verify)
  • Conversation Visualizer showing agent interactions

Step 1: Install Required Libraries

[ ]
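The install cell is not shown in this export. A plausible install command, based on the libraries used in the later steps (the exact package names and versions are assumptions):

```shell
# Package names inferred from the libraries used below -- adjust as needed.
pip install lancedb pyautogen langchain sentence-transformers datasets
```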

Step 2: Obtain the API key from the dashboard

  • Get the DB URI

The DB URI starts with db:// and can be found on the project page of the dashboard. In the following example, the DB URI is db://test-sfifxz.

db-uri.png

  • Get the API key: obtain a LanceDB Cloud API key by clicking GENERATE API KEY on the table page.

💡 Copy the code block for connecting to LanceDB Cloud that is shown at the last step of API key generation. image.png

[66]
[67]

Paste your OpenAI API key

[68]
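The connection cell is also stripped in this export. A minimal sketch of connecting to LanceDB Cloud, assuming the URI and API key from the dashboard are supplied via environment variables (the variable names and the default placeholders are assumptions, and the import is deferred so the sketch reads without lancedb installed):

```python
import os

def connect_to_lancedb_cloud(uri: str, api_key: str):
    """Open a connection to LanceDB Cloud after a quick sanity check on the URI."""
    if not uri.startswith("db://"):
        raise ValueError("LanceDB Cloud URIs start with 'db://'")
    import lancedb  # deferred import: only needed when actually connecting
    return lancedb.connect(uri, api_key=api_key)

# Placeholders -- substitute the values from your dashboard.
LANCEDB_URI = os.environ.get("LANCEDB_URI", "db://your-project-uri")
LANCEDB_API_KEY = os.environ.get("LANCEDB_API_KEY", "your-api-key")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "your-openai-key")
```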

Step 3: Import libraries

[69]

Step 4: Load SQuAD dataset and chunk contexts

Note: We load only the first 1,000 rows of the training set to speed up this example.

[70]
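The loading cell is stripped here; a sketch of the chunking step, assuming SQuAD v2 is loaded with the Hugging Face `datasets` library and contexts are split into overlapping character windows (the chunk size and overlap values are assumptions):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a context passage into overlapping character windows."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap  # step forward, keeping `overlap` chars of context
    return chunks

# In the notebook, contexts come from the first 1,000 SQuAD v2 training rows:
#   from datasets import load_dataset
#   squad = load_dataset("squad_v2", split="train[:1000]")
#   contexts = sorted({row["context"] for row in squad})
sample = "Paris is the most populous city in France and the capital of the country."
print(chunk_text(sample, chunk_size=40, overlap=10))
```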

Step 5: Create embeddings and ingest them to LanceDB Cloud

This step might take a few minutes to generate the embeddings.

[71]
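A sketch of the embedding-and-ingest step, using the all-MiniLM-L6-v2 model mentioned above via sentence-transformers (the table name, schema, and helper shape are assumptions; imports are deferred so the sketch reads without the libraries installed):

```python
def embed_and_ingest(db, chunks, table_name: str = "squad_contexts"):
    """Embed each chunk with all-MiniLM-L6-v2 and write the rows to a LanceDB table."""
    from sentence_transformers import SentenceTransformer  # deferred import

    model = SentenceTransformer("all-MiniLM-L6-v2")
    vectors = model.encode(chunks, show_progress_bar=True)
    rows = [
        {"text": chunk, "vector": vector.tolist()}
        for chunk, vector in zip(chunks, vectors)
    ]
    # Overwrite so the cell can be re-run without duplicating rows.
    return db.create_table(table_name, data=rows, mode="overwrite")
```

Called as `table = embed_and_ingest(db, contexts)` once the Step 2 connection (`db`) and Step 4 chunks exist.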

Step 6: Configure Agents

In this setup, we have:

  1. QA agent: to generate answers from context
  2. Review agent: to validate answers
  3. User proxy: to manage the conversation flow
[72]
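A sketch of the agent setup, using the agent names that appear in the transcripts (QA_Specialist, Fact_Checker, Coordinator). The system messages and the `llm_config` shape are assumptions; the AutoGen import is deferred so the sketch reads without the library installed:

```python
def build_agents(llm_config: dict):
    """Create the three agents seen in the transcript."""
    from autogen import AssistantAgent, UserProxyAgent  # deferred import

    qa_specialist = AssistantAgent(
        name="QA_Specialist",
        system_message=(
            "Answer the question using only the retrieved contexts. "
            "List each context, the answer it supports, and a final answer."
        ),
        llm_config=llm_config,
    )
    fact_checker = AssistantAgent(
        name="Fact_Checker",
        system_message=(
            "Validate the proposed answer against the contexts "
            "and reply with the confirmed answer only."
        ),
        llm_config=llm_config,
    )
    coordinator = UserProxyAgent(
        name="Coordinator",
        human_input_mode="NEVER",
        code_execution_config=False,
        # End the chat once the coordinator says TERMINATE.
        is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
    )
    return qa_specialist, fact_checker, coordinator
```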

Step 7: Orchestrate the workflow

The agents collaborate to generate the initial answer → verify it against the context → refine the answer if needed.

[73]
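The retrieve → generate → verify loop above can be sketched as hypothetical glue code: embed the question, pull the nearest chunks from LanceDB, then run an AutoGen GroupChat over them. The prompt wording and `top_k` value are assumptions; imports are deferred:

```python
def answer_question(db, agents, question: str,
                    table_name: str = "squad_contexts", top_k: int = 3):
    """Retrieve contexts from LanceDB, then let the agents discuss and verify."""
    from sentence_transformers import SentenceTransformer
    from autogen import GroupChat, GroupChatManager

    qa_specialist, fact_checker, coordinator = agents

    # Retrieve: embed the question and pull the nearest context chunks.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    hits = (db.open_table(table_name)
              .search(model.encode(question))
              .limit(top_k)
              .to_list())
    contexts = "\n".join(f"Context {i + 1}: {h['text']}" for i, h in enumerate(hits))

    # Generate and verify: QA_Specialist answers, Fact_Checker validates,
    # Coordinator terminates the chat once the answer is confirmed.
    group = GroupChat(agents=[coordinator, qa_specialist, fact_checker],
                      messages=[], max_round=6)
    manager = GroupChatManager(groupchat=group, llm_config=qa_specialist.llm_config)
    coordinator.initiate_chat(
        manager,
        message=(f"QUESTION: {question}\nCONTEXTS:\n{contexts}\n"
                 "INSTRUCTIONS: Provide verified answer using the context retrieval workflow"),
    )
    return fact_checker.last_message()["content"]
```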

Step 8: Set up the evaluation pipeline

[ ]
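The evaluation cell is stripped in this export; a self-contained sketch of the standard SQuAD-style Exact Match and token-level F1 metrics that the output below reports (the normalization follows the usual SQuAD recipe of lowercasing and dropping punctuation, articles, and extra whitespace):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, strip punctuation, articles, whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, truth: str) -> float:
    return float(normalize(prediction) == normalize(truth))

def f1_score(prediction: str, truth: str) -> float:
    pred_tokens = normalize(prediction).split()
    truth_tokens = normalize(truth).split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(f"Exact Match: {exact_match('paris', 'Paris'):.0%}")  # 100%
print(f"F1: {f1_score('william shakespeare', 'Shakespeare'):.0%}")
```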

Step 9: Let's test!

[76]
Coordinator (to chat_manager):

QUESTION: What is the capital of France?
INSTRUCTIONS: Provide verified answer using the context retrieval workflow

--------------------------------------------------------------------------------

Next speaker: QA_Specialist

QA_Specialist (to chat_manager):

1. Context 1: "Paris is the most populous city in France and the capital of the country."
   Context 2: "The capital and the most populous city of France is Paris, often called the City of Light."
   Context 3: "France, officially the French Republic, has its capital in Paris."

2. Answers from Contexts: 
    -From Context 1: Paris
    -From Context 2: Paris
    -From Context 3: Paris

3. Final Answer: Paris.

--------------------------------------------------------------------------------

Next speaker: Fact_Checker

Fact_Checker (to chat_manager):

Paris

--------------------------------------------------------------------------------

Next speaker: Coordinator

Coordinator (to chat_manager):

TERMINATE

--------------------------------------------------------------------------------

Question: What is the capital of France?
Answer: paris
Exact Match: 100%
F1: 100%
-----------
Coordinator (to chat_manager):

QUESTION: Who wrote Romeo and Juliet?
INSTRUCTIONS: Provide verified answer using the context retrieval workflow

--------------------------------------------------------------------------------

Next speaker: QA_Specialist

QA_Specialist (to chat_manager):

1. Retrieved Context 1: "Romeo and Juliet is a tragedy written by William Shakespeare early in his career about two young star-crossed lovers whose deaths ultimately reconcile their feuding families."
   Retrieved Context 2: "The Shakespearean work Romeo and Juliet is among the most popularly known pieces by the playwright."
   Retrieved Context 3: "Shakespeare became famous for several plays, including Romeo and Juliet which remains one of his most performed and well-known works."

2. Exact Answer from Context: William Shakespeare

3. Final Answer: William Shakespeare

--------------------------------------------------------------------------------

Next speaker: Fact_Checker

Fact_Checker (to chat_manager):

William Shakespeare

--------------------------------------------------------------------------------

Next speaker: Coordinator

Coordinator (to chat_manager):

TERMINATE

--------------------------------------------------------------------------------

Question: Who wrote Romeo and Juliet?
Answer: william shakespeare
Exact Match: 100%
F1: 100%
-----------