Evaluation with Arize Phoenix

This guide demonstrates how to use Arize Phoenix to evaluate a Retrieval-Augmented Generation (RAG) pipeline built upon Milvus.

A RAG system combines a retrieval system with a generative model to generate new text based on a given prompt. The system first retrieves relevant documents from a corpus using Milvus, and then uses a generative model to generate new text grounded in the retrieved documents.

Arize Phoenix is a framework that helps you evaluate your RAG pipelines. There are existing tools and frameworks that help you build these pipelines, but evaluating them and quantifying their performance can be hard. This is where Arize Phoenix comes in.
Prerequisites
Before running this notebook, make sure you have the following dependencies installed:
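The exact package set is not reproduced here; a plausible install line for this tutorial (the package list and the Phoenix version pin are assumptions) would be:

```
pip install --upgrade pymilvus openai requests tqdm pandas "arize-phoenix>=4.29.0" nest-asyncio
```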
If you are using Google Colab, you may need to restart the runtime to enable the newly installed dependencies (click the "Runtime" menu at the top of the screen and select "Restart session" from the dropdown menu).

We will use OpenAI as the LLM in this example. You should prepare the API key OPENAI_API_KEY as an environment variable.
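For example, in a notebook you might set it directly (replace the placeholder with your own key):

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-***********"
```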
Define the RAG pipeline
We will define the RAG class that uses Milvus as the vector store and OpenAI as the LLM.

The class contains the load method, which loads the text data into Milvus; the retrieve method, which retrieves the text data most similar to the given question; and the answer method, which answers the given question with the retrieved knowledge.
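Below is a minimal sketch of what such a class could look like. The collection name, embedding model (text-embedding-3-small), and chat model (gpt-4o-mini) are illustrative assumptions, not necessarily the notebook's exact choices:

```python
from typing import List

from openai import OpenAI
from pymilvus import MilvusClient
from tqdm import tqdm


class RAG:
    """RAG with Milvus as the vector store and OpenAI as the LLM."""

    def __init__(self, openai_client: OpenAI, milvus_client: MilvusClient):
        self.openai_client = openai_client
        self.milvus_client = milvus_client
        self.collection_name = "my_rag_collection"  # assumed collection name
        self.embedding_model = "text-embedding-3-small"  # assumed embedding model

    def _emb_text(self, text: str) -> List[float]:
        # Embed a single piece of text with the OpenAI embeddings API.
        return (
            self.openai_client.embeddings.create(input=text, model=self.embedding_model)
            .data[0]
            .embedding
        )

    def load(self, texts: List[str]) -> None:
        # (Re)create the collection and insert one embedding per text chunk.
        if self.milvus_client.has_collection(self.collection_name):
            self.milvus_client.drop_collection(self.collection_name)
        self.milvus_client.create_collection(
            collection_name=self.collection_name,
            dimension=len(self._emb_text("test")),
        )
        data = [
            {"id": i, "vector": self._emb_text(text), "text": text}
            for i, text in enumerate(tqdm(texts, desc="Creating embeddings"))
        ]
        self.milvus_client.insert(collection_name=self.collection_name, data=data)

    def retrieve(self, question: str, top_k: int = 3) -> List[str]:
        # Vector-search the collection for the chunks most similar to the question.
        results = self.milvus_client.search(
            collection_name=self.collection_name,
            data=[self._emb_text(question)],
            limit=top_k,
            output_fields=["text"],
        )
        return [hit["entity"]["text"] for hit in results[0]]

    def answer(self, question: str, return_retrieved_text: bool = False):
        # Generate an answer grounded in the retrieved context.
        retrieved_texts = self.retrieve(question)
        context = "\n".join(retrieved_texts)
        response = self.openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer the question using only the provided context."},
                {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        answer = response.choices[0].message.content
        return (answer, retrieved_texts) if return_retrieved_text else answer
```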
Let's initialize the RAG class with OpenAI and Milvus clients; an example follows the notes below.
As for the argument of `MilvusClient`:

- Setting the `uri` as a local file, e.g. `./milvus.db`, is the most convenient method, as it automatically utilizes Milvus Lite to store all data in this file.
- If you have a large scale of data, you can set up a more performant Milvus server on Docker or Kubernetes. In this setup, please use the server URI, e.g. `http://localhost:19530`, as your `uri`.
- If you want to use Zilliz Cloud, the fully managed cloud service for Milvus, adjust the `uri` and `token`, which correspond to the Public Endpoint and API key in Zilliz Cloud.
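For example, with Milvus Lite storing data in a local file:

```python
from openai import OpenAI
from pymilvus import MilvusClient

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
milvus_client = MilvusClient(uri="./milvus.db")

my_rag = RAG(openai_client=openai_client, milvus_client=milvus_client)
```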
Run the RAG pipeline and get results
We use the Milvus development guide as the private knowledge in our RAG, which is a good data source for a simple RAG pipeline.

Download it and load it into the RAG pipeline.
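A sketch of this step; the raw URL and the simple split on "# " headings are assumptions about how the notebook chunks the guide:

```python
import urllib.request

url = "https://raw.githubusercontent.com/milvus-io/milvus/master/DEVELOPMENT.md"
file_path = "./Milvus_DEVELOPMENT.md"
urllib.request.urlretrieve(url, file_path)

with open(file_path) as f:
    file_text = f.read()

# Naive chunking: split the guide on "# " headings.
text_lines = file_text.split("# ")
my_rag.load(text_lines)
```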
Creating embeddings: 100%|██████████| 47/47 [00:12<00:00, 3.84it/s]
Let's define a query question about the content of the development guide documentation, and then use the answer method to get the answer along with the retrieved context texts.
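For example, a question that matches the output shown below:

```python
question = "What is the hardware requirements specification if I want to build Milvus and run from source code?"
my_rag.answer(question, return_retrieved_text=True)
```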
('The hardware requirements specification to build and run Milvus from source code are:\n\n- 8GB of RAM\n- 50GB of free disk space',
 ['Hardware Requirements\n\nThe following specification (either physical or virtual machine resources) is recommended for Milvus to build and run from source code.\n\n```\n- 8GB of RAM\n- 50GB of free disk space\n```\n\n##',
  'Building Milvus on a local OS/shell environment\n\nThe details below outline the hardware and software requirements for building on Linux and MacOS.\n\n##',
  "Software Requirements\n\nAll Linux distributions are available for Milvus development. However a majority of our contributor worked with Ubuntu or CentOS systems, with a small portion of Mac (both x86_64 and Apple Silicon) contributors. If you would like Milvus to build and run on other distributions, you are more than welcome to file an issue and contribute!\n\nHere's a list of verified OS types where Milvus can successfully build and run:\n\n- Debian/Ubuntu\n- Amazon Linux\n- MacOS (x86_64)\n- MacOS (Apple Silicon)\n\n##"])

Now let's prepare some questions with their corresponding ground truth answers, and get answers and contexts from our RAG pipeline.
Answering questions: 100%|██████████| 3/3 [00:03<00:00, 1.04s/it]
Evaluation with Arize Phoenix
We use Arize Phoenix to evaluate our retrieval-augmented generation (RAG) pipeline, focusing on two key metrics:
- Hallucination Evaluation: Determines if the content is factual or hallucinatory (information not grounded in context), ensuring data integrity.
  - Hallucination Explanation: Explains why a response is factual or not.
- QA Evaluation: Assesses the accuracy of model answers to input queries.
  - QA Explanation: Details why an answer is correct or incorrect.
Phoenix Tracing Overview
Phoenix provides OTEL-compatible tracing for LLM applications, with integrations for frameworks like Langchain, LlamaIndex, and SDKs such as OpenAI and Mistral. Tracing captures the entire request flow, offering insights into:
- Application Latency: Identify and optimize slow LLM invocations and component performance.
- Token Usage: Break down token consumption for cost optimization.
- Runtime Exceptions: Capture critical issues like rate-limiting.
- Retrieved Documents: Analyze the retrieved documents, their relevance scores, and their ordering.
By utilizing Phoenix’s tracing, you can identify bottlenecks, optimize resources, and ensure system reliability across various frameworks and languages.
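A minimal sketch of launching Phoenix locally and instrumenting the OpenAI SDK (assuming the openinference-instrumentation-openai package is installed):

```python
import phoenix as px
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

# Start the local Phoenix UI (served at http://localhost:6006/ by default).
session = px.launch_app()

# Route OpenTelemetry traces from the OpenAI SDK to the local Phoenix collector.
tracer_provider = register()
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```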
🌍 To view the Phoenix app in your browser, visit http://localhost:6006/
📖 For more information on how to use Phoenix, check out https://docs.arize.com/phoenix
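With tracing in place, the evaluators described above can be run over the collected results. A sketch, assuming a DataFrame with the questions as input, the generated answers as output, and the concatenated retrieved contexts as reference (the judge model gpt-4o is an assumption):

```python
import pandas as pd
from phoenix.evals import (
    HallucinationEvaluator,
    OpenAIModel,
    QAEvaluator,
    run_evals,
)

# Column names follow the conventions expected by the Phoenix evaluators.
df = pd.DataFrame(
    {
        "input": questions,
        "output": answers,
        "reference": ["\n".join(ctx) for ctx in contexts],
    }
)

eval_model = OpenAIModel(model="gpt-4o")
hallucination_evaluator = HallucinationEvaluator(eval_model)
qa_evaluator = QAEvaluator(eval_model)

hallucination_eval_df, qa_eval_df = run_evals(
    dataframe=df,
    evaluators=[hallucination_evaluator, qa_evaluator],
    provide_explanation=True,  # adds an "explanation" column to each result
)
```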

run_evals |██████████| 6/6 (100.0%) | ⏳ 00:03<00:00 | 1.64it/s