
Evaluating AI with Haystack

by Bilge Yucel (X, LinkedIn)

In this cookbook, we walk through the Evaluators in Haystack, build an evaluation pipeline, and try out external evaluation frameworks such as FlowJudge.

📚 Useful Resources:

📺 Watch Along


1. Building your pipeline

ARAGOG

This dataset is based on the paper Advanced Retrieval Augmented Generation Output Grading (ARAGOG). It's a collection of papers from ArXiv covering topics around Transformers and Large Language Models, all in PDF format.

The dataset contains:

  • 13 PDF papers.
  • 107 questions and answers generated with the assistance of GPT-4, and validated/corrected by humans.

From this dataset, we use:

  • questions
  • ground-truth answers

Get the dataset here
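Loading the questions and ground-truth answers can be sketched like this. The file path and the JSON layout (parallel lists under the keys `questions` and `ground_truths`) are assumptions based on how the ARAGOG dataset is distributed:

```python
import json

def read_question_answers(path="eval_questions.json"):
    """Load evaluation questions and their ground-truth answers.

    Assumes the JSON layout used by the ARAGOG dataset: two parallel
    lists under the top-level keys "questions" and "ground_truths".
    """
    with open(path, "r") as f:
        data = json.load(f)
    return data["questions"], data["ground_truths"]
```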


Indexing Pipeline

Running the indexing pipeline writes 691 document chunks to the document store.

RAG

<haystack.core.pipeline.pipeline.Pipeline object at 0x309fa13d0>
🚅 Components
  - query_embedder: SentenceTransformersTextEmbedder
  - retriever: InMemoryEmbeddingRetriever
  - chat_prompt_builder: ChatPromptBuilder
  - chat_generator: OpenAIChatGenerator
🛤️ Connections
  - query_embedder.embedding -> retriever.query_embedding (List[float])
  - retriever.documents -> chat_prompt_builder.documents (List[Document])
  - chat_prompt_builder.prompt -> chat_generator.messages (List[ChatMessage])

2. Human Evaluation

Both the question list and the ground-truth answer list contain 107 entries. Here is a sample question with its ground-truth answer:
How were the questions for the multitask test sourced, and what was the criteria for their inclusion?
Questions were manually collected by graduate and undergraduate students from freely available online sources, including practice questions for standardized tests and undergraduate courses, ensuring a wide representation of difficulty levels and subjects.
Running the RAG pipeline on this question returns:
{'chat_generator': {'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text='The questions for the multitask test were manually collected by graduate and undergraduate students from freely available sources online. These sources included practice questions for tests such as the Graduate Record Examination and the United States Medical Licensing Examination, as well as questions designed for undergraduate courses and readers of Oxford University Press books. The criteria for inclusion involved ensuring that each subject contained a sufficient number of test examples, with each subject having a minimum of 100 test examples. Tasks that were either too challenging for humans without extensive training or too easy for the machine baselines were filtered out.')], _name=None, _meta={'model': 'gpt-4o-mini-2024-07-18', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 110, 'prompt_tokens': 4550, 'total_tokens': 4660, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}})]}}

3. Deciding on Metrics

  • Semantic Answer Similarity: SASEvaluator compares the embedding of a generated answer against the ground-truth answer, using a common embedding model.
  • Context Relevance: ContextRelevanceEvaluator assesses how relevant the retrieved context is to the query.
  • Faithfulness: FaithfulnessEvaluator evaluates whether the generated answer can be derived from the retrieved context.

4. Building an Evaluation Pipeline


5. Running Evaluation

Run the RAG Pipeline

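One way to collect what the evaluators need, sketched as a helper function. It assumes the component names shown in the RAG pipeline printout above (`query_embedder`, `chat_prompt_builder`, `retriever`, `chat_generator`); `include_outputs_from` exposes the retriever's documents in the result:

```python
def run_rag(rag_pipeline, questions):
    """Run the RAG pipeline once per question, collecting the predicted
    answers and retrieved contexts that the evaluators will need."""
    predicted_answers, retrieved_contexts = [], []
    for question in questions:
        result = rag_pipeline.run(
            {
                "query_embedder": {"text": question},
                "chat_prompt_builder": {"question": question},
            },
            include_outputs_from={"retriever"},  # expose the retriever's documents
        )
        predicted_answers.append(result["chat_generator"]["replies"][0].text)
        retrieved_contexts.append(
            [doc.content for doc in result["retriever"]["documents"]]
        )
    return predicted_answers, retrieved_contexts
```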

Run the Evaluation

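Feeding the collected data to the evaluation pipeline can be sketched as a single fan-out call. The evaluator names (`context_relevance`, `faithfulness`, `sas`) match the metrics reported later; the helper wrapper itself is an illustration:

```python
def run_evaluation(eval_pipeline, questions, contexts, predicted_answers, ground_truth_answers):
    """Give each evaluator the inputs it expects; since the evaluators are
    independent, one pipeline call fans the data out to all three."""
    return eval_pipeline.run({
        "context_relevance": {"questions": questions, "contexts": contexts},
        "faithfulness": {
            "questions": questions,
            "contexts": contexts,
            "predicted_answers": predicted_answers,
        },
        "sas": {
            "ground_truth_answers": ground_truth_answers,
            "predicted_answers": predicted_answers,
        },
    })
```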
100%|██████████| 15/15 [00:10<00:00,  1.43it/s]
100%|██████████| 15/15 [00:33<00:00,  2.23s/it]

6. Analyzing Results

{'metrics': ['context_relevance', 'faithfulness', 'sas'],
 'score': [0.26666666666666666, 0.7, 0.5344941093275944]}

Evaluation Frameworks

Beyond the built-in evaluators, Haystack integrates with external evaluation frameworks. Here we try FlowJudge, which uses a small open-weight language model as the judge instead of a proprietary API.