Self-RAG
Self-RAG (Self-Reflective Retrieval-Augmented Generation) is a method that improves the accuracy and quality of text generated by a language model (LM). It does this by retrieving relevant passages and letting the model reflect on its own output.
The model generates text with the help of the retrieved passages, then critiques its own response by emitting reflection tokens. These tokens indicate whether the model needs more information or whether the answer is complete and supported by the retrieved data.
Research Paper: Self-RAG
Initial Setup
Indexing
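The indexing code is not shown in this export. Conceptually, documents are loaded, split into overlapping chunks, embedded, and stored in a vector store (in LangChain this is typically a text splitter plus something like `Chroma.from_documents`). Below is a dependency-free sketch of just the chunking step; the `chunk_size` and `overlap` values are illustrative, not the notebook's exact settings:

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks.

    The overlap keeps sentences that straddle a chunk boundary
    retrievable from both neighboring chunks.
    """
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

In the notebook, the resulting chunks would then be embedded and indexed so that the retriever in the next step can search them.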
Retriever
Document Grader
The document grader evaluates whether a document is relevant to the given query.
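A grader like this is typically an LLM constrained to a structured output schema. The sketch below shows the schema plus the chain wiring in comments, since the chain needs `langchain-openai` and an API key to run; the model name and prompt wording are assumptions, not the notebook's exact values:

```python
from pydantic import BaseModel, Field


class GradeDocuments(BaseModel):
    """Binary relevance score for a retrieved document."""
    binary_score: str = Field(
        description="Is the document relevant to the question? 'yes' or 'no'"
    )


GRADER_SYSTEM_PROMPT = (
    "You are a grader assessing the relevance of a retrieved document to a "
    "user question. If the document contains keywords or semantic meaning "
    "related to the question, grade it as relevant. Answer 'yes' or 'no'."
)

# Wiring sketch (requires langchain-openai and an OPENAI_API_KEY):
# from langchain_openai import ChatOpenAI
# from langchain_core.prompts import ChatPromptTemplate
# llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model is illustrative
# prompt = ChatPromptTemplate.from_messages(
#     [("system", GRADER_SYSTEM_PROMPT),
#      ("human", "Document:\n{document}\n\nQuestion: {question}")]
# )
# retrieval_grader = prompt | llm.with_structured_output(GradeDocuments)
```

Invoking such a grader on a relevant document yields the `binary_score='yes'` output shown below.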
binary_score='yes'
RAG Chain
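The RAG chain stuffs the graded documents into a prompt and asks the LLM to answer from them; a common LangChain pattern is `hub.pull("rlm/rag-prompt") | llm | StrOutputParser()`. The document-formatting helper that joins passages into one context string is plain Python. In this sketch, `Doc` is a stand-in for LangChain's `Document` class:

```python
class Doc:
    """Minimal stand-in for langchain_core.documents.Document (illustrative)."""
    def __init__(self, page_content: str):
        self.page_content = page_content


def format_docs(docs) -> str:
    """Join retrieved passages into a single context string for the prompt."""
    return "\n\n".join(d.page_content for d in docs)


# Full chain sketch (assumes langchain and a configured LLM):
# from langchain import hub
# from langchain_core.output_parsers import StrOutputParser
# prompt = hub.pull("rlm/rag-prompt")
# rag_chain = prompt | llm | StrOutputParser()
# generation = rag_chain.invoke(
#     {"context": format_docs(docs), "question": question}
# )
```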
"Points on a mortgage, also known as discount points, mortgage points, or simply points, are a form of pre-paid interest that can be paid by a borrower to a lender when arranging a mortgage. One point is equal to one percent of the loan amount. By paying points, a borrower can effectively reduce the interest rate on the loan, resulting in a lower monthly payment. Points can also be used to qualify for a loan based on monthly income versus the monthly loan payment. It's important to note that points are different from origination fees, mortgage arrangement fees, or broker fees."
Hallucination Grader
The hallucination grader checks whether the answer is grounded in and supported by the given set of facts.
GradeHallucinations(binary_score='yes')
Answer Grader
The answer grader evaluates whether an answer effectively addresses the given question.
GradeAnswer(binary_score='yes')
Create Graph
Define Graph State
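The graph state is a dict carried between nodes. A minimal version matching the fields the trace below refers to (the question, the generated answer, and the retrieved documents); the exact field names are assumptions:

```python
from typing import List, TypedDict


class GraphState(TypedDict):
    """State passed between LangGraph nodes."""
    question: str         # the user's query
    generation: str       # the LLM's current answer, if any
    documents: List[str]  # retrieved (and later relevance-filtered) passages
```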
Build Graph
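The graph wires together the nodes seen in the trace below (retrieve, grade_documents, generate) with conditional routing after grading. The routing decision is plain Python; the LangGraph wiring is sketched in comments because it needs the node functions defined earlier in the notebook:

```python
def decide_to_generate(state: dict) -> str:
    """Route after document grading: generate only if relevant docs survived."""
    if state["documents"]:
        return "generate"
    return "end"  # no relevant documents -> stop (or rewrite the query)


# LangGraph wiring sketch (assumes langgraph and the node functions exist):
# from langgraph.graph import StateGraph, END
# workflow = StateGraph(GraphState)
# workflow.add_node("retrieve", retrieve)
# workflow.add_node("grade_documents", grade_documents)
# workflow.add_node("generate", generate)
# workflow.set_entry_point("retrieve")
# workflow.add_edge("retrieve", "grade_documents")
# workflow.add_conditional_edges(
#     "grade_documents", decide_to_generate,
#     {"generate": "generate", "end": END},
# )
# app = workflow.compile()
```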
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
"Node 'grade_documents':"
'\n---\n'
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
"Node 'generate':"
'\n---\n'
('Points on a mortgage, also known as discount points, are a form of pre-paid '
'interest that borrowers can pay to a lender when arranging a mortgage in the '
'United States. One point equals one percent of the loan amount. By paying '
'points, a borrower can reduce the interest rate on the loan, resulting in a '
'lower monthly payment. Points can also be used to qualify for a loan based '
"on monthly income versus the monthly loan payment. It's important to note "
'that points are different from origination fees, mortgage arrangement fees, '
'or broker fees.')
---RETRIEVE---
"Node 'retrieve':"
'\n---\n'
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: ALL DOCUMENTS ARE NOT RELEVANT TO QUESTION---
"Node 'grade_documents':"
'\n---\n'
'No relevant documents found or no generation produced.'
Preparing Data for Evaluation
---RETRIEVE---
---CHECK DOCUMENT RELEVANCE TO QUESTION---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---ASSESS GRADED DOCUMENTS---
---DECISION: GENERATE---
---GENERATE---
---CHECK HALLUCINATIONS---
---DECISION: GENERATION IS GROUNDED IN DOCUMENTS---
---GRADE GENERATION vs QUESTION---
---DECISION: GENERATION ADDRESSES QUESTION---
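Each run's question, retrieved context, and generated answer are collected into rows for the evaluation step. A sketch of the row shape; the field names `query`, `context`, and `response` are assumptions about what the eval expects:

```python
def to_eval_row(question: str, documents: list, generation: str) -> dict:
    """Bundle one graph run into a flat record for evaluation."""
    return {
        "query": question,
        "context": documents,
        "response": generation,
    }
```

A list of such rows, one per question, is then uploaded as the evaluation dataset.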
Evaluation in Athina AI
We will use the Does Response Answer Query eval here. It checks whether the response answers the user's query. Please refer to our documentation for further details.
You can view your dataset at: https://app.athina.ai/develop/7ad4a012-9202-4a5e-a131-769f0e6c620c