OpenAI Text Embedding 3
Using OpenAI Latest Embeddings In A RAG System With MongoDB
OpenAI recently released new embedding and moderation models. This article walks through the step-by-step implementation of one of the new embedding models, text-embedding-3-small, within a Retrieval-Augmented Generation (RAG) system powered by the MongoDB Atlas Vector Database.
Step 1: Libraries Installation
Below are brief explanations of the tools and libraries utilised within the implementation code:
- datasets: Part of the Hugging Face ecosystem. Installing 'datasets' gives access to a number of pre-processed and ready-to-use datasets, which are essential for training and fine-tuning machine learning models or benchmarking their performance.
- pandas: A data science library that provides robust data structures and methods for data manipulation, processing and analysis.
- openai: The official Python client library for accessing OpenAI's suite of AI models and tools, including GPT and embedding models.
- pymongo: A Python toolkit for MongoDB. It enables interactions with a MongoDB database.
Successfully installed datasets-2.16.1 dill-0.3.7 dnspython-2.5.0 h11-0.14.0 httpcore-1.0.2 httpx-0.26.0 multiprocess-0.70.15 openai-1.10.0 pymongo-4.6.1 typing-extensions-4.9.0
Step 2: Data Loading
Load the dataset titled "AIatMongoDB/embedded_movies". This dataset is a collection of movie-related details that include attributes such as the title, release year, cast, plot and more. A unique feature of this dataset is the plot_embedding field for each movie. These embeddings are generated using OpenAI's text-embedding-ada-002 model.
Step 3: Data Cleaning and Preparation
The next step cleans the data and prepares it for the following stage, in which new embeddings are generated with the new OpenAI embedding model.
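A sketch of the cleaning pass, assuming (as the missing-value counts below suggest) that rows with a missing plot are dropped before re-embedding:

```python
import pandas as pd

def remove_missing_plots(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows whose `plot` is missing; there is nothing to embed for them."""
    return df.dropna(subset=["plot"])

# Hypothetical usage on the DataFrame from the previous step:
# dataset_df = remove_missing_plots(dataset_df)
```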
Columns: Index(['writers', 'cast', 'plot', 'countries', 'directors', 'poster', 'genres',
'imdb', 'num_mflix_comments', 'runtime', 'fullplot', 'languages',
'title', 'awards', 'type', 'plot_embedding', 'rated', 'metacritic'],
dtype='object')
Number of rows and columns: (1500, 18)
Basic Statistics for numerical data:
num_mflix_comments runtime metacritic
count 1500.000000 1485.000000 572.000000
mean 6.071333 111.977104 51.646853
std 27.378982 42.090386 16.861996
min 0.000000 6.000000 9.000000
25% 0.000000 96.000000 40.000000
50% 0.000000 106.000000 51.000000
75% 1.000000 121.000000 63.000000
max 158.000000 1256.000000 97.000000
Number of missing values in each column:
writers 13
cast 1
plot 27
countries 0
directors 13
poster 89
genres 0
imdb 0
num_mflix_comments 0
runtime 15
fullplot 48
languages 1
title 0
awards 0
type 0
plot_embedding 28
rated 308
metacritic 928
dtype: int64
Number of missing values in each column after removal:
writers 13
cast 1
plot 0
countries 0
directors 13
poster 78
genres 0
imdb 0
num_mflix_comments 0
runtime 14
fullplot 21
languages 1
title 0
awards 0
type 0
plot_embedding 1
rated 284
metacritic 903
dtype: int64
Step 4: Create embeddings with OpenAI
This stage focuses on generating new embeddings using OpenAI's advanced model. This demonstration uses a Google Colab notebook, where environment variables are configured in the notebook's Secrets section and accessed via the userdata module. In a production environment, secret keys are usually stored in a '.env' file or equivalent.
An OpenAI API key is required to ensure the successful completion of this step. More details on OpenAI's embedding models can be found on the official site.
Step 5: Vector Database Setup and Data Ingestion
MongoDB acts as both an operational and a vector database. It offers a database solution that efficiently stores, queries and retrieves vector embeddings. The advantages of this lie in the simplicity of database maintenance, management and cost.
To create a new MongoDB database, set up a database cluster:
- Head over to the MongoDB official site and register for a free MongoDB Atlas account, or, for existing users, sign in to MongoDB Atlas.
- Select the 'Database' option on the left-hand pane to navigate to the Database Deployment page, which shows the deployment specification of any existing cluster. Create a new database cluster by clicking the "+Create" button.
- Select all applicable configurations for the database cluster. Once all configuration options are selected, click the "Create Cluster" button to deploy the newly created cluster. MongoDB also enables the creation of free clusters on the "Shared" tab.
Note: Don't forget to whitelist the IP of the Python host, or 0.0.0.0/0 (any IP) when creating a proof of concept.
- After the cluster has been created and deployed successfully, it becomes accessible on the 'Database Deployment' page.
- Click the "Connect" button of the cluster to view the options for connecting to it via various language drivers.
- This tutorial only requires the cluster's URI (uniform resource identifier). Grab the URI and copy it into the Google Colab Secrets environment in a variable named MONGO_URI, or place it in a .env file or equivalent.
Connection to MongoDB successful
DeleteResult({'n': 2946, 'electionId': ObjectId('7fffffff0000000000000002'), 'opTime': {'ts': Timestamp(1706777337, 131), 't': 2}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1706777337, 136), 'signature': {'hash': b'I}`\x92v\x00\n\x1e\x00_\x13}\x875O\xa1[\xb4\xf6\x18', 'keyId': 7330233208207835141}}, 'operationTime': Timestamp(1706777337, 131)}, acknowledged=True)
Data ingestion into MongoDB completed
Step 6: Create a Vector Search Index
At this point, make sure the vector search index has been created via MongoDB Atlas.
This step is mandatory for conducting efficient and accurate vector-based searches on the embeddings stored within the documents of the 'movie_collection' collection. Creating a vector search index makes it possible to traverse the documents efficiently and retrieve those whose embeddings match the query embedding by vector similarity. See the MongoDB Vector Search Index documentation for more details.
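An Atlas Vector Search index definition for this collection might look like the following, assuming the new embeddings are stored in a field named plot_embedding_optimised (text-embedding-3-small produces 1536-dimensional vectors by default):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "plot_embedding_optimised",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```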
Step 7: Perform Vector Search on User Queries
This step combines the activities of the previous steps to provide vector search over stored records based on embedded user queries.
This step implements a function that returns a vector search result by generating a query embedding and defining a MongoDB aggregation pipeline. The pipeline, consisting of the $vectorSearch and $project stages, queries using the generated vector and formats the results to include only required information like plot, title, and genres while incorporating a search score for each result.
This selective projection enhances query performance by reducing data transfer and optimizes the use of network and memory resources, which is especially important when handling large datasets. For AI engineers and developers thinking about data security at an early stage, the risk of leaking sensitive data to the client side can be minimized by carefully excluding fields irrelevant to the user's query.
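A sketch of the search function described above; the index name vector_index and the embedding field plot_embedding_optimised are assumptions, and the embedding function is passed in as a parameter so the pipeline construction stays independently testable:

```python
def build_vector_search_pipeline(query_embedding, limit=5):
    """Build the $vectorSearch + $project aggregation pipeline."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",             # assumed index name
                "queryVector": query_embedding,
                "path": "plot_embedding_optimised",  # assumed embedding field
                "numCandidates": 150,                # candidates considered before the final cut
                "limit": limit,
            }
        },
        {
            # Project only what the application needs, plus the similarity score
            "$project": {
                "_id": 0,
                "title": 1,
                "plot": 1,
                "genres": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

def vector_search(user_query, collection, embed_fn, limit=5):
    """Embed the user query and run the aggregation against MongoDB."""
    query_embedding = embed_fn(user_query)
    if query_embedding is None:
        return []
    return list(collection.aggregate(build_vector_search_pipeline(query_embedding, limit)))
```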
Step 8: Handling User Query and Result
The final step in the implementation phase focuses on the practical application of our vector search functionality and AI integration to handle user queries effectively.
The handle_user_query function performs a vector search on the MongoDB collection based on the user's query and utilizes OpenAI's GPT-3.5 model to generate context-aware responses.
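A sketch of handle_user_query under the assumptions of the earlier steps (it reuses vector_search, get_embedding and the OpenAI client defined there); the context-formatting helper is split out so it can be exercised on its own:

```python
def format_search_context(results):
    """Flatten search results into the text block handed to the chat model."""
    return "\n".join(
        f"Title: {doc.get('title')}, Plot: {doc.get('plot')}" for doc in results
    )

def handle_user_query(query, collection):
    # `vector_search`, `get_embedding` and `client` come from the previous steps
    results = vector_search(query, collection, get_embedding)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a movie recommendation system."},
            {
                "role": "user",
                "content": f"Answer this user query: {query}\n"
                           f"Use the following context:\n{format_search_context(results)}",
            },
        ],
    )
    return completion.choices[0].message.content
```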
Response: Based on the given context, the best romantic movie recommendation would be "Gorgeous". This movie combines romance and the story of a girl searching for love with a kind-hearted professional fighter.
Source Information:
Title: Run, Plot: This action movie is filled with romance and adventure. As Abhisek fights for his life against the forces of crime and injustice, he meets Bhoomika, who captures his heart.
Title: China Girl, Plot: A modern day Romeo & Juliet story is told in New York when an Italian boy and a Chinese girl become lovers, causing a tragic conflict between ethnic gangs.
Title: Gorgeous, Plot: A romantic girl travels to Hong Kong in search of certain love but instead meets a kind-hearted professional fighter with whom she begins to fall for instead.
Title: Once a Thief, Plot: A romantic and action packed story of three best friends, a group of high end art thieves, who come into trouble when a love-triangle forms between them.
Title: House of Flying Daggers, Plot: A romantic police captain breaks a beautiful member of a rebel group out of prison to help her rejoin her fellows, but things are not what they seem.