MongoDB with AWS Bedrock Agent
MongoDB with Bedrock agent quick tutorial
MongoDB Atlas and Amazon Bedrock have joined forces to streamline the development of generative AI applications through their seamless integration. MongoDB Atlas, a robust cloud-based database service, now offers native support for Amazon Bedrock, AWS's managed service for generative AI. This integration leverages Atlas's vector search capabilities, enabling the effective utilization of enterprise data to augment the foundational models provided by Bedrock, such as Anthropic's Claude and Amazon's Titan. The combination ensures that the generative AI models have access to the most relevant and up-to-date data, significantly improving the accuracy and reliability of AI-driven applications with MongoDB.
This integration simplifies the workflow for developers aiming to implement retrieval-augmented generation (RAG). RAG helps mitigate the issue of hallucinations in AI models by allowing them to fetch and utilize specific data from a predefined knowledge base, in this case, MongoDB Atlas. Developers can easily set up this workflow by creating a vector search index in Atlas, which stores the vector embeddings and metadata of the text data. This setup not only enhances the performance and reliability of AI applications but also ensures data privacy and security through features like AWS PrivateLink.
This notebook demonstrates how to interact with a predefined agent using AWS Bedrock in a Google Colab environment. It utilizes the boto3 library to communicate with the AWS Bedrock service and allows you to input prompts and receive responses directly within the notebook.
Key Features:
- Secure handling of AWS credentials: the getpass module is used to securely enter your AWS Access Key and Secret Key.
- Session management: each session is assigned a random session ID to maintain continuity in conversations.
- Agent Invocation: The notebook sends user prompts to a predefined agent and streams the responses back to the user.
Requirements:
- AWS Access Key and Secret Key with appropriate permissions.
- Boto3 and Requests libraries for interacting with AWS services and fetching data from URLs.
Setting up MongoDB Atlas
- Follow the getting started with Atlas guide and set up your cluster with `0.0.0.0/0` allowed in the IP access list for this notebook.
- Predefine an Atlas Vector Search index on database `bedrock`, collection `agenda`; this collection will host the data for the AWS summit agenda and serve as the context store for the agent. Index name: `vector_index`:
{
"fields": [
{
"type": "vector",
"path": "embedding",
"numDimensions": 1024,
"similarity": "cosine"
},
{
"type" : "filter",
"path" : "metadata"
},
{
"type" : "filter",
"path" : "text"
}
]
}
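If you prefer to script this step instead of using the Atlas UI, a hedged sketch with PyMongo's `create_search_index` might look as follows (connection handling is omitted, and the import is deferred inside the function so the snippet loads even without pymongo installed):

```python
# Mirrors the vector_index definition shown above.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",
            "numDimensions": 1024,
            "similarity": "cosine",
        },
        {"type": "filter", "path": "metadata"},
        {"type": "filter", "path": "text"},
    ]
}

def create_vector_index(collection):
    """Create the Atlas Vector Search index on bedrock.agenda.

    Sketch only: `collection` is assumed to be a PyMongo collection
    object for the bedrock.agenda namespace.
    """
    from pymongo.operations import SearchIndexModel  # deferred: needs pymongo
    collection.create_search_index(
        SearchIndexModel(
            definition=index_definition,
            name="vector_index",
            type="vectorSearch",
        )
    )
```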
Setup AWS Bedrock
We will use the us-east-1 AWS region for this notebook.
Follow our official tutorial to enable a Bedrock knowledge base against the created database and collection in MongoDB Atlas. This guide details the steps needed to build the knowledge base and agent.
For this notebook, we will perform the following tasks according to the guide:
- Go to the Bedrock console and enable:
  - The Amazon Titan Text Embeddings model (`amazon.titan-embed-text-v2:0`)
  - The Claude 3 Sonnet model (the LLM)
- Upload the following source data about the AWS summit agenda to your S3 bucket:
- https://s3.amazonaws.com/bedrocklogs.pavel/ocr_db.aws_events.json
- https://s3.amazonaws.com/bedrocklogs.pavel/ocr_db.aws_sessions.json
This will be our source data listing the events happening in the summit.
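The upload can also be scripted. Below is a sketch in which the S3 client, bucket name, and `fetch` helper are assumptions (in practice `fetch` could be `lambda url: requests.get(url, timeout=30).content`):

```python
SOURCE_URLS = [
    "https://s3.amazonaws.com/bedrocklogs.pavel/ocr_db.aws_events.json",
    "https://s3.amazonaws.com/bedrocklogs.pavel/ocr_db.aws_sessions.json",
]

def upload_sources(s3_client, bucket, fetch):
    """Download each source file and upload it to the given S3 bucket.

    Sketch only: s3_client is assumed to be a boto3 "s3" client, and
    fetch(url) should return the file's bytes.
    """
    for url in SOURCE_URLS:
        key = url.rsplit("/", 1)[-1]  # keep the original file name as the key
        s3_client.put_object(Bucket=bucket, Key=key, Body=fetch(url))
```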
- Go to Secrets Manager in the AWS console and create credentials for our Atlas cluster via "Other type of secret":
  - key: username, value: <ATLAS_USERNAME>
  - key: password, value: <ATLAS_PASSWORD>
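This console step could also be done programmatically. Here is a hedged sketch using boto3's Secrets Manager `create_secret` call; the client parameter and the secret name are assumptions:

```python
import json

def create_atlas_secret(secretsmanager_client, username, password,
                        name="workshop-atlas-secret"):
    """Store Atlas credentials as an "Other type of secret" key/value pair.

    Sketch only: pass a boto3 "secretsmanager" client; the secret name
    here is a hypothetical example.
    """
    return secretsmanager_client.create_secret(
        Name=name,
        SecretString=json.dumps({"username": username, "password": password}),
    )
```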
- Follow the setup of the knowledge base wizard to connect Bedrock models with Atlas:
- Click "Create Knowledge Base" and input:
| input | value |
|---|---|
| Name | <NAME> |
| Choose | Create and use a new service role |
| Data source name | <NAME> |
| S3 URI | Browse for the S3 bucket hosting the 2 uploaded source files |
| Embedding Model | Titan Text Embeddings v2 |
- In the "Vector database" section, pick "Choose a vector store you have created" and select MongoDB Atlas:
| input | value |
|---|---|
| Select your vector store | MongoDB Atlas |
| Hostname | Your Atlas SRV hostname, e.g. cluster0.abcd.mongodb.net |
| Database name | bedrock |
| Collection name | agenda |
| Credentials secret ARN | Copy the created credentials from the "Secrets manager" |
| Vector search index name | vector_index |
| Vector embedding field path | embedding |
| Text field path | text |
| Metadata field path | metadata |
- Click Next, review the details, and click "Create Knowledge Base".
- Once the knowledge base is marked "Status: Ready", go to the Data source section, choose the single data source we have, and click the "Sync" button in its upper-right corner. This operation loads the data into Atlas if everything was set up correctly.
Setting up an agenda agent
We can now set up our agent, which will work with a set of instructions and our knowledge base.
- Go to the "Agents" tab in the bedrock UI.
- Click "Create Agent" and give it a meaningful name (e.g. agenda_assistant)
- Input the following data in the agent builder:
| input | value |
|---|---|
| Agent Name | agenda_assistant |
| Agent resource role | Create and use a new service role |
| Select model | Anthropic - Claude 3 Sonnet |
| Instructions for the Agent | You are a friendly AI chatbot that helps users find and build agenda Items for AWS Summit Tel Aviv. elaborate as much as possible on the response. |
| Knowledge bases | Choose your Knowledge Base |
| Aliases | Create a new Alias |
And now, we have a functioning agent that can be tested via the console. Let's move to the notebook.
Take note of the Agent ID and create an Agent Alias ID for the notebook.
Interacting with the agent
To interact with the agent, we need to install the AWS Python SDK (boto3):
Collecting boto3
Downloading boto3-1.34.129-py3-none-any.whl (139 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 139.2/139.2 kB 1.0 MB/s eta 0:00:00
Collecting botocore<1.35.0,>=1.34.129 (from boto3)
Downloading botocore-1.34.129-py3-none-any.whl (12.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.3/12.3 MB 38.9 MB/s eta 0:00:00
Collecting jmespath<2.0.0,>=0.7.1 (from boto3)
Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting s3transfer<0.11.0,>=0.10.0 (from boto3)
Downloading s3transfer-0.10.1-py3-none-any.whl (82 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.2/82.2 kB 6.7 MB/s eta 0:00:00
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.10/dist-packages (from botocore<1.35.0,>=1.34.129->boto3) (2.8.2)
Requirement already satisfied: urllib3!=2.2.0,<3,>=1.25.4 in /usr/local/lib/python3.10/dist-packages (from botocore<1.35.0,>=1.34.129->boto3) (2.0.7)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.35.0,>=1.34.129->boto3) (1.16.0)
Installing collected packages: jmespath, botocore, s3transfer, boto3
Successfully installed boto3-1.34.129 botocore-1.34.129 jmespath-1.0.1 s3transfer-0.10.1
Let's place the credentials for our AWS account.
Enter your AWS Access Key: ·········· Enter your AWS Secret Key: ··········
Now, we need to initialise the boto3 client and get the agent ID and alias ID input.
Enter your agent ID·········· Enter your agent Alias ID··········
Let's build the helper function to interact with the agent.
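A sketch of such a helper is below; it assumes a boto3 `bedrock-agent-runtime` client and the agent, alias, and session IDs collected above (the function and parameter names are illustrative):

```python
def invoke_agent(client, agent_id, agent_alias_id, session_id, prompt):
    """Send a prompt to the Bedrock agent and collect the streamed reply.

    Sketch only: `client` is assumed to be a boto3 "bedrock-agent-runtime"
    client; the ID arguments come from the earlier setup.
    """
    response = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=agent_alias_id,
        sessionId=session_id,
        inputText=prompt,
    )
    # The reply arrives as an event stream of chunks; concatenate them.
    completion = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            completion += chunk["bytes"].decode("utf-8")
    return completion
```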
We can now interact with the agent using the application code.
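The prompt loop itself can be sketched as follows. The `ask`, `respond`, and `output` parameters are an assumption made here for testability; in the notebook you would pass `input`, your agent-invocation helper, and `print`:

```python
def chat_loop(ask, respond, output):
    """Repeatedly prompt the user and print agent answers until 'exit'.

    Sketch only: ask/respond/output stand in for input(), the
    agent-invocation helper, and print().
    """
    while True:
        prompt = ask("Enter your prompt (or type 'exit' to quit): ")
        if prompt.strip().lower() == "exit":
            break
        output("Agent Response: " + respond(prompt))
```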
Enter your prompt (or type 'exit' to quit): What agenda items are present in the AWS summit
Agent Response: The AWS Summit agenda items include sessions on digital transformation, generative AI, multi-cloud management, machine learning, vector databases, and OpenSearch services. Other agenda items cover topics like scaling AI within organizations, application resilience with AWS, Amazon Q for GenAI, and leveraging LLM-based AI agents.
Enter your prompt (or type 'exit' to quit): exit
Here you go! You have a powerful Bedrock agent backed by MongoDB Atlas.
Conclusions
The integration of MongoDB Atlas with Amazon Bedrock represents a significant advancement in the development and deployment of generative AI applications. By leveraging Atlas's vector search capabilities and the powerful foundational models available through Bedrock, developers can create applications that are both highly accurate and deeply informed by enterprise data. This seamless integration facilitates the retrieval-augmented generation (RAG) workflow, enabling AI models to access and utilize the most relevant data, thereby reducing the likelihood of hallucinations and improving overall performance.
The benefits of this integration extend beyond just technical enhancements. It also simplifies the generative AI stack, allowing companies to rapidly deploy scalable AI solutions with enhanced privacy and security features, such as those provided by AWS PrivateLink. This makes it an ideal solution for enterprises with stringent data security requirements. Overall, the combination of MongoDB Atlas and Amazon Bedrock provides a robust, efficient, and secure platform for building next-generation AI applications.