NIM Tool Calling with Human-in-the-Loop Multi-Agents
Incorporating human-in-the-loop into agentic logic via LangGraph
Prerequisites
To run this notebook, you need to follow the steps from here and generate an API key from the NVIDIA API Catalog.
Please ensure you have the following dependencies installed:
- langchain
- jupyterlab==4.0.8
- langchain-core
- langchain-nvidia-ai-endpoints==0.2.0
- markdown
- colorama
You will also need to install the following:
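For example, the dependencies listed above can be installed in one command (the exact pins match the list in this notebook):

```shell
pip install langchain jupyterlab==4.0.8 langchain-core \
    langchain-nvidia-ai-endpoints==0.2.0 markdown colorama
```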
This notebook will walk you through how to incorporate human-in-the-loop into a multi-agent pipeline with a minimal example.
The cognitive agentic architecture will look like the below:

We will first construct the 2 agents in the middle:
- using meta/llama-3.1-405b-instruct to construct the 2 agents, each created as an LCEL expression
- then giving each agent one tool to use to achieve the task
The task at hand is creating promotion assets, with text and an image, for social media promotion. We are aiming for something similar to the below ...
Just like in the real world, a human in charge of the task will delegate: assigning a specialist writer to write the promotion text and a digital artist for the artwork.
In this scenario, we will let the human assign an agent (either ContentCreator or DigitalArtist), just like the flow depicted above.
Note: since we are using the NVIDIA API Catalog as an API, the prerequisites carry no further requirement for GPUs as compute hardware.
We will prepare the 2 agents, each built as an LCEL expression.
For simplicity, each agent is given one tool to use:
- a content_creator agent, which creates a promotion message from the input product_desc
- a digital_artist agent, which creates a visually appealing image from the promotion title
Step 1 : construct the content_creator agent
To construct the content_creator agent we need the following:
- a system prompt which anchors the task for the agent
- a seeded product description (product_desc)
- a powerful LLM, llama-3.1-405b, from NVIDIA NIM
- with_structured_output for formatting the output
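Since calling the hosted model requires an API key, the chain's shape can be sketched with a stub standing in for the real ChatNVIDIA llm plus with_structured_output; the stub, the schema fields, and the prompt wording here are illustrative assumptions, not the notebook's exact code.

```python
from dataclasses import dataclass

# Schema the agent must emit; in the real notebook this would be a Pydantic
# model passed to llm.with_structured_output(...).
@dataclass
class PromotionText:
    title: str
    message: str

# System prompt anchoring the task for the agent.
SYSTEM_PROMPT = (
    "You are a content creator. Write a short promotion title and message "
    "for the given product."
)

def stub_llm(prompt: str) -> PromotionText:
    """Stand-in for ChatNVIDIA + structured output (no API call made)."""
    product = prompt.rsplit("product:", 1)[-1].strip()
    return PromotionText(
        title=f"Introducing {product}!",
        message=f"Meet {product} - your new favorite. Try it today.",
    )

def content_creator(product_desc: str) -> PromotionText:
    """Mimics the LCEL pipeline: prompt | llm | structured output."""
    prompt = f"{SYSTEM_PROMPT}\nproduct: {product_desc}"
    return stub_llm(prompt)

result = content_creator("GeForce RTX 4080 Super GPU")
print(result.title)
```

Swapping stub_llm for `ChatNVIDIA(model="meta/llama-3.1-405b-instruct").with_structured_output(PromotionText)` recovers the real agent while keeping the same chain shape.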
Step 2 : create the digital_artist agent
We will equip the digital_artist with the following:
- a text-to-image model, SDXL-Turbo, from NVIDIA NIM
- the tool wrapped into the llm with llm.bind_tools
- the digital_artist agent constructed as an LCEL expression
A text-to-image model, SDXL-Turbo, from NVIDIA NIM
Wrap the tool into the llm with llm.bind_tools
Create the digital_artist using an LCEL chain
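The tool-binding pattern can likewise be sketched in plain Python: a tool function, a registry mimicking what llm.bind_tools produces, and a dispatcher that executes the tool call the llm emits. The stub names are assumptions; the real notebook posts the prompt to the NVIDIA-hosted text-to-image endpoint and saves the returned image.

```python
# Stub tool: stands in for the real text-to-image call to the NVIDIA API.
def generate_image(prompt: str) -> str:
    """Pretend to render an image and return its file path."""
    safe = "".join(c if c.isalnum() else "_" for c in prompt.lower())
    return f"{safe[:40]}.png"

# Registry of tools the llm is "bound" to, mimicking llm.bind_tools([...]).
TOOLS = {"generate_image": generate_image}

def stub_llm_with_tools(user_input: str) -> dict:
    """Stand-in for the bound llm: decides which tool to call and with
    what arguments, returning a tool-call message."""
    return {"tool": "generate_image", "args": {"prompt": user_input}}

def digital_artist(promotion_title: str) -> str:
    """Mimics the LCEL chain: the llm picks a tool, we execute it."""
    call = stub_llm_with_tools(promotion_title)
    return TOOLS[call["tool"]](**call["args"])

path = digital_artist("Introducing our new GPU!")
print(path)
```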
Step 3 - Embed human-in-the-loop agentic logic with LangGraph
- construct a get_human_input function to integrate into the first node of LangGraph, putting the human in the loop to decide which tool to use
- establish a State to keep track of the internal state
- create functions as graph nodes for LangGraph
- compose the agentic cognitive logic in LangGraph by connecting the nodes and edges
Construct a get_human_input function to integrate into the first node of LangGraph, putting the human in the loop to decide which tool to use.
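The Step 3 flow can be sketched without LangGraph itself, using plain Python in the same shape: a state dict, node functions, and a conditional edge keyed on the human's choice. The name get_human_input comes from the notebook; the run_graph driver and the agent stubs are assumptions standing in for LangGraph's compiled graph and the agents built in Steps 1-2.

```python
# Shared state, mirroring the LangGraph State (a TypedDict in the notebook).
def make_state(input_text: str) -> dict:
    return {"input": input_text, "agent_choice": None, "output": None}

# Node 1: the human-in-the-loop entry point. The chooser is injected so
# this sketch is testable; interactively it would default to input().
def get_human_input(state: dict, chooser=input) -> dict:
    state["agent_choice"] = chooser(
        "Which agent should handle this? (ContentCreator/DigitalArtist): "
    )
    return state

# Nodes 2a/2b: stand-ins for the two agents built earlier.
def content_creator_node(state: dict) -> dict:
    state["output"] = f"promo text for: {state['input']}"
    return state

def digital_artist_node(state: dict) -> dict:
    state["output"] = f"image for: {state['input']}"
    return state

# Conditional edge map: the human's choice routes to one agent node.
NODES = {"ContentCreator": content_creator_node,
         "DigitalArtist": digital_artist_node}

def run_graph(input_text: str, chooser=input) -> dict:
    """Entry node -> conditional edge to the chosen agent -> END."""
    state = get_human_input(make_state(input_text), chooser)
    return NODES[state["agent_choice"]](state)

final = run_graph("new GPU launch", chooser=lambda _: "ContentCreator")
print(final["output"])
```

In the real notebook the same routing is expressed with a StateGraph: get_human_input is the entry node, and a conditional edge dispatches to the ContentCreator or DigitalArtist node based on the recorded choice.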