Elasticsearch MCP Server for ChatGPT
This notebook demonstrates how to deploy an MCP (Model Context Protocol) server that connects ChatGPT to Elasticsearch, enabling natural language queries over internal GitHub issues and pull requests.
What You'll Build
An MCP server that allows ChatGPT to search and retrieve information from your Elasticsearch index using natural language queries, combining semantic and keyword search for optimal results.
Steps
- Install Dependencies: Set up required Python packages (fastmcp, elasticsearch, pyngrok, pandas)
- Configure Environment: Set up Elasticsearch credentials and ngrok token
- Initialize Elasticsearch: Connect to your Elasticsearch cluster
- Create Index: Define mappings with semantic_text field for ELSER
- Load Sample Data: Import GitHub issues/PRs dataset
- Ingest Documents: Bulk index documents into Elasticsearch
- Define MCP Tools: Create search and fetch functions for ChatGPT
- Deploy Server: Start MCP server with ngrok tunnel
- Connect to ChatGPT: Get public URL for ChatGPT connector setup
Prerequisites
- Elasticsearch cluster with ELSER model deployed
- Ngrok account with auth token
- Python 3.8+
Install Dependencies
This cell installs all required Python packages: fastmcp for the MCP server framework, elasticsearch for connecting to Elasticsearch, pyngrok for creating a public tunnel, and pandas for data manipulation.
Alternative: You can also install dependencies using the provided 'requirements.txt' file.
Dependencies installed
Import Libraries
Import all necessary Python libraries for building and running the MCP server, including FastMCP for the server framework, Elasticsearch client for database connections, and pyngrok for tunneling.
Libraries imported successfully
Setup Configuration
Load required credentials from environment variables or prompt for manual input. You'll need:
- ELASTICSEARCH_URL: Your Elasticsearch cluster endpoint
- ELASTICSEARCH_API_KEY: API key with read/write access
- NGROK_TOKEN: Free token from ngrok.com
- ELASTICSEARCH_INDEX: Index name (defaults to 'github_internal')
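The environment-or-prompt pattern above can be sketched with a small helper. The function name `get_setting` is hypothetical; it reads an environment variable and falls back to a hidden prompt:

```python
import os
from getpass import getpass

def get_setting(name, default=None):
    """Read a setting from the environment; prompt for it if absent."""
    value = os.environ.get(name, default)
    if value is None:
        # getpass hides the input, which is suitable for API keys
        value = getpass(f"{name}: ")
    return value

# Usage (names match the variables listed above):
# ELASTICSEARCH_URL = get_setting("ELASTICSEARCH_URL")
# ELASTICSEARCH_API_KEY = get_setting("ELASTICSEARCH_API_KEY")
# NGROK_TOKEN = get_setting("NGROK_TOKEN")
# ELASTICSEARCH_INDEX = get_setting("ELASTICSEARCH_INDEX", "github_internal")
```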
Initialize Elasticsearch Client
Create an Elasticsearch client using your credentials and verify the connection by pinging the cluster. This ensures your credentials are valid before proceeding.
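A minimal sketch of the connection step. The `client_config` helper is hypothetical and just collects the keyword arguments passed to the `Elasticsearch` constructor; the actual client call and ping are shown in comments because they require a live cluster:

```python
def client_config(url, api_key):
    """Hypothetical helper: kwargs for the Elasticsearch client."""
    return {"hosts": [url], "api_key": api_key, "request_timeout": 30}

# from elasticsearch import Elasticsearch
# es = Elasticsearch(**client_config(ELASTICSEARCH_URL, ELASTICSEARCH_API_KEY))
# if not es.ping():
#     raise RuntimeError("Could not reach the cluster -- check URL and API key")
```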
Create Index with Mappings
Create an Elasticsearch index with optimized mappings for hybrid search. The key field is text_semantic which uses ELSER (.elser-2-elasticsearch) for semantic search, while other fields enable traditional keyword search.
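A sketch of the mappings, assuming Elasticsearch 8.x `semantic_text` support; field names other than `text_semantic` are illustrative:

```python
# Hybrid-search mappings: text_semantic uses ELSER, the rest support BM25/keyword search
mappings = {
    "properties": {
        "id": {"type": "keyword"},
        "title": {"type": "text"},
        "url": {"type": "keyword"},
        "text": {"type": "text"},
        "text_semantic": {
            "type": "semantic_text",
            "inference_id": ".elser-2-elasticsearch",
        },
    }
}

# es.indices.create(index=ELASTICSEARCH_INDEX, mappings=mappings)
```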
Load Sample Dataset
Load the sample GitHub dataset containing 15 documents with issues, pull requests, and RFCs. The dataset includes realistic content with descriptions, comments, assignees, priorities, and relationships between issues and PRs.
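Loading can be sketched with the standard library; the dataset path passed to `load_dataset` is hypothetical (use whatever file ships with the notebook):

```python
import json

def load_dataset(path):
    """Load the GitHub issues/PRs sample dataset from a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        docs = json.load(f)
    print(f"Loaded {len(docs)} documents from dataset")
    return docs

# docs = load_dataset("github_dataset.json")  # hypothetical filename
```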
Loaded 15 documents from dataset
Ingest Documents to Elasticsearch
Bulk index all documents into Elasticsearch. The code copies the text field to text_semantic for ELSER processing, then waits 15 seconds for semantic embeddings to be generated before verifying the document count.
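The copy-then-bulk step can be sketched as a generator of bulk actions; `to_bulk_actions` is a hypothetical name, and the commented lines show how it would feed the `elasticsearch.helpers.bulk` helper:

```python
def to_bulk_actions(docs, index):
    """Yield bulk actions, copying `text` into `text_semantic` for ELSER."""
    for doc in docs:
        src = dict(doc)
        src["text_semantic"] = src.get("text", "")
        yield {"_index": index, "_id": src.get("id"), "_source": src}

# import time
# from elasticsearch.helpers import bulk
# bulk(es, to_bulk_actions(docs, ELASTICSEARCH_INDEX))
# time.sleep(15)  # give ELSER time to generate embeddings
# print(es.count(index=ELASTICSEARCH_INDEX)["count"])
```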
Define MCP Server
Define the MCP server with two tools that ChatGPT will use:
- search(query): Hybrid search combining semantic (ELSER) and keyword (BM25) search using RRF (Reciprocal Rank Fusion). Returns top 10 results with id, title, and url.
- fetch(id): Retrieves complete document details by ID, returning all fields including full text content and metadata.
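The hybrid query behind the search tool can be sketched as a request-body builder, assuming the RRF retriever syntax available in recent Elasticsearch releases; `hybrid_query` and the keyword fields are illustrative:

```python
def hybrid_query(query, size=10):
    """Build an RRF request body fusing semantic (ELSER) and BM25 retrievers."""
    return {
        "retriever": {
            "rrf": {
                "retrievers": [
                    # Semantic retriever over the ELSER-backed field
                    {"standard": {"query": {"semantic": {"field": "text_semantic", "query": query}}}},
                    # Lexical BM25 retriever over keyword-searchable fields
                    {"standard": {"query": {"multi_match": {"query": query, "fields": ["title", "text"]}}}},
                ]
            }
        },
        "size": size,
        "_source": ["id", "title", "url"],
    }

# Inside the @mcp.tool-decorated search(query):
# results = es.search(index=ELASTICSEARCH_INDEX, body=hybrid_query(query))
```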
MCP server defined successfully
Start Ngrok Tunnel
Create a public HTTPS tunnel using ngrok to expose your local MCP server on port 8000. This allows ChatGPT to connect to your server from anywhere. Copy the displayed URL (ending in /sse) to use in ChatGPT's connector settings.
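A sketch of the tunnel step. The pure `connector_url` helper is hypothetical; the pyngrok calls are shown in comments because they need a live ngrok session:

```python
def connector_url(public_url):
    """Append the /sse path that ChatGPT's connector settings expect."""
    return public_url.rstrip("/") + "/sse"

# from pyngrok import ngrok  # assumed installed per the dependencies cell
# ngrok.set_auth_token(NGROK_TOKEN)
# tunnel = ngrok.connect(8000, "http")
# print("Connector URL:", connector_url(tunnel.public_url))
```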
Run MCP Server
Start the MCP server in a background thread using SSE (Server-Sent Events) transport. The server runs on 0.0.0.0:8000 and stays alive to handle requests from ChatGPT via the ngrok tunnel. Keep this cell running while using the connector.
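The background-thread pattern can be sketched generically; `start_in_background` is a hypothetical helper, and the commented line shows how it would launch the FastMCP server over SSE:

```python
import threading

def start_in_background(target, **kwargs):
    """Run `target` in a daemon thread so the notebook cell stays responsive."""
    thread = threading.Thread(target=target, kwargs=kwargs, daemon=True)
    thread.start()
    return thread

# server_thread = start_in_background(mcp.run, transport="sse",
#                                     host="0.0.0.0", port=8000)
```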
Starting MCP server...
Server is running. To stop: Runtime > Interrupt execution
Server started successfully! Your ngrok URL is ready to use in ChatGPT
Keep this cell running...
INFO: Started server process [47952]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:pyngrok.process.ngrok:t=2025-11-13T11:37:09-0300 lvl=info msg="join connections" obj=join id=2f547f1e02b9 l=127.0.0.1:8000 r=191.233.196.115:8612
INFO: 191.233.196.115:0 - "POST /sse HTTP/1.1" 405 Method Not Allowed
INFO:pyngrok.process.ngrok:t=2025-11-13T11:37:10-0300 lvl=info msg="join connections" obj=join id=f157e39aac9d l=127.0.0.1:8000 r=191.233.196.120:47762
INFO: 191.233.196.120:0 - "GET /sse HTTP/1.1" 200 OK
INFO:pyngrok.process.ngrok:t=2025-11-13T11:37:10-0300 lvl=info msg="join connections" obj=join id=5a9192136cfb l=127.0.0.1:8000 r=191.233.196.117:53796
INFO: 191.233.196.117:0 - "POST /messages/?session_id=a8b8863d0264414f8cadb3694f26e121 HTTP/1.1" 202 Accepted
INFO: 191.233.196.117:0 - "POST /messages/?session_id=a8b8863d0264414f8cadb3694f26e121 HTTP/1.1" 202 Accepted
INFO: 191.233.196.117:0 - "POST /messages/?session_id=a8b8863d0264414f8cadb3694f26e121 HTTP/1.1" 202 Accepted
INFO:mcp.server.lowlevel.server:Processing request of type ListToolsRequest
INFO:pyngrok.process.ngrok:t=2025-11-13T11:47:43-0300 lvl=info msg="received stop request" obj=app stopReq="{err:<nil> restart:false}"
Server stopped
Example: ChatGPT Interaction
Here's an example of ChatGPT using the Elasticsearch connector to search through GitHub issues:
- Search tool: (screenshot of ChatGPT calling the search tool)
- Fetch tool: (screenshot of ChatGPT calling the fetch tool)
Cleanup (Optional)
Delete the Elasticsearch index to remove all demo data. WARNING: This permanently deletes all documents in the index. Only run this if you want to start fresh or clean up after the demo.