Load Embedding Model




Related blog: ElasticDocs GPT

Loading an embedding model from Hugging Face into Elasticsearch

This notebook shows how to load a supported embedding model from Hugging Face into an Elasticsearch cluster in Elastic Cloud.

Setup

Install and import the required Python libraries.

Elastic uses the eland Python library to download models from the Hugging Face Hub and load them into Elasticsearch.

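The install and import cell typically looks like the sketch below. The package names follow eland's published install instructions (the `[pytorch]` extra pulls in torch and transformers); the `try/except` fallback is an addition so the cell also loads before the packages are installed.

```python
# One-time install (run in a terminal or a notebook shell cell):
#   python -m pip install 'eland[pytorch]' elasticsearch

from getpass import getpass  # prompt for credentials without echoing them

try:
    from elasticsearch import Elasticsearch
    # eland's PyTorch helpers download a Hugging Face model and convert it
    # to the TorchScript format Elasticsearch expects.
    from eland.ml.pytorch import PyTorchModel
    from eland.ml.pytorch.transformers import TransformerModel
except ImportError:  # packages not installed yet
    Elasticsearch = PyTorchModel = TransformerModel = None

print("imports ready:", TransformerModel is not None)
```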

Configure Elasticsearch authentication.

The recommended authentication approach is to use the Elastic Cloud ID and a cluster-level API key.

You can use any method you wish to set the required credentials. In this example we use getpass to prompt for credentials, to avoid storing them in GitHub.

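A sketch of the credentials cell. The `read_secret` helper and the `ELASTIC_CLOUD_ID` / `ELASTIC_API_KEY` names are conventions chosen here, not part of any API; the environment-variable fallback is an addition so the cell also works when there is no interactive terminal.

```python
import os
import sys
from getpass import getpass

def read_secret(name: str) -> str:
    """Prompt for a secret without echoing; fall back to an environment
    variable when there is no interactive terminal (e.g. CI)."""
    if sys.stdin.isatty():
        return getpass(f"{name}: ")
    return os.environ.get(name, "")

ELASTIC_CLOUD_ID = read_secret("ELASTIC_CLOUD_ID")  # from the Cloud console
ELASTIC_API_KEY = read_secret("ELASTIC_API_KEY")    # cluster-level API key
```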

Connect to Elastic Cloud

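Connecting might look like the following sketch. Client construction parses the Cloud ID but sends no request; `es.info()` performs the first round trip. The environment-variable guard is an addition so the cell is safe to run without credentials configured.

```python
import os

try:
    from elasticsearch import Elasticsearch
except ImportError:
    Elasticsearch = None  # install the elasticsearch package first

es = None
if Elasticsearch and os.environ.get("ELASTIC_CLOUD_ID") and os.environ.get("ELASTIC_API_KEY"):
    es = Elasticsearch(
        cloud_id=os.environ["ELASTIC_CLOUD_ID"],
        api_key=os.environ["ELASTIC_API_KEY"],
    )
    print(es.info()["cluster_name"])  # round trip to verify the connection
```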

Load the model from Hugging Face into Elasticsearch

Here we specify the model id from Hugging Face. The easiest way to get this id is to click the copy-model-name icon next to the name on the model page.

When calling TransformerModel, you specify the Hugging Face model id and the task type. You can try specifying auto, and eland will attempt to determine the correct type from information in the model config, but this is not always possible. The list of specific task_type values can be viewed in the following code: Supported values

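A sketch of this step, following the eland `TransformerModel` / `PyTorchModel` flow. The model id `sentence-transformers/all-MiniLM-L6-v2` is just an example, the `to_es_model_id` helper is a local stand-in mirroring eland's own id normalization, and the environment-variable guard is an addition so the download and upload only run against a live cluster.

```python
import os
from pathlib import Path

def to_es_model_id(hf_model_id: str) -> str:
    # Elasticsearch model IDs must be lowercase and contain no "/"; eland
    # applies equivalent normalization in TransformerModel.elasticsearch_model_id().
    return hf_model_id.replace("/", "__").lower()

hf_model_id = "sentence-transformers/all-MiniLM-L6-v2"  # example model id
print(to_es_model_id(hf_model_id))  # sentence-transformers__all-minilm-l6-v2

if os.environ.get("ELASTIC_CLOUD_ID"):  # only runs against a live cluster
    from elasticsearch import Elasticsearch
    from eland.ml.pytorch import PyTorchModel
    from eland.ml.pytorch.transformers import TransformerModel

    es = Elasticsearch(
        cloud_id=os.environ["ELASTIC_CLOUD_ID"],
        api_key=os.environ.get("ELASTIC_API_KEY", ""),
    )

    # Download the model from Hugging Face and trace it to TorchScript.
    tm = TransformerModel(model_id=hf_model_id, task_type="text_embedding")

    tmp_dir = Path("models")
    tmp_dir.mkdir(exist_ok=True)
    model_path, config, vocab_path = tm.save(str(tmp_dir))

    # Upload the traced model and its vocabulary into the cluster.
    ptm = PyTorchModel(es, tm.elasticsearch_model_id())
    ptm.import_model(model_path=model_path, config_path=None,
                     vocab_path=vocab_path, config=config)
```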

Starting the Model

View information about the model

This is not required, but it can be handy to get a model overview.

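This step maps to the trained-models API (`GET _ml/trained_models/<model_id>`), exposed in the Python client as `es.ml.get_trained_models`. The `get_model_info` helper name and the example model id in the usage comment are illustrations, not part of the client API.

```python
def get_model_info(es, es_model_id: str) -> dict:
    """Return the stored config of an uploaded trained model
    (GET _ml/trained_models/<model_id>)."""
    resp = es.ml.get_trained_models(model_id=es_model_id)
    return resp["trained_model_configs"][0]

# Usage (assuming an `es` client from the connection step):
#   info = get_model_info(es, "sentence-transformers__all-minilm-l6-v2")
#   info["model_id"], info["inference_config"]
```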

Deploy the model

This will load the model onto the ML nodes and start the process(es), making it available for the NLP task.

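Deployment uses the start-deployment API (`POST _ml/trained_models/<model_id>/deployment/_start`), exposed as `es.ml.start_trained_model_deployment` in the Python client. The `deploy_model` wrapper is a name chosen here for the sketch.

```python
def deploy_model(es, es_model_id: str):
    """Load the model onto the ML node(s) and start the inference process
    (POST _ml/trained_models/<model_id>/deployment/_start)."""
    return es.ml.start_trained_model_deployment(
        model_id=es_model_id,
        wait_for="started",  # block until an allocation is routed
    )

# Usage (assuming an `es` client from the connection step):
#   deploy_model(es, "sentence-transformers__all-minilm-l6-v2")
```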

Verify the model started without issue

Should output -> {'routing_state': 'started'}
