Pipeline Inference Example

1. Create an Ingestion Pipeline

We will create an inference ingestion pipeline so that Elasticsearch generates embeddings of the book_description field as each document is indexed. This offloads the embedding work from our own hardware.

Note that if there is any kind of failure, the documents will be placed in a failed-books index and will include helpful error messages.
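A pipeline like this can be sketched as below. The pipeline ID, model ID, and exact processor options are assumptions, not values from the original notebook; only book_description, description_embedding, and the failed-books failure index come from the text.

```python
# Sketch of the inference ingest pipeline. PIPELINE_ID and MODEL_ID are
# hypothetical; book_description, description_embedding, and failed-books
# come from the text above.
PIPELINE_ID = "book-embedding-pipeline"  # hypothetical name
MODEL_ID = "sentence-transformers__msmarco-minilm-l-12-v3"  # hypothetical deployed model

pipeline_body = {
    "description": "Embed book_description at index time",
    "processors": [
        {
            "inference": {
                "model_id": MODEL_ID,
                "target_field": "description_embedding",
                "field_map": {"book_description": "text_field"},
            }
        }
    ],
    # On any failure, reroute the document to failed-books with an error message.
    "on_failure": [
        {"set": {"field": "_index", "value": "failed-books"}},
        {
            "set": {
                "field": "ingest_failure_message",
                "value": "{{ _ingest.on_failure_message }}",
            }
        },
    ],
}

# With an Elasticsearch Python client `es`:
# es.ingest.put_pipeline(id=PIPELINE_ID, body=pipeline_body)
```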


2. Create an index

Now let's create an index in Elasticsearch. We will not need to map our description_embedding vector data type ourselves, as the ingestion pipeline will provide that for us.
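The index request might look like the following sketch. The index name "books" and the non-vector field mappings are assumptions; per the text, description_embedding is left out of the mapping and handled via the ingest pipeline.

```python
# Sketch of the index creation request. The index name and the non-vector
# field mappings are assumptions; description_embedding is intentionally
# absent from the mapping.
INDEX_NAME = "books"  # hypothetical

index_body = {
    "settings": {
        # Route every document indexed into this index through the pipeline.
        "index": {"default_pipeline": "book-embedding-pipeline"}  # hypothetical id
    },
    "mappings": {
        "properties": {
            "title": {"type": "text"},
            "author": {"type": "keyword"},
            "book_description": {"type": "text"},
        }
    },
}

# With an Elasticsearch Python client `es`:
# es.indices.create(index=INDEX_NAME, body=index_body)
```

Setting `index.default_pipeline` means callers don't have to remember to pass a `pipeline` parameter on every indexing request.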


3. Bulk Indexing many documents

Now that we have created an index in Elasticsearch, we can index our local book objects. This bulk_ingest_books method makes indexing much faster than running an individual index request for each book.
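A minimal sketch of such a bulk helper is below, using `helpers.bulk` from the elasticsearch Python client. The book field names and the action-generator shape are assumptions; the original notebook's bulk_ingest_books may differ.

```python
# Sketch of bulk ingestion. Each book becomes one action in a single
# batched request, instead of one HTTP round trip per book.
def generate_book_actions(books, index="books"):
    """Yield one bulk action dict per book (hypothetical field names)."""
    for book in books:
        yield {"_index": index, "_source": book}

def bulk_ingest_books(es, books, index="books"):
    # Local import so building actions does not require the client package.
    from elasticsearch import helpers
    success, errors = helpers.bulk(es, generate_book_actions(books, index))
    return success, errors

# The actions can be inspected without a running cluster:
actions = list(generate_book_actions([{"title": "Shattered World"}]))
```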


4. Indexing one document

Let's also add a single book, since this is the standard operation you will use as new books arrive in your vector database.
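A single-document version can be sketched with the client's `es.index` call. The field names and the helper's name are assumptions; the success message mirrors the output shown below.

```python
# Sketch of indexing one book. Because the index sets a default pipeline,
# this document is embedded on ingest just like the bulk-indexed ones.
def index_one_book(es, book, index="books"):
    response = es.index(index=index, document=book)
    message = f"Successfully indexed book: {book['title']} - Result: {response['result']}"
    print(message)
    return message

# With an Elasticsearch Python client `es`:
# index_one_book(es, {"title": "Shattered World",
#                     "book_description": "A novel."})
```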

Successfully indexed book: Shattered World - Result: created