
Generative Search with Weaviate and OpenAI

Using Weaviate with the Generative OpenAI module for Generative Search

This notebook is prepared for a scenario where:

  • Your data is already in Weaviate
  • You want to use Weaviate with the Generative OpenAI module (generative-openai).

Prerequisites

This cookbook covers Generative Search examples only; it doesn't cover instance configuration or data imports.

In order to make the most of this cookbook, please complete the Getting Started cookbook first, where you will learn the essentials of working with Weaviate and import the demo data.

Checklist:


Prepare your OpenAI API key

The OpenAI API key is used for vectorization of your data at import, and for running queries.

If you don't have an OpenAI API key, you can get one from https://beta.openai.com/account/api-keys.

Once you get your key, please add it to your environment variables as OPENAI_API_KEY.

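The following cell is a minimal sketch (it assumes you have already exported OPENAI_API_KEY in your shell): it reads the key from the environment and tells you if it is missing.

    import os

    # Read the OpenAI API key from the environment
    # (set it before starting the notebook)
    openai_api_key = os.environ.get("OPENAI_API_KEY")

    if openai_api_key:
        print("OPENAI_API_KEY is set - you are good to go.")
    else:
        print("OPENAI_API_KEY is missing - please add it to your environment variables.")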

Connect to your Weaviate instance

In this section, we will:

  1. check that the OPENAI_API_KEY environment variable is set (make sure you completed the step in #Prepare-your-OpenAI-API-key)
  2. connect to your Weaviate with your OpenAI API Key
  3. and test the client connection

The client

After this step, the client object will be used to perform all Weaviate-related operations.

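Here is a minimal sketch of the connection step. It assumes a Weaviate instance running at http://localhost:8080 without authentication; adjust the URL (and add auth credentials if needed) to match your setup. The OpenAI API key is passed in the X-OpenAI-Api-Key header so the generative-openai module can call OpenAI on your behalf.

    import os
    import weaviate

    # Pass the OpenAI API key as an extra header, so Weaviate's
    # generative-openai module can call OpenAI when running queries.
    client = weaviate.Client(
        url="http://localhost:8080",  # assumption: change to your Weaviate instance URL
        additional_headers={
            "X-OpenAI-Api-Key": os.environ["OPENAI_API_KEY"],
        },
    )

    # Test the client connection
    print("Client is ready:", client.is_ready())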

Generative Search

Weaviate offers a Generative Search OpenAI module, which generates responses based on the data stored in your Weaviate instance.

The way you construct a generative search query is very similar to a standard semantic search query in Weaviate.

For example:

  • search in "Articles",
  • return "title", "content", "url"
  • look for objects related to "football clubs"
  • limit results to 5 objects
    result = (
        client.query
        .get("Articles", ["title", "content", "url"])
        .with_near_text({"concepts": ["football clubs"]})
        .with_limit(5)
        # generative query will go here
        .do()
    )

Now, you can add the with_generate() function to apply the generative transformation (see the sketch after this list). with_generate takes either:

  • single_prompt - to generate a response for each returned object,
  • grouped_task – to generate a single response from all returned objects.
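Below is a minimal sketch of both options, reusing the Articles query from above; the prompt wording is illustrative, so adjust it to your use case. With single_prompt, the {title} and {content} placeholders are filled in from each returned object.

    # single_prompt: generate a separate response for each returned object.
    single_prompt_result = (
        client.query
        .get("Articles", ["title", "content", "url"])
        .with_near_text({"concepts": ["football clubs"]})
        .with_limit(5)
        .with_generate(single_prompt="Summarize the article {title} in one sentence: {content}")
        .do()
    )
    print(single_prompt_result)

    # grouped_task: generate a single response from all returned objects together.
    grouped_task_result = (
        client.query
        .get("Articles", ["title", "content", "url"])
        .with_near_text({"concepts": ["football clubs"]})
        .with_limit(5)
        .with_generate(grouped_task="Explain what these articles have in common.")
        .do()
    )
    print(grouped_task_result)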

Thanks for following along, you're now equipped to set up your own vector databases and use embeddings to do all kinds of cool things - enjoy! For more complex use cases please continue to work through other cookbook examples in this repo.