Embedding Wikipedia articles for search

This notebook shows how we prepared a dataset of Wikipedia articles for search, used in Question_answering_using_embeddings.ipynb.

Procedure:

  1. Prerequisites: Import libraries, set API key (if needed)
  2. Collect: We download a few hundred Wikipedia articles about the 2022 Olympics
  3. Chunk: Documents are split into short, semi-self-contained sections to be embedded
  4. Embed: Each section is embedded with the OpenAI API
  5. Store: Embeddings are saved in a CSV file (for large datasets, use a vector database)

0. Prerequisites

Import libraries


Install any missing libraries with pip install in your terminal. E.g.,

	pip install openai

(You can also do this in a notebook cell with !pip install openai.)

If you install any libraries, be sure to restart the notebook kernel.
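Since the import cell itself isn't shown here, a quick way to see which dependencies still need installing is a check like the following (the library list is an assumption based on what this notebook does: downloading Wikipedia pages, parsing wikitext, counting tokens, calling the OpenAI API, and storing results):

```python
import importlib.util

# Libraries this notebook is assumed to rely on (an assumption, not an
# authoritative list): mwclient (download Wikipedia pages),
# mwparserfromhell (parse wikitext), openai (embeddings API),
# pandas (DataFrames), tiktoken (token counting).
REQUIRED = ["mwclient", "mwparserfromhell", "openai", "pandas", "tiktoken"]

# Report which libraries still need a `pip install`.
missing = [name for name in REQUIRED if importlib.util.find_spec(name) is None]
if missing:
    print("Missing libraries:", ", ".join(missing))
```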

Set API key (if needed)

Note that the OpenAI library will try to read your API key from the OPENAI_API_KEY environment variable. If you haven't already, set it by following the setup instructions in OpenAI's API documentation.
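A minimal sketch of verifying the key is visible before making any API calls (the helper name is hypothetical; the client itself needs no explicit key-passing when the variable is set):

```python
import os

def api_key_configured() -> bool:
    """Return True if OPENAI_API_KEY is set; the OpenAI client reads this
    environment variable automatically."""
    return bool(os.environ.get("OPENAI_API_KEY"))

if not api_key_configured():
    print("Set the OPENAI_API_KEY environment variable before running the embedding cells.")
```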

1. Collect documents

In this example, we'll download a few hundred Wikipedia articles related to the 2022 Winter Olympics.

Found 179 article titles in Category:2022 Winter Olympics.
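The download cell isn't shown; as one stdlib-only sketch of the idea, the MediaWiki API can list a category's members directly (the helper below is hypothetical; the original notebook presumably uses a client library such as mwclient instead):

```python
import urllib.parse

def build_category_query_url(category: str, site: str = "en.wikipedia.org") -> str:
    """Build a MediaWiki API URL that lists the pages in a category."""
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",   # API maximum per request; page onward with cmcontinue
        "format": "json",
    })
    return f"https://{site}/w/api.php?{params}"
```

Fetching that URL (e.g., with urllib.request) returns JSON whose query.categorymembers entries carry the article titles; recursing into any subcategories yields the full set of pages.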

2. Chunk documents

Now that we have our reference documents, we need to prepare them for search.

Because GPT can only read a limited amount of text at once, we'll split each document into chunks short enough to be read.

For this specific example on Wikipedia articles, we'll:

  • Discard less relevant-looking sections like External Links and Footnotes
  • Clean up the text by removing reference tags (e.g., <ref>...</ref>), whitespace, and super short sections
  • Split each article into sections
  • Prepend titles and subtitles to each section's text, to help GPT understand the context
  • If a section is long (say, > 1,600 tokens), we'll recursively split it into smaller sections, trying to split along semantic boundaries like paragraphs
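The cleanup step above can be sketched as follows (a simplified sketch, not the notebook's exact code; real wikitext has edge cases a regex approach like this will miss):

```python
import re

def clean_section_text(text: str) -> str:
    """Strip <ref>...</ref> citation markup and surrounding whitespace
    from a wikitext section."""
    text = re.sub(r"<ref.*?</ref>", "", text, flags=re.DOTALL)  # paired reference tags
    text = re.sub(r"<ref[^>/]*/>", "", text)                    # self-closing reference tags
    return text.strip()
```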
Found 1838 sections in 179 pages.
Filtered out 89 sections, leaving 1749 sections.
['Concerns and controversies at the 2022 Winter Olympics']
'{{Short description|Overview of concerns and controversies surrounding the Ga...'
['Concerns and controversies at the 2022 Winter Olympics', '==Criticism of host selection==']
'American sportscaster [[Bob Costas]] criticized the [[International Olympic C...'
['Concerns and controversies at the 2022 Winter Olympics', '==Organizing concerns and controversies==', '===Cost and climate===']
'Several cities withdrew their applications during [[Bids for the 2022 Winter ...'
['Concerns and controversies at the 2022 Winter Olympics', '==Organizing concerns and controversies==', '===Promotional song===']
'Some commentators alleged that one of the early promotional songs for the [[2...'
['Concerns and controversies at the 2022 Winter Olympics', '== Diplomatic boycotts or non-attendance ==']
'<section begin=boycotts />\n[[File:2022 Winter Olympics (Beijing) diplomatic b...'

Next, we'll recursively split long sections into smaller sections.

There's no perfect recipe for splitting text into sections.

Some tradeoffs include:

  • Longer sections may be better for questions that require more context
  • Longer sections may be worse for retrieval, as they may have more topics muddled together
  • Shorter sections are better for reducing costs (which are proportional to the number of tokens)
  • Shorter sections allow more sections to be retrieved, which may help with recall
  • Overlapping sections may help prevent answers from being cut by section boundaries

Here, we'll use a simple approach and limit sections to 1,600 tokens each, recursively halving any sections that are too long. To avoid cutting in the middle of useful sentences, we'll split along paragraph boundaries when possible.
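That recursive halving can be sketched like this (function names are hypothetical, and the token counter is a rough whitespace proxy standing in for a real tokenizer such as tiktoken):

```python
def num_tokens(text: str) -> int:
    # Rough proxy: ~1 token per whitespace-separated word. The notebook
    # presumably uses a real tokenizer (e.g., tiktoken) instead.
    return len(text.split())

def split_recursively(text: str, max_tokens: int = 1600) -> list[str]:
    """Recursively halve text along paragraph boundaries until every piece
    fits within max_tokens."""
    if num_tokens(text) <= max_tokens:
        return [text]
    paragraphs = text.split("\n\n")
    if len(paragraphs) == 1:
        # No paragraph boundary to split on: fall back to halving by words.
        words = text.split()
        mid = len(words) // 2
        return (split_recursively(" ".join(words[:mid]), max_tokens)
                + split_recursively(" ".join(words[mid:]), max_tokens))
    mid = len(paragraphs) // 2
    left = "\n\n".join(paragraphs[:mid])
    right = "\n\n".join(paragraphs[mid:])
    return split_recursively(left, max_tokens) + split_recursively(right, max_tokens)
```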

1749 Wikipedia sections split into 2052 strings.
Concerns and controversies at the 2022 Winter Olympics

==Criticism of host selection==

American sportscaster [[Bob Costas]] criticized the [[International Olympic Committee]]'s (IOC) decision to award the games to China saying "The IOC deserves all of the disdain and disgust that comes their way for going back to China yet again" referencing China's human rights record.

After winning two gold medals and returning to his home country of Sweden skater [[Nils van der Poel]] criticized the IOC's selection of China as the host saying "I think it is extremely irresponsible to give it to a country that violates human rights as blatantly as the Chinese regime is doing." He had declined to criticize China before leaving for the games saying "I don't think it would be particularly wise for me to criticize the system I'm about to transition to, if I want to live a long and productive life."

3. Embed document chunks

Now that we've split our library into shorter self-contained strings, we can compute embeddings for each.

(For large embedding jobs, use a script like api_request_parallel_processor.py to parallelize requests while throttling to stay under rate limits.)

Batch 0 to 999
Batch 1000 to 1999
Batch 2000 to 2999
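A sketch of the batching loop that produced the output above, assuming the openai v1 Python client (the model name is an assumption, not necessarily the notebook's choice):

```python
BATCH_SIZE = 1000  # up to 1,000 inputs per request, matching the batch output above

def batches(strings: list[str], batch_size: int = BATCH_SIZE):
    """Yield consecutive slices of `strings`, one slice per embeddings request."""
    for start in range(0, len(strings), batch_size):
        yield strings[start : start + batch_size]

def embed_all(strings: list[str], model: str = "text-embedding-3-small") -> list[list[float]]:
    """Embed every string with the OpenAI API, one request per batch."""
    from openai import OpenAI  # imported here so the batching helper works without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    embeddings: list[list[float]] = []
    for i, batch in enumerate(batches(strings)):
        print(f"Batch {i * BATCH_SIZE} to {i * BATCH_SIZE + len(batch) - 1}")
        response = client.embeddings.create(model=model, input=batch)
        embeddings.extend(item.embedding for item in response.data)
    return embeddings
```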

4. Store document chunks and embeddings

Because this example only uses a few thousand strings, we'll store them in a CSV file.

(For larger datasets, use a vector database, which will be more performant.)

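A stdlib-only sketch of the storage step (the original presumably builds a pandas DataFrame and calls to_csv; JSON-encoding each embedding keeps it parseable when reloading):

```python
import csv
import json

def save_embeddings(path: str, texts: list[str], embeddings: list[list[float]]) -> None:
    """Write (text, embedding) rows to a CSV file. Each embedding is
    JSON-encoded so it can be parsed back into a list of floats on load."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "embedding"])
        for text, embedding in zip(texts, embeddings):
            writer.writerow([text, json.dumps(embedding)])
```

Reloading is the mirror image: read the CSV back and json.loads each embedding cell.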