Kaggle Gpt Oss (20B) 500K Context Fine Tuning


To run this, press "Runtime" and press "Run all" on a free Tesla T4 Google Colab instance!

Join Discord if you need help + ⭐ Star us on Github

To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.

You will learn how to prepare your data, train the model, run inference, and save the finetuned result.

News

Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog

You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog

Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog

3x faster LLM training with 30% less VRAM and 500K context.

New in Reinforcement Learning: FP8 RL, Vision RL, Standby, and gpt-oss RL.

Visit our docs for all our model uploads and notebooks.

Installation

[ ]
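The install cell above is not shown; on a fresh instance it typically runs something like the following (a sketch — the exact pinned versions vary by Unsloth release):

```shell
# Install Unsloth; on Colab/Kaggle this is usually run with a leading "!"
pip install unsloth
```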

Unsloth

We're about to demonstrate the power of the new OpenAI GPT-OSS 20B model through a finetuning example. To use our MXFP4 inference example, use this notebook instead.
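The loading cell's source is not shown above; a minimal sketch of what it likely contains, using Unsloth's documented `FastLanguageModel.from_pretrained` API (the model name and settings here are assumptions, not the cell's actual values):

```python
from unsloth import FastLanguageModel

# Assumption: Unsloth's gpt-oss 20B upload; the real cell's arguments aren't shown.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b",
    max_seq_length = 500_000,   # long-context run, as in the notebook title
    load_in_4bit = True,        # 4-bit quantization so the model fits in VRAM
    dtype = None,               # auto-detect (bfloat16 on an A100, per the log below)
)
```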

[3]
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.11.4: Fast Gpt_Oss patching. Transformers: 4.56.2.
   \\   /|    NVIDIA A100-SXM4-80GB. Num GPUs = 1. Max memory: 79.318 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 8.0. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = TRUE. FA [Xformers = None. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!

We now add LoRA adapters for parameter-efficient finetuning - this allows us to efficiently train only a small fraction of all parameters (about 0.02% here, per the training log).
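The adapter cell is not shown; a sketch using Unsloth's `get_peft_model` API (the rank and module list are typical defaults, assumed rather than taken from the hidden cell):

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 8,                     # LoRA rank (assumption; a small rank keeps VRAM low)
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",  # offloads activations to save VRAM
)
```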

[4]
Unsloth: Making `model.base_model.model.model` require gradients

Data Prep

We'll be using the https://www.gutenberg.org free Ebooks database. We'll get the most popular titles sorted by downloads as listed here: https://www.gutenberg.org/ebooks/search/?sort_order=downloads

We then will mimic long context training by combining "Moby Dick" and "Pride and Prejudice" together
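The combination step is plain string concatenation; the logged context lengths are token counts measured with the tokenizer, and they add up (310,651 + 175,836 = 486,487, the number printed below). A minimal sketch with tiny stand-in strings (the real cell downloads the full Gutenberg texts):

```python
def combine_books(*books: str) -> str:
    """Concatenate several raw texts into one long-context training sample."""
    return "\n\n".join(books)

# Tiny stand-ins for the real texts (assumption: the actual cell downloads
# "Moby Dick" and "Pride and Prejudice" and strips the Gutenberg headers).
moby = "Call me Ishmael."
pride = "It is a truth universally acknowledged..."

combined = combine_books(moby, pride)
# The joined length is the sum of the parts plus the 2-character separator.
assert len(combined) == len(moby) + len(pride) + 2
```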

[10]
The book 'Moby Dick' has context length = 310,651
The book 'Pride and Prejudice' has context length = 175,836
[11]
486487

We combine the datasets and use gpt-oss's formatting:
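The formatting cell likely calls `tokenizer.apply_chat_template`, which wraps the text in gpt-oss's "harmony" special tokens — that small fixed overhead explains why the count grows from 486,487 to 486,607. A rough sketch of the token layout (this helper is hypothetical; the real cell uses the tokenizer's chat template):

```python
# Hypothetical helper illustrating the harmony-style wrapping gpt-oss uses:
# <|start|>{role}<|message|>{content}<|end|>
def to_harmony(role: str, content: str) -> str:
    return f"<|start|>{role}<|message|>{content}<|end|>"

sample = to_harmony("user", "Continue the novel.")
```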

[20]
[24]
486607

Train the model

Now let's train our model. We do 1 step to speed things up, but you can set num_train_epochs=1 for a full run and set max_steps=None.
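The trainer cell is not shown; a sketch of a typical Unsloth setup with TRL's `SFTTrainer` (hyperparameters here are common defaults, assumed rather than copied from the hidden cell — only batch size 1, gradient accumulation 1, and max_steps=1 are confirmed by the log below):

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,            # the single combined-book example
    args = SFTConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 1,
        max_steps = 1,                  # 1 step for this demo; set to None for a full run
        num_train_epochs = 1,
        learning_rate = 2e-4,           # assumption: a common QLoRA default
        logging_steps = 1,
        optim = "adamw_8bit",
        output_dir = "outputs",
    ),
)
```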

[25]
num_proc must be <= 1. Reducing num_proc to 1 for dataset of size 1.
[26]
GPU = NVIDIA A100-SXM4-80GB. Max memory = 79.318 GB.
19.354 GB of memory reserved.

Let's train the model! To resume a training run, call trainer.train(resume_from_checkpoint = True)


NOTE

Long context training can take a VERY long time, even for 1 step. Expect to wait around 40 minutes for 500K context lengths or more.


[27]
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 1 | Num Epochs = 1 | Total steps = 1
O^O/ \_/ \    Batch size per device = 1 | Gradient accumulation steps = 1
\        /    Data Parallel GPUs = 1 | Total batch size (1 x 1 x 1) = 1
 "-____-"     Trainable parameters = 3,981,312 of 20,918,738,496 (0.02% trained)
Unsloth: Will smartly offload gradients to save VRAM!
[28]
2397.1358 seconds used for training.
39.95 minutes used for training.
Peak reserved memory = 70.451 GB.
Peak reserved memory for training = 51.097 GB.
Peak reserved memory % of max memory = 88.821 %.
Peak reserved memory for training % of max memory = 64.42 %.
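The derived figures follow from simple arithmetic on the reserved-memory readings (in the notebook these come from torch.cuda.max_memory_reserved()); a quick check against the logged numbers:

```python
max_memory = 79.318       # GB, total on the A100 (from the log above)
start_reserved = 19.354   # GB reserved before training started
peak_reserved = 70.451    # GB peak reserved during training

# "For training" = peak minus what was already reserved by the loaded model.
used_for_training = round(peak_reserved - start_reserved, 3)
peak_pct = round(peak_reserved / max_memory * 100, 3)
training_pct = round(used_for_training / max_memory * 100, 3)

print(used_for_training)   # 51.097
print(peak_pct)            # 88.821
print(training_pct)        # 64.42
```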

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] Finetunes can currently only be loaded via Unsloth - we're working on vLLM and GGUF exporting!
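The saving cell is not shown; a sketch of both options (the directory and repo names here are placeholders, not values from the notebook):

```python
# Local save of the LoRA adapters (hypothetical directory name):
model.save_pretrained("gpt-oss-20b-lora")
tokenizer.save_pretrained("gpt-oss-20b-lora")

# Or push to the Hugging Face Hub (hypothetical repo name; needs an HF token):
# model.push_to_hub("your_name/gpt-oss-20b-lora", token = "hf_...")
```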

[29]

Saving to float16 for vLLM or mxfp4

We also support saving to float16 or mxfp4 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
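A sketch of the merged save using Unsloth's `save_pretrained_merged` / `push_to_hub_merged` APIs (directory and repo names are placeholders):

```python
# Merge the LoRA adapters into the base weights and save as float16 (for vLLM):
model.save_pretrained_merged("gpt-oss-20b-finetune", tokenizer,
                             save_method = "merged_16bit")

# Or upload the merged model directly (hypothetical repo name; needs an HF token):
# model.push_to_hub_merged("your_name/gpt-oss-20b-finetune", tokenizer,
#                          save_method = "merged_16bit", token = "hf_...")
```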

[30]

And we're done! If you have any questions about Unsloth, want to report bugs, keep up with the latest LLM developments, or join community projects, feel free to join our Discord!

Some other resources:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!
