Kaggle Llama3.3 (70B) A100 Conversational
To run this, press "Runtime" and press "Run all" on your A100 Google Colab Pro instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, & how to save it
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. Blog
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
Unsloth
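A minimal install cell (exact package pins may differ on your instance):

```python
%%capture
# Unsloth bundles its patched kernels and pulls in TRL, PEFT, and bitsandbytes.
!pip install unsloth
```

Loading the model then follows Unsloth's standard API. The checkpoint name below is an assumption (a 4-bit Llama 3.3 70B Instruct upload from Unsloth's model zoo) and produces the shard downloads shown under the banner:

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # any length works; Unsloth auto-supports RoPE scaling
load_in_4bit = True    # 4-bit quantization so the 70B model fits on one A100

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.3-70B-Instruct-bnb-4bit",  # assumed model id
    max_seq_length = max_seq_length,
    load_in_4bit = load_in_4bit,
)
```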
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.9.1: Fast Llama patching. Transformers: 4.55.4.
   \\   /|    NVIDIA A100-SXM4-80GB. Num GPUs = 1. Max memory: 79.318 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.8.0+cu126. CUDA: 8.0. CUDA Toolkit: 12.6. Triton: 3.4.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.32.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
(Downloads the eight model shards, ~39.5 GB in total, plus the generation config, tokenizer config, tokenizer, and special tokens map.)
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
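A typical configuration via FastLanguageModel.get_peft_model; with r = 16 on all attention and MLP projections of the 80 layers, this matches the 207,093,760 trainable parameters reported later:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank; 8, 16, 32, 64, 128 are common choices
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,   # 0 is optimized in Unsloth
    bias = "none",      # "none" is optimized
    use_gradient_checkpointing = "unsloth",  # offloads activations to save VRAM
    random_state = 3407,
)
```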
Unsloth 2025.9.1 patched 80 layers with 80 QKV layers, 80 O layers and 80 MLP layers.
Data Prep
We now use the Llama-3.1 format for conversation style finetunes. We use Maxime Labonne's FineTome-100k dataset in ShareGPT style. But we convert it to HuggingFace's normal multiturn format ("role", "content") instead of ("from", "value"). Llama-3 renders multi-turn conversations like below:
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Hey there! How are you?<|eot_id|><|start_header_id|>user<|end_header_id|>
I'm great thanks!<|eot_id|>
We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, phi3, llama3 and more.
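A sketch of the template setup plus the dataset download that follows:

```python
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset

# The llama-3.1 template also covers Llama 3.3 models.
tokenizer = get_chat_template(tokenizer, chat_template = "llama-3.1")

dataset = load_dataset("mlabonne/FineTome-100k", split = "train")
```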
(Downloads the FineTome-100k dataset, ~117 MB, and generates the 100,000-example train split.)
We now use standardize_sharegpt to convert ShareGPT style datasets into HuggingFace's generic format. This changes the dataset from looking like:
{"from": "system", "value": "You are an assistant"}
{"from": "human", "value": "What is 2+2?"}
{"from": "gpt", "value": "It's 4."}
to
{"role": "system", "content": "You are an assistant"}
{"role": "user", "content": "What is 2+2?"}
{"role": "assistant", "content": "It's 4."}
We look at how the conversations are structured for item 5:
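The inspection is just an index into the dataset:

```python
dataset[5]["conversations"]
```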
[{'content': 'How do astronomers determine the original wavelength of light emitted by a celestial body at rest, which is necessary for measuring its speed using the Doppler effect?',
  'role': 'user'},
 {'content': 'Astronomers make use of the unique spectral fingerprints of elements found in stars. These elements emit and absorb light at specific, known wavelengths, forming an absorption spectrum. By analyzing the light received from distant stars and comparing it to the laboratory-measured spectra of these elements, astronomers can identify the shifts in these wavelengths due to the Doppler effect. The observed shift tells them the extent to which the light has been redshifted or blueshifted, thereby allowing them to calculate the speed of the star along the line of sight relative to Earth.',
  'role': 'assistant'}]

And we see how the chat template transformed these conversations.
[Notice] Llama 3.1 Instruct's default chat template adds "Cutting Knowledge Date: December 2023\nToday Date: 26 July 2024", so do not be alarmed!
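Printing the templated text for the same item gives:

```python
dataset[5]["text"]
```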
'<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 July 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHow do astronomers determine the original wavelength of light emitted by a celestial body at rest, which is necessary for measuring its speed using the Doppler effect?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nAstronomers make use of the unique spectral fingerprints of elements found in stars. These elements emit and absorb light at specific, known wavelengths, forming an absorption spectrum. By analyzing the light received from distant stars and comparing it to the laboratory-measured spectra of these elements, astronomers can identify the shifts in these wavelengths due to the Doppler effect. The observed shift tells them the extent to which the light has been redshifted or blueshifted, thereby allowing them to calculate the speed of the star along the line of sight relative to Earth.<|eot_id|>'
We also use Unsloth's train_on_responses_only method to only train on the assistant outputs and ignore the loss on the user's inputs.
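A sketch of the trainer plus the response-only wrapper. The batch size, gradient accumulation steps, and 60 max steps match the training log below; the remaining hyperparameters are illustrative defaults, and the exact placement of arguments can vary with your TRL version:

```python
from trl import SFTTrainer, SFTConfig
from unsloth.chat_templates import train_on_responses_only

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)

# Compute the loss only on tokens after each assistant header.
trainer = train_on_responses_only(
    trainer,
    instruction_part = "<|start_header_id|>user<|end_header_id|>\n\n",
    response_part = "<|start_header_id|>assistant<|end_header_id|>\n\n",
)
```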
We verify masking is actually done:
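One way to check, as in the reference notebooks: decode the raw input_ids, then decode the labels with masked (-100) positions swapped for spaces, so only the tokens that contribute to the loss remain visible:

```python
print(tokenizer.decode(trainer.train_dataset[5]["input_ids"]))

space = tokenizer(" ", add_special_tokens = False).input_ids[0]
print(tokenizer.decode(
    [space if x == -100 else x for x in trainer.train_dataset[5]["labels"]]
))
```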
'<|begin_of_text|><|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 July 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHow do astronomers determine the original wavelength of light emitted by a celestial body at rest, which is necessary for measuring its speed using the Doppler effect?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nAstronomers make use of the unique spectral fingerprints of elements found in stars. These elements emit and absorb light at specific, known wavelengths, forming an absorption spectrum. By analyzing the light received from distant stars and comparing it to the laboratory-measured spectra of these elements, astronomers can identify the shifts in these wavelengths due to the Doppler effect. The observed shift tells them the extent to which the light has been redshifted or blueshifted, thereby allowing them to calculate the speed of the star along the line of sight relative to Earth.<|eot_id|>'
' Astronomers make use of the unique spectral fingerprints of elements found in stars. These elements emit and absorb light at specific, known wavelengths, forming an absorption spectrum. By analyzing the light received from distant stars and comparing it to the laboratory-measured spectra of these elements, astronomers can identify the shifts in these wavelengths due to the Doppler effect. The observed shift tells them the extent to which the light has been redshifted or blueshifted, thereby allowing them to calculate the speed of the star along the line of sight relative to Earth.<|eot_id|>'
We can see the System and Instruction prompts are successfully masked!
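The memory snapshot below comes from torch's CUDA helpers:

```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
```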
GPU = NVIDIA A100-SXM4-80GB. Max memory = 79.318 GB. 39.221 GB of memory reserved.
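Then kick off training:

```python
trainer_stats = trainer.train()
```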
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 100,000 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 4 x 1) = 8
 "-____-"     Trainable parameters = 207,093,760 of 70,760,800,256 (0.29% trained)
Unsloth: Will smartly offload gradients to save VRAM!
1001.6837 seconds used for training.
16.69 minutes used for training.
Peak reserved memory = 42.789 GB.
Peak reserved memory for training = 3.568 GB.
Peak reserved memory % of max memory = 53.946 %.
Peak reserved memory for training % of max memory = 4.498 %.
Inference
Let's run the model! You can change the instruction and input - leave the output blank!
We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.
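An inference sketch with those sampling settings (the prompt matches the output below):

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's 2x faster inference mode

messages = [
    {"role": "user", "content": "Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # required for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(
    input_ids = inputs, max_new_tokens = 64,
    use_cache = True, temperature = 1.5, min_p = 0.1,
)
tokenizer.batch_decode(outputs)
```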
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
['<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 July 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nContinue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nThe next two numbers in the Fibonacci sequence would be 13 and 21.\n\nFibonacci sequence is a series of numbers where a number is the addition of the last two numbers, starting with 1 and 1.\n1, 1, 2, 3, 5, 8, 13']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
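For example:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(
    input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
    use_cache = True, temperature = 1.5, min_p = 0.1,
)
```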
The next numbers in the Fibonacci sequence would be 13, 21, 34, 55, 89, and so on.<|eot_id|>
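To save the final model as LoRA adapters locally, the standard save_pretrained calls produce the files listed below (use push_to_hub instead for an online save):

```python
model.save_pretrained("lora_model")      # local LoRA adapter save
tokenizer.save_pretrained("lora_model")
```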
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/chat_template.jinja',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, set False to True:
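A sketch of that reload path, guarded by the flag:

```python
if False:  # set to True to reload the adapters saved above
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",
        max_seq_length = max_seq_length,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)
```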
Sure! The tall tower you might be thinking of in the capital of France is the Eiffel Tower. It is a magnificent structure that has stood tall and proud for over a century in Paris. Standing at an impressive 300 meters (984 feet) high, the Eiffel Tower is an engineering marvel that was first built for the 1889 World's Fair, also known as the Exposition Universelle in Paris. The tower was designed and constructed by Gustave Eiffel and his engineering company, Compagnie des Établissements Eiffel. It was intended to serve as the entrance arch for the World's
You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have unsloth installed. It can be hopelessly slow, since 4bit model downloading is not supported, and Unsloth's inference is 2x faster.
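For reference, the Hugging Face-only path looks roughly like this (assuming the same lora_model folder saved above):

```python
if False:
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",
        load_in_4bit = load_in_4bit,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```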
Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
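Guarded examples of each save method (swap in your own Hugging Face username and token):

```python
# Merge to 16bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge to 4bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")

# Just the LoRA adapters
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```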
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and by default save to q8_0. We allow all methods like q4_k_m. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our docs page):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
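Guarded examples for each route:

```python
# Save to 8bit Q8_0 (the default)
if False: model.save_pretrained_gguf("model", tokenizer)

# Save to q4_k_m
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Upload to the Hugging Face Hub
if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")
```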
[NEW] To finetune and auto export to Ollama, try our Ollama notebook
And we're done! If you have any questions about Unsloth, find any bugs, want to keep up to date with the latest LLM news, or just need help with your projects, feel free to join our Discord!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, AMD, and Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.