CodeGemma (7B) Conversational
To run this, press "Runtime" and press "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, and how to save it.
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context.
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
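The cell below is a minimal install sketch for a fresh Colab runtime; the actual notebook may pin specific versions, so treat the bare package name as an assumption.

```python
%%capture
# Install Unsloth on a fresh Colab runtime.
# Local installs should follow the installation guide linked above instead.
!pip install unsloth
```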
Unsloth
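A sketch of loading CodeGemma in 4-bit via FastLanguageModel, which produces the patching log below. The model name unsloth/codegemma-7b-it-bnb-4bit and the settings here are assumptions based on Unsloth's usual defaults.

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # Unsloth handles RoPE scaling internally
dtype = None           # None = auto-detect (float16 on a T4, since bfloat16 is unsupported)
load_in_4bit = True    # 4-bit quantization to fit the 7B model in T4 memory

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/codegemma-7b-it-bnb-4bit",  # assumed 4-bit upload name
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```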
==((====))==  Unsloth: Fast Gemma patching release 2024.4
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.2.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.25.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
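A sketch of the LoRA setup. Rank 16 over all attention and MLP projections is consistent with the 28 QKV, O, and MLP layers patched below, but the exact values are assumptions:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank: higher = more trainable parameters
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,  # 0 is Unsloth-optimized
    bias = "none",     # "none" is Unsloth-optimized
    use_gradient_checkpointing = "unsloth",  # trades compute for VRAM
    random_state = 3407,
)
```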
Unsloth 2024.4 patched 28 layers with 28 QKV layers, 28 O layers and 28 MLP layers.
Data Prep
We now use the ChatML format for conversation-style finetunes, with Open Assistant conversations in ShareGPT style. ChatML renders multi-turn conversations like below:
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What's the capital of France?<|im_end|>
<|im_start|>assistant
Paris.
[NOTE] To train only on completions (ignoring the user's input), read our docs here.
We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old and our own optimized unsloth template.
Normally one has to train both <|im_start|> and <|im_end|> as new tokens. Instead, we map <|im_end|> to the EOS token and leave <|im_start|> as is, so no additional tokens need to be trained.
Note that ShareGPT uses {"from": "human", "value": "Hi"} and not {"role": "user", "content": "Hi"}, so we use a mapping to convert between the two.
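Putting the three points above together, a sketch of the template and dataset setup. The dataset name philschmid/guanaco-sharegpt-style (Open Assistant conversations in ShareGPT format) is an assumption borrowed from similar Unsloth notebooks:

```python
from datasets import load_dataset
from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "chatml",  # also: zephyr, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # ShareGPT style
    map_eos_token = True,      # maps <|im_end|> to the EOS token, so no new tokens to train
)

def formatting_prompts_func(examples):
    # Render each ShareGPT-style conversation with the ChatML template.
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(c, tokenize = False, add_generation_prompt = False)
             for c in convos]
    return {"text": texts}

dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train")  # assumed dataset
dataset = dataset.map(formatting_prompts_func, batched = True)
```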
For text completions like novel writing, try this notebook.
Let's see how the ChatML format works by printing the 5th element:
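Assuming the dataset and formatting function sketched above:

```python
print(dataset[5]["conversations"])  # raw ShareGPT-style turns
print(dataset[5]["text"])           # the same turns rendered with ChatML
```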
[{'from': 'human',
  'value': 'What is the typical wattage of bulb in a lightbox?'},
 {'from': 'gpt',
  'value': 'The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.'},
 {'from': 'human',
  'value': 'Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.'}]

<bos><|im_start|>user
What is the typical wattage of bulb in a lightbox?<|im_end|>
<|im_start|>assistant
The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.<|im_end|>
<|im_start|>user
Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.<|im_end|>
If you're looking to make your own chat template, that is also possible! You must use the Jinja templating engine. We also provide our own stripped-down unsloth template, which we find to be more efficient; it leverages the ChatML, Zephyr, and Alpaca styles.
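As an illustration only (this is not the actual optimized unsloth template), a custom template might look like this in Jinja:

```python
# Hypothetical custom chat template: {{ ... }} substitutes values,
# {% ... %} is Jinja control flow.
custom_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
        "{% if message['role'] == 'user' %}"
            "{{ '>>> User: ' + message['content'] + '\n' }}"
        "{% elif message['role'] == 'assistant' %}"
            "{{ '>>> Assistant: ' + message['content'] + eos_token + '\n' }}"
        "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '>>> Assistant: ' }}{% endif %}"
)
```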
More info on chat templates on our wiki page!
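Train the model
The logs below come from a short supervised finetuning run with TRL's SFTTrainer. A minimal sketch of that setup: the batch size, accumulation steps, and step count are read off the banner underneath, while everything else (learning rate, warmup, optimizer) is an assumption; newer TRL versions also move some of these arguments into SFTConfig.

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = TrainingArguments(
        per_device_train_batch_size = 1,  # matches the banner below
        gradient_accumulation_steps = 4,  # total batch size = 4
        warmup_steps = 5,                 # assumed
        max_steps = 20,                   # matches "Total steps = 20"
        learning_rate = 2e-4,             # assumed common default
        fp16 = True,                      # the T4 has no bfloat16 support
        logging_steps = 1,
        optim = "adamw_8bit",             # assumed; saves optimizer memory
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```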
GPU = Tesla T4. Max memory = 14.748 GB. 5.967 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 9,033 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 1 | Gradient Accumulation steps = 4
\        /    Total batch size = 4 | Total steps = 20
 "-____-"     Number of trainable parameters = 50,003,968
164.0222 seconds used for training.
2.73 minutes used for training.
Peak reserved memory = 12.316 GB.
Peak reserved memory for training = 6.349 GB.
Peak reserved memory % of max memory = 83.51 %.
Peak reserved memory for training % of max memory = 43.05 %.
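Inference
Let's run the model! Since we finetuned with ChatML, we build the prompt with apply_chat_template and add_generation_prompt = True. A sketch follows; the "already a token" notes below come from re-applying get_chat_template first.

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's 2x faster native inference

messages = [{"from": "human", "value": "Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # must be True so the assistant turn is opened
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs))
```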
<|im_start|> is already a token. Skipping. <|im_end|> is already a token. Skipping.
['<bos><|im_start|>user\nContinue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|im_end|>\n<|im_start|>assistant\nThe next number in the Fibonacci sequence is 13.<|im_end|>']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
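A sketch, reusing inputs from the cell above:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer,
                   max_new_tokens = 128, use_cache = True)
```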
<bos><|im_start|>user
Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,<|im_end|>
<|im_start|>assistant
The next number in the Fibonacci sequence is 13.<|im_end|>
To save the final model as LoRA adapters, use save_pretrained for a local save or push_to_hub for an online one. Then, if you want to load the adapters back for inference, change False to True in the cell below:
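A sketch, assuming the adapters live in a local lora_model directory:

```python
model.save_pretrained("lora_model")      # local save of the LoRA adapters only
tokenizer.save_pretrained("lora_model")

if False:  # flip to True to reload the adapters for inference
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)
```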
<bos><|im_start|>user
What is a famous tall tower in Paris?<|im_end|>
<|im_start|>assistant
The Eiffel Tower is a famous tall tower in Paris. It was built in 1889 to commemorate the 100th anniversary of the French Revolution. It is 324 meters tall and is one of the most recognizable landmarks in the world.<|im_end|>
You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have unsloth installed. It can be hopelessly slow, since 4bit model downloading is not supported, and Unsloth's inference is 2x faster.
Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
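A sketch of the save calls; "hf/model" is a placeholder for your own username/repo, and the token comes from the settings page above:

```python
# Merge LoRA into the base weights and save as float16 (for vLLM etc.)
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
# Merge and save as int4
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
# Save only the LoRA adapters
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
# Upload to your Hugging Face account
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")
```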
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we now support it natively! We clone llama.cpp and default to saving as q8_0, though all methods like q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF; see the sketch after the list below.
Some supported quant methods (full list on our docs page):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
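A sketch of the GGUF save calls; again, "hf/model" is a placeholder for your own username/repo:

```python
# Save locally to the default q8_0
if False: model.save_pretrained_gguf("model", tokenizer)
# Save locally with a specific quant method
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
# Upload to Hugging Face
if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")
```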
And we're done! If you have any questions about Unsloth, want to report bugs, keep up with the latest LLM news, or need help with projects, feel free to join our Discord!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, and AMD or Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.