Gemma3 (4B) Vision
To run this, press "Runtime" and press "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, and how to save it.
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. 3x faster • 500K Context
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
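A minimal install cell for Colab (a sketch; on a local machine follow the installation guide linked above, and pin versions if your environment needs it):

%%capture
# Install Unsloth inside the Colab runtime.
!pip install unsloth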
Unsloth
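A sketch of loading the model in 4-bit (the model id follows Unsloth's usual naming and is an assumption; the patching banner below is the output of this step):

from unsloth import FastVisionModel

# Load a 4-bit quantized Gemma 3 4B instruct vision model (model id assumed from Unsloth's uploads).
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/gemma-3-4b-it",
    load_in_4bit = True,                      # 4-bit quantization to fit a free Tesla T4
    use_gradient_checkpointing = "unsloth",   # Unsloth's checkpointing for lower VRAM / longer context
)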
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.6.6: Fast Gemma3 patching. Transformers: 4.52.4.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.6.0+cu124. CUDA: 7.5. CUDA Toolkit: 12.4. Triton: 3.2.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.29.post3. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Using float16 precision for gemma3 won't work! Using float32.
We now add LoRA adapters for parameter-efficient fine-tuning, which lets us train only about 1% of the model's parameters.
[NEW] We also support fine-tuning only the vision component, only the language component, or both. Additionally, you can choose to fine-tune the attention modules, the MLP layers, or both!
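A sketch of the adapter setup (the hyperparameters are illustrative defaults, not the only valid choice; the gradient message below is the output of this step):

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,   # finetune the vision component
    finetune_language_layers   = True,   # finetune the language component
    finetune_attention_modules = True,   # finetune the attention modules
    finetune_mlp_modules       = True,   # finetune the MLP layers
    r = 16,              # LoRA rank: larger = more capacity, more VRAM
    lora_alpha = 16,     # usually set equal to r
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)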
Unsloth: Making `base_model.model.model.vision_tower.vision_model` require gradients
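Data prep: we fine-tune on a LaTeX OCR dataset of rendered formula images paired with their LaTeX source. A minimal loading sketch (the Hugging Face dataset id below is an assumption; swap in the one you actually use):

from datasets import load_dataset

# Image -> LaTeX caption pairs (dataset id assumed).
dataset = load_dataset("unsloth/LaTeX_OCR", split = "train")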
Let's take an overview of the dataset. We'll examine the second image and its corresponding caption.
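For example (the index is illustrative):

print(dataset)                 # overall structure: features and number of rows
print(dataset[2]["text"])      # the LaTeX source of one example
dataset[2]["image"]            # the rendered formula image (last expression displays inline)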
Dataset({
    features: ['image', 'text'],
    num_rows: 68686
})
'H ^ { \\prime } = \\beta N \\int d \\lambda \\biggl \\{ \\frac { 1 } { 2 \\beta ^ { 2 } N ^ { 2 } } \\partial _ { \\lambda } \\zeta ^ { \\dagger } \\partial _ { \\lambda } \\zeta + V ( \\lambda ) \\zeta ^ { \\dagger } \\zeta \\biggr \\} \\ .'
We can also render LaTeX directly in the browser!
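A small helper using IPython's Math display does the rendering (a sketch; the index is illustrative, and the formula shown below is one such caption from the dataset):

from IPython.display import Math, display

def render_latex(latex: str):
    # Wrap in \displaystyle so fractions and integrals render at full size.
    display(Math(rf"\displaystyle {latex}"))

render_latex(dataset[2]["text"])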
$\displaystyle \sigma ^ { \mu } \frac { \lambda ^ { a } } { 2 } A _ { \mu } ^ { a } .$
To format the dataset, all vision fine-tuning tasks should follow this format:
[
    {
        "role": "user",
        "content": [
            {"type": "text", "text": instruction},
            {"type": "image", "image": sample["image"]},
        ],
    },
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": sample["text"]},
        ],
    },
]
Let's convert the dataset into the "correct" format for finetuning:
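A sketch of the conversion (the instruction string matches the one visible in the example below; the function name is illustrative):

instruction = "Write the LaTeX representation for this image."

def convert_to_conversation(sample):
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image", "image": sample["image"]},
            ],
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": sample["text"]},
            ],
        },
    ]
    return {"messages": conversation}

converted_dataset = [convert_to_conversation(sample) for sample in dataset]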
The first example is now structured like below:
{'messages': [{'role': 'user',
   'content': [{'type': 'text',
     'text': 'Write the LaTeX representation for this image.'},
    {'type': 'image',
     'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x40>}]},
  {'role': 'assistant',
   'content': [{'type': 'text',
     'text': '{ \\frac { N } { M } } \\in { \\bf Z } , { \\frac { M } { P } } \\in { \\bf Z } , { \\frac { P } { Q } } \\in { \\bf Z }'}]}]}
Let's take the Gemma 3 instruction chat template and use it in our base model.
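A sketch of attaching the template (the chat template name "gemma-3" and the import path follow Unsloth's usual conventions and are assumptions):

from unsloth.chat_templates import get_chat_template

# Attach the Gemma 3 instruction chat template to our tokenizer/processor.
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "gemma-3",
)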
Before fine-tuning, let us evaluate the base model's performance. We do not expect strong results, as it has not encountered this chat template before.
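A minimal inference sketch (the example index and generation settings are illustrative; the garbled text below is the base model's output):

FastVisionModel.for_inference(model)  # switch the model into inference mode

image = dataset[2]["image"]
instruction = "Write the LaTeX representation for this image."

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(
    **inputs,
    streamer = text_streamer,
    max_new_tokens = 128,
    use_cache = True,
    temperature = 1.0, top_p = 0.95, top_k = 64,   # Gemma 3 recommended sampling settings
)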
You have set `compile_config`, but we are unable to meet the criteria for compilation. Compilation will be skipped.
model model <start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image><start_of_image>]](] )] ] ]] Write the LaTeX representation for this image. . . . Write the LaTeX representation for this image. Write the LaTeX representation for this image. Write the LaTeX representation for this image. Write the LaTeX representation for this image. Write the LaTeX representation for this image. Write the LaTeX representation for this image.
You can see it's absolutely terrible! It doesn't follow instructions at all.
Train the model
Now let's train our model. We do only 30 steps to speed things up, but you can set num_train_epochs=1 for a full run and set max_steps=None. We also support DPOTrainer and GRPOTrainer for reinforcement learning!
We use our new UnslothVisionDataCollator which will help in our vision finetuning setup.
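A sketch of the trainer setup (hyperparameters mirror the training log below where visible; everything else is a reasonable default, and the import paths follow Unsloth's vision notebooks, so adjust if your version differs):

from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig

FastVisionModel.for_training(model)  # switch back into training mode

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    data_collator = UnslothVisionDataCollator(model, tokenizer),  # required for vision finetuning
    train_dataset = converted_dataset,
    args = SFTConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 30,             # set num_train_epochs = 1 and max_steps = None for a full run
        learning_rate = 2e-4,
        fp16 = not is_bf16_supported(),
        bf16 = is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "none",
        # Settings required for vision finetuning:
        remove_unused_columns = False,
        dataset_text_field = "",
        dataset_kwargs = {"skip_prepare_dataset": True},
        dataset_num_proc = 4,
        max_seq_length = 2048,
    ),
)

trainer_stats = trainer.train()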
Unsloth: Switching to float32 training since model cannot work with float16
GPU = Tesla T4. Max memory = 14.741 GB. 5.416 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 68,686 | Num Epochs = 1 | Total steps = 30
O^O/ \_/ \    Batch size per device = 1 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (1 x 4 x 1) = 4
 "-____-"     Trainable parameters = 38,497,792/4,000,000,000 (0.96% trained)
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
760.6053 seconds used for training.
12.68 minutes used for training.
Peak reserved memory = 6.061 GB.
Peak reserved memory for training = 0.645 GB.
Peak reserved memory % of max memory = 41.117 %.
Peak reserved memory for training % of max memory = 4.376 %.
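Now let's run the model again after fine-tuning and compare. A minimal sketch, reusing image, input_text, and text_streamer from the pre-training inference cell; the LaTeX below is the new output:

FastVisionModel.for_inference(model)  # switch back into inference mode after training

# Same prompt as before training, so the outputs are directly comparable.
inputs = tokenizer(image, input_text, add_special_tokens = False, return_tensors = "pt").to("cuda")
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128, use_cache = True)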
[
\left( { B _ { n } ^ { + } , { q } _ { 2 } , { k } _ { 2 } ^ { + } \right)
[
\left. { { q } _ { 2 } , k _ { 2 } ^ { + } { k } _ { 2 } ^ { + } , { q } _ { 2 } ^ { + } } \right]
= n B _ { n } ^ { + } , { q _ { 2 } }
image<end_of_turn>
['lora_model/processor_config.json']
Now if you want to load the LoRA adapters we just saved for inference, set False to True:
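The file listing above comes from saving the adapters locally. A sketch of the save plus the guarded reload (flip the False to True to actually load; the folder name "lora_model" is just a convention):

model.save_pretrained("lora_model")       # saves only the small LoRA adapters, not the full model
tokenizer.save_pretrained("lora_model")   # produces the processor/tokenizer files listed above

if False:
    from unsloth import FastVisionModel
    model, tokenizer = FastVisionModel.from_pretrained(
        "lora_model",        # the adapter folder we just saved
        load_in_4bit = True,
    )
    FastVisionModel.for_inference(model)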
A _ { \mu } ^ { \alpha \beta } \bar { A } _ { \mu } ^ { \alpha \beta } = 0 ,<end_of_turn>
Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
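A sketch, guarded with if False so nothing runs by default; the output folder, repo name, and token are placeholders you must replace:

# Merge the LoRA adapters into the base weights and save in float16, e.g. for vLLM:
if False:
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")

# Or push the merged model to your Hugging Face account (repo name and token are placeholders):
if False:
    model.push_to_hub_merged(
        "YOUR_USERNAME/gemma-3-4b-latex-ocr", tokenizer,
        save_method = "merged_16bit", token = "hf_...",
    )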
And we're done! If you have any questions about Unsloth, if you find any bugs, want to keep up with the latest LLM developments, need help, or want to join community projects, feel free to join our Discord!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, AMD, Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.



