Gemma3N (4B) Vision

Installation

[ ]
[ ]
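
The install cells above were stripped from this export. A minimal sketch of what an Unsloth Colab install typically looks like (exact pins and extra dependencies may differ):

    %%capture
    # Install Unsloth; on Colab this pulls in compatible torch/transformers builds.
    !pip install unsloth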

Unsloth

[1]
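The code for this cell was stripped. A minimal sketch of the loading call, assuming the unsloth/gemma-3n-E4B-it checkpoint (the exact model id and options may differ):

    from unsloth import FastVisionModel

    # Load Gemma3N (4B) in 4-bit so it fits on a 16 GB Tesla T4.
    model, processor = FastVisionModel.from_pretrained(
        "unsloth/gemma-3n-E4B-it",               # assumed model id
        load_in_4bit = True,                     # 4-bit quantization to cut memory use
        use_gradient_checkpointing = "unsloth",  # Unsloth's long-context checkpointing
    )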
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.7.11: Fast Gemma3N patching. Transformers: 4.53.3.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.6.0+cu124. CUDA: 7.5. CUDA Toolkit: 12.4. Triton: 3.2.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.29.post3. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Gemma3N does not support SDPA - switching to eager!

We now add LoRA adapters for parameter-efficient fine-tuning, which lets us efficiently train only about 1% of the model's parameters.

[NEW] We also support fine-tuning only the vision component, only the language component, or both. Additionally, you can choose to fine-tune the attention modules, the MLP layers, or both!

[2]
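The stripped cell here attaches the LoRA adapters. A sketch matching the options described above (the rank and other hyperparameters are illustrative):

    model = FastVisionModel.get_peft_model(
        model,
        finetune_vision_layers     = True,   # fine-tune the vision component
        finetune_language_layers   = True,   # fine-tune the language component
        finetune_attention_modules = True,   # fine-tune attention modules
        finetune_mlp_modules       = True,   # fine-tune MLP layers
        r = 16,                              # LoRA rank; higher = more capacity, more memory
        lora_alpha = 16,                     # keeping alpha equal to r is a common default
        lora_dropout = 0,
        bias = "none",
        random_state = 3407,
    )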
Unsloth: Making `model.base_model.model.model.language_model` require gradients

Data Prep

We'll use a sampled dataset of handwritten math formulas. The objective is to convert these images into a computer-readable format—specifically LaTeX—so they can be rendered. This is particularly useful for complex expressions.

You can access the dataset here. The full dataset is here.

[3]
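A sketch of the dataset-loading cell; the dataset id is an assumption, since the links above were not preserved in this export:

    from datasets import load_dataset

    # Assumed id of the sampled handwritten-LaTeX dataset.
    dataset = load_dataset("unsloth/LaTeX_OCR", split = "train")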

Let's get an overview of the dataset. We'll examine the second image and its corresponding caption.

[4]
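Cells [4] through [6] presumably inspected the dataset and one sample; a minimal sketch of all three:

    dataset                # cell [4]: prints the schema and row count
    dataset[2]["image"]    # cell [5]: displays the handwritten formula as a PIL image
    dataset[2]["text"]     # cell [6]: the corresponding LaTeX caption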
Dataset({
    features: ['image', 'text'],
    num_rows: 68686
})
[5]
(image output: the sample's handwritten formula)
[6]
'H ^ { \\prime } = \\beta N \\int d \\lambda \\biggl \\{ \\frac { 1 } { 2 \\beta ^ { 2 } N ^ { 2 } } \\partial _ { \\lambda } \\zeta ^ { \\dagger } \\partial _ { \\lambda } \\zeta + V ( \\lambda ) \\zeta ^ { \\dagger } \\zeta \\biggr \\} \\ .'

We can also render LaTeX directly in the browser!

[7]
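A sketch of how such a rendering cell can work, using IPython's Math display; the helper name and the sample rendered are illustrative, not Unsloth API:

    from IPython.display import Math, display

    def render_latex(latex_str):
        # Render a raw LaTeX string inline in the notebook.
        display(Math(latex_str))

    render_latex(dataset[2]["text"])  # or any LaTeX string from the dataset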
$\displaystyle \sigma ^ { \mu } \frac { \lambda ^ { a } } { 2 } A _ { \mu } ^ { a } .$

To format the dataset, all vision fine-tuning tasks should follow this format:

    [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image", "image": sample["image"]},
            ],
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": answer},
            ],
        },
    ]

[8]

Let's convert the dataset into the "correct" format for finetuning:

[9]
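A sketch of the conversion cell, built to produce exactly the structure shown below (the instruction string is taken from the output in cell [10]):

    instruction = "Write the LaTeX representation for this image."

    def convert_to_conversation(sample):
        conversation = [
            {"role": "user",
             "content": [
                 {"type": "text",  "text": instruction},
                 {"type": "image", "image": sample["image"]},
             ]},
            {"role": "assistant",
             "content": [
                 {"type": "text", "text": sample["text"]},
             ]},
        ]
        return {"messages": conversation}

    converted_dataset = [convert_to_conversation(sample) for sample in dataset]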

The first example is now structured like below:

[10]
{'messages': [{'role': 'user',
   'content': [{'type': 'text',
     'text': 'Write the LaTeX representation for this image.'},
    {'type': 'image',
     'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x40>}]},
  {'role': 'assistant',
   'content': [{'type': 'text',
     'text': '{ \\frac { N } { M } } \\in { \\bf Z } , { \\frac { M } { P } } \\in { \\bf Z } , { \\frac { P } { Q } } \\in { \\bf Z }'}]}]}

Let's take the Gemma 3n instruction chat template and apply it to our base model:

[11]
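A sketch of applying the chat template; the registered template name "gemma-3n" is an assumption:

    from unsloth import get_chat_template

    processor = get_chat_template(processor, "gemma-3n")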

Before fine-tuning, let us evaluate the base model's performance. We do not expect strong results, as it has not encountered this chat template before.

[12]
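A sketch of the baseline generation cell, following the pattern of Unsloth's other vision notebooks (details may differ):

    from transformers import TextStreamer

    FastVisionModel.for_inference(model)  # switch Unsloth into inference mode

    image = dataset[2]["image"]
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]}]
    input_text = processor.apply_chat_template(messages, add_generation_prompt = True)
    inputs = processor(images = image, text = input_text,
                       add_special_tokens = False, return_tensors = "pt").to("cuda")

    _ = model.generate(**inputs,
                       streamer = TextStreamer(processor.tokenizer, skip_prompt = True),
                       max_new_tokens = 128,
                       temperature = 1.0, top_p = 0.95, top_k = 64)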
  number="1" 
  count_sym=1 
  count_sym_arg=1 
  count_sym_arg_arg=1 
  count_sym_arg_arg_arg=1 
  count_sym_arg_arg_arg_arg=1 
  count_sym_arg_arg_arg_arg_arg=1 
  count_sym_arg_arg_arg_arg_arg_arg=1 
  count_sym_arg_arg_arg_arg_arg_arg_arg=1 
  

You can see it's absolutely terrible! It doesn't follow instructions at all.

Train the model

Now let's train our model. We do 60 steps to speed things up, but you can set num_train_epochs=1 for a full run and turn off max_steps by setting it to None. We also support TRL's DPOTrainer!

We use our new UnslothVisionDataCollator which will help in our vision finetuning setup.

[13]
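A sketch of the trainer setup; batch size, accumulation steps, and step count mirror the summary printed below, while the remaining values are illustrative:

    from unsloth import is_bf16_supported
    from unsloth.trainer import UnslothVisionDataCollator
    from trl import SFTTrainer, SFTConfig

    FastVisionModel.for_training(model)  # back into training mode

    trainer = SFTTrainer(
        model = model,
        tokenizer = processor,  # newer TRL versions name this processing_class
        data_collator = UnslothVisionDataCollator(model, processor),  # required for vision
        train_dataset = converted_dataset,
        args = SFTConfig(
            per_device_train_batch_size = 1,
            gradient_accumulation_steps = 4,
            max_steps = 60,           # use num_train_epochs = 1, max_steps = None for a full run
            learning_rate = 2e-4,
            fp16 = not is_bf16_supported(),  # the T4 above has no bfloat16
            bf16 = is_bf16_supported(),
            optim = "adamw_8bit",
            weight_decay = 0.01,
            lr_scheduler_type = "linear",
            seed = 3407,
            output_dir = "outputs",
            report_to = "none",
            # Settings vision finetuning needs:
            remove_unused_columns = False,
            dataset_text_field = "",
            dataset_kwargs = {"skip_prepare_dataset": True},
            max_seq_length = 2048,
        ),
    )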
Unsloth: Model does not have a default image size - using 512
[ ]
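This stripped cell is Unsloth's usual memory-stats snippet, which produced the two lines below; a sketch:

    import torch

    gpu_stats = torch.cuda.get_device_properties(0)
    start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
    max_memory = round(gpu_stats.total_memory / 1024**3, 3)
    print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
    print(f"{start_gpu_memory} GB of memory reserved.")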
GPU = Tesla T4. Max memory = 14.741 GB.
5.416 GB of memory reserved.
[14]
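The training cell itself is presumably a one-liner:

    trainer_stats = trainer.train()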
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 68,686 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 1 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (1 x 4 x 1) = 4
 "-____-"     Trainable parameters = 76,840,960 of 7,926,819,152 (0.97% trained)
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
[ ]
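A sketch of the cell computing the final time and memory statistics shown below, reusing start_gpu_memory and max_memory from before training:

    used_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
    used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
    used_percentage = round(used_memory / max_memory * 100, 3)
    lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
    print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
    print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
    print(f"Peak reserved memory = {used_memory} GB.")
    print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
    print(f"Peak reserved memory % of max memory = {used_percentage} %.")
    print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")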
760.6053 seconds used for training.
12.68 minutes used for training.
Peak reserved memory = 6.061 GB.
Peak reserved memory for training = 0.645 GB.
Peak reserved memory % of max memory = 41.117 %.
Peak reserved memory for training % of max memory = 4.376 %.

Inference

Let's run the model! You can modify the instruction and input—just leave the output blank.

We'll use the recommended inference hyperparameters for Gemma: top_p=0.95, top_k=64, and temperature=1.0.

[15]
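Prompt construction is the same as in the baseline cell; only the sampling hyperparameters above matter here. A sketch:

    FastVisionModel.for_inference(model)  # back to inference mode

    _ = model.generate(**inputs,
                       streamer = TextStreamer(processor.tokenizer, skip_prompt = True),
                       max_new_tokens = 128,
                       temperature = 1.0, top_p = 0.95, top_k = 64)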
[ [ B _ { n } ^ { + } , b _ { 2 } ^ { + } ] = n B _ { n } ^ { + } , \quad [ [ B _ { n } ^ { - } , b _ { 2 } ^ { + } ] , b _ { 2 } ^ { - } ] = n B _ { n } ^ { - } .
<eos>

Saving, loading finetuned models

To save the final model as LoRA adapters, use Hugging Face’s push_to_hub for online saving, or save_pretrained for local storage.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[16]
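A sketch of the local save that produced the path below; hub names and tokens are placeholders:

    model.save_pretrained("lora_model")       # saves only the LoRA adapters
    processor.save_pretrained("lora_model")

    # Online saving to the Hugging Face Hub (placeholders):
    # model.push_to_hub("your_name/lora_model", token = "...")
    # processor.push_to_hub("your_name/lora_model", token = "...")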
['lora_model/processor_config.json']

Now, if you want to load the LoRA adapters we just saved for inference, change False to True in the cell below:

[ ]
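A sketch of the guarded reload cell:

    if False:  # change to True to load the adapters we just saved
        from unsloth import FastVisionModel
        model, processor = FastVisionModel.from_pretrained(
            "lora_model",      # the directory we saved to
            load_in_4bit = True,
        )
        FastVisionModel.for_inference(model)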

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.

[ ]
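A sketch of the guarded merged-save cell; usernames and tokens are placeholders:

    if False:  # merge LoRA into the base weights and save locally in float16
        model.save_pretrained_merged("model", processor, save_method = "merged_16bit")
    if False:  # or upload the merged model to your Hugging Face account
        model.push_to_hub_merged("your_name/model", processor,
                                 save_method = "merged_16bit", token = "...")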

And we're done! If you have any questions about Unsloth, want to report bugs, keep up with the latest LLM developments, or need help with your projects, feel free to join our Discord channel!

Some other resources:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!

Join our Discord if you need help + ⭐️ Star us on GitHub ⭐️