Kaggle Qwen2.5 VL (7B) Vision
To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
To install Unsloth on your local device, follow our guide. This notebook is licensed LGPL-3.0.
You will learn how to do data prep, how to train, how to run the model, and how to save it.
News
Train MoEs - DeepSeek, GLM, Qwen and gpt-oss 12x faster with 35% less VRAM. Blog
You can now train embedding models 1.8-3.3x faster with 20% less VRAM. Blog
Ultra Long-Context Reinforcement Learning is here with 7x more context windows! Blog
3x faster LLM training with 30% less VRAM and 500K context. Blog
New in Reinforcement Learning: FP8 RL • Vision RL • Standby • gpt-oss RL
Visit our docs for all our model uploads and notebooks.
Installation
Unsloth
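The log below is produced when importing Unsloth and loading the model. A minimal sketch of that loading step, assuming the 4-bit Unsloth upload of Qwen2.5-VL-7B-Instruct (the exact model id is an assumption):

```python
from unsloth import FastVisionModel

# Load the vision-language model in 4-bit so it fits on a free Tesla T4.
# The model id below is an assumption - swap in the checkpoint you want.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Qwen2.5-VL-7B-Instruct-bnb-4bit",
    load_in_4bit=True,                     # 4-bit quantization to save VRAM
    use_gradient_checkpointing="unsloth",  # Unsloth's gradient checkpointing for long context
)
```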
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2025.3.19: Fast Qwen2_5_Vl patching. Transformers: 4.50.0.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.6.0+cu124. CUDA: 7.5. CUDA Toolkit: 12.4. Triton: 3.2.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.29.post3. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
[Model weights, processor, and tokenizer files download from the Hugging Face Hub; progress bars omitted.]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.50, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
We now add LoRA adapters for parameter-efficient finetuning - this lets us efficiently train only about 1% of all parameters.
[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part - or you can select both! You can also choose to finetune the attention layers, the MLP layers, or both.
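A sketch of that step using Unsloth's get_peft_model; the LoRA hyperparameters shown are illustrative defaults:

```python
# Add LoRA adapters - only these small adapter weights will be trained.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,  # train the vision encoder
    finetune_language_layers   = True,  # train the language model
    finetune_attention_modules = True,  # train attention layers
    finetune_mlp_modules       = True,  # train MLP layers

    r = 16,           # LoRA rank - larger means more capacity but more VRAM
    lora_alpha = 16,  # recommended: set alpha equal to the rank
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)
```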
[The dataset downloads from the Hugging Face Hub: a train split of 68,686 examples and a test split of 7,632 examples; progress bars omitted.]
Let's take a quick look at the dataset. We'll check what the 3rd image is and what caption it has.
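A sketch of loading and inspecting it (the dataset id "unsloth/LaTeX_OCR" is an assumption - substitute the dataset you actually use):

```python
from datasets import load_dataset

# Assumed dataset id - a LaTeX OCR dataset with 'image' and 'text' columns.
dataset = load_dataset("unsloth/LaTeX_OCR", split="train")

print(dataset)             # features and number of rows
dataset[2]["image"]        # the 3rd image (a PIL image, rendered inline in a notebook)
print(dataset[2]["text"])  # its LaTeX caption
```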
Dataset({
    features: ['image', 'text'],
    num_rows: 68686
})
'H ^ { \\prime } = \\beta N \\int d \\lambda \\biggl \\{ \\frac { 1 } { 2 \\beta ^ { 2 } N ^ { 2 } } \\partial _ { \\lambda } \\zeta ^ { \\dagger } \\partial _ { \\lambda } \\zeta + V ( \\lambda ) \\zeta ^ { \\dagger } \\zeta \\biggr \\} \\ .'
We can also render the LaTeX in the browser directly!
$\displaystyle H ^ { \prime } = \beta N \int d \lambda \biggl \{ \frac { 1 } { 2 \beta ^ { 2 } N ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \biggr \} \ .$

To prepare the dataset, all vision finetuning tasks should be formatted as follows:
[
{ "role": "user",
"content": [{"type": "text", "text": Q}, {"type": "image", "image": image} ]
},
{ "role": "assistant",
"content": [{"type": "text", "text": A} ]
},
]
Let's convert the dataset into the "correct" format for finetuning:
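A minimal conversion sketch; the helper name and the instruction string are illustrative:

```python
instruction = "Write the LaTeX representation for this image."

def convert_to_conversation(sample):
    # Wrap each (image, text) pair into the user/assistant message format above.
    conversation = [
        {"role": "user",
         "content": [
             {"type": "text",  "text": instruction},
             {"type": "image", "image": sample["image"]}]},
        {"role": "assistant",
         "content": [
             {"type": "text", "text": sample["text"]}]},
    ]
    return {"messages": conversation}

converted_dataset = [convert_to_conversation(sample) for sample in dataset]
```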
We look at how the conversations are structured for the first example:
{'messages': [{'role': 'user',
   'content': [{'type': 'text',
     'text': 'Write the LaTeX representation for this image.'},
    {'type': 'image',
     'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x40>}]},
  {'role': 'assistant',
   'content': [{'type': 'text',
     'text': '{ \\frac { N } { M } } \\in { \\bf Z } , { \\frac { M } { P } } \\in { \\bf Z } , { \\frac { P } { Q } } \\in { \\bf Z }'}]}]}

Before doing any finetuning, let's first see what the model outputs for the first example!
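A sketch of that baseline generation, assuming the model, tokenizer, dataset, and instruction defined above:

```python
from transformers import TextStreamer

FastVisionModel.for_inference(model)  # switch Unsloth into inference mode

image = dataset[0]["image"]
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(image, input_text, add_special_tokens=False, return_tensors="pt").to("cuda")

text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=128, use_cache=True)
```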
```latex
H' = \beta N \int d\lambda \left\{ \frac{1}{2\beta^2 N^2} \partial_\lambda \zeta^\dagger \partial_\lambda \zeta + V(\lambda) \zeta^\dagger \zeta \right\}.
```
<|im_end|>
Train the model
Now let's train our model. We only do 30 steps to speed things up, but you can set num_train_epochs = 1 for a full run and set max_steps = None. We also support DPOTrainer and GRPOTrainer for reinforcement learning!
We use our new UnslothVisionDataCollator which will help in our vision finetuning setup.
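A sketch of the trainer setup under those assumptions; the hyperparameters mirror the run logged below (batch size 2, gradient accumulation 4, 30 steps) but may need adjusting for your setup:

```python
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator
from trl import SFTTrainer, SFTConfig

FastVisionModel.for_training(model)  # switch Unsloth back into training mode

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    data_collator = UnslothVisionDataCollator(model, tokenizer),  # required for vision finetuning
    train_dataset = converted_dataset,
    args = SFTConfig(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 30,  # set num_train_epochs = 1 and max_steps = None for a full run
        learning_rate = 2e-4,
        fp16 = not is_bf16_supported(),
        bf16 = is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "none",
        # Settings needed for vision finetuning:
        remove_unused_columns = False,
        dataset_text_field = "",
        dataset_kwargs = {"skip_prepare_dataset": True},
        dataset_num_proc = 4,
        max_seq_length = 2048,
    ),
)

trainer_stats = trainer.train()
```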
Unsloth: Model does not have a default image size - using 512
GPU = Tesla T4. Max memory = 14.741 GB. 6.068 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 68,686 | Num Epochs = 1 | Total steps = 30
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 4 x 1) = 8
 "-____-"     Trainable parameters = 51,521,536/7,000,000,000 (0.74% trained)
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
Unsloth: Will smartly offload gradients to save VRAM!
219.9988 seconds used for training.
3.67 minutes used for training.
Peak reserved memory = 6.484 GB.
Peak reserved memory for training = 0.416 GB.
Peak reserved memory % of max memory = 43.986 %.
Peak reserved memory for training % of max memory = 2.822 %.
Inference
Let's run the model! You can change the instruction and input - leave the output blank!
We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.
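As a sketch, the generation call only changes in its sampling arguments; it reuses the inputs and streamer built as in the earlier inference cell:

```python
_ = model.generate(
    **inputs,
    streamer = text_streamer,
    max_new_tokens = 128,
    use_cache = True,
    do_sample = True,    # make sure sampling is on so temperature/min_p take effect
    temperature = 1.5,   # higher temperature for more varied outputs...
    min_p = 0.1,         # ...balanced by min_p filtering of unlikely tokens
)
```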
H ^ { \prime } = \beta N \int d \lambda \left\{ \frac { 1 } { 2 \beta ^ { 2 } \bar { N } ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \right\} .<|im_end|>
Now, if you want to load the LoRA adapters we just saved for inference, change False to True:
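A sketch of that loading path; "lora_model" is the assumed local directory the adapters were saved to:

```python
if False:  # change to True to load the saved LoRA adapters
    from unsloth import FastVisionModel
    model, tokenizer = FastVisionModel.from_pretrained(
        model_name = "lora_model",  # directory (or Hub repo) the adapters were saved to
        load_in_4bit = True,
    )
    FastVisionModel.for_inference(model)  # enable inference mode
```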
\frac { N } { M } \in \mathbf { Z } , \frac { P } { Q } \in \mathbf { Z } , P \in \mathbf { Z }<|im_end|>

Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens. See our docs for more deployment options.
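A sketch of both options; the repo name and token are placeholders, and the save_method value follows the merged_16bit option mentioned above (check the Unsloth docs if your version differs):

```python
if False:  # merge LoRA into the base weights and save locally in float16
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")

if False:  # or push the merged float16 model to the Hugging Face Hub
    model.push_to_hub_merged(
        "your_username/Qwen2.5-VL-7B-latex-ocr",  # placeholder repo name
        tokenizer,
        save_method = "merged_16bit",
        token = "hf_...",  # your personal access token
    )
```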
And we're done! If you have any questions about Unsloth, find a bug, want to keep up with the latest LLM news, or need help with a project, feel free to join our Discord channel!
Some other resources:
- Looking to use Unsloth locally? Read our Installation Guide for details on installing Unsloth on Windows, Docker, and AMD or Intel GPUs.
- Learn how to do Reinforcement Learning with our RL Guide and notebooks.
- Read our guides and notebooks for Text-to-speech (TTS) and vision model support.
- Explore our LLM Tutorials Directory to find dedicated guides for each model.
- Need help with Inference? Read our Inference & Deployment page for details on using vLLM, llama.cpp, Ollama etc.



