Ministral 3 VL (3B) Vision
Installation
Unsloth
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
Unrecognized keys in `rope_parameters` for 'rope_type'='yarn': {'max_position_embeddings'}
==((====))==  Unsloth 2025.11.6: Fast Ministral3 patching. Transformers: 5.0.0.dev0.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
We now add LoRA adapters for parameter-efficient finetuning - this lets us train only about 1% of all parameters.
[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part. Or you can select both! You can also select to finetune the attention or the MLP layers!
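The adapter setup from the standard Unsloth vision notebooks looks roughly like this; the `r` / `lora_alpha` values shown here are illustrative defaults, not values confirmed by this run's logs:

```python
from unsloth import FastVisionModel

# Sketch of the LoRA adapter setup. Toggle the finetune_* flags to train
# only the vision part, only the language part, or only attention / MLP.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,  # False to skip the vision tower
    finetune_language_layers   = True,  # False to skip the language model
    finetune_attention_modules = True,
    finetune_mlp_modules       = True,
    r            = 16,   # illustrative default
    lora_alpha   = 16,   # illustrative default
    lora_dropout = 0,
    bias         = "none",
    random_state = 3407,
)
```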
Unsloth: Making `model.base_model.model.model.vision_tower.transformer` require gradients
Let's take a quick overview of the dataset. We'll see what the 3rd image is and what caption it had.
Dataset({
    features: ['image', 'text'],
    num_rows: 68686
})
'H ^ { \\prime } = \\beta N \\int d \\lambda \\biggl \\{ \\frac { 1 } { 2 \\beta ^ { 2 } N ^ { 2 } } \\partial _ { \\lambda } \\zeta ^ { \\dagger } \\partial _ { \\lambda } \\zeta + V ( \\lambda ) \\zeta ^ { \\dagger } \\zeta \\biggr \\} \\ .'
We can also render the LaTeX in the browser directly!
$\displaystyle H ^ { \prime } = \beta N \int d \lambda \biggl \{ \frac { 1 } { 2 \beta ^ { 2 } N ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \biggr \} \ .$

To format the dataset, all vision finetuning tasks should be formatted as follows:
[
{ "role": "user",
"content": [{"type": "text", "text": Q}, {"type": "image", "image": image} ]
},
{ "role": "assistant",
"content": [{"type": "text", "text": A} ]
},
]
Let's convert the dataset into the "correct" format for finetuning:
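A minimal sketch of that conversion, assuming each dataset row carries the `image` and `text` fields shown above (the instruction string is illustrative):

```python
instruction = "Write the LaTeX representation for this image."

def convert_to_conversation(sample):
    # Wrap one (image, text) pair in the chat format described above:
    # the user turn holds the instruction plus the image, the assistant
    # turn holds the target LaTeX string.
    conversation = [
        {"role": "user",
         "content": [
             {"type": "text", "text": instruction},
             {"type": "image", "image": sample["image"]},
         ]},
        {"role": "assistant",
         "content": [{"type": "text", "text": sample["text"]}]},
    ]
    return {"messages": conversation}

# Applied over the whole dataset:
# converted_dataset = [convert_to_conversation(s) for s in dataset]
```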
We look at how the conversations are structured for the first example:
{'messages': [{'role': 'user',
   'content': [{'type': 'text',
     'text': 'Write the LaTeX representation for this image.'},
    {'type': 'image',
     'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x40>}]},
  {'role': 'assistant',
   'content': [{'type': 'text',
     'text': '{ \\frac { N } { M } } \\in { \\bf Z } , { \\frac { M } { P } } \\in { \\bf Z } , { \\frac { P } { Q } } \\in { \\bf Z }'}]}]}

Let's first see, before we do any finetuning, what the model outputs for the first example!
The expression in the image can be written in LaTeX as follows:
```latex
H' = \beta N \int \frac{1}{2 \beta^2 N^2} \partial_\lambda \zeta \partial_\lambda \zeta + V(\lambda) \zeta^2 \, d\lambda
```
However, if you want to make it more compact and standard for mathematical notation (assuming the integral is over \(\lambda\)):
```latex
H' = \beta N \int \left( \frac{1}{2 \beta^2 N^2} \partial_\lambda \zeta \partial_\lambda \zeta + V(\lambda) \zeta^2 \right) d\lambda
```
If you want to emphasize the structure with a clear separation of terms, you can also write it as:
```latex
H' = \beta N \int \left[ \frac{1}{2 \beta^2 N^2} \left( \frac{\partial \zeta}{\partial \lambda} \right)^2 + V(\lambda) \zeta^2 \right] d\lambda
```
</s>
warmup_ratio is deprecated and will be removed in v5.2. Use `warmup_steps` instead.
GPU = Tesla T4. Max memory = 14.741 GB. 7.697 GB of memory reserved.
The model is already on multiple devices. Skipping the move to device specified in `args`.
==((====))== Unsloth - 2x faster free finetuning | Num GPUs used = 1
\\ /| Num examples = 68,686 | Num Epochs = 1 | Total steps = 30
O^O/ \_/ \ Batch size per device = 4 | Gradient accumulation steps = 2
\ / Data Parallel GPUs = 1 | Total batch size (4 x 2 x 1) = 8
"-____-" Trainable parameters = 67,502,080 of 3,916,592,128 (1.72% trained)
Unsloth: Will smartly offload gradients to save VRAM!
454.3304 seconds used for training.
7.57 minutes used for training.
Peak reserved memory = 9.73 GB.
Peak reserved memory for training = 2.033 GB.
Peak reserved memory % of max memory = 66.006 %.
Peak reserved memory for training % of max memory = 13.791 %.
Inference
Let's run the model! You can change the instruction and input - leave the output blank!
We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.
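As a rough illustration of what `min_p` does: after temperature scaling and softmax, any token whose probability falls below `min_p` times the top token's probability is discarded, and the survivors are renormalized. A self-contained sketch (not Unsloth's actual sampler):

```python
import math

def min_p_probs(logits, temperature=1.5, min_p=0.1):
    """Temperature-scale logits, softmax, then apply min_p truncation."""
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # min_p rule: keep tokens with p >= min_p * max(p), zero out the rest.
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    z = sum(kept)
    return [p / z for p in kept]
```

A high temperature like 1.5 flattens the distribution, while `min_p = 0.1` still prunes clearly implausible tokens relative to the best one.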
H ^ { \prime } = \beta N \int d \lambda \left\{ \frac { 1 } { 2 \beta ^ { 2 } N ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \right\} .</s>
['lora_model/processor_config.json']
Now, if you want to load the LoRA adapters we just saved for inference, change `False` to `True`:
\frac { N } { M } \in \mathbb { Z } , \frac { M } { P } \in \mathbb { Z } , \frac { P } { Q } \in \mathbb { Z }</s>
Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.
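A sketch of those two calls; the local folder and Hub repo names are placeholders, not values from this run:

```python
# Merge LoRA weights into the base model and save as float16 locally.
model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit")

# Or upload the merged float16 model to your Hugging Face account.
model.push_to_hub_merged("your_name/model", tokenizer,
                         save_method="merged_16bit", token="hf_...")
```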
GGUF / llama.cpp Conversion
To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and default to saving as q8_0, but all methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our Wiki page):
- q8_0 - Fast conversion. High resource use, but generally acceptable.
- q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
- q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
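A sketch of the GGUF export calls; the local folder and Hub repo names are placeholders:

```python
# Convert the finetuned model to GGUF locally (q4_k_m chosen here).
model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

# Or convert and upload straight to your Hugging Face account.
model.push_to_hub_gguf("your_name/model", tokenizer,
                       quantization_method="q4_k_m", token="hf_...")
```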
[NEW] To finetune and auto export to Ollama, try our Ollama notebook
Now, use the model-unsloth.gguf file or model-unsloth-Q4_K_M.gguf file in llama.cpp.
And we're done! If you have any questions about Unsloth, find any bugs, want to keep up with the latest LLM news, or need help with your projects, feel free to join our Discord channel!
Some other resources:
- Train your own reasoning model - Llama GRPO notebook Free Colab
- Saving finetunes to Ollama. Free notebook
- Llama 3.2 Vision finetuning - Radiography use case. Free Colab
- See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!


