
Supervised fine-tuning (SFT) for Small Vision Language Models

Fine-tuning requires a GPU. If you don't have one locally, you can run this notebook for free on Google Colab using a free NVIDIA T4 GPU instance.

Open In Colab

What's in this notebook?

In this notebook you will learn how to fine-tune a Small Vision Language Model to increase task-specific accuracy. The model we will use is LFM2.5-VL-1.6B, and the task is Optical Character Recognition (OCR) of mathematical formulas. The same workflow and learnings apply to other vision tasks you can encode as (query, image, response) tuples.

We will cover:

  • Environment setup
  • Data preparation
  • Model training
  • Local inference with your new model
  • Saving and exporting the model in the format you need for deployment

Deployment options

LFM2.5 models are small and efficient, enabling deployment across a wide range of platforms:

Deployment Target       Use Case
📱 Android              Mobile apps on Android devices
📱 iOS                  Mobile apps on iPhone/iPad
🍎 Apple Silicon Mac    Local inference on Mac with MLX
🦙 llama.cpp            Local deployments on any hardware
🦙 Ollama               Local inference with easy setup
🖥️ LM Studio            Desktop app for local inference
vLLM                    Cloud deployments with high throughput
☁️ Modal                Serverless cloud deployment
🏗️ Baseten              Production ML infrastructure
🚀 Fal                  Fast inference API

Need help building with our models and tools?

Join the Liquid AI Discord Community and ask.

Join Discord

And now, let the fine-tuning begin!

Installation

[ ]
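The installation cell isn't preserved in this export; a minimal sketch of what it plausibly contains (the exact packages and version pins are assumptions):

	# Hypothetical install cell -- the original contents are not shown.
	%pip install --upgrade unsloth                     # Unsloth with vision (VLM) support
	%pip install --upgrade transformers trl datasets   # training stack used below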

Unsloth

[ ]
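The loading cell is likewise empty in this export. Based on the Unsloth banner below, it plausibly looks like this (the repository ID and arguments are assumptions):

	from unsloth import FastVisionModel

	# Load the base VLM with Unsloth. The log below shows 16-bit LoRA was used,
	# so 4-bit quantization is left off here.
	model, tokenizer = FastVisionModel.from_pretrained(
	    "LiquidAI/LFM2.5-VL-1.6B",  # assumed repo ID, taken from the model name above
	    load_in_4bit = False,
	)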
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2026.1.2: Fast Lfm2 patching. Transformers: 5.0.0.dev0.
   \\   /|    NVIDIA L4. Num GPUs = 1. Max memory: 22.161 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
Loading weights:   0%|          | 0/589 [00:00<?, ?it/s]
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.

We now add LoRA adapters so we only need to update a small number of parameters!

[ ]
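The adapter cell isn't shown; here is a sketch using Unsloth's FastVisionModel.get_peft_model. The rank and the choice of which layers to fine-tune are assumptions; the log below only confirms the language model receives gradients:

	model = FastVisionModel.get_peft_model(
	    model,
	    finetune_vision_layers     = True,  # also adapt the vision tower
	    finetune_language_layers   = True,  # adapt the language model
	    finetune_attention_modules = True,
	    finetune_mlp_modules       = True,
	    r = 16,            # LoRA rank (assumption) -- higher = more capacity, more VRAM
	    lora_alpha = 16,   # scaling factor (assumption)
	    lora_dropout = 0,
	    bias = "none",
	    random_state = 3407,
	)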
Unsloth: Making `model.base_model.model.model.language_model` require gradients

Data Prep

We'll be using a sampled dataset of handwritten math formulas. The goal is to convert these images into a computer-readable form, i.e. LaTeX, so we can render them. This is very useful for complex formulas.

You can access the dataset here. The full dataset is here. LFM-VL renders ChatML conversations with images like below:

	<|startoftext|><|im_start|>system
	You are a helpful multimodal assistant by Liquid AI.<|im_end|>
	<|im_start|>user
	<image>Describe this image.<|im_end|>
	<|im_start|>assistant
	This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|>

[ ]
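The rendered string below can be reproduced by applying the chat template to a toy conversation; a sketch (the example text is taken from the output itself):

	# Render a (user, assistant) exchange with the model's chat template.
	messages = [
	    {"role": "user", "content": [
	        {"type": "image"},
	        {"type": "text", "text": "What's in this image?"},
	    ]},
	    {"role": "assistant", "content": [
	        {"type": "text", "text": "I can see a cat sitting on a couch."},
	    ]},
	]
	print(tokenizer.apply_chat_template(messages, tokenize = False))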
"<|startoftext|><|im_start|>user\n<image>What's in this image?<|im_end|>\n<|im_start|>assistant\nI can see a cat sitting on a couch.<|im_end|>\n"

We take the first 3,000 rows of the dataset.

[ ]
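The loading cell is empty in this export; a sketch with Hugging Face datasets (the dataset ID below is a placeholder assumption -- use the links above for the actual repository):

	from datasets import load_dataset

	# Keep only the first 3,000 rows to keep this demo fast.
	dataset = load_dataset("unsloth/LaTeX_OCR", split = "train[:3000]")  # ID is an assumption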

Let's take a quick look at the dataset. We'll check what the 3rd image is and what caption it has.

[ ]
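The inspection cells aren't preserved; the outputs below likely come from something like:

	print(dataset)       # schema and row count
	dataset[2]["image"]  # displays the 3rd image in a notebook cell
	dataset[2]["text"]   # its ground-truth LaTeX caption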
Dataset({
    features: ['image', 'text'],
    num_rows: 3000
})
[ ]
[Image output: the 3rd handwritten-formula image from the dataset]
[ ]
'H ^ { \\prime } = \\beta N \\int d \\lambda \\biggl \\{ \\frac { 1 } { 2 \\beta ^ { 2 } N ^ { 2 } } \\partial _ { \\lambda } \\zeta ^ { \\dagger } \\partial _ { \\lambda } \\zeta + V ( \\lambda ) \\zeta ^ { \\dagger } \\zeta \\biggr \\} \\ .'

To format the dataset for vision fine-tuning, every sample should be structured as a list of messages like this:

	[
	{ "role": "user",
	  "content": [{"type": "text",  "text": Q}, {"type": "image", "image": image} ]
	},
	{ "role": "assistant",
	  "content": [{"type": "text",  "text": A} ]
	},
	]

[ ]
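The conversion helper isn't shown in this export; a sketch that produces exactly the structure above (the instruction string is taken from the first-example output further down):

	instruction = "Write the LaTeX representation for this image."

	def convert_to_conversation(sample):
	    # Wrap one (image, text) row into the message format shown above.
	    return {"messages": [
	        {"role": "user", "content": [
	            {"type": "text",  "text": instruction},
	            {"type": "image", "image": sample["image"]},
	        ]},
	        {"role": "assistant", "content": [
	            {"type": "text",  "text": sample["text"]},
	        ]},
	    ]}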

Let's convert the dataset into the "correct" format for finetuning:

[ ]
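Applying the helper to every row is then a one-liner (a plain Python list works with TRL's trainer):

	converted_dataset = [convert_to_conversation(sample) for sample in dataset]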

Let's look at how the conversation is structured for the first example:

[ ]
{'messages': [{'role': 'user',
   'content': [{'type': 'text',
     'text': 'Write the LaTeX representation for this image.'},
    {'type': 'image',
     'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x40>}]},
  {'role': 'assistant',
   'content': [{'type': 'text',
     'text': '{ \\frac { N } { M } } \\in { \\bf Z } , { \\frac { M } { P } } \\in { \\bf Z } , { \\frac { P } { Q } } \\in { \\bf Z }'}]}]}

Before any fine-tuning, let's see what the model outputs for the example above!

[ ]
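The baseline-inference cell isn't preserved; a sketch following Unsloth's usual vision-inference pattern. Which image is used is an assumption -- the output below matches the formula we inspected earlier, i.e. dataset[2]:

	from transformers import TextStreamer

	FastVisionModel.for_inference(model)  # switch Unsloth into inference mode

	image = dataset[2]["image"]
	messages = [{"role": "user", "content": [
	    {"type": "image"},
	    {"type": "text", "text": "Write the LaTeX representation for this image."},
	]}]
	input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
	inputs = tokenizer(image, input_text, add_special_tokens = False,
	                   return_tensors = "pt").to("cuda")

	text_streamer = TextStreamer(tokenizer, skip_prompt = True)
	_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
	                   use_cache = True, temperature = 1.5, min_p = 0.1)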
H ^ { \prime } = \beta N \int d \lambda \Big \{ \frac { 1 } { 2 \beta ^ { 2 } N ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \Big \} \ .<|im_end|>

Train the model

Now let's train our model. We do 30 steps to speed things up, but you can set num_train_epochs=1 for a full run and max_steps=None to remove the step cap. We also support TRL's DPOTrainer!

We use our new UnslothVisionDataCollator, which handles the vision-specific batching in our fine-tuning setup.

[ ]
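The trainer cell isn't preserved; a sketch that matches the run configuration logged below (batch size 2, gradient accumulation 4, 30 total steps). Hyperparameters not visible in the logs, such as the learning rate and warmup, are assumptions:

	from trl import SFTTrainer, SFTConfig
	from unsloth.trainer import UnslothVisionDataCollator

	FastVisionModel.for_training(model)  # back to training mode

	trainer = SFTTrainer(
	    model = model,
	    tokenizer = tokenizer,
	    data_collator = UnslothVisionDataCollator(model, tokenizer),
	    train_dataset = converted_dataset,
	    args = SFTConfig(
	        per_device_train_batch_size = 2,  # matches the log below
	        gradient_accumulation_steps = 4,  # effective batch size 2 x 4 = 8
	        max_steps = 30,                   # set None (with num_train_epochs=1) for a full run
	        warmup_steps = 5,                 # assumption; warmup_ratio is deprecated (see log)
	        learning_rate = 2e-4,             # assumption
	        logging_steps = 1,
	        optim = "adamw_8bit",
	        weight_decay = 0.01,
	        lr_scheduler_type = "linear",
	        seed = 3407,
	        output_dir = "outputs",
	        report_to = "none",
	        # Required for vision fine-tuning:
	        remove_unused_columns = False,
	        dataset_text_field = "",
	        dataset_kwargs = {"skip_prepare_dataset": True},
	        max_seq_length = 2048,
	    ),
	)

The skip_prepare_dataset flag hands tokenization entirely to the vision collator, which is why the dataset can stay in the raw messages format above.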
warmup_ratio is deprecated and will be removed in v5.2. Use `warmup_steps` instead.
Unsloth: Model does not have a default image size - using 512
[ ]
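The memory-probe cell producing the two lines below is plausibly:

	import torch

	# Record the starting GPU memory so we can isolate training's footprint later.
	gpu_stats = torch.cuda.get_device_properties(0)
	start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
	max_memory = round(gpu_stats.total_memory / 1024**3, 3)
	print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
	print(f"{start_gpu_memory} GB of memory reserved.")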
GPU = NVIDIA L4. Max memory = 22.161 GB.
3.109 GB of memory reserved.
[ ]
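The actual training call is a one-liner:

	trainer_stats = trainer.train()  # kicks off the 30-step run logged below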
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 3,000 | Num Epochs = 1 | Total steps = 30
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 4 x 1) = 8
 "-____-"     Trainable parameters = 9,142,272 of 1,605,768,176 (0.57% trained)
Unsloth: Will smartly offload gradients to save VRAM!
[ ]
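The final stats below come from comparing peak reserved memory against the baseline recorded before training; a sketch:

	used_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
	used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
	print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
	print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
	print(f"Peak reserved memory = {used_memory} GB.")
	print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
	print(f"Peak reserved memory % of max memory = {round(used_memory / max_memory * 100, 3)} %.")
	print(f"Peak reserved memory for training % of max memory = {round(used_memory_for_lora / max_memory * 100, 3)} %.")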
88.8178 seconds used for training.
1.48 minutes used for training.
Peak reserved memory = 3.639 GB.
Peak reserved memory for training = 0.53 GB.
Peak reserved memory % of max memory = 16.421 %.
Peak reserved memory for training % of max memory = 2.392 %.

Inference

Let's run the model! You can change the instruction and the input image.

We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.

[ ]
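This repeats the baseline generation from before training, reusing input_text and text_streamer from that cell; only the adapters have changed. A sketch:

	# Same generation as the baseline cell, now with the fine-tuned adapters active.
	FastVisionModel.for_inference(model)
	inputs = tokenizer(dataset[2]["image"], input_text, add_special_tokens = False,
	                   return_tensors = "pt").to("cuda")
	_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,
	                   use_cache = True, temperature = 1.5, min_p = 0.1)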
H ^ { \prime } = \beta N \int d \lambda \Big \{ \frac { 1 } { 2 \beta ^ { 2 } N ^ { 2 } } \partial _ { \lambda } \zeta ^ { \dagger } \partial _ { \lambda } \zeta + V ( \lambda ) \zeta ^ { \dagger } \zeta \Big \} \ .<|im_end|>

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
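The save cell (its output path appears below) is plausibly the following; your_name is a placeholder:

	model.save_pretrained("lora_model")      # local save: adapters only
	tokenizer.save_pretrained("lora_model")  # writes processor_config.json (see output)

	# Online save instead (requires a Hugging Face token):
	# model.push_to_hub("your_name/lora_model", token = "...")
	# tokenizer.push_to_hub("your_name/lora_model", token = "...")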
['lora_model/processor_config.json']

Now if you want to load the LoRA adapters we just saved for inference, set False to True:

[ ]
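A sketch of the reload cell, following the "set False to True" convention mentioned above:

	if False:  # set to True to reload the adapters saved above
	    from unsloth import FastVisionModel
	    model, tokenizer = FastVisionModel.from_pretrained(
	        "lora_model",   # the directory we just saved
	        load_in_4bit = False,
	    )
	    FastVisionModel.for_inference(model)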
\frac { N } { M } \in { \bf Z } , \frac { M } { P } \in { \bf Z } , \frac { P } { Q } \in { \bf Z }<|im_end|>

Saving to float16 for VLLM

We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.

[ ]
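A sketch of the merged-save cell, guarded with if False so nothing runs by accident; your_name is a placeholder:

	# Merge LoRA weights into the base model and save as float16 for vLLM.
	if False:
	    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
	if False:  # upload to your Hugging Face account instead
	    model.push_to_hub_merged("your_name/model", tokenizer,
	                             save_method = "merged_16bit", token = "...")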

GGUF / llama.cpp Conversion

To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and save to q8_0 by default. All quantization methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.

[NEW] To finetune and auto export to Ollama, try our Ollama notebook

[ ]
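A sketch of the GGUF export cell (again guarded with if False; your_name is a placeholder):

	if False:  # local save, q8_0 by default
	    model.save_pretrained_gguf("model", tokenizer)
	if False:  # pick a specific method from the list above, e.g. q4_k_m
	    model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
	if False:  # upload to the Hugging Face Hub
	    model.push_to_hub_gguf("your_name/model", tokenizer,
	                           quantization_method = "q4_k_m", token = "...")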

Now, use the model-unsloth.gguf file or model-unsloth-Q4_K_M.gguf file in llama.cpp.