Deepseek OCR (3B) Eval
Installation
Unsloth
Let's first download the OCR model to local storage.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning. 🦥 Unsloth Zoo will now patch everything to make training faster!
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR. This is not supported for all configurations of models and can yield errors.
Unsloth: WARNING `trust_remote_code` is True. Are you certain you want to do remote code execution?
==((====))==  Unsloth 2025.10.12: Fast Deepseekocr patching. Transformers: 4.56.2.
   \\   /|    Tesla T4. Num GPUs = 1. Max memory: 14.741 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.8.0+cu126. CUDA: 7.5. CUDA Toolkit: 12.6. Triton: 3.4.0
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.32.post2. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
Some weights of DeepseekOCRForCausalLM were not initialized from the model checkpoint at ./deepseek_ocr and are newly initialized: ['model.vision_model.embeddings.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Let's Evaluate Deepseek-OCR Baseline Performance on Persian Transcription
['', '\nFree OCR.']
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
انضباطم نندم حقيقتن باورم نميتند
===============save results:===============
Ground truth: 'انضباطمم شدم حقیقتن باورم نمیشد'
Baseline model performance: 23% Character Error Rate (CER) on this sample!
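For reference, CER is the Levenshtein (edit) distance between the predicted and ground-truth strings, divided by the length of the ground truth. The helper below is a minimal pure-Python sketch, not the exact code used in this notebook (libraries like `jiwer` are commonly used in practice):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    # Character Error Rate = edit distance / reference length.
    return levenshtein(prediction, reference) / len(reference)
```

Applied to the baseline prediction and the ground-truth string above, this gives the 23% figure quoted.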
Let's finetune Deepseek-OCR !
We now add LoRA adapters for parameter-efficient finetuning, which lets us train only a small fraction (here about 2%) of all parameters.
[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part. Or you can select both! You can also select to finetune the attention or the MLP layers!
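A sketch of what that adapter setup typically looks like with Unsloth's vision API. The rank, alpha, and seed values here are illustrative assumptions, not necessarily the exact ones used in this run:

```python
from unsloth import FastVisionModel

# Illustrative configuration: any of the four finetune_* flags can be
# toggled to train only the vision tower, only the language model,
# only attention, or only the MLP layers.
model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,      # train the vision encoder
    finetune_language_layers=True,    # train the language model
    finetune_attention_modules=True,  # LoRA on attention projections
    finetune_mlp_modules=True,        # LoRA on MLP projections
    r=16,             # LoRA rank (assumed value)
    lora_alpha=16,    # scaling factor (assumed value)
    lora_dropout=0,
    bias="none",
    random_state=3407,
)
```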
Unsloth: Making `model.base_model.model.model` require gradients
Data Prep
We'll be using a dataset for Persian OCR. The goal is to convert these images into a computer-readable form, i.e. text, which can be very useful for digitizing Persian documents.
You can access the dataset here.
To format the dataset, all vision finetuning tasks should be formatted as follows:
[
{ "role": "<|User|>",
"content": "",
"images": []
},
{ "role": "<|Assistant|>",
"content": ""
},
]
Let's convert the dataset into the "correct" format for finetuning:
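A minimal conversion helper might look like the following. The `image` and `text` field names are assumptions about this dataset's schema, so adjust them to match the actual columns:

```python
def to_conversation(row):
    # Map one dataset row to the DeepSeek-OCR chat format shown above.
    # row["image"] and row["text"] are assumed field names.
    return {
        "messages": [
            {
                "role": "<|User|>",
                "content": "<image>\nFree OCR. ",
                "images": [row["image"]],
            },
            {
                "role": "<|Assistant|>",
                "content": row["text"],
            },
        ]
    }
```

With Hugging Face `datasets`, this would typically be applied via `dataset.map(to_conversation)`.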
We look at how the conversations are structured for the first example:
{'messages': [{'role': '<|User|>',
   'content': '<image>\nFree OCR. ',
   'images': [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=218x48>]},
  {'role': '<|Assistant|>', 'content': 'همهاش جبره و اختیار توهمه'}]}

/tmp/ipython-input-860910537.py:12: FutureWarning: `tokenizer` is deprecated and will be removed in version 5.0.0 for `Trainer.__init__`. Use `processing_class` instead.
  trainer = Trainer(
GPU = NVIDIA L4. Max memory = 22.161 GB. 8.012 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 1,000 | Num Epochs = 1 | Total steps = 60
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 4
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 4 x 1) = 8
 "-____-"     Trainable parameters = 77,509,632 of 3,413,615,872 (2.27% trained)
Unsloth: Not an error, but DeepseekOCRForCausalLM does not accept `num_items_in_batch`. Using gradient accumulation will be very slightly less accurate. Read more on gradient accumulation issues here: https://unsloth.ai/blog/gradient
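The effective batch size and trainable-parameter fraction in the log above follow directly from the configuration:

```python
# Effective batch size = per-device batch * grad accumulation * GPUs.
per_device_batch = 2
grad_accum_steps = 4
num_gpus = 1
effective_batch = per_device_batch * grad_accum_steps * num_gpus  # 8

# Fraction of weights that actually receive gradients under LoRA.
trainable = 77_509_632
total = 3_413_615_872
pct = 100 * trainable / total  # ~2.27%
print(effective_batch, round(pct, 2))
```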
Unsloth: Will smartly offload gradients to save VRAM!
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR. This is not supported for all configurations of models and can yield errors.
['', '\nFree OCR.']
انضباطم نشدم حقیقتن باورم نمیشد
===============save results:===============
With only 60 steps, we dramatically improved the transcription quality. The Character Error Rate (CER) on this single sample dropped from 23% to 6%, a 74% relative reduction!
| Type | OCR |
|---|---|
| Baseline (Pre-Finetune) | انضباطم نندم حقيقتن باورم نميتند |
| Finetuned (60 steps) | انضباطم نشدم حقیقتن باورم نمیشد |
| Ground Truth | انضباطمم شدم حقیقتن باورم نمیشد |
You are using a model of type deepseek_vl_v2 to instantiate a model of type DeepseekOCR. This is not supported for all configurations of models and can yield errors.
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, change `False` to `True`:
Saving to float16 for VLLM
We also support saving to float16 directly. Select `merged_16bit` for float16. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.
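A sketch of the two save paths, assuming Unsloth's merged-save helpers; the repo name and token are placeholders, not values from this notebook:

```python
# Merge the LoRA adapters into the base weights and save as float16,
# which vLLM can load directly.
model.save_pretrained_merged("model", tokenizer, save_method="merged_16bit")

# Or push the merged model straight to the Hugging Face Hub.
# "your-username/deepseek-ocr-persian" is a placeholder repo name.
model.push_to_hub_merged(
    "your-username/deepseek-ocr-persian",
    tokenizer,
    save_method="merged_16bit",
    token="hf_...",  # your personal Hugging Face token
)
```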
And we're done! If you have any questions about Unsloth, we have a Discord channel! If you find any bugs, want to keep up to date with the latest LLM developments, or need help with projects, feel free to join our Discord!
Some other resources:
- Train your own reasoning model - Llama GRPO notebook Free Colab
- Saving finetunes to Ollama. Free notebook
- Llama 3.2 Vision finetuning - Radiography use case. Free Colab
- See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!


