
Continued Pre-training (CPT) with Unsloth for Translation

Fine-tuning requires a GPU. If you don't have one locally, you can run this notebook on Google Colab using a free NVIDIA T4 GPU instance.

Open In Colab

What's in this notebook?

In this notebook you will learn how to use Continued Pre-training (CPT) with Unsloth to adapt a language model for translation tasks. We will use the LFM2.5-1.2B-Base model and perform continued pre-training on Korean Wikipedia data, followed by instruction fine-tuning on Korean translation examples. This approach is ideal for adapting models to specific languages or translation domains.

We will cover:

  • Environment setup
  • Data preparation for translation
  • Continued pre-training on domain-specific data
  • Instruction fine-tuning for translation tasks
  • Local inference with your new model
  • Saving the model and exporting it in the format you need for deployment

Deployment options

LFM2.5 models are small and efficient, enabling deployment across a wide range of platforms:

| Deployment Target | Use Case |
| --- | --- |
| πŸ“± Android | Mobile apps on Android devices |
| πŸ“± iOS | Mobile apps on iPhone/iPad |
| 🍎 Apple Silicon Mac | Local inference on Mac with MLX |
| πŸ¦™ llama.cpp | Local deployments on any hardware |
| πŸ¦™ Ollama | Local inference with easy setup |
| πŸ–₯️ LM Studio | Desktop app for local inference |
| ⚑ vLLM | Cloud deployments with high throughput |
| ☁️ Modal | Serverless cloud deployment |
| πŸ—οΈ Baseten | Production ML infrastructure |
| πŸš€ Fal | Fast inference API |

Need help building with our models and tools?

Join the Liquid AI Discord Community and ask.

Join Discord

And now, let the fine-tuning begin!

Installation
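
The installation cell is collapsed in this export. A minimal sketch of what it typically contains (the actual Colab cell may pin additional dependencies or versions):

```python
# Install Unsloth; on Colab this is sometimes accompanied by version pins
# for transformers, trl, and peft.
%pip install unsloth
```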

[ ]

Unsloth

[ ]
env: UNSLOTH_RETURN_LOGITS=1 # Run this to disable CCE since it is not supported for CPT
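
The next collapsed cell loads the base model through Unsloth. A minimal sketch, assuming the Hugging Face repo id `LiquidAI/LFM2.5-1.2B-Base` and a 2048-token context (both assumptions; check the model card for the exact id):

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # assumed context length for training

# Repo id is an assumption; check the LFM2.5-1.2B-Base model card for the exact name.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "LiquidAI/LFM2.5-1.2B-Base",
    max_seq_length = max_seq_length,
    dtype = None,          # auto-detects bfloat16 on supported GPUs
    load_in_4bit = False,  # 16-bit LoRA, matching the log below
)
```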
[ ]
πŸ¦₯ Unsloth: Will patch your computer to enable 2x faster free finetuning.
πŸ¦₯ Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2026.1.2: Fast Lfm2 patching. Transformers: 4.57.3.
   \\   /|    NVIDIA L4. Num GPUs = 1. Max memory: 22.161 GB. Platform: Linux.
O^O/ \_/ \    Torch: 2.9.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.5.0
\        /    Bfloat16 = TRUE. FA [Xformers = 0.0.33.post1. FA2 = False]
 "-____-"     Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: QLoRA and full finetuning all not selected. Switching to 16bit LoRA.
model.safetensors:   0%|          | 0.00/2.34G [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/132 [00:00<?, ?B/s]
tokenizer_config.json: 0.00B [00:00, ?B/s]
tokenizer.json: 0.00B [00:00, ?B/s]
special_tokens_map.json:   0%|          | 0.00/434 [00:00<?, ?B/s]
chat_template.jinja: 0.00B [00:00, ?B/s]

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

We also add embed_tokens and lm_head to allow the model to learn out-of-distribution data.
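
A minimal sketch of the adapter setup. The rank, alpha, and projection names are assumptions (LFM2's hybrid blocks may expose different module names); the key point is that embed_tokens and lm_head are included in target_modules:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 128,  # rank is an assumption; larger ranks help when learning a new language
    target_modules = [
        # Projection names follow common defaults; check model.named_modules() for LFM2.
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",   # lets the model learn out-of-distribution Korean text
    ],
    lora_alpha = 32,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",  # offloads activations to save VRAM
    random_state = 3407,
    use_rslora = True,  # rank-stabilized LoRA
)
```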

[ ]
/usr/local/lib/python3.12/dist-packages/peft/tuners/tuners_utils.py:916: UserWarning: Model with `tie_word_embeddings=True` and the tied_target_modules=['model.embed_tokens', 'lm_head'] are part of the adapter. This can lead to complications, for example when merging the adapter or converting your model to formats other than safetensors. See for example https://github.com/huggingface/peft/issues/2018.
  warnings.warn(
Unsloth: Making `model.base_model.model.model.embed_tokens` require gradients

Data Prep

We now use the Korean subset of the Wikipedia dataset to first continually pretrain the model. You can use any language you like! Go to Wikipedia's List of Languages to find your own language!

[NOTE] To train only on completions (ignoring the user's input) read TRL's docs here.

[NOTE] Remember to add the EOS_TOKEN to the tokenized output!! Otherwise you'll get infinite generations!

If you want to use the llama-3 template for ShareGPT datasets, try our conversational notebook

For text completions like novel writing, try this notebook.

[NOTE] Use https://translate.google.com to translate from English to Korean!
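
The next collapsed cell formats each Wikipedia article into a plain-text training example. A minimal sketch; the Korean wrapper text is an assumption (translate it for your own language), and note the EOS token appended as warned above:

```python
# Wikipedia-style prompt for continued pretraining. The Korean wrapper text
# ("Wikipedia article / Title / Article") is an assumption; produce your own
# via https://translate.google.com for other languages.
wikipedia_prompt = """μœ„ν‚€ν”Όλ””μ•„ 기사
### 제λͺ©: {}

### 기사:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise generations never stop

def formatting_prompts_func(examples):
    titles = examples["title"]
    texts  = examples["text"]
    outputs = []
    for title, text in zip(titles, texts):
        outputs.append(wikipedia_prompt.format(title, text) + EOS_TOKEN)
    return {"text": outputs}
```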

[ ]

We only use 1% of the dataset to speed things up! Use more for longer runs!
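
A minimal sketch of loading the Korean split; the 1% slice corresponds to the 6,478 mapped examples in the log below:

```python
from datasets import load_dataset

# Korean Wikipedia dump; swap "20231101.ko" for any other language code.
dataset = load_dataset("wikimedia/wikipedia", "20231101.ko", split="train[:1%]")
dataset = dataset.map(formatting_prompts_func, batched=True)
```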

[ ]
README.md: 0.00B [00:00, ?B/s]
20231101.ko/train-00000-of-00003.parquet:   0%|          | 0.00/400M [00:00<?, ?B/s]
20231101.ko/train-00001-of-00003.parquet:   0%|          | 0.00/205M [00:00<?, ?B/s]
20231101.ko/train-00002-of-00003.parquet:   0%|          | 0.00/177M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/647897 [00:00<?, ? examples/s]
Map:   0%|          | 0/6478 [00:00<?, ? examples/s]

Continued Pretraining

Now let's use Unsloth's UnslothTrainer! More docs here: TRL SFT docs. We cap max_steps to keep this run short, but you can set num_train_epochs=1 for a full run and turn off the cap by setting max_steps=None.

Also set embedding_learning_rate to a value 2 to 10x smaller than learning_rate to make continued pretraining work well!
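
A minimal sketch of the trainer setup. The hyperparameters are illustrative assumptions, except where they match the log below (batch size 2, gradient accumulation 8, 120 total steps); argument placement may vary slightly with your TRL version:

```python
from unsloth import UnslothTrainer, UnslothTrainingArguments

trainer = UnslothTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    args = UnslothTrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 8,
        max_steps = 120,                 # set to None with num_train_epochs = 1 for a full run
        warmup_steps = 10,
        learning_rate = 5e-5,
        embedding_learning_rate = 1e-5,  # 2-10x smaller than learning_rate
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```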

[ ]
Unsloth: Tokenizing ["text"] (num_proc=16):   0%|          | 0/6478 [00:00<?, ? examples/s]
πŸ¦₯ Unsloth: Padding-free auto-enabled, enabling faster training.
[ ]
GPU = NVIDIA L4. Max memory = 22.161 GB.
2.598 GB of memory reserved.
[ ]
The model is already on multiple devices. Skipping the move to device specified in `args`.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 6,478 | Num Epochs = 1 | Total steps = 120
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 8
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 8 x 1) = 16
 "-____-"     Trainable parameters = 97,517,568 of 1,276,508,928 (7.64% trained)
Unsloth: Will smartly offload gradients to save VRAM!
/usr/local/lib/python3.12/dist-packages/peft/utils/save_and_load.py:279: UserWarning: Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.
  warnings.warn("Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.")

Instruction Finetuning

We now use the Alpaca GPT-4 dataset, translated into Korean!

Go to vicgalle/alpaca-gpt4 for the original GPT-4 Alpaca dataset, or to the MultilingualSIFT project for other translations of the Alpaca dataset.
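
A minimal sketch of loading the Korean Alpaca data. The repo id is an assumption inferred from the alpaca-gpt4-korean.json file shown in the log below; substitute the MultilingualSIFT repo you actually want:

```python
from datasets import load_dataset

# Repo id is an assumption; the log below shows alpaca-gpt4-korean.json being downloaded.
alpaca_dataset = load_dataset("FreedomIntelligence/alpaca-gpt4-korean", split="train")
```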

[ ]
README.md:   0%|          | 0.00/124 [00:00<?, ?B/s]
Repo card metadata block was not found. Setting CardData to empty.
WARNING:huggingface_hub.repocard:Repo card metadata block was not found. Setting CardData to empty.
alpaca-gpt4-korean.json:   0%|          | 0.00/51.6M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/49969 [00:00<?, ? examples/s]

We print 1 example:

[ ]
{'conversations': [{'from': 'human', 'value': 'μž¬ν™œμš© 캠페인 μŠ¬λ‘œκ±΄μ„ μ œμ‹œν•˜μ„Έμš”.\n'}, {'from': 'gpt', 'value': '1. "λ”μš± 녹색 미래λ₯Ό μœ„ν•΄ ν•¨κ»˜ 쀄이고, μž¬μ‚¬μš©ν•˜κ³ , μž¬ν™œμš©ν•˜μ„Έμš”."\n2. "더 λ‚˜μ€ 내일을 μœ„ν•΄ 였늘 λ°”λ‘œ μž¬ν™œμš©ν•˜μ„Έμš”."\n3. "μ“°λ ˆκΈ°λ₯Ό 보물둜 λ§Œλ“œλŠ” 법 - μž¬ν™œμš©!"\n4. "μΈμƒμ˜ μˆœν™˜μ„ μœ„ν•΄ μž¬ν™œμš©ν•˜μ„Έμš”."\n5. "μžμ›μ„ 아끼고 더 많이 μž¬ν™œμš©ν•˜μ„Έμš”."'}], 'id': '23712'}

We again use https://translate.google.com/ to translate the Alpaca prompt format into Korean.
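
A minimal sketch of the translated Alpaca template and the mapping function. The Korean wrapper text matches the prompts visible in the inference outputs further below; the conversation parsing is an assumption based on the example printed above:

```python
# Korean Alpaca prompt, translated from the English Alpaca format.
korean_alpaca_prompt = """λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.

### μ§€μΉ¨:
{}

### 응닡:
{}"""

def formatting_conversations_func(examples):
    texts = []
    for convo in examples["conversations"]:
        # Each example is a [human, gpt] pair, as in the printed sample above.
        instruction = convo[0]["value"]
        response    = convo[1]["value"]
        texts.append(korean_alpaca_prompt.format(instruction, response) + EOS_TOKEN)
    return {"text": texts}

alpaca_dataset = alpaca_dataset.map(formatting_conversations_func, batched=True)
```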

[ ]
Map:   0%|          | 0/49969 [00:00<?, ? examples/s]

We again employ UnslothTrainer and do instruction finetuning!

[ ]
Unsloth: Tokenizing ["text"] (num_proc=16):   0%|          | 0/49969 [00:00<?, ? examples/s]
[ ]
The model is already on multiple devices. Skipping the move to device specified in `args`.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs used = 1
   \\   /|    Num examples = 49,969 | Num Epochs = 1 | Total steps = 120
O^O/ \_/ \    Batch size per device = 2 | Gradient accumulation steps = 8
\        /    Data Parallel GPUs = 1 | Total batch size (2 x 8 x 1) = 16
 "-____-"     Trainable parameters = 97,517,568 of 1,276,508,928 (7.64% trained)
Unsloth: Will smartly offload gradients to save VRAM!
/usr/local/lib/python3.12/dist-packages/peft/utils/save_and_load.py:279: UserWarning: Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.
  warnings.warn("Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.")
[ ]
226.8895 seconds used for training.
3.78 minutes used for training.
Peak reserved memory = 4.361 GB.
Peak reserved memory for training = 1.763 GB.
Peak reserved memory % of max memory = 19.679 %.
Peak reserved memory for training % of max memory = 7.955 %.

Inference

Let's run the model! You can change the instruction and input - leave the output blank!

Remember to use https://translate.google.com/!
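
A minimal sketch of the inference call; the Fibonacci instruction matches the output shown below, and FastLanguageModel.for_inference enables Unsloth's faster generation path:

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference

inputs = tokenizer(
    [
        korean_alpaca_prompt.format(
            "ν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ„ κ³„μ†ν•˜μ„Έμš”: 1, 1, 2, 3, 5, 8,",  # "Continue the Fibonacci sequence"
            "",  # leave the output blank for generation
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs))
```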

[ ]
['<|startoftext|>λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.\n\n### μ§€μΉ¨:\nν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ„ κ³„μ†ν•˜μ„Έμš”: 1, 1, 2, 3, 5, 8,\n\n### 응닡:\nν”Όλ³΄λ‚˜μΉ˜ μˆ˜μ—΄μ€ 각 μˆ«μžκ°€ μ•žμ˜ 두 숫자의 합인 μˆ˜μ—΄μž…λ‹ˆλ‹€. 이 μˆ˜μ—΄μ€ 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ']

You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
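
A minimal sketch using transformers' TextStreamer, with the Korean-music question from the output below:

```python
from transformers import TextStreamer

inputs = tokenizer(
    [korean_alpaca_prompt.format("ν•œκ΅­μŒμ•…μ€ μ–΄λ–€κ°€μš”?", "")],  # "What is Korean music like?"
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 256)
```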

[ ]
<|startoftext|>λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.

### μ§€μΉ¨:
ν•œκ΅­μŒμ•…μ€ μ–΄λ–€κ°€μš”?

### 응닡:
ν•œκ΅­μŒμ•…μ€ 전톡 μŒμ•…κ³Ό ν˜„λŒ€ μŒμ•…μ„ ν¬ν•¨ν•œ λ‹€μ–‘ν•œ μž₯λ₯΄λ₯Ό ν¬ν•¨ν•©λ‹ˆλ‹€. 전톡 μŒμ•…μ€ μ’…μ’… 악기와 λ…Έλž˜λ₯Ό μ‚¬μš©ν•˜μ—¬ ν•œκ΅­μ˜ 역사와 λ¬Έν™”λ₯Ό λ°˜μ˜ν•©λ‹ˆλ‹€. ν˜„λŒ€ μŒμ•…μ€ 전톡 μŒμ•…μ˜ 영ν–₯을 λ°›μ•„ λ°œμ „ν•΄μ™”μœΌλ©°, μ „μž μŒμ•…, 둝, 팝 λ“± λ‹€μ–‘ν•œ μž₯λ₯΄κ°€ μžˆμŠ΅λ‹ˆλ‹€. ν•œκ΅­ μŒμ•…μ€ μ „ μ„Έκ³„μ μœΌλ‘œ 인기λ₯Ό μ–»κ³  있으며, λ§Žμ€ μ•„ν‹°μŠ€νŠΈλ“€μ΄ ꡭ제적인 성곡을 κ±°λ‘μ—ˆμŠ΅λ‹ˆλ‹€.<|im_end|>

By using https://translate.google.com/ we get

	Korean music covers a variety of genres, including both traditional and modern music. Traditional music often uses instruments and song to reflect Korea's history and culture. Modern music has developed under the influence of traditional music and spans genres such as electronic music, rock, and pop. Korean music is gaining popularity worldwide, and many artists have achieved international success.

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!
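
A minimal sketch of both options. The local directory name lora_model matches the file paths in the output below; the Hub repo name and token are placeholders:

```python
# Local save (LoRA adapters + tokenizer only)
model.save_pretrained("lora_model")
tokenizer.save_pretrained("lora_model")

# Online save to the Hugging Face Hub ("your_name/lora_model" is a placeholder)
# model.push_to_hub("your_name/lora_model", token = "...")
# tokenizer.push_to_hub("your_name/lora_model", token = "...")
```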

[ ]
/usr/local/lib/python3.12/dist-packages/peft/utils/save_and_load.py:279: UserWarning: Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.
  warnings.warn("Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.")
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/chat_template.jinja',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, set False to True:
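
A minimal sketch of the reload path that the cell guards with an if False block:

```python
if False:  # flip to True to reload the adapters saved above
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",       # the directory saved above
        max_seq_length = max_seq_length,
        dtype = None,
        load_in_4bit = False,
    )
    FastLanguageModel.for_inference(model)
```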

[ ]
<|startoftext|>λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.

### μ§€μΉ¨:
지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

### 응닡:
지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.

지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…

By using https://translate.google.com/ we can see that the output is just the instruction, "Describe the Earth broadly.", repeated over and over rather than an actual answer.

Yikes, the language model is a bit whacky! Changing the temperature and using sampling should make the output much better!

You can also use Hugging Face's AutoPeftModelForCausalLM from PEFT. Only use this if you do not have unsloth installed. It can be hopelessly slow, since 4-bit model downloading is not supported, and Unsloth's inference is 2x faster.
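
A minimal sketch of the plain Hugging Face / PEFT route, for when Unsloth is not available:

```python
if False:  # only if you do not have unsloth installed
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained("lora_model")
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```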

[ ]

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow saving just the LoRA adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens to create your personal token.
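
A minimal sketch of the merged export; save_method selects the precision, and the Hub repo name and token are placeholders:

```python
# Merge LoRA into the base weights and save locally in float16 (for vLLM)
if False:
    model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")

# Or push the merged model straight to the Hub (repo name and token are placeholders)
if False:
    model.push_to_hub_merged("your_name/model", tokenizer,
                             save_method = "merged_16bit", token = "hf_...")
```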

[ ]

GGUF / llama.cpp Conversion

Saving to GGUF / llama.cpp is now supported natively! We clone llama.cpp and default to saving as q8_0, though all methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF; see the sketch after the list below.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
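
A minimal sketch of the GGUF export (the Hub repo name and token are placeholders):

```python
# Local GGUF export; change quantization_method to "q8_0", "q5_k_m", etc.
if False:
    model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Or push the GGUF straight to the Hugging Face Hub
if False:
    model.push_to_hub_gguf("your_name/model", tokenizer,
                           quantization_method = "q4_k_m", token = "hf_...")
```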
[ ]

Now, use the model-unsloth.gguf file or model-unsloth-Q4_K_M.gguf file in llama.cpp.

And we're done! If you have any questions about Unsloth, find bugs, want to keep up with the latest LLM news, or need help with projects, feel free to join the Unsloth Discord!

Some other links:

  1. Train your own reasoning model - Llama GRPO notebook Free Colab
  2. Saving finetunes to Ollama. Free notebook
  3. Llama 3.2 Vision finetuning - Radiography use case. Free Colab
  4. See notebooks for DPO, ORPO, Continued pretraining, conversational finetuning and more on our documentation!

Join Discord if you need help + ⭐️ Star us on Github ⭐️

This notebook and all Unsloth notebooks are licensed LGPL-3.0.