
Supervised Fine-tuning (SFT) with TRL

Fine-tuning requires a GPU. If you don't have one locally, you can run this notebook for free on Google Colab using a free NVIDIA T4 GPU instance.


What's in this notebook?

In this notebook you will learn how to perform Supervised Fine-tuning (SFT) using the Hugging Face TRL (Transformer Reinforcement Learning) library. We will use the LFM2.5-1.2B-Instruct model and fine-tune it on the SmolTalk dataset. You'll learn both standard SFT and parameter-efficient fine-tuning with LoRA for constrained hardware.

We will cover

  • Environment setup
  • Data preparation
  • Model training with TRL's SFTTrainer
  • LoRA for parameter-efficient fine-tuning
  • Local inference with your new model
  • Saving the model and exporting it to the format you need for deployment

Deployment options

LFM2.5 models are small and efficient, enabling deployment across a wide range of platforms:

| Deployment Target | Use Case |
| --- | --- |
| 📱 Android | Mobile apps on Android devices |
| 📱 iOS | Mobile apps on iPhone/iPad |
| 🍎 Apple Silicon Mac | Local inference on Mac with MLX |
| 🦙 llama.cpp | Local deployments on any hardware |
| 🦙 Ollama | Local inference with easy setup |
| 🖥️ LM Studio | Desktop app for local inference |
| ⚡ vLLM | Cloud deployments with high throughput |
| ☁️ Modal | Serverless cloud deployment |
| 🏗️ Baseten | Production ML infrastructure |
| 🚀 Fal | Fast inference API |

Need help building with our models and tools?

Join the Liquid AI Discord Community and ask.


And now, let the fine-tuning begin!

📦 Installation & Setup

First, let's install all the required packages:

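The original install cell isn't reproduced here; a minimal setup sketch, assuming a recent Hugging Face stack (the exact package pins may differ from the original notebook):

```shell
pip install -U transformers trl peft datasets accelerate
```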

Now let's verify that the packages are installed correctly:

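A sanity check along these lines confirms that the libraries import and that a GPU is visible (the exact cell contents are an assumption):

```python
import torch
import transformers
import trl
import peft
import datasets

# Print versions so a mismatch is easy to spot.
print("transformers:", transformers.__version__)
print("trl:", trl.__version__)
print("peft:", peft.__version__)
print("datasets:", datasets.__version__)
print("CUDA available:", torch.cuda.is_available())
```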

Loading the model from Transformers 🤗

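A sketch of loading the model and tokenizer; the Hub repo id `LiquidAI/LFM2.5-1.2B-Instruct` is inferred from the model name above and may need adjusting:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2.5-1.2B-Instruct"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # place the model on the available GPU
)
```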

🎯 Part 1: Supervised Fine-Tuning (SFT)

SFT teaches the model to follow instructions by training on input-output pairs (an instruction and its response). This is the foundation for building instruction-following models.

Load an SFT Dataset

We will use HuggingFaceTB/smoltalk, limiting ourselves to the first 5k samples for brevity. Feel free to change the limit by adjusting the slice in the split parameter.

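A sketch of the loading cell; the "all" configuration name is an assumption, so check the dataset card for the configurations actually available:

```python
from datasets import load_dataset

# Take only the first 5k training samples; change the slice to use more.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train[:5000]")
print(dataset)
```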

Launch Training

We are now ready to launch an SFT run with SFTTrainer. Feel free to modify SFTConfig to experiment with different configurations.

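A sketch of the training cell. The hyperparameters here are illustrative, not the notebook's originals, and the argument names follow recent TRL (older releases use `max_seq_length` instead of `max_length`, and `tokenizer=` instead of `processing_class=`):

```python
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="lfm-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size of 8
    learning_rate=2e-5,
    num_train_epochs=1,
    max_length=1024,                 # truncate long conversations
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```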

๐ŸŽ›๏ธ Part 2: LoRA + SFT (Parameter-Efficient Fine-tuning)

LoRA (Low-Rank Adaptation) allows efficient fine-tuning by training only a small number of additional parameters. Perfect for limited compute resources!

Wrap the model with PEFT

We specify the target modules that will be fine-tuned while the rest of the model's weights remain frozen. Feel free to modify the r (rank) value:

  • higher -> closer approximation of full fine-tuning
  • lower -> even lower compute requirements
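A sketch of the PEFT wrapping step. The `target_modules` names are an assumption about the model's attention projection layers; `target_modules="all-linear"` is a safe alternative when in doubt:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=16,                  # rank: raise for a closer match to full fine-tuning
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],  # assumed module names
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # only the adapters are trainable
```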

Launch Training

We are now ready to launch the SFT training, this time with the LoRA-wrapped model.

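The launch mirrors Part 1 but passes the PEFT-wrapped model; the hyperparameters are again illustrative (LoRA usually tolerates a higher learning rate than full SFT):

```python
from trl import SFTConfig, SFTTrainer

lora_args = SFTConfig(
    output_dir="lfm-sft-lora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=1e-4,      # higher LR than full SFT is typical for LoRA
    num_train_epochs=1,
    logging_steps=10,
)

lora_trainer = SFTTrainer(
    model=peft_model,        # the LoRA-wrapped model, not the base model
    args=lora_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
lora_trainer.train()
```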

Save merged model

Merge the extra weights learned with LoRA back into the base model to obtain a "normal" model checkpoint.

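A sketch using PEFT's `merge_and_unload`, which folds the adapter weights into the base model; the output directory name is arbitrary:

```python
# Fold the LoRA deltas into the base weights and save a standard checkpoint.
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("lfm-sft-lora-merged")
tokenizer.save_pretrained("lfm-sft-lora-merged")  # ship the tokenizer alongside
```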