Supervised Fine-tuning (SFT) with TRL
Fine-tuning requires a GPU. If you don't have one locally, you can run this notebook on Google Colab with a free NVIDIA T4 GPU instance.
What's in this notebook?
In this notebook you will learn how to perform Supervised Fine-tuning (SFT) using the Hugging Face TRL (Transformer Reinforcement Learning) library. We will use the LFM2.5-1.2B-Instruct model and fine-tune it on the SmolTalk dataset. You'll learn both standard SFT and parameter-efficient fine-tuning with LoRA for constrained hardware.
We will cover:
- Environment setup
- Data preparation
- Model training with TRL's SFTTrainer
- LoRA for parameter-efficient fine-tuning
- Local inference with your new model
- Saving and exporting the model into the format you need for deployment
Deployment options
LFM2.5 models are small and efficient, enabling deployment across a wide range of platforms:
| Deployment Target | Use Case |
|---|---|
| Android | Mobile apps on Android devices |
| iOS | Mobile apps on iPhone/iPad |
| Apple Silicon Mac | Local inference on Mac with MLX |
| llama.cpp | Local deployment on any hardware |
| Ollama | Local inference with easy setup |
| LM Studio | Desktop app for local inference |
| vLLM | High-throughput cloud deployments |
| Modal | Serverless cloud deployment |
| Baseten | Production ML infrastructure |
| Fal | Fast inference API |
Installation & Setup
First, let's install all the required packages:
Let's now verify that the packages are installed correctly.
Loading the model with Transformers
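A sketch of the loading cell. The exact Hugging Face Hub id for LFM2.5-1.2B-Instruct is an assumption here; substitute the checkpoint name you are actually using:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed — replace with the exact LFM2.5-1.2B-Instruct checkpoint name
model_id = "LiquidAI/LFM2.5-1.2B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory vs fp32
    device_map="auto",           # place weights on the available GPU(s)
)
```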
Part 1: Supervised Fine-Tuning (SFT)
SFT teaches the model to follow instructions by training it on input-output pairs: an instruction and its desired response. This is the foundation for building instruction-following models.
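Chat-style SFT datasets store each example as a list of role-tagged messages. A minimal illustrative example (the message contents here are invented):

```python
# One SFT training example in the chat "messages" format
example = {
    "messages": [
        {"role": "user",
         "content": "Explain what a GPU is in one sentence."},
        {"role": "assistant",
         "content": "A GPU is a processor specialized for massively parallel computation."},
    ]
}
```

During training, the tokenizer's chat template turns this structure into a single token sequence, and the model learns to produce the assistant turn given the user turn.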
Load an SFT Dataset
We will use `HuggingFaceTB/smoltalk`, limiting ourselves to the first 5k samples for brevity. Feel free to change the limit by adjusting the slice index in the `split` parameter.
Launch Training
We are now ready to launch an SFT run with `SFTTrainer`. Feel free to modify `SFTConfig` to experiment with different configurations.
Part 2: LoRA + SFT (Parameter-Efficient Fine-tuning)
LoRA (Low-Rank Adaptation) allows efficient fine-tuning by only training a small number of additional parameters. Perfect for limited compute resources!
Wrap the model with PEFT
We specify the target modules that will be fine-tuned while the rest of the model's weights remain frozen. Feel free to modify the `r` (rank) value:
- higher -> closer approximation of full fine-tuning
- lower -> fewer trainable parameters, so even less compute and memory
Launch Training
We are now ready to launch SFT training again, this time with the LoRA-wrapped model.
Save merged model
Merge the extra weights learned with LoRA back into the base model to obtain a "normal" model checkpoint.