Llama3 (8B) ORPO

Installation

[ ]
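The install code was not preserved in this export; below is a minimal sketch of the usual Colab setup for this Unsloth release. The package list matches the commonly documented Colab instructions, not this exact run.

%%capture
# Install Unsloth from GitHub plus the runtime dependencies it needs on Colab.
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes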

Unsloth

[ ]
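A minimal sketch of the load cell, assuming the 4-bit Llama-3 8B checkpoint implied by the log below:

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Unsloth auto-handles RoPE scaling internally
dtype = None           # None = auto-detect; float16 on a Tesla T4 (Bfloat16 = FALSE below)
load_in_4bit = True    # 4-bit quantization to fit in the T4's ~15GB of VRAM

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)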
config.json:   0%|          | 0.00/1.14k [00:00<?, ?B/s]
==((====))==  Unsloth: Fast Llama patching release 2024.4
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.2.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.25.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
model.safetensors:   0%|          | 0.00/5.70G [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/131 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/50.6k [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/9.09M [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/449 [00:00<?, ?B/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

[ ]
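A sketch of the adapter cell. Rank r = 16 on all seven attention and MLP projections reproduces the 41,943,040 trainable parameters reported during training; the remaining hyperparameters are typical Unsloth defaults, not confirmed by this run.

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank; 16 on the 7 projections below gives ~41.9M trainable params
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,   # 0 is the optimized setting in Unsloth
    bias = "none",      # "none" is the optimized setting in Unsloth
    use_gradient_checkpointing = "unsloth",  # saves VRAM for long contexts
    random_state = 3407,
)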
Unsloth 2024.4 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.

Data Prep

We now use a special ORPO-style dataset from recipe-research.

You need at least 3 columns:

  • Instruction
  • Accepted
  • Rejected

For example:

  • Instruction: "What is 2+2?"
  • Accepted: "The answer is 4"
  • Rejected: "The answer is 5"

The goal of ORPO is to penalize the "rejected" samples and increase the likelihood of "accepted" samples. recipe-research essentially used Mistral to generate the "rejected" responses and GPT-4 to generate the "accepted" responses.

[ ]
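The code for this cell is not preserved, so the sketch below makes assumptions: the dataset repo path and the system/question/chosen/rejected column names are hypothetical, and only the Alpaca-style template is taken from the printout further down. TRL's ORPOTrainer expects prompt, chosen, and rejected columns; the EOS token is appended so generations learn to stop.

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
"""

EOS_TOKEN = tokenizer.eos_token  # <|end_of_text|> for Llama-3

def format_samples(example):
    # Fold the system message and question into the Alpaca template, and
    # append EOS to both completions so the model learns when to stop.
    # NOTE: "system", "question", "chosen", "rejected" are assumed column names.
    example["prompt"]   = alpaca_prompt.format(example["system"], example["question"])
    example["chosen"]   = example["chosen"]   + EOS_TOKEN
    example["rejected"] = example["rejected"] + EOS_TOKEN
    return example

from datasets import load_dataset
dataset = load_dataset("recipe-research/orpo-dataset", split = "train")  # hypothetical repo path
dataset = dataset.map(format_samples)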
Downloading readme:   0%|          | 0.00/490 [00:00<?, ?B/s]
Downloading data:   0%|          | 0.00/34.1M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/16000 [00:00<?, ? examples/s]
Map:   0%|          | 0/16000 [00:00<?, ? examples/s]

Let's print out some examples to see what the dataset looks like.

[ ]
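A sketch of the inspection cell; pprint produces the quoted, line-wrapped output shown below.

from pprint import pprint

row = dataset[0]
print("INSTRUCTION: " + "=" * 50); pprint(row["prompt"])
print("ACCEPTED: "    + "=" * 50); pprint(row["chosen"])
print("REJECTED: "    + "=" * 50); pprint(row["rejected"])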
INSTRUCTION: ==================================================
('Below is an instruction that describes a task, paired with an input that '
 'provides further context. Write a response that appropriately completes the '
 'request.\n'
 '\n'
 '### Instruction:\n'
 'You are an AI assistant that helps people find information.\n'
 '\n'
 '### Input:\n'
 'Given the rationale, provide a reasonable question and answer. Step-by-step '
 'reasoning process: Xkcd comics are very popular amongst internet users.\n'
 ' The question and answer:\n'
 '\n'
 '### Response:\n')
ACCEPTED: ==================================================
('Question: What makes Xkcd comics popular among internet users?\n'
 '\n'
 'Answer: Xkcd comics are popular among internet users because of their clever '
 'humor, relatable themes, and minimalist art style. They often cover topics '
 'like science, technology, and life experiences, making them appealing to a '
 'broad audience.<|end_of_text|>')
REJECTED: ==================================================
('Question: What is the reason behind the popularity of Xkcd comics among '
 'internet users?\n'
 '\n'
 'Answer: Xkcd comics are popular among internet users because they offer a '
 'unique blend of humor, relatable content, and thought-provoking topics that '
 'resonate with a wide range of people. The comics often address everyday '
 'experiences, technology, and social issues, making them accessible and '
 'enjoyable for many individuals. Additionally, the simple and minimalistic '
 'art style of Xkcd comics allows for easy comprehension and sharing, '
 'contributing to their widespread appeal.<|end_of_text|>')
[ ]

Train the model

Now let's train our model. We do 30 steps to speed things up, but you can set num_train_epochs=1 for a full run and max_steps=None to remove the step cap.

[ ]
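A sketch of the trainer cell. The batch size, gradient accumulation steps, and max_steps come from the log below; beta, the length limits, and the optimizer settings are typical choices, not confirmed by this run.

from trl import ORPOConfig, ORPOTrainer

orpo_trainer = ORPOTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = ORPOConfig(
        max_length = max_seq_length,
        max_prompt_length = max_seq_length // 2,
        max_completion_length = max_seq_length // 2,
        per_device_train_batch_size = 2,  # from the log: Batch size per device = 2
        gradient_accumulation_steps = 4,  # from the log: Gradient Accumulation steps = 4
        max_steps = 30,                   # from the log: Total steps = 30
        beta = 0.1,                       # weight of ORPO's odds-ratio term (assumed)
        logging_steps = 1,
        optim = "adamw_8bit",
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        output_dir = "outputs",
    ),
)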
/usr/local/lib/python3.10/dist-packages/trl/trainer/orpo_trainer.py:247: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your TrainingArguments we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Map:   0%|          | 0/16000 [00:00<?, ? examples/s]
max_steps is given, it will override any value given in num_train_epochs
[ ]
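This cell is the single training call:

orpo_trainer.train()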
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 16,000 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 30
 "-____-"     Number of trainable parameters = 41,943,040
Could not estimate the number of tokens of the input, floating-point operations will not be computed
TrainOutput(global_step=30, training_loss=2.234786526362101, metrics={'train_runtime': 665.6376, 'train_samples_per_second': 0.361, 'train_steps_per_second': 0.045, 'total_flos': 0.0, 'train_loss': 2.234786526362101, 'epoch': 0.015})

Inference

Let's run the model! You can change the instruction and input - leave the output blank!

[ ]
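A sketch of the generation cell, reusing the alpaca_prompt template from Data Prep; FastLanguageModel.for_inference enables Unsloth's faster inference path.

FastLanguageModel.for_inference(model)  # enable native 2x faster inference

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the fibonnaci sequence.",  # instruction (spelling kept to match the logged run)
            "1, 1, 2, 3, 5, 8",                  # input
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)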
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
['<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nContinue the fibonnaci sequence.\n\n### Input:\n1, 1, 2, 3, 5, 8\n\n### Response:\n13<|end_of_text|>']

You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!

[ ]
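A sketch of the streaming cell; transformers' TextStreamer prints tokens as they are generated.

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)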
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the fibonnaci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
13<|end_of_text|>

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
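A sketch of the save cell; the tuple printed below is the return value of tokenizer.save_pretrained.

model.save_pretrained("lora_model")      # local save of the LoRA adapters only
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...")      # online save (placeholder repo name)
# tokenizer.push_to_hub("your_name/lora_model", token = "...")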
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, change False to True:

[ ]
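A sketch of the reload-and-generate cell; the if False guard is the switch the text above refers to.

if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",  # the adapters saved above
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "What is a famous tall tower in Paris?",  # instruction
            "",                                       # input left blank
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)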
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
["<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhat is a famous tall tower in Paris?\n\n### Input:\n\n\n### Response:\nThe Eiffel Tower is a famous tall tower in Paris. It is a wrought iron tower located on the Champ de Mars in Paris, France. The tower is named after the engineer Gustave Eiffel, the main designer, and was built as the entrance to the 1889 World's Fair. The tower"]

You can also use Hugging Face's AutoPeftModelForCausalLM (from peft). Only use this if you do not have unsloth installed. It can be hopelessly slow, since downloading 4-bit models is not supported, and Unsloth's inference is 2x faster.

[ ]
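A sketch of the PEFT-only path, guarded the same way:

if False:
    # Only use this without unsloth; it is slower and cannot download 4-bit models.
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",
        load_in_4bit = load_in_4bit,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")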

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow saving just the lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens to create your personal tokens.

[ ]
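A sketch of the merged-save cell using Unsloth's save_pretrained_merged / push_to_hub_merged; flip the relevant if False to True, and note that "hf/model" is a placeholder repo name.

# Merge to 16-bit (for vLLM)
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge to 4-bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")

# LoRA adapters only
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")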

GGUF / llama.cpp Conversion

We now support saving to GGUF / llama.cpp natively! We clone llama.cpp and save to q8_0 by default. All quantization methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
[ ]
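A sketch of the GGUF cell using save_pretrained_gguf / push_to_hub_gguf; "hf/model" is again a placeholder repo name.

# Save to q8_0 (the default)
if False: model.save_pretrained_gguf("model", tokenizer)
if False: model.push_to_hub_gguf("hf/model", tokenizer, token = "")

# Save to q4_k_m
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")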