Mistral V0.3 (7B) Conversational


Installation

[ ]
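The original install cell is not shown; below is a minimal sketch of the typical Unsloth Colab install from this period (the exact package pins are an assumption):

%%capture
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps xformers trl peft accelerate bitsandbytes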

Unsloth

[ ]
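The loading cell is not shown; this is a sketch using Unsloth's FastLanguageModel, with the 4-bit Mistral v0.3 checkpoint name assumed from the notebook title:

from unsloth import FastLanguageModel
import torch

max_seq_length = 2048   # context window for finetuning
dtype = None            # None = auto-detect (float16 on T4/V100, bfloat16 on Ampere+)
load_in_4bit = True     # 4-bit quantization to fit in a T4's ~15 GB

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3-bnb-4bit",  # assumed checkpoint name
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)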
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
config.json:   0%|          | 0.00/1.16k [00:00<?, ?B/s]
==((====))==  Unsloth: Fast Mistral patching release 2024.5
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.26.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
model.safetensors:   0%|          | 0.00/4.14G [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/111 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/137k [00:00<?, ?B/s]
tokenizer.model:   0%|          | 0.00/587k [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/560 [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/1.96M [00:00<?, ?B/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

[ ]
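A sketch of the LoRA cell: r = 16 over all attention and MLP projections matches the 41,943,040 trainable parameters reported later, but the remaining hyperparameters are typical defaults, not confirmed values:

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                    # LoRA rank: larger = more capacity, more memory
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,          # 0 is optimized in Unsloth
    bias = "none",             # "none" is optimized
    use_gradient_checkpointing = "unsloth",  # reduces VRAM for long contexts
    random_state = 3407,
)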
Unsloth 2024.5 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.

Data Prep

We now use the ChatML format for conversation-style finetunes. We use Open Assistant conversations in ShareGPT style. ChatML renders multi-turn conversations like below:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What's the capital of France?<|im_end|>
<|im_start|>assistant
Paris.<|im_end|>

[NOTE] To train only on completions (ignoring the user's input), read TRL's docs here.

We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old and our own optimized unsloth template.

Normally one has to train the <|im_start|> and <|im_end|> tokens. We instead map <|im_end|> to the EOS token and leave <|im_start|> as is, so no additional tokens need to be trained.

Note that ShareGPT uses {"from": "human", "value": "Hi"} and not {"role": "user", "content": "Hi"}, so we pass a mapping to convert between the two formats.

For text completions like novel writing, try this notebook.

[ ]
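A sketch of the data-prep cell. get_chat_template and the ShareGPT mapping are Unsloth's documented API; the dataset name is an assumption, chosen to match the 9,033-example ShareGPT-style OpenAssistant set the output below suggests:

from unsloth.chat_templates import get_chat_template
from datasets import load_dataset

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "chatml",
    mapping = {"role": "from", "content": "value",
               "user": "human", "assistant": "gpt"},  # ShareGPT <-> HF role names
    map_eos_token = True,  # maps <|im_end|> to </s> so no new token is trained
)

def formatting_prompts_func(examples):
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(c, tokenize = False,
                                           add_generation_prompt = False)
             for c in convos]
    return {"text": texts}

dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train")  # assumed dataset
dataset = dataset.map(formatting_prompts_func, batched = True)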
Unsloth: Will map <|im_end|> to EOS = </s>.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Downloading readme:   0%|          | 0.00/442 [00:00<?, ?B/s]
Downloading data:   0%|          | 0.00/8.24M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/9033 [00:00<?, ? examples/s]
Map:   0%|          | 0/9033 [00:00<?, ? examples/s]

Let's see how the ChatML format works by printing the 5th element:

[ ]
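A sketch of the cell: display the raw ShareGPT-style conversation at index 5.

dataset[5]["conversations"]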
[{'from': 'human',
  'value': 'What is the typical wattage of bulb in a lightbox?'},
 {'from': 'gpt',
  'value': 'The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.'},
 {'from': 'human',
  'value': 'Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.'}]
[ ]
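And, as a sketch, the same element after the ChatML template has been applied:

print(dataset[5]["text"])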
<|im_start|>user
What is the typical wattage of bulb in a lightbox?<|im_end|>
<|im_start|>assistant
The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.<|im_end|>
<|im_start|>user
Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.<|im_end|>

If you're looking to make your own chat template, that is also possible! You must use the Jinja templating format. We provide our own stripped-down version of the Unsloth template, which we find to be more efficient; it leverages the ChatML, Zephyr and Alpaca styles.

More info on chat templates is on our wiki page!

[ ]
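The cell itself is not shown. As an illustration, a custom template can be passed to get_chat_template as a (template, eos_token) pair; the Jinja below is a hypothetical minimal ChatML-style template, not the shipped unsloth template:

custom_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = (custom_template, "<|im_end|>"),  # (template, EOS token) pair
)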

Train the model

Now let's train our model. We do 60 steps to speed things up, but you can set num_train_epochs=1 for a full run and turn off max_steps by setting max_steps=None. We also support TRL's DPOTrainer!

[ ]
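A sketch of the trainer cell. The batch size (2), gradient accumulation (4), max_steps (60) and dataset_num_proc (2) match the outputs below; the other hyperparameters are typical choices, not confirmed values:

from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False,  # packing can speed up training on short sequences
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,        # set to None with num_train_epochs = 1 for a full run
        learning_rate = 2e-4,
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)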
/usr/local/lib/python3.10/dist-packages/multiprocess/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  self.pid = os.fork()
Map (num_proc=2):   0%|          | 0/9033 [00:00<?, ? examples/s]
max_steps is given, it will override any value given in num_train_epochs
[ ]
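A sketch of the memory-stats cell that produced the prints below:

import torch
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")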
GPU = Tesla T4. Max memory = 14.748 GB.
4.52 GB of memory reserved.
[ ]
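The training cell itself (sketch):

trainer_stats = trainer.train()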
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 9,033 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 41,943,040
[ ]
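A sketch of the final time/memory cell, matching the prints below and reusing start_gpu_memory and max_memory from earlier:

used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")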
870.2993 seconds used for training.
14.5 minutes used for training.
Peak reserved memory = 6.289 GB.
Peak reserved memory for training = 1.769 GB.
Peak reserved memory % of max memory = 42.643 %.
Peak reserved memory for training % of max memory = 11.995 %.

Inference

Let's run the model! Since we're using ChatML, use apply_chat_template with add_generation_prompt set to True for inference.

[ ]
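A sketch of the inference cell; the prompt string (typo included) is taken verbatim from the output below:

from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "chatml",
    mapping = {"role": "from", "content": "value",
               "user": "human", "assistant": "gpt"},
)
FastLanguageModel.for_inference(model)  # enable Unsloth's native 2x faster inference

messages = [{"from": "human", "value": "Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # must be True for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)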
Unsloth: Will map <|im_end|> to EOS = <|im_end|>.
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
['<|im_start|>user\nContinue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,<|im_end|> \n<|im_start|>assistant\nThe next number in the Fibonacci sequence is 13.\n\nThe Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones, starting from 0 and 1. The sequence goes: 0, 1, 1, ']

You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!

[ ]
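A sketch using transformers' TextStreamer to stream tokens as they are generated:

from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer,
                   max_new_tokens = 128, use_cache = True)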
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
<|im_start|>user
Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,<|im_end|> 
<|im_start|>assistant
The next number in the Fibonacci sequence is 13.

The Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding ones, starting from 0 and 1. The sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
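A sketch of the save cell (the hub repo name in the comments is a hypothetical placeholder):

model.save_pretrained("lora_model")      # local save: LoRA adapters only
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...")      # online save
# tokenizer.push_to_hub("your_name/lora_model", token = "...")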
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, set False to True:

[ ]
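A sketch of the reload-and-generate cell; the prompt is taken from the output below:

if False:  # flip to True to load the adapters saved above
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)

messages = [{"from": "human", "value": "What is a famous tall tower in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize = True, add_generation_prompt = True, return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer,
                   max_new_tokens = 128, use_cache = True)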
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
<|im_start|>user
What is a famous tall tower in Paris?<|im_end|> 
<|im_start|>assistant
The Eiffel Tower is a famous tall tower in Paris. It is one of the most recognizable landmarks in the world and is a popular tourist destination. The tower was built in 1889 for the World's Fair and is named after Gustave Eiffel, the engineer who designed and built it. The tower stands at a height of 324 meters (1,063 feet) and is made of iron. It is located on the Champ de Mars, a large public park in Paris.<|im_end|>

You can also use Hugging Face's AutoPeftModelForCausalLM (from the peft library). Only use this if you do not have unsloth installed: it can be hopelessly slow, since 4-bit model downloading is not supported, and Unsloth's inference is 2x faster.

[ ]
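A sketch with peft's AutoPeftModelForCausalLM, kept behind if False since the Unsloth path above is preferred:

if False:
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",               # the adapters saved earlier
        load_in_4bit = load_in_4bit,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")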

Saving to float16 for vLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora to save just the LoRA adapters. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.

[ ]
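A sketch of the merged-save options using Unsloth's save_pretrained_merged / push_to_hub_merged; "hf/model" is a hypothetical repo name:

# Merge to 16bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge to 4bit
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")

# Just the LoRA adapters
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")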

GGUF / llama.cpp Conversion

Saving to GGUF / llama.cpp is now natively supported! We clone llama.cpp and save to q8_0 by default. All quantization methods, such as q4_k_m, are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
[ ]
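A sketch of the GGUF export cell; "hf/model" is again a hypothetical repo name:

# Save to 8bit Q8_0 (the default)
if False: model.save_pretrained_gguf("model", tokenizer)
if False: model.push_to_hub_gguf("hf/model", tokenizer, token = "")

# Save to q4_k_m
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")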