Mistral V0.3 (7B) Alpaca

Installation

[ ]
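The install cell's contents are not shown here; a minimal sketch for a Colab-style environment (package choice and pins are assumptions, adjust for your setup):

%%capture
# Install Unsloth plus the training stack it relies on (versions assumed).
!pip install unsloth
!pip install --upgrade transformers trl peft accelerate bitsandbytes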

Unsloth

[ ]
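A sketch of the load call this cell runs; the model name and load_in_4bit = True come from the log below, while max_seq_length and dtype are assumptions:

from unsloth import FastLanguageModel

max_seq_length = 2048  # assumed; adjust to your context length needs
dtype = None           # None auto-detects (float16 on a T4, bfloat16 on Ampere+)
load_in_4bit = True    # 4bit quantization to fit the 7B model on a T4

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/mistral-7b-v0.3",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)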
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Unsloth: You passed in `unsloth/mistral-7b-v0.3` and `load_in_4bit = True`.
We shall load `unsloth/mistral-7b-v0.3-bnb-4bit` for 4x faster loading.
config.json:   0%|          | 0.00/1.15k [00:00<?, ?B/s]
==((====))==  Unsloth: Fast Mistral patching release 2024.5
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.26.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
model.safetensors:   0%|          | 0.00/4.14G [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/111 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/137k [00:00<?, ?B/s]
tokenizer.model:   0%|          | 0.00/587k [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/560 [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/1.96M [00:00<?, ?B/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

[ ]
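A sketch of the adapter setup; r = 16 over all seven projection modules is consistent with the 41,943,040 trainable parameters reported in the training log further down, but the remaining hyperparameters are assumptions:

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,  # LoRA rank; 16 matches the trainable-parameter count reported below
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,        # assumed
    lora_dropout = 0,       # 0 is the optimized setting in Unsloth
    bias = "none",
    use_gradient_checkpointing = "unsloth",  # cuts VRAM use for long sequences
    random_state = 3407,    # assumed seed
)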
Unsloth 2024.5 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.

Data Prep

We now use the Alpaca dataset from yahma, a filtered version of the original 52K-example Alpaca dataset. You can replace this code section with your own data prep.

[NOTE] To train only on completions (ignoring the user's input), read TRL's docs here.

[NOTE] Remember to add the EOS_TOKEN to the tokenized output!! Otherwise you'll get infinite generations!

If you want to use the llama-3 template for ShareGPT datasets, try our conversational notebook.

For text completions like novel writing, try this notebook.

[ ]
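A sketch of the data prep this cell performs; the prompt template is taken from the generations shown later, the dataset id (assumed to be yahma/alpaca-cleaned) matches the 51,760 examples in the log, and the rest follows the standard Alpaca formatting pattern:

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise generation never stops

def formatting_prompts_func(examples):
    instructions = examples["instruction"]
    inputs       = examples["input"]
    outputs      = examples["output"]
    texts = []
    for instruction, input, output in zip(instructions, inputs, outputs):
        # Append EOS_TOKEN so the model learns when to stop.
        texts.append(alpaca_prompt.format(instruction, input, output) + EOS_TOKEN)
    return {"text": texts}

from datasets import load_dataset
dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)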
Downloading readme:   0%|          | 0.00/11.6k [00:00<?, ?B/s]
Downloading data:   0%|          | 0.00/44.3M [00:00<?, ?B/s]
Generating train split:   0%|          | 0/51760 [00:00<?, ? examples/s]
Map:   0%|          | 0/51760 [00:00<?, ? examples/s]

Train the model

Now let's train our model. We do 60 steps to speed things up, but you can set num_train_epochs=1 for a full run and set max_steps=None to turn the step limit off. We also support TRL's DPOTrainer!

[ ]
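A sketch of the trainer setup; the batch size of 2, gradient accumulation of 4, max_steps = 60, and dataset_num_proc = 2 match the logs, while the learning rate, warmup, and optimizer settings are assumptions:

from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False,   # packing short sequences can speed training further
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,              # assumed
        max_steps = 60,
        learning_rate = 2e-4,          # assumed
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",          # assumed
        weight_decay = 0.01,           # assumed
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)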
/usr/local/lib/python3.10/dist-packages/multiprocess/popen_fork.py:66: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  self.pid = os.fork()
Map (num_proc=2):   0%|          | 0/51760 [00:00<?, ? examples/s]
max_steps is given, it will override any value given in num_train_epochs
[ ]
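A sketch of how the memory readout below can be produced (variable names are illustrative):

import torch
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")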
GPU = Tesla T4. Max memory = 14.748 GB.
4.52 GB of memory reserved.
[ ]
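The training run itself, assuming the trainer object sketched above:

trainer_stats = trainer.train()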
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 51,760 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 41,943,040
[ ]
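A sketch of how the timing and peak-memory figures below can be computed; it reuses start_gpu_memory and max_memory from the earlier sketch, and the variable names are illustrative:

used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")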
540.6501 seconds used for training.
9.01 minutes used for training.
Peak reserved memory = 5.836 GB.
Peak reserved memory for training = 1.316 GB.
Peak reserved memory % of max memory = 39.571 %.
Peak reserved memory for training % of max memory = 8.923 %.

Inference

Let's run the model! You can change the instruction and input - leave the output blank!

[ ]
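A sketch of the generation call; the instruction and input are taken from the output below, and max_new_tokens = 64 is an assumption consistent with the truncated generation:

FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the fibonnaci sequence.",  # instruction (as it appears in the output)
            "1, 1, 2, 3, 5, 8",                  # input
            "",                                  # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)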
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
['<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nContinue the fibonnaci sequence.\n\n### Input:\n1, 1, 2, 3, 5, 8\n\n### Response:\n13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6']

You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!

[ ]
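A sketch using transformers' TextStreamer so tokens print as they are generated (max_new_tokens = 128 is an assumption):

from transformers import TextStreamer

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Continue the fibonnaci sequence.",  # instruction
            "1, 1, 2, 3, 5, 8",                  # input
            "",                                  # output - leave blank
        )
    ],
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)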
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Continue the fibonnaci sequence.

### Input:
1, 1, 2, 3, 5, 8

### Response:
13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
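A sketch of the local save that produces the file list below; the Hub variants are commented out, and the repository name and token are placeholders:

model.save_pretrained("lora_model")       # local saving of the LoRA adapters
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...")      # online saving
# tokenizer.push_to_hub("your_name/lora_model", token = "...")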
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/tokenizer.model',
 'lora_model/added_tokens.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, change `if False` to `if True` in the cell below:

[ ]
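A sketch of the reload-and-generate cell; the `if False` guard is what the note above refers to, the instruction matches the output below, and max_new_tokens = 64 is an assumption:

if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",       # the adapters saved above
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "What is a famous tall tower in Paris?",  # instruction
            "",                                       # input
            "",                                       # output - leave blank
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)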
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
['<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhat is a famous tall tower in Paris?\n\n### Input:\n\n\n### Response:\nOne of the most famous tall towers in Paris is the Eiffel Tower. It is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower. The Eiffel Tower is one']

You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have unsloth installed. It can be hopelessly slow, since 4bit model downloading is not supported, and Unsloth's inference is 2x faster.

[ ]
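A sketch of the PEFT-only load path described above, kept behind `if False` since the Unsloth path is faster; passing load_in_4bit straight through from_pretrained is an assumption:

if False:
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",                 # the adapters saved above
        load_in_4bit = load_in_4bit,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")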

Saving to float16 for VLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.

[ ]
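A sketch of the merged-save options described above; every call is guarded by `if False` so nothing runs until you pick one, and "hf/model" plus the token are placeholders:

# Merge LoRA into the base weights and save as float16
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge and save as int4
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "merged_4bit", token = "")

# Save just the LoRA adapters
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
if False: model.push_to_hub_merged("hf/model", tokenizer, save_method = "lora", token = "")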

GGUF / llama.cpp Conversion

To save to GGUF / llama.cpp, we support it natively now! We clone llama.cpp and save to q8_0 by default; all quantization methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
[ ]
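A sketch of the GGUF export calls described above; all are guarded by `if False`, "hf/model" and the token are placeholders, and the quantization_method argument name is an assumption:

# Save to q8_0 (the default)
if False: model.save_pretrained_gguf("model", tokenizer)
if False: model.push_to_hub_gguf("hf/model", tokenizer, token = "")

# Save to q4_k_m
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")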