TinyLlama (1.1B) Alpaca



Installation

[ ]
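A minimal install sketch for a Colab T4 runtime; the exact pinned dependencies of the original cell may differ, so treat this as an assumption:

```python
%%capture
# Install Unsloth (Colab already ships PyTorch + CUDA).
!pip install unsloth
# Common companions for 4-bit LoRA finetuning:
!pip install --no-deps trl peft accelerate bitsandbytes
```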

Unsloth

[ ]
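A sketch of the model-loading cell using Unsloth's FastLanguageModel API. The values below are illustrative but consistent with the log output that follows (4-bit TinyLlama, 4096-token context via RoPE scaling):

```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 4096  # TinyLlama natively supports 2048; Unsloth RoPE-scales it to 4096
dtype = None           # None = auto-detect (float16 on a T4, bfloat16 on Ampere+)
load_in_4bit = True    # 4-bit quantization so the model fits in the T4's ~15GB

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/tinyllama-bnb-4bit",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```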
/usr/local/lib/python3.10/dist-packages/unsloth/__init__.py:67: UserWarning: CUDA is not linked properly.
We shall run `ldconfig /usr/lib64-nvidia` to try to fix it.
  warnings.warn(
config.json:   0%|          | 0.00/1.09k [00:00<?, ?B/s]
==((====))==  Unsloth: Fast Llama patching release 2024.1
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB
O^O/ \_/ \    CUDA capability = 7.5. Xformers = 0.0.22.post7. FA = False.
\        /    Pytorch version: 2.1.0+cu121. CUDA Toolkit = 12.1
 "-____-"     bfloat16 = FALSE. Platform = Linux

Unsloth: unsloth/tinyllama-bnb-4bit can only handle sequence lengths of at most 2048.
But with kaiokendev's RoPE scaling of 2.0, it can be magically be extended to 4096!
You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` attribute will be overwritten with the one you passed to `from_pretrained`.
model.safetensors:   0%|          | 0.00/762M [00:00<?, ?B/s]
generation_config.json:   0%|          | 0.00/129 [00:00<?, ?B/s]
tokenizer_config.json:   0%|          | 0.00/894 [00:00<?, ?B/s]
tokenizer.model:   0%|          | 0.00/500k [00:00<?, ?B/s]
tokenizer.json:   0%|          | 0.00/1.84M [00:00<?, ?B/s]
special_tokens_map.json:   0%|          | 0.00/438 [00:00<?, ?B/s]

[NOTE] TinyLlama's internal maximum sequence length is 2048. We use RoPE Scaling to extend it to 4096 with Unsloth!

We now add LoRA adapters so we only need to update 1 to 10% of all parameters!

[NOTE] We set gradient_checkpointing=False ONLY for TinyLlama, since Unsloth already saves so much memory that the 1.1B model fits without it. This does NOT work for llama-2-7b or mistral-7b, since their memory usage would still exceed the Tesla T4's 15GB. Gradient checkpointing (GC) recomputes the forward pass during the backward pass, saving loads of memory at the cost of some speed.

**[IF YOU GET OUT OF MEMORY]** set gradient_checkpointing to True.

[ ]
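A sketch of the LoRA cell. r = 32 is chosen so the trainable-parameter count matches the 25,231,360 reported in the training log below; the remaining hyperparameters are typical Unsloth defaults, not confirmed values from the original cell:

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 32,                    # LoRA rank; 32 across 22 layers gives ~25M trainable params
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 32,
    lora_dropout = 0,          # 0 is the optimized setting in Unsloth
    bias = "none",             # "none" is the optimized setting in Unsloth
    use_gradient_checkpointing = False,  # OK for TinyLlama only - see the note above
    random_state = 3407,
)
```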
Unsloth 2024.1 patched 22 layers with 22 QKV layers, 22 O layers and 22 MLP layers.

Data Prep

We now use the Alpaca dataset from yahma, a filtered version of the original 52K-example Alpaca dataset. You can replace this code section with your own data prep.

[NOTE] To train only on completions (ignoring the user's input) read TRL's docs here.

[NOTE] Remember to add the EOS_TOKEN to the tokenized output!! Otherwise you'll get infinite generations!

If you want to use the llama-3 template for ShareGPT datasets, try our conversational notebook.

For text completions like novel writing, try this notebook.

[ ]
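A sketch of the data-prep cell: an Alpaca-style prompt template, the EOS token appended to every example (see the note above), and a map over yahma/alpaca-cleaned. The template wording matches the prompt visible in the inference output further down:

```python
from datasets import load_dataset

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise generation never stops

def formatting_prompts_func(examples):
    texts = [
        alpaca_prompt.format(instr, inp, out) + EOS_TOKEN
        for instr, inp, out in zip(
            examples["instruction"], examples["input"], examples["output"]
        )
    ]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)
```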
Downloading readme:   0%|          | 0.00/11.6k [00:00<?, ?B/s]
Downloading data:   0%|          | 0.00/44.3M [00:00<?, ?B/s]
Generating train split: 0 examples [00:00, ? examples/s]
Map:   0%|          | 0/51760 [00:00<?, ? examples/s]

Train the model

Now let's train our model. We do 1 full epoch which makes Alpaca run in 80ish minutes!

[ ]
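A sketch of the trainer setup, assuming a 2024-era trl SFTTrainer. The batch size, gradient accumulation, and single epoch mirror the training log below; packing = True is an assumption, made because the log reports ~3,000 (packed) examples rather than 51,760. Treat every other value as illustrative:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    packing = True,  # pack short examples into full 4096-token sequences
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        num_train_epochs = 1,            # 1 full epoch, per the log below
        warmup_steps = 10,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),  # T4 has no bfloat16
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
```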
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer.model
loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer.json
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/special_tokens_map.json
loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer_config.json
Generating train split: 0 examples [00:00, ? examples/s]
Using auto half precision backend
[ ]
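A sketch of the memory-snapshot cell that produces the two lines below (device properties plus memory reserved before training starts):

```python
gpu_stats = torch.cuda.get_device_properties(0)
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)
print(f"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.")
print(f"{start_gpu_memory} GB of memory reserved.")
```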
GPU = Tesla T4. Max memory = 14.748 GB.
0.879 GB of memory reserved.
[ ]
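The training cell itself is a single call; keeping the returned stats lets a later cell report the runtime:

```python
trainer_stats = trainer.train()
```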
***** Running training *****
  Num examples = 3,000
  Num Epochs = 1
  Instantaneous batch size per device = 2
  Total train batch size (w. parallel, distributed & accumulation) = 8
  Gradient Accumulation steps = 4
  Total optimization steps = 375
  Number of trainable parameters = 25,231,360
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.


Training completed. Do not forget to share your model on huggingface.co/models =)


[ ]
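A sketch of the cell that derives the timing and peak-memory figures below from trainer_stats and torch.cuda, assuming the start_gpu_memory and max_memory variables from the earlier snapshot cell:

```python
used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)
used_memory_for_lora = round(used_memory - start_gpu_memory, 3)
used_percentage = round(used_memory / max_memory * 100, 3)
lora_percentage = round(used_memory_for_lora / max_memory * 100, 3)
print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"{round(trainer_stats.metrics['train_runtime'] / 60, 2)} minutes used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_memory_for_lora} GB.")
print(f"Peak reserved memory % of max memory = {used_percentage} %.")
print(f"Peak reserved memory for training % of max memory = {lora_percentage} %.")
```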
5034.6413 seconds used for training.
83.91 minutes used for training.
Peak reserved memory = 13.508 GB.
Peak reserved memory for training = 12.629 GB.
Peak reserved memory % of max memory = 91.592 %.
Peak reserved memory for training % of max memory = 85.632 %.

Inference

Let's run the model! You can change the instruction and input - leave the output blank!

[ ]
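A sketch of the inference cell that produced the output below: enable Unsloth's faster inference path, then reuse alpaca_prompt with the response field left blank:

```python
FastLanguageModel.for_inference(model)  # enables Unsloth's 2x faster inference

inputs = tokenizer(
    [alpaca_prompt.format(
        "Continue the fibonnaci sequence.",  # instruction
        "1, 1, 2, 3, 5, 8",                  # input
        "",                                  # output - leave blank for generation
    )],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
tokenizer.batch_decode(outputs)
```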
['<s> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nContinue the fibonnaci sequence.\n\n### Input:\n1, 1, 2, 3, 5, 8\n\n### Response:\nThe fibonacci sequence is a sequence of numbers that can be generated by adding the previous two numbers and then subtracting the previous number from the previous number. The first number in the sequence is 1, and the second number is 1. The third number in the sequence is 1 + 1 = 2, the fourth number is 1 + 2 = 3, and so on. The sequence can be continued by adding 1 + 1 = 2, 1 + 2 = 3, and so on.</s>']

You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!

[ ]
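A sketch using transformers' TextStreamer to print tokens as they are generated, reusing the inputs from the previous cell:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```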

Saving, loading finetuned models

To save the final model as LoRA adapters, either use Hugging Face's push_to_hub for an online save or save_pretrained for a local save.

[NOTE] This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!

[ ]
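A sketch of both save paths; "your_name/lora_model" and the token are placeholders you would replace with your own repo name and Hugging Face token:

```python
model.save_pretrained("lora_model")        # local save of the LoRA adapters only
tokenizer.save_pretrained("lora_model")
# model.push_to_hub("your_name/lora_model", token = "...")      # online save (placeholders)
# tokenizer.push_to_hub("your_name/lora_model", token = "...")
```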

Now if you want to load the LoRA adapters we just saved for inference, change False to True in the cell below:

[ ]
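A sketch of the reload cell; flipping the guard from False to True loads the adapters saved above and re-enables inference:

```python
if False:  # change to True to reload the adapters saved above
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",       # the directory we just saved to
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model)
```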

You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have unsloth installed. It can be hopelessly slow, since 4bit model downloading is not supported, and Unsloth's inference is 2x faster.

[ ]
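A sketch of the Unsloth-free fallback using peft's AutoPeftModelForCausalLM; passing load_in_4bit here is an assumption about how the adapters would be loaded on a T4:

```python
if False:  # only if you don't have unsloth installed
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",            # directory with the saved adapters
        load_in_4bit = True,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```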

Saving to float16 for VLLM

We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow lora adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.

[ ]
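A sketch of the merged-save options, assuming Unsloth's save_pretrained_merged / push_to_hub_merged helpers; repo names and tokens are placeholders:

```python
# Merge LoRA into the base weights and save in 16-bit (for vLLM):
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_16bit", token = "...")

# 4-bit merged save:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")

# Just the LoRA adapters as a fallback:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```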

GGUF / llama.cpp Conversion

Saving to GGUF / llama.cpp is now supported natively! We clone llama.cpp and save to q8_0 by default. All methods like q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.

Some supported quant methods (full list on our Wiki page):

  • q8_0 - Fast conversion. High resource use, but generally acceptable.
  • q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
  • q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
[ ]
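A sketch of the GGUF export cell; q8_0 is the default, and quantization_method selects any of the methods listed above. The repo name and token are placeholders:

```python
# Save to 8-bit Q8_0 GGUF locally (default method):
if False: model.save_pretrained_gguf("model", tokenizer)

# Save to q4_k_m GGUF locally:
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Upload a GGUF to the Hugging Face Hub:
if False: model.push_to_hub_gguf("your_name/model", tokenizer, quantization_method = "q4_k_m", token = "...")
```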