Qwen2 (7B) Alpaca
Installation
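The install cell itself isn't reproduced in this export; a minimal sketch of what it typically contains, assuming a Colab-style environment (the package list and flags are assumptions):

```python
# Minimal sketch of the install step (assumption: a Colab-style environment).
# Unsloth pulls in its patched transformers / trl / peft dependencies.
!pip install unsloth
```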
Unsloth
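The model-loading cell isn't reproduced either; a minimal sketch consistent with the logs below (the model name is taken from the pad-token message further down; max_seq_length and dtype are assumptions):

```python
from unsloth import FastLanguageModel

max_seq_length = 2048   # assumption; Unsloth handles RoPE scaling internally
dtype = None            # None = auto-detect; float16 on a T4 (the log shows Bfloat16 = FALSE)
load_in_4bit = True     # 4-bit quantization so the 7B model fits in T4 memory

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/qwen2-7b-bnb-4bit",  # name taken from the pad-token log below
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```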
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
==((====))==  Unsloth: Fast Qwen2 patching release 2024.6
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.26.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. unsloth/qwen2-7b-bnb-4bit does not have a padding token! Will use pad_token = <|PAD_TOKEN|>.
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
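The adapter cell isn't shown in this export; a minimal sketch using Unsloth's get_peft_model (the rank, alpha, and target modules are assumptions; the layer counts in the log below come from whatever configuration was actually used):

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                       # LoRA rank (assumption); 8, 16, 32, 64 are common choices
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,             # 0 is the optimized setting in Unsloth
    bias = "none",                # "none" is the optimized setting in Unsloth
    use_gradient_checkpointing = "unsloth",  # reduces memory use for long contexts
    random_state = 3407,
)
```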
Unsloth 2024.6 patched 28 layers with 0 QKV layers, 28 O layers and 28 MLP layers.
Data Prep
We now use the Alpaca dataset from yahma, which is a filtered version of the original 52K-example Alpaca dataset. You can replace this code section with your own data prep (a sketch follows the notes below).
[NOTE] To train only on completions (ignoring the user's input), read TRL's docs here.
[NOTE] Remember to add the EOS_TOKEN to the tokenized output! Otherwise you'll get infinite generations!
If you want to use the llama-3 template for ShareGPT datasets, try our conversational notebook.
For text completions like novel writing, try this notebook.
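The data-prep cell isn't reproduced here; a minimal sketch of the usual Alpaca-style formatting, assuming the yahma/alpaca-cleaned dataset id and the prompt template visible in the generation output further down:

```python
from datasets import load_dataset

# Prompt template reconstructed from the generation output shown later in this notebook.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise generation never stops

def formatting_prompts_func(examples):
    texts = []
    for instruction, input, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        texts.append(alpaca_prompt.format(instruction, input, output) + EOS_TOKEN)
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")  # dataset id is an assumption
dataset = dataset.map(formatting_prompts_func, batched = True)
```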
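The trainer cell isn't shown; a minimal sketch whose settings mirror the training log below (batch size 2, gradient accumulation 4, 60 max steps). The optimizer, learning rate, and warmup are assumptions, and newer TRL versions move some of these arguments into SFTConfig:

```python
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    args = TrainingArguments(
        per_device_train_batch_size = 2,   # matches "Batch size per device = 2" below
        gradient_accumulation_steps = 4,   # matches "Gradient Accumulation steps = 4" below
        warmup_steps = 5,                  # assumption
        max_steps = 60,                    # matches "Total steps = 60" below
        learning_rate = 2e-4,              # assumption
        fp16 = True,                       # the T4 has no bfloat16 support (Bfloat16 = FALSE above)
        logging_steps = 1,
        optim = "adamw_8bit",              # assumption
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```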
max_steps is given, it will override any value given in num_train_epochs
GPU = Tesla T4. Max memory = 14.748 GB. 7.072 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 51,760 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 40,370,176
454.0829 seconds used for training.
7.57 minutes used for training.
Peak reserved memory = 9.137 GB.
Peak reserved memory for training = 2.065 GB.
Peak reserved memory % of max memory = 61.954 %.
Peak reserved memory for training % of max memory = 14.002 %.
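The generation cell isn't reproduced; a minimal sketch that would produce output like the log below (alpaca_prompt refers to the template in the data-prep sketch; max_new_tokens is an assumption):

```python
FastLanguageModel.for_inference(model)  # enables Unsloth's faster inference path

inputs = tokenizer(
    [alpaca_prompt.format(
        "Continue the fibonnaci sequence.",  # instruction (typo preserved from the original prompt)
        "1, 1, 2, 3, 5, 8",                  # input
        "",                                  # response left empty for generation
    )],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```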
Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation.
['Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nContinue the fibonnaci sequence.\n\n### Input:\n1, 1, 2, 3, 5, 8\n\n### Response:\nThe next number in the Fibonacci sequence is 13. The sequence continues as follows: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 23']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
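For example, a minimal streaming sketch under the same assumptions as the generation sketch above:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
```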
Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation.
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: Continue the fibonnaci sequence. ### Input: 1, 1, 2, 3, 5, 8 ### Response: The next number in the Fibonacci sequence is 13. The sequence continues as follows: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 4
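The file list below is the return value of a tokenizer save call; a minimal sketch of saving the LoRA adapters locally (the directory name is taken from the paths in that output; the Hub variants are assumptions):

```python
model.save_pretrained("lora_model")      # saves only the LoRA adapters, not the merged model
tokenizer.save_pretrained("lora_model")  # returns the file list shown below

# Online alternative (repository name and token are placeholders):
# model.push_to_hub("your_name/lora_model", token = "...")
# tokenizer.push_to_hub("your_name/lora_model", token = "...")
```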
('lora_model/tokenizer_config.json',
 'lora_model/special_tokens_map.json',
 'lora_model/vocab.json',
 'lora_model/merges.txt',
 'lora_model/added_tokens.json',
 'lora_model/tokenizer.json')

Now if you want to load the LoRA adapters we just saved for inference, set False to True:
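The guarded cell itself isn't shown; a minimal sketch of what flipping that flag runs (max_seq_length and dtype mirror the load sketch earlier):

```python
if True:  # originally `if False:` - flip it to load the adapters saved above
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",   # the directory we just saved to
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)  # enable faster inference
```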
Setting `pad_token_id` to `eos_token_id`:151643 for open-end generation.
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: What is a famous tall tower in Paris? ### Input: ### Response: The famous tall tower in Paris is the Eiffel Tower. It is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower. The Eiffel Tower is one of the most recognizable structures in
You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have Unsloth installed. It can be hopelessly slow, since downloading 4-bit models is not supported this way, and Unsloth's inference is 2x faster.
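A minimal sketch of that fallback path, using the peft class directly (arguments are assumptions; skip this if Unsloth is installed):

```python
if False:  # only use this if you do not have Unsloth installed
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",          # the adapter directory saved above
        load_in_4bit = False,  # 4-bit downloading is not supported on this path
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```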
Saving to float16 for VLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow LoRA adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can create your personal tokens at https://huggingface.co/settings/tokens.
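A minimal sketch of the merged-save calls (repository names and the token are placeholders; pick exactly one save_method):

```python
# Merge LoRA into the base weights and save in 16-bit (suitable for vLLM):
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("hf_username/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge and save in 4-bit:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")

# Fallback: save just the LoRA adapters:
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```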
GGUF / llama.cpp Conversion
Saving to GGUF / llama.cpp is now supported natively: we clone llama.cpp and save to q8_0 by default. All methods such as q4_k_m are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our Wiki page):
q8_0 - Fast conversion. High resource use, but generally acceptable.
q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
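A minimal sketch of the GGUF export calls (the quantization_method values come from the list above; the repository name and token are placeholders):

```python
# Save locally as GGUF (defaults to q8_0):
if False: model.save_pretrained_gguf("model", tokenizer)

# Save locally with a specific quant method:
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Upload a GGUF file to the Hugging Face Hub:
if False: model.push_to_hub_gguf("hf_username/model", tokenizer, quantization_method = "q4_k_m", token = "")
```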