Phi 3 Medium Conversational
Installation
Unsloth
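Below is a minimal sketch of the installation and 4-bit model load that produces the output that follows. The `max_seq_length` and `dtype` values are assumptions; `load_in_4bit = True` and the model name are taken from the log below.

```python
# Install Unsloth first (the official notebook may pin extra dependencies):
#   pip install unsloth

from unsloth import FastLanguageModel

max_seq_length = 2048   # assumed; any length works, Unsloth handles RoPE scaling internally
dtype = None            # None = auto-detect (float16 on T4/V100, bfloat16 on Ampere+)
load_in_4bit = True     # 4-bit quantization to reduce memory use

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Phi-3-medium-4k-instruct",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
```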
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Unsloth: You passed in `unsloth/Phi-3-medium-4k-instruct` and `load_in_4bit = True`. We shall load `unsloth/Phi-3-medium-4k-instruct-bnb-4bit` for 4x faster loading.
==((====))==  Unsloth: Fast Mistral patching release 2024.5
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. Xformers = 0.0.26.post1. FA = False.
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
[Download progress bars omitted: config.json, model.safetensors.index.json, two model shards (3.97 GB and 3.72 GB), generation_config.json, and the tokenizer files.]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
We now add LoRA adapters so we only need to update 1 to 10% of all parameters!
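The "patched 40 layers" message below comes from a `get_peft_model` call along these lines; the LoRA rank, alpha, and other hyperparameters shown here are assumptions, not the notebook's exact values.

```python
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                      # LoRA rank (assumed); 8, 16, 32, 64, 128 are common choices
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,            # 0 is optimized in Unsloth
    bias = "none",               # "none" is optimized
    use_gradient_checkpointing = "unsloth",  # reduces VRAM use for long contexts
    random_state = 3407,
)
```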
Unsloth 2024.5 patched 40 layers with 40 QKV layers, 40 O layers and 40 MLP layers.
Data Prep
We now use the Phi-3 format for conversation-style finetunes. We use Open Assistant conversations in ShareGPT style. Phi-3 renders multi-turn conversations like below:
<s><|user|>
Hi!<|end|>
<|assistant|>
Hello! How are you?<|end|>
<|user|>
I'm doing great! And you?<|end|>
[NOTE] To train only on completions (ignoring the user's input), read TRL's docs here.
We use our get_chat_template function to get the correct chat template. We support zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old and our own optimized unsloth template.
Note that ShareGPT uses {"from": "human", "value": "Hi"} and not {"role": "user", "content": "Hi"}, so we use a mapping to convert between the two formats.
For text completions like novel writing, try this notebook.
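The dataset download and mapping logs below come from a cell roughly like this sketch. The dataset identifier and the `"phi-3"` template name are assumptions based on the description above; treat them as placeholders and check the notebook for the exact values.

```python
from unsloth.chat_templates import get_chat_template
from datasets import load_dataset

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "phi-3",  # assumed template name for the Phi-3 format
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},  # ShareGPT-style keys
)

def formatting_prompts_func(examples):
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False)
             for convo in convos]
    return {"text": texts}

# Open Assistant conversations in ShareGPT style; the exact repo id is an assumption.
dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True)
```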
[Download progress bars omitted: dataset README and data files (8.24 MB); the train split of 9,033 examples is generated and mapped.]
Let's see how the Phi-3 format works by printing the 5th element:
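A sketch of the inspection cell (the `conversations` and `text` column names follow from the mapping sketch above):

```python
print(dataset[5]["conversations"])  # raw ShareGPT-style turns
print(dataset[5]["text"])           # the same turns rendered with the Phi-3 template
```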
[{'from': 'human',
  'value': 'What is the typical wattage of bulb in a lightbox?'},
 {'from': 'gpt',
  'value': 'The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.'},
 {'from': 'human',
  'value': 'Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.'}]

<s><|user|>
What is the typical wattage of bulb in a lightbox?<|end|>
<|assistant|>
The typical wattage of a bulb in a lightbox is 60 watts, although domestic LED bulbs are normally much lower than 60 watts, as they produce the same or greater lumens for less wattage than alternatives. A 60-watt Equivalent LED bulb can be calculated using the 7:1 ratio, which divides 60 watts by 7 to get roughly 9 watts.<|end|>
<|user|>
Rewrite your description of the typical wattage of a bulb in a lightbox to only include the key points in a list format.<|end|>
If you're looking to make your own chat template, that is also possible! You must use the Jinja templating engine. We provide our own stripped-down version of the Unsloth template, which we find to be more efficient; it leverages the ChatML, Zephyr and Alpaca styles.
More info on chat templates on our wiki page!
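If you want to experiment, here is a hypothetical stripped-down Jinja template in that spirit. Whether `get_chat_template` takes a plain string or a `(template, eos_token)` pair may differ between versions, so treat the registration call as an assumption and check the wiki.

```python
from unsloth.chat_templates import get_chat_template

# Hypothetical minimal Jinja chat template (Alpaca/ChatML-flavoured).
custom_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
        "{% if message['role'] == 'user' %}"
            ">>> User: {{ message['content'] }}\n"
        "{% elif message['role'] == 'assistant' %}"
            ">>> Assistant: {{ message['content'] }}{{ eos_token }}\n"
        "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}>>> Assistant: {% endif %}"
)

# Assumption: a (template, eos_token) tuple registers a custom template.
tokenizer = get_chat_template(
    tokenizer,
    chat_template = (custom_template, "eos_token"),
    mapping = {"role": "from", "content": "value", "user": "human", "assistant": "gpt"},
)
```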
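The training logs below come from an SFTTrainer setup roughly like this sketch. The batch size of 2, gradient accumulation of 4 and 60 max steps match the stats printed below; the remaining hyperparameters are assumptions.

```python
from trl import SFTTrainer
from transformers import TrainingArguments
from unsloth import is_bfloat16_supported

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 2,
    packing = False,                 # packing can speed up training on short sequences
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,        # assumed
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
    ),
)
trainer_stats = trainer.train()
```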
max_steps is given, it will override any value given in num_train_epochs
GPU = Tesla T4. Max memory = 14.748 GB. 7.504 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 9,033 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 4
\        /    Total batch size = 8 | Total steps = 60
 "-____-"     Number of trainable parameters = 65,536,000
1620.631 seconds used for training.
27.01 minutes used for training.
Peak reserved memory = 9.623 GB.
Peak reserved memory for training = 2.119 GB.
Peak reserved memory % of max memory = 65.25 %.
Peak reserved memory for training % of max memory = 14.368 %.
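The generation below was produced by an inference cell along these lines (a sketch; the prompt is copied from the output, the rest is assumed):

```python
FastLanguageModel.for_inference(model)  # enable Unsloth's 2x faster native inference

messages = [
    {"from": "human", "value": "Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # must be True for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 64, use_cache = True)
print(tokenizer.batch_decode(outputs))
```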
['<s><|user|> Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,<|end|><|assistant|> The next number in the Fibonacci sequence is 13.<|end|>']
You can also use a TextStreamer for continuous inference - so you can see the generation token by token, instead of waiting the whole time!
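A sketch of the streaming variant, reusing the `inputs` from the previous cell:

```python
from transformers import TextStreamer

text_streamer = TextStreamer(tokenizer)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 64, use_cache = True)
```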
<s><|user|> Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8,<|end|><|assistant|> The next number in the Fibonacci sequence is 13.<|end|>
Now if you want to load the LoRA adapters we just saved for inference, set False to True:
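A sketch of that guarded reload cell; "lora_model" is an assumed name for wherever you saved the adapters:

```python
if False:  # flip to True to reload the saved LoRA adapters for inference
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model",      # the save directory or Hub repo (assumed name)
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = True,
    )
    FastLanguageModel.for_inference(model)
```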
<s><|user|> What is a famous tall tower in Paris?<|end|><|assistant|> The Eiffel Tower is a famous tall tower in Paris.<|end|>
You can also use Hugging Face's AutoPeftModelForCausalLM. Only use this if you do not have Unsloth installed: it can be hopelessly slow, since downloading 4-bit models is not supported, and Unsloth's inference is 2x faster.
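A sketch of that PEFT fallback, again with an assumed adapter path:

```python
if False:
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        "lora_model",        # directory the LoRA adapters were saved to (assumed)
        load_in_4bit = True,
    )
    tokenizer = AutoTokenizer.from_pretrained("lora_model")
```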
Saving to float16 for vLLM
We also support saving to float16 directly. Select merged_16bit for float16 or merged_4bit for int4. We also allow LoRA adapters as a fallback. Use push_to_hub_merged to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens.
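Sketches of the merged saves, guarded with `if False` so nothing runs by accident; the repo name and token are placeholders:

```python
# Merge LoRA into the base weights and save as float16 (for vLLM).
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_16bit")
if False: model.push_to_hub_merged("your_name/model", tokenizer, save_method = "merged_16bit", token = "")

# Merge to int4, or save just the LoRA adapters as a fallback.
if False: model.save_pretrained_merged("model", tokenizer, save_method = "merged_4bit")
if False: model.save_pretrained_merged("model", tokenizer, save_method = "lora")
```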
GGUF / llama.cpp Conversion
We now support saving to GGUF / llama.cpp natively! We clone llama.cpp and default to saving as q8_0; all quantization methods, such as q4_k_m, are allowed. Use save_pretrained_gguf for local saving and push_to_hub_gguf for uploading to HF.
Some supported quant methods (full list on our Wiki page):
q8_0 - Fast conversion. High resource use, but generally acceptable.
q4_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
q5_k_m - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
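Sketches of the GGUF exports (the repo name and token are placeholders):

```python
# Save locally as GGUF; q8_0 is the default, or pick a quantization_method explicitly.
if False: model.save_pretrained_gguf("model", tokenizer)
if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")

# Or push straight to the Hugging Face Hub.
if False: model.push_to_hub_gguf("your_name/model", tokenizer, quantization_method = "q4_k_m", token = "")
```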