Pixtral (12B) Vision
Installation
Unsloth
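The notebook installs Unsloth first. A minimal sketch of the install cell (the version pins are assumptions - the run logged below used Transformers 4.46.2 and Xformers 0.0.28.post3):

%%capture
!pip install unsloth
# Optional: pin the versions from the logged run to reproduce it exactly (assumed pins):
# !pip install "transformers==4.46.2" "xformers==0.0.28.post3"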
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
==((====))==  Unsloth 2024.11.9: Fast Pixtral vision patching. Transformers = 4.46.2.
   \\   /|    GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.5.1+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.28.post3. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
(Download progress bars omitted: two model shards, model-00001-of-00002.safetensors (4.97 GB) and model-00002-of-00002.safetensors (3.57 GB), plus the tokenizer and processor configuration files are downloaded and the checkpoint shards are loaded here.)
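The banner and downloads above are produced by loading the base model in 4-bit. A minimal sketch of that call, assuming Unsloth's FastVisionModel API; the exact checkpoint name is an assumption:

from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Pixtral-12B-2409",              # assumed checkpoint - swap in the one you want
    load_in_4bit = True,                     # 4-bit quantization to fit a 14.7 GB Tesla T4
    use_gradient_checkpointing = "unsloth",  # Unsloth's memory-saving checkpointing
)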
We now add LoRA adapters for parameter-efficient finetuning - this allows us to efficiently train only 1% of all parameters.
[NEW] We also support finetuning ONLY the vision part of the model, or ONLY the language part - or both! You can also choose to finetune the attention or the MLP layers!
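A minimal sketch of that adapter setup via FastVisionModel.get_peft_model (the rank and alpha values are illustrative):

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers     = True,  # set False to freeze the vision tower
    finetune_language_layers   = True,  # set False to freeze the language model
    finetune_attention_modules = True,  # LoRA on attention projections
    finetune_mlp_modules       = True,  # LoRA on MLP projections
    r = 16,           # LoRA rank: higher = more trainable parameters
    lora_alpha = 16,  # scaling factor, commonly set equal to r
    lora_dropout = 0,
    bias = "none",
    random_state = 3407,
)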
(Dataset download and preparation: README.md, train-00000-of-00001.parquet (357 MB), and test-00000-of-00001.parquet (57.6 MB) are fetched; the train split generates 8,552 examples and the test split 1,364.)
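Those files are fetched by a datasets load call; a sketch, where the dataset ID is a hypothetical placeholder for the image-QA dataset used here:

from datasets import load_dataset

# "<your-dataset-id>" is a placeholder - the run above used a dataset with
# 'messages'/'images' columns, 8,552 train examples, and 1,364 test examples.
dataset = load_dataset("<your-dataset-id>", split = "train")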
To prepare the dataset, all vision finetuning tasks should be formatted as a list of messages like this:
[
    { "role": "user",
      "content": [{"type": "text", "text": Q}, {"type": "image", "image": image}]
    },
    { "role": "assistant",
      "content": [{"type": "text", "text": A}]
    },
]
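If your raw data is not already in this shape, a small converter works; a sketch, where the sample["question"], sample["answer"], and sample["image"] field names are assumptions about your raw schema:

def convert_to_conversation(sample):
    # Wrap one raw (image, question, answer) sample in the messages format above.
    conversation = [
        {"role": "user",
         "content": [
             {"type": "text",  "text": sample["question"]},
             {"type": "image", "image": sample["image"]},
         ]},
        {"role": "assistant",
         "content": [{"type": "text", "text": sample["answer"]}]},
    ]
    return {"messages": conversation}

converted_dataset = [convert_to_conversation(sample) for sample in dataset]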
Let's take a quick look at the dataset. We'll inspect the third example to see its image and the conversation attached to it.
Dataset({
    features: ['messages', 'images'],
    num_rows: 8552
})
[{'content': [{'index': 0, 'text': None, 'type': 'image'},
              {'index': None,
               'text': '\nWhat makes the train in the image unique compared to other trains?',
               'type': 'text'}],
  'role': 'user'},
 {'content': [{'index': None,
               'text': 'What sets the train in the image apart from other trains is the presence of a distinctive graffiti on the side of it. This graffiti is a rendition of Edvard Munch\'s famous painting, "The Scream." This street art adds a unique artistic and unconventional appearance to the train, and it attracts attention due to the reference to a well-known piece of art. It is not common for trains to have such artwork on their outer surface, especially a representation of a famous painting.',
               'type': 'text'}],
  'role': 'assistant'}]
max_steps is given, it will override any value given in num_train_epochs
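The log below comes from a TRL SFTTrainer run with Unsloth's vision data collator. A minimal sketch of the setup, assuming standard Unsloth/TRL APIs - the batch size, accumulation steps, and max_steps mirror the logged numbers, while the remaining hyperparameters are illustrative:

from trl import SFTTrainer, SFTConfig
from unsloth import is_bf16_supported
from unsloth.trainer import UnslothVisionDataCollator

FastVisionModel.for_training(model)  # switch the adapters into training mode

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    data_collator = UnslothVisionDataCollator(model, tokenizer),  # required for vision
    train_dataset = dataset,
    args = SFTConfig(
        per_device_train_batch_size = 1,
        gradient_accumulation_steps = 4,  # total batch size = 4
        max_steps = 30,                   # overrides num_train_epochs, per the warning above
        learning_rate = 2e-4,
        fp16 = not is_bf16_supported(),   # the T4 above lacks bfloat16
        bf16 = is_bf16_supported(),
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",
        report_to = "none",
        # Needed so TRL leaves the vision inputs alone:
        remove_unused_columns = False,
        dataset_text_field = "",
        dataset_kwargs = {"skip_prepare_dataset": True},
        max_seq_length = 2048,
    ),
)
trainer_stats = trainer.train()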
GPU = Tesla T4. Max memory = 14.748 GB. 8.031 GB of memory reserved.
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 8,552 | Num Epochs = 1
O^O/ \_/ \    Batch size per device = 1 | Gradient Accumulation steps = 4
\        /    Total batch size = 4 | Total steps = 30
 "-____-"     Number of trainable parameters = 18,677,760
🦥 Unsloth needs about 1-3 minutes to load everything - please wait!
963.6424 seconds used for training.
16.06 minutes used for training.
Peak reserved memory = 12.643 GB.
Peak reserved memory for training = 4.612 GB.
Peak reserved memory % of max memory = 85.727 %.
Peak reserved memory for training % of max memory = 31.272 %.
Inference
Let's run the model! You can change the instruction and input - leave the output blank!
We use min_p = 0.1 and temperature = 1.5. Read this Tweet for more information on why.
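A sketch of the generation cell, assuming Unsloth's inference helpers; the image index and instruction text are illustrative:

from transformers import TextStreamer

FastVisionModel.for_inference(model)  # enable Unsloth's faster inference path

image = dataset[2]["images"][0]       # illustrative: the third example's image
instruction = "Is there something interesting about this image?"  # assumed prompt

messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": instruction},
    ]},
]
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens = False,
    return_tensors = "pt",
).to("cuda")

text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(
    **inputs, streamer = text_streamer,
    max_new_tokens = 128, use_cache = True,
    temperature = 1.5, min_p = 0.1,  # the sampling settings discussed above
)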
Expanding inputs for image tokens in LLaVa should be done in processing. Please add `patch_size` and `vision_feature_select_strategy` to the model's processing config or set directly with `processor.patch_size = {{patch_size}}` and `processor.vision_feature_select_strategy = {{vision_feature_select_strategy}}`. Using processors without these attributes in the config is deprecated and will throw an error in v4.47.
Yes, there is something interesting about this image. It shows a creative and eye-catching design on the side of a vehicle, likely a trailer or a large truck, featuring a stylized depiction of a rocket with wings and the words "Space Force" written on it. This design is visible from the perspective of someone
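The saved-file path below comes from writing the LoRA adapters locally; a minimal sketch:

model.save_pretrained("lora_model")      # saves only the LoRA adapters, not merged weights
tokenizer.save_pretrained("lora_model")  # also writes the processor/tokenizer configs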
['lora_model/processor_config.json']
Now, if you want to load the LoRA adapters we just saved for inference, change False to True in the cell below:
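A sketch of that cell, assuming the same FastVisionModel API:

if False:  # change to True to reload the adapters saved above
    from unsloth import FastVisionModel
    model, tokenizer = FastVisionModel.from_pretrained(
        model_name = "lora_model",  # the directory we saved to
        load_in_4bit = True,
    )
    FastVisionModel.for_inference(model)  # enable faster inference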
The image shows a train traveling through a rural area with a tall tower in the background.</s>
Saving to float16 for vLLM
We also support saving to float16 directly. Select merged_16bit for float16. Use push_to_hub_merged to upload to your Hugging Face account! You can create a personal access token at https://huggingface.co/settings/tokens.
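A sketch of both options, assuming Unsloth's merged-save helpers; the output directory, repo name, and token are placeholders:

# Merge the LoRA adapters into the base weights and save locally in float16:
if False:
    model.save_pretrained_merged("pixtral_finetune", tokenizer, save_method = "merged_16bit")

# Or push the merged float16 model straight to your Hugging Face account:
if False:
    model.push_to_hub_merged(
        "YOUR_USERNAME/pixtral_finetune", tokenizer,
        save_method = "merged_16bit",
        token = "hf_...",  # placeholder: your personal access token
    )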