Train and Debug YOLOX Models with Weights & Biases 🪄🐝

In this colab, we'll demonstrate how to use the W&B integration with YOLOX, a real-time object detection framework, to track model metrics, log checkpoints, and visualize predictions.

It can be done with just one added argument to your training command!

	python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o [--cache] --logger wandb

To log metrics and checkpoints to W&B during training, the wandb client now integrates directly with YOLOX. Using wandb for logging automatically adds all the metrics to your W&B dashboard, saves the model at every evaluation step, tags the checkpoint with the best average precision, and shows you visualizations of the predicted bounding boxes along with their confidence scores!

Setup 🖥

We begin by downloading the YOLOX GitHub repository and a subset of the COCO dataset for object detection.

Below, we'll use this dataset to train a model to detect objects in images.

We also install all the requirements for YOLOX and wandb.

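The setup cells were stripped from this export; they can be sketched roughly as below. This assumes the upstream Megvii-BaseDetection/YOLOX repository layout — in Colab, each line would be prefixed with `!`:

```shell
# Clone YOLOX and install its dependencies
git clone https://github.com/Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip install -r requirements.txt
pip install -v -e .

# Install and authenticate the wandb client
pip install wandb
wandb login
```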

Downloading the dataset

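The download cells were also stripped here, and the exact COCO subset this notebook uses is not specified. As an illustrative substitute, the official COCO 2017 validation images and annotations can be fetched into the `datasets/COCO` layout that YOLOX expects:

```shell
# Fetch COCO val2017 images and annotations (a stand-in for the
# smaller subset used in the notebook) into YOLOX's expected layout
mkdir -p datasets/COCO
cd datasets/COCO
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -q val2017.zip
unzip -q annotations_trainval2017.zip
```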

Training 🏋️

Using wandb just requires the command-line argument --logger wandb. This automatically turns on the wandb logger for your experiment, and further arguments can be added:

  1. wandb-project: The project in which the experiment is being run.
  2. wandb-run: The name of the wandb run.
  3. wandb-entity: The entity (user or team) starting the run.
  4. wandb-log_checkpoints: True/False, whether to log model checkpoints to the wandb dashboard.
  5. wandb-num_eval_images: Number of images from the validation set to be logged to wandb. Predictions corresponding to these can be visualized on the dashboard. No images are logged if the value is 0, and all are logged if the value is -1.

and more!
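Putting these arguments together, a training command might look like the following sketch. The project and entity names are placeholders, and the wandb-* arguments follow the --logger wandb flag as plain key-value pairs:

```shell
# Train YOLOX-s on one GPU with W&B logging enabled;
# values after --logger wandb are illustrative placeholders
python tools/train.py -n yolox-s -d 1 -b 16 --fp16 -o \
    --logger wandb \
    wandb-project yolox-demo \
    wandb-entity my-team \
    wandb-log_checkpoints True \
    wandb-num_eval_images 10
```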

Reproduce the results

The different YOLOX models can be trained from scratch with the entire process being logged to W&B. In this case we train on a much smaller subset of the COCO dataset.


This W&B dashboard shows how all the metrics vary over time: average precision is logged against the epoch, and losses are logged against the step.


The checkpoints are logged to the wandb dashboard and tagged with the epoch and with whether the checkpoint is the best model so far. Metadata is also logged alongside each checkpoint, consisting of the optimizer state and the model's average precision on the validation set.

The first num_eval_images images from the validation set are logged to the dashboard, along with the corresponding predictions and their confidence scores, for visualization!


Finetuning a pretrained model

You can also finetune a pretrained model on a custom dataset. In this case, we continue working with the subset of COCO.


We will finetune the trained model from the previous step. To do that, we download the logged checkpoint artifact using the wandb API.

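These cells can be sketched as below using the wandb CLI. The artifact path (entity/project/artifact:alias) is a placeholder for the model artifact logged by the previous run, and the checkpoint filename follows YOLOX's best_ckpt.pth convention:

```shell
# Download the checkpoint artifact logged by the previous run
# (the entity/project/artifact:alias path is a placeholder)
wandb artifact get my-team/yolox-demo/run-model:best --root ./checkpoints

# Finetune from the downloaded checkpoint via YOLOX's -c flag,
# again logging the run to W&B
python tools/train.py -n yolox-s -d 1 -b 16 --fp16 -o \
    -c ./checkpoints/best_ckpt.pth \
    --logger wandb wandb-project yolox-demo
```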

Using the trained model

The cell below runs detection using a pretrained model on a given image.
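The detection cell can be sketched with YOLOX's demo script; the checkpoint path is a placeholder for whichever weights you trained or downloaded above:

```shell
# Run single-image inference and save the annotated result
python tools/demo.py image -n yolox-s \
    -c ./checkpoints/best_ckpt.pth \
    --path dog.jpeg --conf 0.25 --nms 0.45 --tsize 640 \
    --save_result --device gpu
```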


Resources 📚

Questions about W&B❓

If you have any questions about using W&B to track your model performance and predictions, please contact support@wandb.com.