Metrics Software


The program we will use to obtain the metrics for the YOLOv3 model is review_object_detection_metrics, developed by Rafael Padilla and available on GitHub. This program has the advantage of providing a User Interface (UI), so it's easier to work with, especially for someone with less experience with command-line tools. I should also note that this program can work on Windows, but you will need to install Anaconda there as well. If you are on Ubuntu with Anaconda already installed, you can proceed from there.

Installation process


Unfortunately, the project is not particularly recent, so the installation process is not entirely straightforward. There are some incompatibilities between libraries, so I will guide you through the process so you can install the program today.

First, you should clone the repository to a working directory such as the Desktop. Then you will need to run the program inside a virtual environment. If you have been following the tutorial, you might have already installed Anaconda 2020.02; if not, refer to the Anaconda download chapter. You will need to create a virtual environment based on Python 3.9 for the program to work.
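For example, assuming the repository's usual GitHub location, cloning it to the Desktop would look like this:

cd ~/Desktop
git clone https://github.com/rafaelpadilla/review_object_detection_metrics.git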

Next, we will focus on the environment.yml file of the repository. This file tells Anaconda which packages to install when setting up the virtual environment. We will be changing this file, as the OpenCV and Qt libraries are the ones causing installation problems. The plan is to remove them from the file and then install them separately. In the end, the file should look like this:

# conda env create -n <env_name> --file <this_file>.yml
channels:
  - conda-forge
  - defaults
dependencies:
  - ipython
  - jupyter
  - matplotlib
  - notebook
  - numpy=1.19
  - pandas=1.1
  - pip
  - pytest=6.1
  - python==3.9
  - pip:
    - awscli==1.18.180
    - chardet==3.0.4
    - click==7.1.2
    - flake8==3.8.4
    - python-dotenv==0.15.0
    - pyyaml==5.3.1
    - sphinx==3.3.1

Now we just create the Python 3.9 based virtual environment, pointing to the modified .yml file. After creating the environment, you just have to activate it and install the two missing libraries using pip. Adjust the path based on the location of the repository on your machine.

conda env create -n metrics_env --file {path/to/repository/location}/environment.yml --verbose
conda activate metrics_env
pip install opencv-python==4.5.5.62 pyqt5==5.12 --verbose
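Before moving on, you can quickly confirm that both libraries were installed into the environment (just a sanity check; the printed version should match the one installed above):

python -c "import cv2, PyQt5; print(cv2.__version__)"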

Everything is set, so now you just need to follow the steps described on the program's GitHub page to install and launch it. Use these commands:

cd {path/to/repository/location}
python setup.py install
python run.py

This will launch a window with the program's interface. I will guide you through the usage of the program so you can obtain the metrics for your YOLOv3 model.

Using the program to obtain metrics


Looking at the interface, you can probably tell that the upper part is for defining the paths to the ground-truth images and annotations, the center part is for indicating the path to your model's detections, and the bottom part is for defining the path to the output metrics.

To fill in these items, you just need to indicate the path to the COCO validation dataset images that we talked about earlier. You also need to give the path to the ground-truth annotations, represented by the file instances_val2017.json. Note that you can open a window showing the bounding boxes drawn on the validation images so you can confirm everything is in the right place:
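As an illustration, assuming the COCO files were extracted as described in the dataset chapter, the two ground-truth paths would point to something like this (exact directory names may differ on your machine):

val2017/                              folder with the validation images
annotations/instances_val2017.json    ground-truth annotations in COCO format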

For the model detections in the center, you need to select the path to the folder containing all the text files with the detections, and you also need a file with the COCO classes. In this case we use all 80 COCO classes, but this text file exists in case your model is restricted to a specific set of classes. You also need to select the annotation format. You can see there are several options, and you need to select the one I described earlier: class ID, confidence, then left corner, top corner, width, and height in absolute values. Once again, you can visualize the detections and even compare them with the ground truths. I think this is a pretty cool feature, check it out!
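To make the format concrete, here is what one hypothetical detection file could contain (one .txt file per image, one detection per line; the numbers are made up for illustration):

000000000139.txt
0 0.87 204 316 96 180
56 0.64 12 45 330 210

The first value is the class ID (an index into the COCO classes file), the second is the detection confidence, and the remaining four are the left corner, top corner, width, and height in pixels.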

Lastly, we only need to set the path to a folder where you want to store the results, select the metrics you want to obtain, and define a reasonable IOU threshold. For the threshold, I chose 50% so that most of the detections are included. Then you click RUN, and after a minute you will have the results. Your program should be set up similarly to this image:

After a minute or less, you will be presented with a window containing the results. I recommend copying them to a text file so you don't lose them.
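If you are wondering what the 50% IOU threshold actually controls: a detection is only counted as a true positive if its bounding box overlaps a ground-truth box by at least that fraction. Here is a minimal sketch of the computation, using the same left, top, width, height box format as the detection files (for illustration only; this is not the program's own code):

def iou(box_a, box_b):
    # Boxes are (left, top, width, height) in absolute pixels.
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Width/height of the intersection rectangle (zero if no overlap).
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A detection shifted a few pixels from its ground truth still passes the 50% threshold:
print(iou((204, 316, 96, 180), (210, 320, 96, 180)))  # ~0.85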

In the output folder you defined on the interface, you will also find all the other information, including the AP for each individual class.

Let's briefly discuss the results in the next chapter. If you want to practice this part, you can try the whole process with the Tiny YOLO object detector!
