ROBOFLOW - train & test with python

victor_dalet - Aug 25 - Dev Community

Roboflow is a platform for annotating images for use in object detection AI.

I use this platform for C2SMR (c2smr.fr), my computer vision association for maritime rescue.

In this article, I show you how to use this platform and train your model with Python.

You can find more sample code on my GitHub: https://github.com/C2SMR/detector


I - Dataset

To create your dataset, go to https://app.roboflow.com/ and start annotating your images as shown in the following image.

In this example, I outline all the swimmers so the model can predict their positions in future images.
To get a good result, draw the bounding box tightly around each swimmer so that it surrounds the object correctly.


You can also use a public Roboflow dataset; to find one, check https://universe.roboflow.com/

II - Training

For the training stage, you can use Roboflow directly, but after your third training you have to pay, which is why I'm showing you how to do it on your own machine.

The first step is to import your dataset. To do this, install the Roboflow library.

pip install roboflow

To create a model, you need to use the YOLO algorithm, which you can import with the ultralytics library.

pip install ultralytics

In my script, I use the following command:

py train.py api-key project-workspace project-name project-version nb-epoch size_model

You must provide:

  • your Roboflow access key (API key)
  • the workspace name
  • the Roboflow project name
  • the project dataset version
  • the number of epochs to train the model
  • the neural network size
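To make the argument order concrete, here is how those values map to `sys.argv` indices in the script below. The key, workspace, and project names here are placeholders, not real values:

```python
import sys

# Simulated command line (placeholder values):
# py train.py API_KEY my-workspace swimmers 3 50 n
sys.argv = ["train.py", "API_KEY", "my-workspace", "swimmers", "3", "50", "n"]

api_key = sys.argv[1]       # Roboflow access key
workspace = sys.argv[2]     # workspace name
project_name = sys.argv[3]  # Roboflow project name
version = int(sys.argv[4])  # dataset version
epochs = int(sys.argv[5])   # number of training epochs
model_size = sys.argv[6]    # YOLOv8 size: n, s, m, l, x, or ALL

print(epochs, model_size)
```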

Initially, the script downloads yolov8-obb.pt, the default YOLOv8-OBB weights pretrained on generic data, to speed up training.

import sys
from roboflow import Roboflow
from ultralytics import YOLO
import yaml


class Main:
    rf: Roboflow
    project: object
    dataset: object
    model: object
    results: object
    model_size: str

    def __init__(self):
        self.model_size = sys.argv[6]
        self.import_dataset()
        self.train()

    def import_dataset(self):
        # Download the dataset in YOLOv8-OBB format from Roboflow
        self.rf = Roboflow(api_key=sys.argv[1])
        self.project = self.rf.workspace(sys.argv[2]).project(sys.argv[3])
        self.dataset = self.project.version(sys.argv[4]).download("yolov8-obb")

        # Point the dataset configuration at its download location
        with open(f'{self.dataset.location}/data.yaml', 'r') as file:
            data = yaml.safe_load(file)

        data['path'] = self.dataset.location

        with open(f'{self.dataset.location}/data.yaml', 'w') as file:
            yaml.dump(data, file, sort_keys=False)

    def train(self):
        list_of_models = ["n", "s", "m", "l", "x"]
        if self.model_size in list_of_models:
            # Train a single model of the requested size
            self.model = YOLO(f"yolov8{self.model_size}-obb.pt")
            self.results = self.model.train(
                data=f"{self.dataset.location}/data.yaml",
                epochs=int(sys.argv[5]), imgsz=640)
        elif self.model_size == "ALL":
            # Train every size, from nano (n) to extra-large (x)
            for model_size in list_of_models:
                self.model = YOLO(f"yolov8{model_size}-obb.pt")
                self.results = self.model.train(
                    data=f"{self.dataset.location}/data.yaml",
                    epochs=int(sys.argv[5]), imgsz=640)
        else:
            print("Invalid model size")


if __name__ == '__main__':
    Main()

III - Display

After training the model, you get the files best.pt and last.pt, which contain the trained weights.

With the ultralytics library, you can again import YOLO, load your weights, and then run them on your test video.
In this example, I use the tracking function to get an ID for each swimmer.

import sys

import cv2
from ultralytics import YOLO


def main():
    # Arguments: path to the test video, path to the weights (e.g. best.pt)
    cap = cv2.VideoCapture(sys.argv[1])
    model = YOLO(sys.argv[2])

    while True:
        ret, frame = cap.read()
        if not ret:  # end of video or read error
            break

        # Track objects across frames so each swimmer keeps its ID
        results = model.track(frame, persist=True)
        res_plotted = results[0].plot()
        cv2.imshow("frame", res_plotted)

        if cv2.waitKey(1) == 27:  # Esc key quits
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

To analyze the predictions, you can obtain the results as JSON, as follows.

import json

results = model.track(frame, persist=True)
results_json = json.loads(results[0].tojson())
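Each entry in the parsed list is a dictionary describing one detection. With Ultralytics' tojson() output, the keys typically include name, class, confidence, and box, though the exact fields depend on the model type. The snippet below is an illustration using a hand-written payload in that shape, not real model output:

```python
import json

# Example payload in the shape produced by results[0].tojson()
# (illustrative values, not real model output)
raw = '''[
  {"name": "swimmer", "class": 0, "confidence": 0.87,
   "box": {"x1": 120.0, "y1": 45.0, "x2": 210.0, "y2": 160.0}},
  {"name": "swimmer", "class": 0, "confidence": 0.42,
   "box": {"x1": 300.0, "y1": 80.0, "x2": 350.0, "y2": 170.0}}
]'''

detections = json.loads(raw)

# Keep only confident detections
confident = [d for d in detections if d["confidence"] >= 0.5]
for d in confident:
    b = d["box"]
    print(f'{d["name"]}: ({b["x1"]}, {b["y1"]}) -> ({b["x2"]}, {b["y2"]})')
```

From there you can, for example, log each swimmer's position over time or trigger an alert when a detection leaves a safe zone.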