Label Studio is an annotation tool that comes in really handy when dealing with object detection datasets. A major feature in my workflow is the ability to upload “pre-annotations”, which are used when a task is first opened: a draft is automatically created with all the objects present in the pre-annotation.

To speed up labeling, I often use this pre-annotation feature to pre-label all images with a zero-shot model (such as YOLO-World or SAM 3). Once I’ve annotated enough images, I train a first object detection model, run it on the full dataset, and import its predictions as pre-annotations, as sketched below.
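For reference, here is a minimal sketch of such a conversion, assuming detections arrive as pixel-space boxes with a label and a confidence (the from_name/to_name values must match the RectangleLabels and Image names in your labeling configuration; the file name and model_version are placeholders):

import json
import uuid

def to_ls_result(det, img_w, img_h):
    # One detection -> one Label Studio "rectanglelabels" result.
    # Label Studio expects box coordinates as percentages of the image size.
    x1, y1, x2, y2 = det["box"]
    return {
        "id": uuid.uuid4().hex[:10],
        "type": "rectanglelabels",
        "from_name": "label",   # must match the labeling configuration
        "to_name": "image",
        "original_width": img_w,
        "original_height": img_h,
        "image_rotation": 0,
        "value": {
            "x": 100 * x1 / img_w,
            "y": 100 * y1 / img_h,
            "width": 100 * (x2 - x1) / img_w,
            "height": 100 * (y2 - y1) / img_h,
            "rotation": 0,
            "rectanglelabels": [det["label"]],
        },
    }

def to_ls_task(image_url, detections, img_w, img_h):
    # One image -> one importable task with a single prediction.
    return {
        "data": {"image": image_url},
        "predictions": [{
            "model_version": "yolo-world-v1",
            "result": [to_ls_result(d, img_w, img_h) for d in detections],
        }],
    }

# Example: one detection on a 1000x600 image.
detections = [{"box": (120, 80, 420, 300), "label": "car", "confidence": 0.28}]
task = to_ls_task("https://example.com/img_001.jpg", detections, 1000, 600)
with open("preannotations.json", "w") as f:
    json.dump([task], f, indent=2)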

However, Label Studio had one serious shortcoming for me: according to the documentation, it did not seem possible to attach the model’s confidence score to each detected object. Knowing each object’s score unlocks the possibility of focusing specifically on the images the current model is most uncertain about. By focusing on uncertain predictions, we maximize the effect of each new annotation on the final mAP score.

Luckily, uploading an individual score for each object turns out to be possible: a score can be provided for each result in the JSON:

{
  "id": "1",
  "type": "rectanglelabels",
  "from_name": "label",
  "to_name": "image",
  "original_width": 1000,
  "original_height": 600,
  "image_rotation": 0,
  "score": 0.28,
  ...
}
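With a converter like the sketch above, this is a one-line addition to each result dict:

def to_ls_result_with_score(det, img_w, img_h):
    result = to_ls_result(det, img_w, img_h)   # converter sketched earlier
    result["score"] = float(det["confidence"])  # per-object confidence
    return result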

After import, in the Label Studio UI, each individual object score is now displayed in the “Regions” panel:

[Image: Label Studio UI]

Furthermore, using the API, we can now list all annotation tasks containing at least one object whose confidence score falls below a given threshold.
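A minimal sketch with requests, assuming the paginated /api/tasks endpoint with fields=all returns each task’s predictions inline (the exact response shape may vary across Label Studio versions; LS_URL, API_KEY, PROJECT_ID, and THRESHOLD are placeholders):

import requests

LS_URL = "https://label-studio.example.com"  # placeholder: your instance URL
API_KEY = "YOUR_API_TOKEN"                   # placeholder: your access token
PROJECT_ID = 1
THRESHOLD = 0.5

headers = {"Authorization": f"Token {API_KEY}"}
uncertain_task_ids = []
page = 1
while True:
    resp = requests.get(
        f"{LS_URL}/api/tasks",
        headers=headers,
        params={"project": PROJECT_ID, "fields": "all",
                "page": page, "page_size": 100},
    )
    if resp.status_code == 404:  # past the last page
        break
    resp.raise_for_status()
    tasks = resp.json().get("tasks", [])
    if not tasks:
        break
    for task in tasks:
        for pred in task.get("predictions", []):
            scores = [r["score"] for r in pred.get("result", []) if "score" in r]
            if scores and min(scores) < THRESHOLD:
                uncertain_task_ids.append(task["id"])
                break
    page += 1

print(f"{len(uncertain_task_ids)} tasks have a detection below {THRESHOLD}")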