
Annotating images for YOLO training with LabelMe

Pretrained YOLO models detect generic object classes. If you're detecting defects on a factory line or identifying crops in drone footage, you need a model trained on your own images. That means annotating them yourself.

Annotate with LabelMe, export to YOLO format, train with Ultralytics. Everything here runs locally.

Annotate

Install LabelMe and open your image directory. Ctrl+R draws bounding boxes (for YOLO detection), Ctrl+N draws polygons (for YOLO segmentation). The starter guide walks through this in detail.

LabelMe's AI text-to-annotation can also propose bounding boxes from class names automatically, running locally after a one-time model download.

Export to YOLO format

LabelMe saves annotations as JSON. YOLO expects .txt files with normalized coordinates. The toolkit (labelmetk) converts between the two:

pip install labelmetk
labelmetk export-to-yolo your_dataset/ --class-names crack,normal

This gives you images/, labels/, and classes.txt — ready for training. Polygons are converted to bounding boxes automatically.
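For reference, each line in a YOLO label file is `class_id x_center y_center width height`, with all coordinates normalized to [0, 1] by the image size, while LabelMe's JSON stores rectangles as two corner points in pixels. A minimal sketch of that conversion (the function name is illustrative, not part of labelmetk):

```python
def rect_to_yolo(points, img_w, img_h, class_id):
    """Convert a LabelMe rectangle (two corner points, in pixels)
    to one YOLO label line: class_id x_center y_center width height."""
    (x1, y1), (x2, y2) = points
    x_min, x_max = sorted((x1, x2))
    y_min, y_max = sorted((y1, y2))
    # YOLO uses the box center and size, normalized by image dimensions.
    x_c = (x_min + x_max) / 2 / img_w
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a 100x50 box with top-left corner (200, 100) in a 640x480 image.
print(rect_to_yolo([(200, 100), (300, 150)], 640, 480, 0))
# → 0 0.390625 0.260417 0.156250 0.104167
```

The exporter handles this (and the polygon-to-box reduction) for you; the sketch is only to show what the numbers in labels/ mean.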

See the export-to-yolo docs for all options.

Train with Ultralytics

Point Ultralytics at the exported directory and train:

pip install ultralytics
yolo train data=dataset.yaml model=yolo11n.pt epochs=50 imgsz=640

See the Ultralytics docs for dataset.yaml format and training parameters.
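As a starting point, a minimal dataset.yaml for a two-class dataset might look like the sketch below. The paths and split layout are assumptions to adapt to your exported directory; with only one split you can point val at the same images, but a held-out validation split is better practice.

```yaml
# dataset.yaml — a minimal sketch; adjust paths to your layout
path: your_dataset        # dataset root
train: images             # training images, relative to path
val: images               # validation images (use a separate split if you have one)
names:
  0: crack
  1: normal
```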

Offline

Everything in this pipeline runs on your machine. Nothing leaves your disk. More on this in Why offline-first annotation matters.

LabelMe Pro includes the AI annotation suite and export toolkit. $79, one-time.

LabelMe is an AI-powered annotation tool that runs offline.

Try LabelMe for free