Help Overview

RectLabel version 72

- Added "Use English as an app language" option on the settings dialog.

- Added support for YOLOv5 v7.0 and its instance segmentation models. You can auto label using the instance segmentation models.

- Fixed the problem of not correctly saving mask images when you auto label all images using the segmentation models.

- Fixed the problem of showing a "Not found" message after running depth estimation models.

- Removed the animation when long text is shown on the label table.

Key features

Draw bounding boxes and read/write in YOLO text format

Draw oriented bounding boxes in aerial images

Draw polygons, cubic bezier curves, line segments, and points

Draw keypoints with a skeleton

Draw pixels with brushes and superpixels

Read/write in PASCAL VOC xml format

Export to CreateML object detection and image classification formats

Export to COCO JSON, YOLO, DOTA, and CSV formats

Export indexed color mask image and grayscale mask images

Settings for objects, attributes, hotkeys, and labeling fast

Customize the label dialog to combine with attributes

1-click buttons speed up selecting the object name

Auto-suggest works for more than 5000 object names

Search object/attribute names and image names

Automatic labeling using Core ML models

Video to image frames, augment images, etc.

Supports English, Chinese, Korean, and 11 other languages

Report issues

Post the problem to our GitHub issues page.

Have questions?

Send an email to support@rectlabel.com

Thank you.

Requested features

  • iPadOS version.
  • Windows version.

Troubleshooting

If RectLabel does not launch correctly, delete the cache files.

  1. Back up the settings file "~/Library/Containers/RectLabel/Data/settings_labels.json".
  2. Delete the cache files in "~/Library/Containers/RectLabel/".
  3. Empty the Trash and launch RectLabel.

How to solve purchase errors when you subscribe:

  1. Delete RectLabel via the Launchpad app.
  2. Log out of the Mac App Store and iTunes.
  3. Reboot your Mac.
  4. Log back in to the Mac App Store and install RectLabel.

Tutorials citing RectLabel

Papers citing RectLabel

Settings

Projects

You can switch between different objects/attributes settings for different labeling tasks.

To add projects from another settings file, import the settings file.

To use a project, check its "Primary" checkbox.

To duplicate a project, right click on the row and choose "Duplicate".

Settings for objects, attributes, hotkeys, and labeling fast

Objects

The objects table describes each object name and its object index.

To add objects from a text file, use "Import object names file".

You can assign 0-9 num keys and A-Z alphabet keys to objects.

To duplicate an object, right click on the row and choose "Duplicate".

Right click on the objects table header to open the "Sort alphabetically" and "Clear table" menus.


When you label "sneakers" which uses 2 attributes "brand" and "color".

Settings for objects, attributes, hotkeys, and labeling fast

Attributes

The label "sneakers-converse-yellow" is a combination of the object and attributes.

'-' is used as the separator, so '-' in object names and attribute names is replaced with '_'.

The prefix is used as '-' + prefix + attribute name.

For objects that are not using attributes, '-' in the object name is not replaced with '_'.
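
As a rough illustration of this naming rule, the sketch below composes a label name from an object and its attributes. It is only an illustration of the rule described above; the build_label function and its prefix handling are hypothetical, not part of RectLabel.

# Hypothetical sketch of the label name rule described above.
def build_label(object_name, attributes, prefixes=None):
    # '-' is the separator, so '-' inside names is replaced with '_'.
    parts = [object_name.replace("-", "_")]
    for i, attribute in enumerate(attributes):
        prefix = prefixes[i] if prefixes else ""
        parts.append(prefix + attribute.replace("-", "_"))
    return "-".join(parts)

print(build_label("sneakers", ["converse", "yellow"]))  # sneakers-converse-yellow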

The attribute types are "Single select", "Multiple select", and "Text input".

To change the name on the items table, single click, double click, or press the enter key on the selected item.

For "Single select" type, you can use an empty string item so that the default label name becomes the object name.

You can assign 0-9 num keys and A-Z alphabet keys to items.

To duplicate an attribute, right click on the row and choose "Duplicate".

Right click on the attributes table header to open the "Sort alphabetically" and "Clear table" menus.

Settings for objects, attributes, hotkeys, and labeling fast

Hotkeys

Customize the hotkeys to make your labeling work faster.

Settings for objects, attributes, hotkeys, and labeling fast

Label fast

"Auto save" is to skip the confirm dialog when save.

"Auto copy from previous image" is to copy annotations from the previous image.

"Skip label dialog when create" is to skip the label dialog when create.

"Close label dialog when select" is to skip clicking the OK button on the label dialog.

"Use 0-9 and a-z hotkeys" is to change the label name using the hotkey.

"Use 1-click buttons" is to show 1-click buttons of all objects on the label dialog.

"Maintain zoom and position" is to maintain zoom and position when you change the image.

"Label format" is to change the label format to read/write in the PASCAL VOC xml format or YOLO text format.

For the PASCAL VOC xml format, you can draw everything and you can use attributes.

For the YOLO text format, you can draw bounding boxes and save in the YOLO text format, you can draw rotated boxes and save in the DOTA OBB text format, and you cannot use attributes.

Settings for objects, attributes, hotkeys, and labeling fast

Others

"Use difficult tag" is to show the difficult checkbox on the label dialog.

"Save as floating-point values" is to save coordinates as floating-point values.

"Fix image position" is to fix the image position.

"Show circle edit points" is to show circle edit points instead of rectangle edit points.

"Show all edit points" is to show edit points of all objects.

"Show edit points between box corners" is to show edit points between box corners.

"Click 4 points when draw boxes" is to draw a box clicking xmin, xmax, ymin, and ymax of the object.

"Use English as an app language" is to change the app language to English.

"Sort images" is to sort images by Alphabetic, Numeric, and Last modified.

Settings for objects, attributes, hotkeys, and labeling fast

Export settings file

You can export the current settings file and import it on another computer.

Import settings file

When you import the settings file, the projects are added to "Projects" table.

To use a project, check its "Primary" checkbox.

File menu

Open images folder and annotations folder

You can open multiple images and annotations folders.

RectLabel reads/writes in the PASCAL VOC xml or YOLO text format.

├── images0
│   ├── 0.jpg
│   └── 1.jpg
├── annotations0
│   ├── 0.xml or 0.txt
│   └── 1.xml or 1.txt
├── images1
│   ├── 2.jpg
│   └── 3.jpg
└── annotations1
    ├── 2.xml or 2.txt
    └── 3.xml or 3.txt

In RectLabel, each image is rotated according to its Exif orientation flag and shown in the front orientation.

To convert images to the front orientation, use "Resize". Leave "Image size max" empty, and the images are saved in the front orientation with the same image size.


Image file names which include "_pixels" are skipped because the suffix is used in the pixels image file.

Image file names which include "_depth" are skipped because the suffix is used in the depth image file.

To copy the current image file name, click on the image file name shown on the top-left corner.

Open images folder

For the annotations folder, the "images/annotations" folder is always used.

└── images
    ├── annotations
    ├── 0.jpg
    └── 1.jpg

If you need to open multiple folders, you can use symbolic links.

After opening images and annotations folders that include symbolic links, use "Open destination folder for symbolic links" to open the destination folder containing the image and annotation files that the symbolic links point to, so that RectLabel can read/write them.

To create symbolic links from an images folder into the symbolic links folder, use symbolic_links.sh.

./symbolic_links.sh images0 symbolic_links
./symbolic_links.sh images1 symbolic_links

├── images0
│   ├── 0.jpg
│   └── 1.jpg
├── images1
│   ├── 2.jpg
│   └── 3.jpg
└── symbolic_links
    ├── 0.jpg
    ├── 1.jpg
    ├── 2.jpg
    └── 3.jpg

Next image and Prev image

To show the next image, press the right arrow key.

To show the previous image, press the left arrow key.

Pressing Command + arrow key changes the step size to 10.

You can use Trackpad gestures and Magic Mouse gestures.

Jump to image number

You can specify the image number to show.

Move images

You can move all images to another folder.

Or using search images, you can move searched images to another folder.

Copy images

You can copy all images to another folder.

Or using search images, you can copy searched images to another folder.

Save

The annotation file is saved as {image_file_name}.xml in the PASCAL VOC xml format or {image_file_name}.txt in the YOLO text format.

For the PASCAL VOC xml format, the top-left pixel in the image has coordinates (1, 1).

The rotated box is saved in the format (center_x, center_y, width, height, rotation).

The rotation is counter-clockwise and ranges between 0 and 2Pi.
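
As a sketch of how the rotated box values relate to the polygon corners in the xml below, the 4 corner points can be derived roughly as follows. This assumes image coordinates with the y axis pointing down and counter-clockwise rotation on screen; treat it as an illustration, not RectLabel's exact implementation.

import math

# Sketch: derive 4 corner points from (center_x, center_y, width, height, rotation).
# Assumes image coordinates (y axis points down) and counter-clockwise rotation on screen.
def rotated_box_corners(cx, cy, w, h, rot):
    c, s = math.cos(rot), math.sin(rot)
    corners = []
    for dx, dy in [(-w / 2, h / 2), (w / 2, h / 2), (w / 2, -h / 2), (-w / 2, -h / 2)]:
        corners.append((cx + dx * c + dy * s, cy - dx * s + dy * c))
    return corners

# The rotated box in the xml example below gives approximately its polygon points.
print(rotated_box_corners(1426, 311, 360, 297, 6.103084))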

<annotation>
    <folder>test_data_test</folder>
    <filename>aaron-burden-40491-unsplash.jpg</filename>
    <size>
        <width>4417</width>
        <height>3317</height>
        <depth>3</depth>
    </size>
    <object>
        <name>box</name>
        <bndbox>
            <xmin>501</xmin>
            <ymin>211</ymin>
            <xmax>893</xmax>
            <ymax>512</ymax>
        </bndbox>
    </object>
    <object>
        <name>rotated box</name>
        <rotated_box>
            <cx>1426</cx>
            <cy>311</cy>
            <width>360</width>
            <height>297</height>
            <rot>6.103084</rot>
        </rotated_box>
        <polygon>
            <x1>1223</x1>
            <y1>424</y1>
            <x2>1576</x2>
            <y2>489</y2>
            <x3>1629</x3>
            <y3>198</y3>
            <x4>1276</x4>
            <y4>133</y4>
        </polygon>
        <bndbox>
            <xmin>1223</xmin>
            <ymin>133</ymin>
            <xmax>1629</xmax>
            <ymax>489</ymax>
        </bndbox>
    </object>
    <object>
        <name>polygon</name>
        <polygon>
            <x1>1893</x1>
            <y1>505</y1>
            <x2>2300</x2>
            <y2>137</y2>
            <x3>2882</x3>
            <y3>739</y3>
        </polygon>
        <bndbox>
            <xmin>1893</xmin>
            <ymin>137</ymin>
            <xmax>2882</xmax>
            <ymax>739</ymax>
        </bndbox>
    </object>
    <object>
        <name>cubic_bezier</name>
        <cubic_bezier>
            <x1>3025</x1>
            <y1>501</y1>
            <x2>3381</x2>
            <y2>184</y2>
            <x3>3787</x3>
            <y3>414</y3>
        </cubic_bezier>
        <bndbox>
            <xmin>3025</xmin>
            <ymin>184</ymin>
            <xmax>3787</xmax>
            <ymax>501</ymax>
        </bndbox>
    </object>
    <object>
        <name>line</name>
        <line>
            <x1>403</x1>
            <y1>1159</y1>
            <x2>704</x2>
            <y2>920</y2>
            <x3>1146</x3>
            <y3>941</y3>
        </line>
        <bndbox>
            <xmin>403</xmin>
            <ymin>920</ymin>
            <xmax>1146</xmax>
            <ymax>1159</ymax>
        </bndbox>
    </object>
    <object>
        <name>keypoints</name>
        <keypoints>
            <x1>2094</x1>
            <y1>962</y1>
            <v1>2</v1>
            <x2>2344</x2>
            <y2>864</y2>
            <v2>2</v2>
            <x3>2902</x3>
            <y3>1277</y3>
            <v3>2</v3>
        </keypoints>
        <bndbox>
            <xmin>2094</xmin>
            <ymin>864</ymin>
            <xmax>2902</xmax>
            <ymax>1277</ymax>
        </bndbox>
    </object>
    <object>
        <name>pixels</name>
        <pixels>
            <id>0</id>
        </pixels>
        <bndbox>
            <xmin>1729</xmin>
            <ymin>1169</ymin>
            <xmax>1838</xmax>
            <ymax>1278</ymax>
        </bndbox>
    </object>
</annotation>

Export menu

Export Create ML JSON file

All xml/txt files are exported as a Create ML JSON file.

Training Object Detection Models in Create ML.

When training in Create ML, put the training images and the JSON file into the same folder and do not put any other files in the folder; otherwise you may see the "Empty table from specified data source" error in Create ML.

[{
    "image": "sneakers-1.jpg",
    "annotations": [
    {
        "label": "sneakers",
        "coordinates":
        {
            "y": 838,
            "x": 393,
            "width": 62,
            "height": 118
        }
    },
    {
        "label": "sneakers",
        "coordinates":
        {
            "y": 881,
            "x": 392,
            "width": 51,
            "height": 102
        }
    }]
}]
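
For reference, a minimal sketch of reading this JSON back in Python is shown below. It only assumes the structure above, and it treats "x"/"y" as the box center in pixels (as in the CSV export described later); the file name annotations.json is hypothetical.

import json

# Sketch: read a Create ML JSON file and print corner coordinates per box,
# assuming "x"/"y" are the box center in pixels (as in the CSV export below).
with open("annotations.json") as f:          # hypothetical file name
    entries = json.load(f)

for entry in entries:
    for ann in entry["annotations"]:
        c = ann["coordinates"]
        xmin = c["x"] - c["width"] / 2
        ymin = c["y"] - c["height"] / 2
        print(entry["image"], ann["label"], xmin, ymin, c["width"], c["height"])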

Import Create ML JSON file

All objects in the Create ML JSON file are imported to xml/txt files in the current folder.

Before importing, be sure that you opened images/annotations folders to be imported.

RectLabel can import from "imagefilename" and "annotation" keys, too.

Export COCO JSON file

All xml/txt files are exported as a COCO JSON file.

To display an image with the COCO JSON file, use pycocoDemo.ipynb.

COCO JSON file format.


For box, polygon, and line objects, "segmentation" is exported as polygon.

"segmentation" : [[x1, y1, x2, y2, ...]],

For pixels and cubic bezier objects, "segmentation" is exported as RLE.

RLE encodes the mask image using the COCO Mask API.

"segmentation":
{
    "size": [960, 896],
    "counts": "TP\\4<`m08J4L4M2M4M2N2N2N2N101N2N101O0O2O0O101O00000O10000000000001O0000000O2O000O2O0O2O1N1O2N2N2O1M3N3M2M4L5K5I[on4LiPQK4J6G9kROZO0Nkl0T1lNmNSUOW1kj0mNmTOZ1Qk0mNgTOU1Wk0SOcTOm0[k0m0O1N3M2O2M3M2O1N2N2N2O1N1O2O0O2N100002N1M3N2M201O1M3fNWUO\\Ojj0b0YUO\\Ohj0?cUOWOaj0f0dUOVO^j0i0dUOUO]j0h0gUOVOZj0i0_1O1N3M2M6M0N6K1L<DTck="
},

To decode the RLE in your python code, use the code below from rectlabel_create_coco_tf_record.py.

from pycocotools import mask  # COCO Mask API
import PIL.Image

segm = object_annotations['segmentation']
m = mask.decode(segm)  # decode the RLE to a binary mask array
pil_image = PIL.Image.fromarray(m)

For keypoints objects, "keypoints" and "num_keypoints" are exported.

To edit "bbox", use "Show keypoints boxes".

For "segmentation", you can export a keypoints object combined with a pixels object when you aligned the keypoints object at the row and the pixels object at the row + 1 on the right label table.

"segmentation":
{
    "size": [4834, 3648],
    "counts": "V[gV;2Vk31]PM8[o27VPM1bo2>noLK_^OZORX3Z1oXMEW^OEfX3U1cXM^Oo]O1[Y3P1WXMWOe]O=QZ3g0nWMUOY]Og0fZ3?eWMSOW]Oh0P[3a0YWMSOY]Oe0\\[3b0jVMWO\\]O`0g[3e0]QMlMPA\\1_1N2O1O2N1O1O1O1O1O1N2O1O1O1O1O1O1O1N2O1O1N2O1N2O1O2M2O1N2O1N3N1O1N3N1d`NfTMnR1]k2PmNkTMhR1Vk2WmNlTMgR1Uk2XmNlTMfR1Wk2WmNlTMgR1Uk2XmNlTMfR1Vk2YmNkTMfR1Wk2XmNkTMeR1Wk2ZmNjTMeR1Xk2YmNjTMeR1Wk2ZmNjTMdR1Xk2[mNjTMcR1Xk2[mNiTMcR1Yk2\\mNiTMbR1Yk2\\mNhTMbR1Zk2]mNgTMbR1Zk2]mNhTMaR1Zk2\\mNhTMbR1Zk2]mNhTMaR1Zk2]mNgTMaR1[k2^mNgTM`R1Zk2_mNgTM_R1\\k2_mNfTM_R1[k2`mNfTM_R1[k2`mNfTM^R1]k2`mNeTM^R1\\k2amNeTM]R1^k2amNdTM]R1]k2bmNdTM]R1]k2bmNeTM[R1^k2bmNdTM]R1]k2bmNeTM[R1^k2bmNdTM]R1]k2bmNdTM\\R1^k2bmNeTM\\R1]k2amNeTM^R1\\k2amNfTM\\R1\\k2bmNgTM\\R1[k2bmNgTM[R1[k2cmNgTM\\R1[k2amNhTM\\R1Zk2cmNhTM[R1Yk2cmNjTM[R1Xk2cmNjTMZR1Xk2dmNjTM[R1Xk2bmNkTM[R1Wk2dmNkTMZR1Vk2dmNmTMZR1Uk2cmNnTMZR1Tk2emNnTMWR1Uk2fmNnTMXR1Uk2emNnTMXR1Tk2fmNoTMWR1Tk2emNPUMXR1Rk2fmNQUMVR1Rk2hmNPUMVR1Sk2fmNQUMWR1Qk2gmNRUMVR1Qk2gmNRUMWR1oj2fmNUUMXR1lj2fmNVUMYR1lj2dmNWUMYR1kj2fmNWUMXR1kj2gmNVUMWR1kj2imNVUMUR1kj2kmNUUMSR1nj2lmNSUMRR1nj2mmNTUMQR1mj2omNTUMoQ1nj2PnNSUMmQ1oj2SnNRUMkQ1Pk2TnNPUMlQ1Pk2TnNQUMjQ1Pk2UnNRUMiQ1Pk2VnNQUMiQ1oj2WnNRUMgQ1Pk2XnNPUMgQ1Qk2YnNPUMfQ1Pk2ZnNQUMdQ1Qk2[nNPUMcQ1Qk2\\nNQUMcQ1oj2]nNQUMbQ1Qk2]nNPUMbQ1Pk2^nNQUM`Q1Rk2^nNoTMaQ1Tk2\\nNmTMcQ1Vk2ZnNkTMeQ1Xk2WnNiTMiQ1Zk2TnNgTMkQ1[k2SnNfTMlQ1]k2QnNdTMmQ1`k2PnNbTMnQ1ak2omNbTMnQ1ak2omNaTMoQ1ak2nmNbTMPR1ak2mmNbTMXc0]JYDTQ3[HbTMYc0aJVDPQ3]HcTMYc0dJTDlP3`HbTMYc0iJPDhP3dHcTMXc0lJnCdP3fHcTMZc0oJjCaP3hHdTMZc0RKhC]P3jHdTM[c0VKeCYP3lHeTM[c0YKcCUP3nHeTM\\c0]K`CPP3QIgTM[c0`K^Clo2SIgTM\\c0dKZCio2VIfTM]c0hKWCeo2XIgTM]c0jKVCbo2YIgTM^c0nKSC]o2\\IiTM]c0PLRCZo2]IiTM^c0TLoBVo2_IjTM^c0WLmBVo2]IfTMcc0ZLjBmo2fHmSM\\d0]LhBko2fHkSM_d0`LfBgo2hHmSM^d0cLdBbo2kHnSM^d0gLaB\\o2oHQTM]d0hL_BYo2QIRTM]d0lL\\BTo2TISTM]d0oLZBon2WIVTM[d0RMWBkn2[IVTM[d0UMUBgn2\\IYTMZd0XMTB`n2^I]TMQd0dM[BQn2_IaTMhc0nMdBcm2_IdTM`c0ZNkBSm2aIiTMUc0eNUCdl2aIiUMUb0oMTDZl2bIfWM_`0ULhEXl2dI`YMj>]J]GTl2dIaZMR>`ITHQl2]Ii[M`=[HnHnk2VIh\\MV=_GSHTGdJgT3Ol\\MW=^GhG^GkJZT32Q]MR4S^OnMLR3]9D`m2WKY[Mm3T^OiM1`3W9XObm2^KY[Mg3T^OgM4l3R9PObm2cKY[Mb3U^ObM9Z4l8fNbm2hKZ[M]3V^ObM9c4j8^Nbm2mK[[MW3U^OiM6e4m8UNam2SL][MQ3V^OnM1j4Q9jM`m2ZL^[Mk2V^OTNLm4V9aMSa3UORVLYNGR5Y9WMVa3UOPVL_NBU5_9mLVa3VOPVLdN]OZ5b9cLYa3VOnULiNYO_5f9XLZa3WOnULoNSOb5k9oK\\a3WOlULTOPOf5n9eK]a3XOlULZOjNi5S:\\K_a3XOjUL_OfNn5W:QK`a3YOjULE`NR6[:gJca3YOhWLP6k6^JZg3c5lXLRJXg3n5n<10O01O1O0010O2O1O1N2O1N2O1O1N2O1N20O2O001O0R`KYJ_^4h5ZaKbJb^4_5VaKkJg^4U5SaKRKl^4o4l`KVKV_4k4c`KYK__4U61N2N2O1^IZ`Kd5g_4YJ\\`Kf5f_4VJ]`Kj5d_4QJ``Kn5b_4nI``KR6b_4jIa`KV6V`4O1O100O1O100O001O100O1N3M3N2M3M4L3M3N2M3M3M4M2M3M3M3N3L3M3K5J6J6J6J6J6I7J6J6J6J6J6J6J6J7J5J6J6J6K5J6J6K5JmTVo1"
},
"bbox": [2430, 1860, 788, 2561],
"keypoints": [2977, 2133, 2, 3014, 2084, 2, 2922, 2070, 2, 3044, 2122, 2, 2821, 2062, 2, 2975, 2431, 2, 2641, 2330, 2, 3034, 2781, 2, 2420, 2741, 2, 3103, 3001, 2, 2839, 2883, 2, 2957, 3101, 2, 2676, 3104, 2, 2991, 3598, 2, 2630, 3614, 2, 2751, 4058, 2, 2678, 4098, 2],
"num_keypoints": 17

For "categories", "keypoints" and "skeleton" are exported.

To change the order of the keypoints, use keypoints_order_coco.py.

"categories": [
{
    "id": 1,
    "keypoints": ["nose", "leftEye", "rightEye", "leftEar", "rightEar", "leftShoulder", "rightShoulder", "leftElbow", "rightElbow", "leftWrist", "rightWrist", "leftHip", "rightHip", "leftKnee", "rightKnee", "leftAnkle", "rightAnkle"],
    "name": "person",
    "skeleton": [
        [9, 11],
        [6, 12],
        [14, 16],
        [7, 13],
        [15, 17],
        [12, 13],
        [14, 12],
        [8, 6],
        [10, 8],
        [6, 7],
        [9, 7],
        [15, 13],
        [5, 3],
        [3, 1],
        [1, 2],
        [2, 4]
    ]
},
Export keypoints and pixels objects to COCO JSON file

Import COCO JSON file

All objects in the COCO JSON file are imported to xml/txt files in the current folder.

Before importing, be sure that you opened images/annotations folders to be imported.

Export YOLO txt files

All xml files are exported in the YOLO txt format to a folder.

To export all images to a folder, use the Copy menu.

├── datasets
│   └── sneakers
│       ├── images
│       └── labels
└── yolov5
    └── data
        └── sneakers.yaml

All objects are converted to boxes and one txt file per image is saved in the YOLO format.

The center_x, center_y, width, and height values are normalized by the image's width and height.

class_index center_x center_y width height
0 0.464615 0.594724 0.680000 0.769784
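
As a sketch of the conversion (not RectLabel's own code), a pixel-coordinate box can be turned into a YOLO line roughly like this; the example box values are illustrative.

# Sketch: convert a pixel-coordinate box to a normalized YOLO line.
def voc_to_yolo(class_index, xmin, ymin, xmax, ymax, image_width, image_height):
    center_x = (xmin + xmax) / 2 / image_width
    center_y = (ymin + ymax) / 2 / image_height
    width = (xmax - xmin) / image_width
    height = (ymax - ymin) / image_height
    return f"{class_index} {center_x:.6f} {center_y:.6f} {width:.6f} {height:.6f}"

# For example, a box on a 650x417 image:
print(voc_to_yolo(0, 81, 88, 522, 408, 650, 417))
# 0 0.463846 0.594724 0.678462 0.767386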

YOLOv7 document.

YOLOv6 document.

YOLOX document.

YOLOv5 document.

YOLOv4 document.

YOLOv3 document.

Import YOLO txt files

All objects in the YOLO txt files are imported to xml files in the current folder.

Before importing, be sure that you opened images/annotations folders to be imported.

Before importing, be sure that the objects table is set up through importing the corresponding object names file.

Export DOTA txt files

All xml files are exported in the DOTA oriented bounding box (OBB) txt format to a folder.

To draw oriented bounding boxes, use polygons and rotated boxes.

The first point is drawn with a fill color to show the orientation; it is assumed to be the top-left corner of the object, and the 4 points are arranged in clockwise order when saved.

To set the difficult value, enable the "Use difficult tag" setting.

x1 y1 x2 y2 x3 y3 x4 y4 category difficult
1300.536987 1413.503784 1192.848755 1535.568848 530.876038 951.562073 638.564270 829.497009 truck 0
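
To illustrate the clockwise arrangement, the sketch below checks the point order with the shoelace sum. It is a generic illustration, not RectLabel's code; in image coordinates with the y axis pointing down, a positive shoelace sum corresponds to clockwise order on screen.

# Sketch: check whether 4 OBB points are in clockwise order on screen
# (image coordinates, y axis pointing down), using the shoelace sum.
def is_clockwise(points):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        s += x1 * y2 - x2 * y1
    # With y pointing down, a positive shoelace sum means clockwise on screen.
    return s > 0

points = [(1300.536987, 1413.503784), (1192.848755, 1535.568848),
          (530.876038, 951.562073), (638.564270, 829.497009)]
print(is_clockwise(points))  # True for the truck example above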

YOLOv5 for Oriented Object Detection document.

Draw oriented bounding boxes in aerial images

Import DOTA txt files

All objects in the DOTA oriented bounding box (OBB) txt files are imported to xml files in the current folder.

Before importing, be sure that you opened images/annotations folders to be imported.

Export CSV file

All xml/txt files are exported as a CSV file.

When you select "Row by image" and check the "Convert to boxes" checkbox:

path,annotations
/Users/ryo/rcam/test_annotations/sneakers/images/sneakers-1.jpg,[{"label":"sneakers","coordinates":{"x":302,"y":248,"width":442,"height":321}}]

(x, y) means the center of the box where (0, 0) is the top-left corner.

To train a Turi Create Object Detection model from the exported CSV file, use the code below.

train_turicreate.py

python train_turicreate.py "${EXPORTED_CSV_FILE}"

When you select "Row by image" and check off the "Convert to boxes" checkbox.

path,annotations
/Users/ryo/rcam/test_annotations/sneakers/images/sneakers-1.jpg,[{"label":"sneakers","type":"rectangle","coordinates":{"x":302,"y":248,"width":442,"height":321}}]

When you select "Row by label" and check on the "Convert to boxes" checkbox.

filename,width,height,label,xmin,ymin,xmax,ymax
/Users/ryo/rcam/test_annotations/sneakers/images/sneakers-1.jpg,650,417,sneakers,81,88,522,408

This format is the same as the Tensorflow Object Detection CSV format.

When you select "Row by label" and uncheck the "Convert to boxes" checkbox:

filename,width,height,label,type,annotations
/Users/ryo/rcam/test_annotations/sneakers/images/sneakers-1.jpg,650,417,sneakers,rectangle,81,88,522,408
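
As a rough sketch (not RectLabel's code), the "Row by image" CSV shown above can be read back in Python by splitting each row on the first comma and parsing the annotations column as JSON; the file name exported.csv is hypothetical.

import json

# Sketch: read the "Row by image" CSV shown above and print one line per box.
# The annotations column is JSON, so split each row on the first comma only.
with open("exported.csv") as f:               # hypothetical file name
    next(f)                                   # skip the header line
    for line in f:
        path, annotations = line.rstrip("\n").split(",", 1)
        for ann in json.loads(annotations):
            c = ann["coordinates"]
            print(path, ann["label"], c["x"], c["y"], c["width"], c["height"])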

Import CSV file

All objects in the CSV file are imported to xml/txt files in the current folder.

Before importing, be sure that you opened images/annotations folders to be imported.

Export object names file

The names file is created from the objects table on the settings dialog.

YOLOv5 yaml file as dictionary.

path: ../datasets/sneakers
train: images
val: images

names:
  0: sneakers
  1: ignore

YOLOv5 yaml file as array.

path: ../datasets/sneakers
train: images
val: images

nc: 2
names: ['sneakers', 'ignore']

YOLOv3 names file.

sneakers
ignore

Tensorflow Object Detection API label map file.

item {
  id: 1
  name: 'sneakers'
}

item {
  id: 2
  name: 'ignore'
}

Import object names file

You can import an object names file or import object names from xml files to the objects table on the settings dialog.

Before importing YOLO txt files, be sure that the objects table is set up through importing the corresponding object names file.

Export mask images

The mask images are exported in the PNG format.


You can specify which mask image to export.

  • Export an image includes all objects: An indexed color image which includes all objects is saved as {image_file_name}_all_objects.png.
  • Export an image per object class: A grayscale image per object class is saved as {image_file_name}_class_{class_name}.png.
  • Export an image per object: A grayscale image per object is saved as {image_file_name}_object{object_idx}.png.

For the indexed color image, all objects and their overlaps are based on the layer order on the label table.

Pixel values are set based on the object index on the objects table and 0 is set for the background.

The indexed color table is created from object colors on the objects table.

For grayscale images, pixel values are set 255 for the foreground and 0 for the background.
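
A small sketch of reading the exported masks back is shown below, assuming Pillow and NumPy; the file names are placeholders for the naming patterns listed above.

import numpy as np
from PIL import Image

# Sketch: inspect an exported indexed color mask; pixel values are object
# indices from the objects table and 0 is the background.
all_objects = np.array(Image.open("image_all_objects.png"))   # hypothetical file name
print(np.unique(all_objects))

# Sketch: turn a grayscale per-class mask (255 foreground, 0 background) into booleans.
class_mask = np.array(Image.open("image_class_sneakers.png")) > 0
print(class_mask.sum(), "foreground pixels")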

Run an instance segmentation model on Tensorflow Object Detection API.

Export indexed color mask image and grayscale mask images

Export screenshots

You can export images/annotations as jpg images.

It exports labels when "Show labels on boxes" is ON and exports coordinates when "Show coordinates on boxes" is ON.

Video to image frames, augment images, etc.

Export train/val/test.txt files

Specify the split ratio "80/10/10" so that all images in the current folder are split into train, validation, and test sets.

In the specified folder, train.txt, val.txt, and test.txt are saved.

sneakers-1.jpg
sneakers-2.jpg
...

Using "Full path" option, you can save full paths. Or you can add prefix to file names.

/Users/ryo/Desktop/test_annotations/sneakers/images/sneakers-1.jpg
/Users/ryo/Desktop/test_annotations/sneakers/images/sneakers-2.jpg
...
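
For reference, an equivalent split could be produced outside RectLabel with a short script like the sketch below; the 80/10/10 ratio, the "images" folder, and the output file names are assumptions.

import random
from pathlib import Path

# Sketch: split image file names into train/val/test with an 80/10/10 ratio.
images = sorted(p.name for p in Path("images").glob("*.jpg"))   # hypothetical folder
random.shuffle(images)
n = len(images)
splits = {
    "train.txt": images[: int(n * 0.8)],
    "val.txt": images[int(n * 0.8): int(n * 0.9)],
    "test.txt": images[int(n * 0.9):],
}
for name, files in splits.items():
    Path(name).write_text("\n".join(files) + "\n")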

Export images for classification

All images are exported into object-named subfolders.

Creating an Image Classifier Model on Create ML.

└── saved_folder
    ├── object0
    ├── object1
    └── object2

Export objects and attributes stats

The number of objects used in the annotation files is saved as the objects_stats.txt file.

The number of attributes used in the annotation files is saved as the attributes_stats.txt file.

Convert video to image frames

For "Video file", open a video file.

For "Output folder", open a folder to save the image frames.

For "Image size", both width and height would be less than or equal to the size.

For "Frame suffix", to correspond to the labels folder generated by detect.py in the yolov5 folder, choose the second one.

Open the output folder to start labeling. To sort the image frames, set "Sort images" as Numeric.

Video to image frames, augment images, etc.

Replace label names

Replace label names using regular expressions.

Resize images

You can resize images/annotations in the current folder.

For "Image size", both width and height would be less than or equal to the size.

If "Image size" is empty, images are not resized but annotations are resized to the same size as images.

Augment images

Images/annotations are augmented using "Flip", "Crop", "Contrast", and "Rotate".

For "Flip", each image is flipped horizontally with 0.5 probability.

For "Crop", each image is cropped to [100% - value, 100%] of the original size.

For "Contrast", each image contrast is changed to [100% - value, 100% + value].

For "Rotate", each image is rotated to [-value, value] degrees.

For "Number of augmented images", the number of generated images from an image through the augmentation.

If the object is cut out so that the bounding box size is less than 0.1 of the original size, the object is removed.

To flip keypoints horizontally, use "left" and "right" prefix or suffix for each keypoint name.

Video to image frames, augment images, etc.

Edit menu

Create box

Change the mode to "Create box".

To draw a box, click 2 points.


Or click 4 points for the xmin, xmax, ymin, and ymax of the object.

Refer to "Extreme clicking for efficient object annotation".

Press enter key to finish drawing when the number of points is less than 4.


When you finish drawing, the label dialog opens.

The label would be added to the label table on the right.

Drag the center of the box to move the box.

Drag one of the four corner points to transform the box.

If necessary, use "Show edit points between box corners".

Dragging on the box while pressing the option key scales the box up/down from the center.


To change the box color, use the color picker at the top-right corner of the image.

You can hide the cross hairs using "Hide cross hairs".

To change the color, deselect all boxes and change the default color.

To show the label name on each box, use "Show label on the box".

Draw bounding boxes and read/write in YOLO text format

Create polygon, cubic bezier, line, and point

Change the mode to "Create polygon", "Create cubic bezier", "Create line", or "Create point".

Click to add points.

Press enter key to finish drawing.

Press escape key to cancel drawing.


When you right click on a point, the edit menu opens.

"Add a point forward/backward" to add a point.

"Delete this point" to delete the point.

"Set to the first point" to set this point to the first point for rotated boxes and polygons.

"Point size up/down" to change the size of points.


When you right click on the label, the edit menu opens.

"Convert to polygon" to change the polygon type to polygon.


When you select multiple polygons and right click on them, you can merge polygons.

To separate the merged polygon to multiple polygons, right click on the merged polygon.

Editing points to fit the shape

Create keypoints

You can label keypoints for COCO Keypoint Detection Task.

Each keypoint has a 0-indexed location x,y and a visibility flag v defined as v=0: not labeled, v=1: labeled but not visible, and v=2: labeled and visible.
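
For example, with the flat [x1, y1, v1, x2, y2, v2, ...] layout used in the COCO JSON export, the number of labeled keypoints can be counted from the visibility flags. This is a small sketch with example values, assuming that flat layout.

# Sketch: count labeled keypoints (v > 0) from a flat [x, y, v, ...] list.
keypoints = [100, 200, 2, 150, 220, 1, 0, 0, 0]  # example values
num_keypoints = sum(1 for v in keypoints[2::3] if v > 0)
print(num_keypoints)  # 2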

Change the mode to "Create keypoints".

Click to add points.

Clicking while holding the option key adds the point as not labeled.

Clicking while holding the option + command keys adds the point as labeled but not visible.

Press enter key to finish drawing.

Press escape key to cancel drawing.


When you right click on a point, the edit menu opens.

"Change keypoint name" to change the keypoint name.

"Change keypoint color" to change the keypoint color; the edge color is defined by the source point color.

"Make invisible as not labeled" to make the point invisible as not labeled.

"Make invisible" to make the point invisible as labeled but not visible.

"Delete edge" to delete the edge with the point.


If you set an empty string as the keypoint name, the keypoint name is hidden.

To add an edge, drag from one point to another while holding the option key.


When you right click on the label, the edit menu opens.

"Clear bounding box" to clear the current bounding box.

"Flip" to flip the "left" included keypoint position and the "right" included keypoint position.

"Make visible" to make the point visible.

To hide keypoints names, use "Hide keypoints names".

To show and edit the bounding box, use "Show boxes on keypoints".


Keypoints names/edges are saved in the settings file.

For the first keypoints object, you have to press the enter key to finish drawing, change the keypoints names, and add edges.

From the second keypoints object on, if the currently selected object or the last selected object has keypoints names/edges, the label dialog appears without pressing the enter key and the keypoints names/edges are shown automatically.

Draw keypoints with a skeleton

Create pixels

You can label pixels using brushes and superpixels.

Brush size 1 means 1px in the image.

For large images, change "Brush size max".

You can change the brush size up/down using command + option + mouse wheel up/down.

"Erase" is used to erase pixels, and you can use the "Toggle pixels erase" hotkey.

When using Sidecar, double-tapping the Apple Pencil toggles the pixels draw/erase checkbox.

"Polygon" is used to label pixels using the polygon tool.

Right clicking on the pixels opens the "Convert to polygon", "Flood Fill", "Clear pixels", and "Import pixels" menus.

To hide pixels, use "Hide pixels".

To show other pixels, use "Show other pixels".

The pixels image file is saved as {image_file_name}_pixels{pixels_idx}.png in the annotations folder.

Draw pixels with brushes and superpixels

Click or drag on the superpixels.

Superpixel size is used to adjust the segmentation size.

Superpixel smoothness is used to adjust the segmentation boundary smoothness.

On the pixels dialog, right/left arrow keys change the superpixel size by 1px.

To hide superpixels, use "Hide superpixels".

If the superpixels are not shown, reselect the label on the label table.

Draw pixels with brushes and superpixels

To train Mask R-CNN / Keypoint Detection on Detectron2, follow these steps.

  1. Export COCO JSON file.
  2. Put the python code rectlabel_coco_detectron2.py into your preferred path.
    Edit cfg parameters if necessary.
  3. Run.
# Train a new model starting from pre-trained COCO weights
python rectlabel_coco_detectron2.py train --type=${TYPE} --images_dir=${IMAGES_DIR} --annotations_path=${ANNOTATIONS_PATH} --weights=coco

# Resume training a model that you had trained earlier
python rectlabel_coco_detectron2.py train --type=${TYPE} --images_dir=${IMAGES_DIR} --annotations_path=${ANNOTATIONS_PATH} --weights=last

# Apply inference to an image
python rectlabel_coco_detectron2.py inference --type=${TYPE} --weights=last --image=${IMAGE_PATH}

Create image label

You can label the whole image without drawing boxes.

To train an image classifier model, Export images for classification.

Move

Change the mode to "Move".

To switch between Create and Move mode, hold the space key while in Create mode.

Drag the box or the image to move the position.

You can use the mouse wheel to move the image position.

You can select multiple boxes and move them.

When you click on the box or the label, four corner points appear.

Drag one of the four corner points to transform the box.

When you right click on the box or the label, the edit menu opens.

"Focus" to quick zoom to the selected box, "Edit" to open the label dialog, "Duplicate" to duplicate the box, and "Delete" to delete the box.

When you double click on the box or the label, the label dialog opens.

To change the layer order, drag the label on the label table upward or downward.

Rotate

Change the mode to "Rotate".

Drag up/down on the box to rotate the box.

You can select multiple boxes and rotate them.

Draw oriented bounding boxes in aerial images

Delete

You can select multiple boxes and delete them.

Layer up/down

Change the layer order of the box.

Toggle pixels erase

You can check or uncheck the erase checkbox on the pixels panel.

Change brightness and contrast

Change the image brightness and contrast for dark images.

Change image brightness and contrast

Change object color

Change the object color using the color picker.

Clear object color

Clear the object color to the default color.

Search images

You can search object/attribute names and image names.

To reload all images again, use "Clear search images".

You can use Wildcard(*), AND(&), OR(|), NOT(!), and more in the search text.

To search unlabeled images, use empty search text.

Load all images again.

Undo/Redo

You can undo/redo operations.

Copy/Paste

You can select multiple boxes and copy/paste them on another image.

Core ML menu

Load Core ML model

Apple provides Core ML Models.

To convert your ML model to the Core ML format, use coremltools.

You can convert YOLOv5, MobileNetV2 + SSDLite, and Turi Create models to Core ML models.

When auto labeling, RectLabel recognizes the model type through the file name and the short description, such as YOLOv3, yolov5m, yolov5m-cls, yolov5m-seg, DeepLab, PoseNet, and FCRN-Depth.

If the output layers are interpreted as VNRecognizedObjectObservation, VNClassificationObservation, and VNPixelBufferObservation, RectLabel can decode the output layers without checking the file name and the short description.

If the Core ML model has 'classes' metadata, RectLabel uses these object names instead of the current objects table when auto labeling.

# Load the Core ML model and set the short description used for model type recognition.
import coremltools
coreml_file = 'FCRN.mlmodel'
coreml_model = coremltools.models.MLModel(coreml_file)
coreml_model.short_description = "FCRN-Depth"
labels = [
    "person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light",
    "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
    "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
    "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle",
    "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
    "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa", "pottedplant", "bed",
    "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven",
    "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"
]
# Attach the class names as comma-separated 'classes' metadata and save the model.
coreml_model.user_defined_metadata['classes'] = ",".join(labels)
coreml_model.save(coreml_file)

Clear Core ML model

Clear the loaded Core ML model to close the Confidence/Overlap threshold panel and to use ordinary Create box mode.

Auto labeling

For object detection and classification models, you can change the confidence threshold, and if the model does not include non-maximum suppression, you can change the overlap threshold.
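
To illustrate what the overlap threshold controls, here is a minimal sketch of non-maximum suppression over boxes and confidence scores. It is a generic illustration, not RectLabel's implementation, and the (xmin, ymin, xmax, ymax) box format is an assumption.

# Sketch: greedy non-maximum suppression with an IoU (overlap) threshold.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, overlap_threshold=0.45):
    # Keep the highest-scoring boxes, dropping any box that overlaps a kept box too much.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= overlap_threshold for j in keep):
            keep.append(i)
    return keep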

To hide the label name and the confidence, use Show labels on boxes.

After loading a Core ML model, switch to Create box mode and draw a box; the Core ML model is applied to the cropped area.

For auto segmentation, you can use YOLOv5 Instance Segmentation, IS-Net, U2Net, and DeepLabV3 Core ML models.

Automatic labeling using Core ML models

View menu

Use an iPad as a second display

With Sidecar, you can use your iPad as a display that extends or mirrors your Mac desktop.

Trackpad gestures

Slide two fingers to move the image.

Double-tap with two fingers for smart zoom.

Pinch with two fingers to zoom in or out.

Swipe left or right with three fingers to change the image.


System Preferences > Trackpad > More Gestures tab.

For "Swipe between pages", select "Swipe with two or three fingers" or "Swipe with three fingers".

For "Swipe between full-screen apps", select "Swipe left or right with four fingers".

Magic Mouse gestures

Slide a finger to move the image.

Double-tap with a finger for smart zoom.

Swipe left or right with two fingers to change the image.


System Preferences > Mouse > Point & Click tab > Smart zoom.

System Preferences > Mouse > More Gestures tab.

For "Swipe between pages", select "Swipe with two fingers".

Zoom in, Zoom out

Click a position to zoom in/out.

Or using command + mouse wheel up/down, you can zoom in/out.

Zoom fit

Clear zoom.

Focus box

You can quick zoom to the selected box.

Quick zoom to the object

Hide cross hairs

Hide cross hairs when creating a box.

Hide other boxes

Hide other boxes except the selected box. Toggle boxes alpha.

Hide keypoints names

Hide keypoints names when creating keypoints.

Hide pixels

Toggle pixels alpha.

Hide superpixels

Toggle superpixels alpha.

Hide superpixels mouseover

Toggle superpixels mouseover alpha.

To lower the CPU usage, hide the superpixels mouseover.

Show other pixels

You can show other pixels objects.

Show boxes on keypoints

You can edit the bounding box for each keypoints object.

The bounding box is exported to the COCO JSON file as bbox.

Show labels on boxes

Show labels on boxes.

Combined with "Hide other boxes", only selected boxes are shown.

Show coordinates on boxes

Show (x, y) coordinates on boxes.

Combined with "Hide other boxes", only selected boxes are shown.

Show depth image

When the RGB image name is image_name.jpg, put the depth image as image_name_depth.png in the same folder.

Toggle depth image alpha.

Video to image frames, augment images, etc.

Reload the image

Reload the current image and annotation file.