Apple provides ready-to-use Core ML models on its machine learning models page.
To convert your ML model to the Core ML format, use coremltools.
You can convert YOLOv5, YOLOv8, MobileNetV2 + SSDLite, and Turi Create models to Core ML models.
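As a minimal sketch of the coremltools workflow, a PyTorch MobileNetV2 can be traced and converted as below; the model choice, input shape, and file name are just examples.

# Minimal sketch: convert a traced PyTorch MobileNetV2 to Core ML.
# The model, input shape, and output file name are illustrative assumptions.
import torch
import torchvision
import coremltools as ct

model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced_model,
    convert_to="mlprogram",
    inputs=[ct.ImageType(name="image", shape=example_input.shape)],
)
mlmodel.save("MobileNetV2.mlpackage")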
Download the .pt files from the YOLOv5 page and convert them to Core ML models using the export script.
Here is how to export a YOLOv5 Core ML model so that Xcode shows the metadata and the Preview tab:
git clone https://github.com/ultralytics/yolov5
cd yolov5
python export.py --weights yolov5m.pt --include coreml
python export.py --weights yolov5m-seg.pt --include coreml --img 640
python export.py --weights yolov5m-cls.pt --include coreml --img 224
YOLOv8 can be used directly from the command line interface (CLI) with the yolo command.
pip install ultralytics
yolo detect export model=yolov8m.pt format=coreml nms=False
yolo segment export model=yolov8m-seg.pt format=coreml
yolo classify export model=yolov8m-cls.pt format=coreml imgsz=224
yolo pose export model=yolov8m-pose.pt format=coreml
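The same exports can also be run from Python through the ultralytics package; a minimal sketch equivalent to the detect export above:

# Minimal sketch: export YOLOv8 weights to Core ML from Python
# using the ultralytics package (same as the yolo CLI command above).
from ultralytics import YOLO

model = YOLO("yolov8m.pt")
model.export(format="coreml", nms=False)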
RectLabel recognizes the model type from the file name and the short description, such as YOLOv3, YOLOv5, YOLOv8, DeepLab, PoseNet, and FCRN-Depth.
If the output layers can be interpreted as VNRecognizedObjectObservation, VNClassificationObservation, or VNPixelBufferObservation, RectLabel can decode them without checking the file name and the short description.
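To see what outputs your model exposes, you can inspect the model spec with coremltools; a quick sketch, where the file name is just a placeholder:

# Print each output's name and feature type (e.g. multiArrayType, imageType,
# dictionaryType) to see how the output layers are likely to be interpreted.
import coremltools as ct

spec = ct.utils.load_spec("YOLOv5.mlmodel")  # placeholder file name
for output in spec.description.output:
    print(output.name, output.type.WhichOneof("Type"))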
If the Core ML model has 'classes' metadata, RectLabel uses those object names instead of the current objects table.
import coremltools

coreml_file = 'DeepLab.mlmodel'
coreml_model = coremltools.models.MLModel(coreml_file)
coreml_model.short_description = "DeepLab"
labels = [
    "background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
    "car", "cat", "chair", "cow", "diningTable", "dog", "horse", "motorbike",
    "person", "pottedPlant", "sheep", "sofa", "train", "tvOrMonitor"
]
coreml_model.user_defined_metadata['classes'] = ",".join(labels)
coreml_model.save(coreml_file)
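To confirm the metadata was written, you can reload the saved model and print its short description and classes; a quick check:

# Reload the saved model and print the short description and classes metadata.
import coremltools

model = coremltools.models.MLModel('DeepLab.mlmodel')
print(model.short_description)
print(model.user_defined_metadata['classes'])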
Clear the loaded Core ML model to close the Confidence/Overlap threshold panel.
For object detection and classification models, you can change the confidence threshold, and if the model does not include non-maximum suppression, you can change the overlap threshold.
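As an illustration of what the overlap threshold controls, here is a minimal sketch of IoU-based non-maximum suppression; it is not RectLabel's actual implementation.

# Minimal sketch of non-maximum suppression: a box is suppressed when its IoU
# with a higher-confidence kept box exceeds the overlap threshold.
# Boxes are (x1, y1, x2, y2); not RectLabel's actual implementation.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, overlap_threshold=0.45):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= overlap_threshold for j in keep):
            keep.append(i)
    return keep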
To hide the label and the confidence, toggle showing labels on boxes.
For auto segmentation, you can use YOLOv5, YOLOv8, IS-Net, U2Net, and DeepLabV3 Core ML models.
Automatic text recognition is performed using the Vision framework. You can change the recognition language according to your document type.