Apple provides Core ML models.
To convert your ML model to the Core ML format, use coremltools.
You can convert YOLOv8, YOLOv5, MobileNetV2 + SSDLite, and Turi Create models to the Core ML format.
Download .pt files from the YOLOv5 page and convert them to Core ML models using the export script.
To export a YOLOv5 Core ML model so that Xcode shows the metadata and the Preview tab:
git clone https://github.com/ultralytics/yolov5
cd yolov5
python export.py --weights yolov5m.pt --include coreml
python export.py --weights yolov5m-seg.pt --include coreml --img 640
python export.py --weights yolov5m-cls.pt --include coreml --img 224
YOLOv8 can be used directly from the command-line interface (CLI) with the yolo command.
pip install ultralytics
yolo task=detect mode=export model=yolov8n.pt format=coreml nms=False
yolo task=segment mode=export model=yolov8n-seg.pt format=coreml
yolo task=classify mode=export model=yolov8n-cls.pt format=coreml imgsz=224
RectLabel recognizes the model type from the file name and the short description, such as YOLOv3, YOLOv5, YOLOv8, DeepLab, PoseNet, and FCRN-Depth.
If the output layers can be interpreted as VNRecognizedObjectObservation, VNClassificationObservation, or VNPixelBufferObservation, RectLabel can decode them without checking the file name or the short description.
If the Core ML model has 'classes' metadata, RectLabel uses these object names instead of the current objects table.
import coremltools

coreml_file = 'DeepLab.mlmodel'
coreml_model = coremltools.models.MLModel(coreml_file)
coreml_model.short_description = "DeepLab"
labels = [
    "background", "aeroplane", "bicycle", "bird", "board", "bottle", "bus",
    "car", "cat", "chair", "cow", "diningTable", "dog", "horse", "motorbike",
    "person", "pottedPlant", "sheep", "sofa", "train", "tvOrMonitor"
]
coreml_model.user_defined_metadata['classes'] = ",".join(labels)
coreml_model.save(coreml_file)
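The 'classes' value is stored as a single comma-separated string, so the label list can be recovered by splitting on commas. A minimal sketch of that round trip (the helper names `encode_classes` and `decode_classes` are hypothetical, not part of coremltools or RectLabel):

```python
def encode_classes(labels):
    """Join label names into one comma-separated string for the
    'classes' metadata entry. Assumes no label name contains a comma,
    since ',' is the delimiter."""
    assert all("," not in name for name in labels)
    return ",".join(labels)

def decode_classes(value):
    """Split a 'classes' metadata string back into a label list."""
    return value.split(",")
```

Because the delimiter is a plain comma, label names themselves must not contain commas.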
Clear the loaded Core ML model to close the Confidence/Overlap threshold panel and to return to the ordinary create box mode.
For object detection and classification models, you can change the confidence threshold; if the model does not include non-maximum suppression, you can also change the overlap threshold.
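The two thresholds can be sketched in Python: the confidence threshold filters out weak detections, and the overlap threshold drives non-maximum suppression (NMS), which drops boxes that overlap a higher-confidence box too much. This is a minimal illustration, not RectLabel's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, conf_threshold=0.5, overlap_threshold=0.45):
    """Keep the highest-confidence boxes, dropping any box whose IoU
    with an already-kept box exceeds the overlap threshold.
    `detections` is a list of ((x1, y1, x2, y2), confidence) pairs."""
    kept = []
    for box, conf in sorted(detections, key=lambda d: d[1], reverse=True):
        if conf < conf_threshold:
            continue  # below the confidence threshold
        if all(iou(box, k) <= overlap_threshold for k, _ in kept):
            kept.append((box, conf))
    return kept
```

Models exported with NMS baked in already perform this suppression step, which is why the overlap threshold is only adjustable when the model does not include it.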
To hide the label and the confidence, toggle showing labels on boxes.
For auto segmentation, you can use YOLOv8, YOLOv5, IS-Net, U2Net, and DeepLabV3 Core ML models.
After you load a segmentation model, switch to create box mode, and draw a box, the segmentation model is applied to the box area.
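Restricting the model to the box area amounts to cropping the image to the drawn rectangle before running segmentation. A minimal sketch of that crop step (an illustration only; `crop_to_box` is a hypothetical helper, not RectLabel code):

```python
def crop_to_box(image, box):
    """Return the pixels inside the box; the segmentation model would
    then run on this sub-image only. `image` is a row-major list of
    rows, `box` is (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```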
Automatic text recognition is performed using the Vision framework. You can change the recognition language according to your document type.