Object Detection
Currently, there are two YOLOv5 implementations available for the object detection task. The old implementation will be removed in a future version.
These models come with a large number of pretrained weights; refer to the following table for the available options. The first part of the name (nano, s, ...) indicates the model scale, and the suffix simp means the model has been simplified with the onnx simplifier. Models carrying the number 6 correspond to the updated upstream version of YOLOv5. A short usage sketch follows the table.
Model | Weights |
---|---|
YOLO2 | yolo_nano, yolo_nano6, yolo_nano_simp, yolo_nano6_simp, yolo_s, yolo_s6, yolo_s_simp, yolo_s6_simp, yolo_m, yolo_m6, yolo_m_simp, yolo_m6_simp, yolo_l, yolo_l_simp, yolo_x, yolo_x6, yolo_x_simp, yolo_x6_simp |
YOLO | yolo_nano, yolo_s, yolo_xl, yolo_extreme, yolo_nano_smp, yolo_s_smp, yolo_xl_smp, yolo_extreme_smp |
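The weight names above are passed directly to the constructors documented below. A minimal sketch, assuming the package is importable as `BiWAKO` and using a hypothetical local image file `sample.jpg`:

```python
import BiWAKO

# Any weight name from the table works as the constructor argument;
# the corresponding ONNX file is downloaded automatically on first use.
detector = BiWAKO.YOLO2("yolo_s_simp")   # current implementation
legacy = BiWAKO.YOLO("yolo_nano")        # old implementation (to be removed)

# predict() accepts a file path or a cv2 image (hypothetical file name).
prediction = detector.predict("sample.jpg")
```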
BiWAKO.YOLO2
Note
It is recommended to use this model rather than the previous YOLO. This model optimizes its pre/post-processing operations with newer ONNX Runtime (ort) opsets and runs roughly 3~4 times faster than the previous model. If you want the raw output of YOLO, or want to customize post-processing with your own choice of parameters, use the previous model below.
A faster implementation of YOLO that requires no separate pre/post-processing step.
Attributes:
Name | Type | Description |
---|---|---|
model_path | str | The path to the onnx file. If automatic download is triggered, the file is downloaded to this path. |
session | onnxruntime.InferenceSession | The inference session of the onnx model. |
input_name | str | The name of the input node. |
coco_label | List[str] | The COCO label mapping. |
colors | Colors | Color palette written by Ultralytics at https://github.com/ultralytics/yolov5/blob/a3d5f1d3e36d8e023806da0f0c744eef02591c9b/utils/plots.py |
__init__(self, model='yolo_nano_simp')
special
Initialize YOLO2.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model | str | Choice of the model. Also accepts the path to a downloaded onnx file. If the model has not been downloaded yet, the file is downloaded automatically. Defaults to "yolo_nano_simp". | 'yolo_nano_simp' |
Examples:
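A minimal initialization sketch; the local file path in the second call is hypothetical:

```python
from BiWAKO import YOLO2

# Download (if necessary) and load the simplified nano model.
model = YOLO2("yolo_nano_simp")

# A path to an already downloaded ONNX file is also accepted
# (hypothetical path).
model_from_file = YOLO2("weights/yolo_nano_simp.onnx")
```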
predict(self, image)
Return the prediction of the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image | Image | The image to be predicted. Accepts both a path and an array in cv2 format. | required |
Returns:
Type | Description |
---|---|
List[np.ndarray] | The prediction of the model in the format of |
Examples:
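A sketch of a prediction call; the image file name is hypothetical, and the returned value is a list of arrays as described above:

```python
import cv2
from BiWAKO import YOLO2

model = YOLO2("yolo_nano_simp")

# Predict from a file path...
prediction = model.predict("image.jpg")

# ...or from an image already loaded in cv2 (BGR) format.
frame = cv2.imread("image.jpg")
prediction = model.predict(frame)
```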
render(self, prediction, image, **kwargs)
Return the original image with the predicted bounding boxes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | List[np.ndarray] | The prediction of the model in the format of | required |
image | Image | The image to be predicted. Accepts both a path and an array in cv2 format. | required |
Returns:
Type | Description |
---|---|
np.ndarray | The image with the predicted bounding boxes in cv2 format. |
Examples:
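A sketch chaining predict() and render() and saving the result with OpenCV; the file names are hypothetical:

```python
import cv2
from BiWAKO import YOLO2

model = YOLO2("yolo_nano_simp")

prediction = model.predict("image.jpg")
boxed = model.render(prediction, "image.jpg")  # np.ndarray in cv2 format

cv2.imwrite("result.jpg", boxed)
```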
BiWAKO.YOLO
YOLOv5 onnx model.
Attributes:
Name | Type | Description |
---|---|---|
model_path | str | Path to the onnx file. If automatic download is triggered, the file is downloaded to this path. |
session | onnxruntime.InferenceSession | Inference session. |
input_name | str | Name of the input node. |
output_name | str | Name of the output node. |
input_shape | tuple | Shape of the input image. Set according to the model. |
coco_label | list | List of the 80 COCO labels. |
colors | Colors | Color palette written by Ultralytics at https://github.com/ultralytics/yolov5/blob/a3d5f1d3e36d8e023806da0f0c744eef02591c9b/utils/plots.py |
__init__(self, model='yolo_nano')
special
Initialize the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model | str | Model type to be used. Also accepts the path to an onnx file. If the model is not found, it will be downloaded automatically. Currently available options are listed in the table above. | 'yolo_nano' |
Examples:
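A minimal initialization sketch for the old implementation; the local file path in the second call is hypothetical:

```python
from BiWAKO import YOLO

# Download (if necessary) and load the nano model.
model = YOLO("yolo_nano")

# A path to an already downloaded ONNX file is also accepted
# (hypothetical path).
model_from_file = YOLO("weights/yolo_nano.onnx")
```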
predict(self, image)
Return the prediction of the model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
image | Image | Image to be predicted. Accepts a path (str) or a cv2 image. | required |
Returns:
Type | Description |
---|---|
np.ndarray | An n-by-6 array whose second dimension holds the xyxy box coordinates together with the label and confidence. |
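Since the old implementation returns the raw n-by-6 array, it can be inspected directly. A sketch with a hypothetical image file; the exact column order beyond the xyxy coordinates is not assumed here:

```python
from BiWAKO import YOLO

model = YOLO("yolo_nano")
prediction = model.predict("image.jpg")

# prediction is an (n, 6) array: xyxy box coordinates plus label and
# confidence for each detected object (see Returns above).
print(prediction.shape)
for row in prediction:
    print(row)
```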
render(self, prediction, image)
Return the original image with predicted bounding boxes.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prediction | np.ndarray | Prediction of the model. | required |
image | Image | Image to be predicted. Accepts a path (str) or a cv2 image. | required |
Returns:
Type | Description |
---|---|
np.ndarray | Image with the predicted bounding boxes in cv2 format. |
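A sketch of the old model's predict/render round trip on a cv2 image; the file names are hypothetical:

```python
import cv2
from BiWAKO import YOLO

model = YOLO("yolo_nano")

frame = cv2.imread("image.jpg")
prediction = model.predict(frame)
boxed = model.render(prediction, frame)

cv2.imwrite("result.jpg", boxed)
```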