JCSE, vol. 17, no. 2, pp. 51-59, 2023
DOI: http://dx.doi.org/10.5626/JCSE.2023.17.2.51
Edge Devices Inference Performance Comparison
Rafal Tobiasz, Grzegorz Wilczynski, Piotr Graszka, Nikodem Czechowski, and Sebastian Luczak
Bulletprove Spółka z Ograniczoną Odpowiedzialnością, Puławy, Poland
Abstract: In this study, we investigated the inference time of the MobileNet family, the EfficientNet V1 and V2 families, the VGG models,
the ResNet family, and InceptionV3 on four edge platforms: the NVIDIA Jetson Nano, the Intel Neural Stick, the Google
Coral USB Dongle, and the Google Coral PCIe. Our main contribution is a thorough analysis of the aforementioned models
in multiple settings, especially as a function of input size, the presence and size of the classification head, and the scale of
the model. Since these architectures are mainly used throughout the industry as feature extractors, we primarily analyzed
them as such. We show that the Google platforms offer the fastest average inference time, especially for newer models such as
the MobileNet and EfficientNet families, while the Intel Neural Stick is the most universal accelerator, able to run the widest range of architectures.
These results will provide guidance to engineers in the early stages of AI Edge system development. The
results are accessible at https://bulletprove.com/research/edge_inference_results.csv.
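The latency comparisons summarized above rest on repeated timing of single forward passes. A minimal sketch of such a measurement harness is shown below; the function names and parameters are our own illustration for exposition, not the authors' benchmarking code, and a trivial stub stands in for an actual model invocation:

```python
import statistics
import time

def measure_latency(run_inference, n_warmup=5, n_runs=50):
    """Return (mean, stdev) single-inference latency in milliseconds.

    run_inference: zero-argument callable performing one forward pass.
    Warm-up runs are discarded so one-time costs (model loading,
    cache population, accelerator initialization) do not skew the average.
    """
    for _ in range(n_warmup):
        run_inference()
    samples = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(samples), statistics.stdev(samples)

# Stub workload in place of a real model forward pass:
mean_ms, std_ms = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_ms:.3f} ms +/- {std_ms:.3f} ms")
```

On a real device the callable would wrap the platform's inference API (e.g. a TensorFlow Lite interpreter invocation), and the mean would be reported per model and input size.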
Keywords: Edge device; Deep learning; Computer vision