JCSE, vol. 17, no. 2, pp. 51-59, 2023

DOI: http://dx.doi.org/10.5626/JCSE.2023.17.2.51

Edge Devices Inference Performance Comparison

Rafal Tobiasz, Grzegorz Wilczynski, Piotr Graszka, Nikodem Czechowski, and Sebastian Luczak
Bulletprove Spolka Z Ograniczona Odpowiedzialnoscia, Pulawy, Poland

Abstract: In this study, we investigated the inference time of the MobileNet family, the EfficientNet V1 and V2 families, VGG models, the ResNet family, and InceptionV3 on four edge platforms: the NVIDIA Jetson Nano, Intel Neural Stick, Google Coral USB Dongle, and Google Coral PCIe. Our main contribution is a thorough analysis of the aforementioned models in multiple settings, in particular as a function of input size, the presence and size of the classification head, and the scale of the model. Since these architectures are mainly used as feature extractors throughout the industry, we analyzed them chiefly as such. We show that the Google platforms offer the fastest average inference time, especially for newer models such as the MobileNet and EfficientNet families, while the Intel Neural Stick is the most universal accelerator, able to run the largest number of architectures. These results provide guidance to engineers in the early stages of AI edge system development. The results are accessible at https://bulletprove.com/research/edge_inference_results.csv.
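The benchmarking approach described in the abstract (timing repeated forward passes of models used as feature extractors) can be sketched as a minimal timing harness. This is an illustrative sketch, not the authors' published code: the `model_fn` stand-in below is hypothetical, and on real hardware it would wrap the device runtime's inference call (e.g., a TFLite interpreter invocation on the Coral, or a TensorRT engine on the Jetson Nano).

```python
import time
import statistics

def benchmark(model_fn, make_input, warmup=5, runs=50):
    """Return (mean, stdev) inference latency in milliseconds.

    model_fn   -- callable performing one forward pass on an input batch
                  (hypothetical stand-in for a compiled edge-device model)
    make_input -- callable returning a fresh input of the desired size
    """
    x = make_input()
    # Warm-up runs are excluded from timing so one-time costs
    # (caching, lazy allocation) do not skew the average.
    for _ in range(warmup):
        model_fn(x)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        model_fn(x)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(samples), statistics.stdev(samples)

# Example with a trivial stand-in "model" (a pure-Python dot product);
# on a real device this callable would run the accelerated network.
weights = [0.5] * 1024
mean_ms, std_ms = benchmark(
    lambda x: sum(a * b for a, b in zip(x, weights)),
    lambda: [1.0] * 1024,
)
print(f"mean latency: {mean_ms:.3f} ms (stdev {std_ms:.3f} ms)")
```

Averaging over many runs after a warm-up phase is the standard way to get stable latency numbers on accelerators, whose first inferences are often dominated by model loading and memory transfers.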

Keywords: Edge device; Deep learning; Computer vision

ⓒ Copyright 2010 KIISE – All Rights Reserved.    
Korean Institute of Information Scientists and Engineers (KIISE)   #401 Meorijae Bldg., 984-1 Bangbae 3-dong, Seo-cho-gu, Seoul 137-849, Korea
Phone: +82-2-588-9240    Fax: +82-2-521-1352    Homepage: http://jcse.kiise.org    Email: office@kiise.org