Abstract: |
Machine Learning (ML) functions are becoming ubiquitous in latency- and privacy-sensitive IoT applications, prompting a shift toward near-sensor processing at the extreme edge and the consequent increasing adoption of Parallel Ultra-Low-Power (PULP) IoT processors. These compute- and memory-constrained parallel architectures need to run a wide range of algorithms efficiently, including key non-neural ML kernels that compete favorably with Deep Neural Networks in terms of accuracy under severe resource constraints. In this article, we focus on enabling efficient parallel execution of non-neural ML algorithms on two RISC-V-based PULP platforms, namely, GAP8, a commercial chip, and PULP-OPEN, a research platform running on an FPGA emulator. We optimize the parallel algorithms through fine-grained analysis and intensive tuning to maximize speedup, considering two alternative Floating-point (FP) emulation libraries on GAP8 and the native FPU support on PULP-OPEN. Experimental results show that a target-optimized emulation library can lead to an average 1.61× runtime improvement and 37% energy reduction compared to a standard emulation library, while the native FPU support reaches up to 32.09× and 99%, respectively. In terms of parallel speedup, our design improves on sequential execution by 7.04× on average on the targeted octa-core platforms, reducing energy and latency by up to 87%. Finally, we present a comparison with the ARM Cortex-M4 microcontroller, a widely adopted commercial solution for edge deployments, which is 12.87× slower than PULP-OPEN.
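
The parallel speedups reported above come from fork-join data parallelism across the cluster cores. As a rough illustration only, the C sketch below statically partitions a squared-Euclidean-distance kernel (a typical non-neural ML building block, e.g., for k-NN) over eight cores; `team_fork`, `core_id`, `NUM_CORES`, and the `knn_dist_*` names are hypothetical stand-ins for the PULP runtime's fork-join primitives, not the authors' code or the actual SDK API. The inner-loop float arithmetic is exactly the kind of work that is software-emulated on GAP8 but handled by the native FPU on PULP-OPEN.

```c
#include <stddef.h>

#define NUM_CORES 8  /* octa-core cluster, as on GAP8 / PULP-OPEN */

/* Hypothetical fork-join primitives standing in for the PULP runtime API. */
extern unsigned core_id(void);
extern void team_fork(unsigned n_cores, void (*entry)(void *), void *arg);

typedef struct {
    const float *query;  /* query feature vector, length dim          */
    const float *refs;   /* reference vectors, flattened n_refs x dim */
    float       *dist;   /* output: one squared distance per reference */
    size_t       n_refs; /* number of reference vectors               */
    size_t       dim;    /* feature dimension                         */
} knn_dist_args_t;

/* Each core processes a contiguous chunk of the reference set. */
static void knn_dist_worker(void *arg)
{
    knn_dist_args_t *k = (knn_dist_args_t *)arg;
    size_t chunk = (k->n_refs + NUM_CORES - 1) / NUM_CORES; /* ceil division */
    size_t start = (size_t)core_id() * chunk;
    size_t end   = start + chunk;
    if (end > k->n_refs)
        end = k->n_refs;

    for (size_t i = start; i < end; i++) {
        float acc = 0.0f;  /* FP ops: emulated on GAP8, native FPU on PULP-OPEN */
        for (size_t d = 0; d < k->dim; d++) {
            float diff = k->query[d] - k->refs[i * k->dim + d];
            acc += diff * diff;
        }
        k->dist[i] = acc;
    }
}

/* Fork the worker on all cluster cores and join before returning. */
void knn_dist_parallel(knn_dist_args_t *k)
{
    team_fork(NUM_CORES, knn_dist_worker, k);
}
```

Because the chunks are independent and of near-equal size, this pattern scales close to linearly with the core count, which is consistent with the 7.04× average speedup the abstract reports on eight cores.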