Abstract:
k-Nearest Neighbors (k-NN) is one of the most widely used classification algorithms in real-world machine learning applications such as computer vision, speech recognition, and data mining. Handling massive high-dimensional datasets while delivering reasonable accuracy and adequate response time is the most challenging aspect of k-NN implementation, and this challenge is exacerbated by the rapid growth in dataset size and in the feature dimension of each data point. In this paper, we leverage the parallelism and digit-level pipelining opportunities offered by FPGA devices and online arithmetic to address these issues for k-NN classification based on two distance metrics: Manhattan and Euclidean. In these designs, all operations needed for distance measurement and sorting are performed on serially arriving data at no or minimal hardware cost. Serial computation reduces the size of each classifier instance and its memory footprint, allowing more parallel instances on our target devices. Furthermore, we dynamically terminate unnecessary computations as soon as they are detected to reduce power consumption, which is possible in more than half of the cases on average. The proposed k-NN implementations are the first hardware accelerator designs that effectively use online arithmetic on FPGA. Based on implementation results, our k-NN implementation using Manhattan distance provides a 1.72x to 2.23x speedup over state-of-the-art designs, while the corresponding improvements for Euclidean distance range from 1.06x to 2.76x.
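
For reference, the two distance metrics named in the abstract have their standard definitions, sketched below in LaTeX for clarity; the symbols x, y, and the feature dimension d are introduced here for illustration and are not taken from the paper itself.

% Manhattan (L1) distance between feature vectors x and y of dimension d
D_{\mathrm{Manhattan}}(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^{d} \lvert x_i - y_i \rvert

% Euclidean (L2) distance over the same vectors
D_{\mathrm{Euclidean}}(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}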