Author: |
Miliadis Panagiotis (Μηλιαδης Παναγιωτης) http://users.isc.tuc.gr/~pmiliadis |
Contributors: |
Advisor: Pnevmatikatos Dionysios (Πνευματικατος Διονυσιος), Committee member: Dollas Apostolos (Δολλας Αποστολος), Committee member: Theodoropoulos Dimitrios (Θεοδωροπουλος Δημητριος) |
Language: |
English |
Subject: |
|
Description: |
Summary: Recent years have seen rapid growth in the development of applications based on Convolutional Neural Networks. Despite major advances in processing units, running computer vision tasks on resource-constrained platforms remains challenging. This thesis presents four toolkits that accelerate the performance of inference applications, targeting processing units from the top hardware vendors: Intel, Nvidia, Arm, and Xilinx. To achieve optimal execution, the toolkits exploit the hardware acceleration that the processors provide, as well as special processing units and platforms developed specifically for deep learning inference. The best-known models for each task are described, along with the frameworks that the toolkits support and that are used for model representation. Finally, real-world performance results are collected for different batch sizes of images, in order to build a performance landscape of the existing tools. |
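The batched performance measurements mentioned in the abstract can be sketched as a toolkit-agnostic harness; this is an illustrative outline only, since each vendor toolkit exposes its own API, and the `dummy_infer` stand-in and parameter names below are assumptions, not taken from the thesis:

```python
import time

def benchmark(infer, batch_sizes, runs=10):
    """Time an inference callable across batch sizes.

    `infer` is any function that processes a batch (a list of
    "images"); a real experiment would wrap a vendor toolkit's
    inference call here instead.
    Returns a dict mapping batch size -> throughput (images/sec).
    """
    results = {}
    for bs in batch_sizes:
        # Dummy flat "images"; real input would be decoded image tensors.
        batch = [[0.0] * 8 for _ in range(bs)]
        start = time.perf_counter()
        for _ in range(runs):
            infer(batch)
        elapsed = time.perf_counter() - start
        results[bs] = bs * runs / elapsed
    return results

def dummy_infer(batch):
    # Placeholder workload standing in for a CNN forward pass.
    return [sum(img) for img in batch]

throughput = benchmark(dummy_infer, [1, 8, 32])
```

Sweeping the batch size this way is what lets the comparison expose each toolkit's latency-versus-throughput trade-off.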
Database: |
OpenAIRE |
External link: |
|