ApesNet: a pixel-wise efficient segmentation network for embedded devices
Authors: | Chunpeng Wu, Hsin-Pai Cheng, Sicheng Li, Hai (Helen) Li, Yiran Chen |
---|---|
Language: | English |
Publication year: | 2016 |
Subjects: |
computer vision; image segmentation; learning (artificial intelligence); embedded systems; ApesNet; pixel-wise efficient segmentation network; embedded devices; semantic segmentation; road scene understanding; machine learning model; high-level scene understanding model; classification time; CamVid; Cityscapes; SegNet-Basic; deep convolutional encoder-decoder architecture; Computer engineering. Computer hardware TK7885-7895; Electronic computers. Computer science QA75.5-76.95 |
Source: | IET Cyber-Physical Systems (2016) |
Document type: | article |
ISSN: | 2398-3396 |
DOI: | 10.1049/iet-cps.2016.0027 |
Description: | Road scene understanding and semantic segmentation are ongoing problems in computer vision. Precise segmentation helps a machine learning model understand the real world more accurately, and a well-designed efficient model can be deployed on resource-limited devices. The authors aim to implement an efficient high-level scene understanding model on an embedded device with finite power and resources. Toward this goal, they propose ApesNet, an efficient pixel-wise segmentation network that understands road scenes in near real time and achieves promising accuracy. The key findings of the authors' experiments are a significantly lower classification time and a high accuracy compared with other conventional segmentation methods. The model is characterised by efficient training and sufficiently fast testing. Experimentally, the authors use two road scene benchmarks, CamVid and Cityscapes, to show the advantages of ApesNet. They compare the proposed architecture's accuracy and time performance with SegNet-Basic, a deep convolutional encoder–decoder architecture. ApesNet is 37% smaller than SegNet-Basic in terms of model size. With this advantage, the combined encoding and decoding time for each image is 2.5 times faster than SegNet-Basic. |
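The abstract describes a pixel-wise encoder-decoder segmentation network. The toy NumPy sketch below is not the authors' ApesNet (whose layers, sizes, and class count are not given here); it only illustrates, under assumed toy dimensions, the core encoder-decoder property the abstract relies on: spatial resolution is reduced by the encoder and restored by the decoder, so every input pixel receives a class label.

```python
# Illustrative encoder-decoder sketch in NumPy — NOT the authors' ApesNet.
# The 8x8 image, 2x2 pooling, and two-class thresholding are hypothetical,
# chosen only to show why the output of such a network is pixel-wise.
import numpy as np

def encode(x):
    """Downsample by 2x2 max pooling (stand-in for the encoder half)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def decode(x):
    """Upsample by nearest-neighbour repetition (stand-in for the decoder half)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def segment(image):
    """Toy pixel-wise 'segmentation': encode, decode, then threshold.

    A real network interleaves learned convolutions at each stage; here we
    only demonstrate that the decoder restores the input's spatial
    resolution, which is what makes the prediction pixel-wise.
    """
    features = encode(image)    # reduced spatial resolution
    restored = decode(features) # back to full resolution
    return (restored > restored.mean()).astype(np.int64)  # two toy classes

image = np.arange(64, dtype=np.float64).reshape(8, 8)
labels = segment(image)
print(labels.shape)  # same H x W as the input
```

The same shape-preserving structure is what allows an encoder-decoder model's size and per-image encoding-plus-decoding time (the quantities compared against SegNet-Basic above) to be measured end to end.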
Database: | Directory of Open Access Journals |
External link: |