Exploring the Vision Processing Unit as Co-Processor for Inference

Other authors

Barcelona Supercomputing Center

Publication date

2018-08-06

Abstract

The success of exascale supercomputing is widely argued to remain dependent on novel technology breakthroughs that effectively reduce power consumption and thermal dissipation requirements. In this work, we consider the integration of co-processors in high-performance computing (HPC) to enable low-power, seamless computation offloading of certain operations. In particular, we explore the so-called Vision Processing Unit (VPU), a highly parallel vector processor with a power envelope of less than 1W. We evaluate this chip during inference using a pre-trained GoogLeNet convolutional network model and a large image dataset from the ImageNet ILSVRC challenge. Preliminary results indicate that a multi-VPU configuration provides performance comparable to reference CPU and GPU implementations, while reducing the thermal design power (TDP) by up to 8x.


The experimental results were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Centre for High-Performance Computing (PDC-HPC). The work was funded by the European Commission through the SAGE project (Grant agreement no. 671500 / http://www.sagestorage.eu).


Postprint (author's final draft)

Document Type

Conference lecture

Language

English

Publisher

IEEE

Related items

https://ieeexplore.ieee.org/document/8425465/

info:eu-repo/grantAgreement/EC/H2020/671500/EU/SAGE/SAGE


Rights

Open Access
