Hardware-accelerated artificial intelligence (AI) is becoming ubiquitous, shifting from the cloud to resource-limited embedded and IoT platforms. While hardware accelerators enable fast and energy-efficient execution of neural network operations that are both memory- and compute-intensive, they face two fundamental challenges in practice. The first is unreliable inference caused by passive hardware faults in the memories, buffers, and computation units of traditional CMOS-based accelerators, as well as by the imperfect manufacturing and non-ideal device behaviors of emerging post-CMOS processing-in-memory (PIM) accelerators. The second is the violation of AI integrity and confidentiality by fault-injection and/or side-channel attacks targeting these new NN hardware accelerators.
The goal of this workshop is to establish a forum for discussing state-of-the-art research on AI accelerator design from the perspectives of reliability and security, which are two sides of the same coin: unexpected accelerator behavior can be induced by either hardware faults or malicious attacks.