Jin-Chuan See et al.

Practical deployment of convolutional neural networks (CNNs) and cryptography algorithms on constrained devices is challenging due to their huge computation and memory requirements. Developing separate hardware accelerators for AI and cryptography incurs large area consumption, which is undesirable in many applications. This paper proposes a viable solution to this issue by expressing both CNN and cryptography workloads as Generic-Matrix-Multiplication (GEMM) operations and mapping them onto the same accelerator for reduced hardware consumption. A novel systolic tensor array (STA) design is proposed to reduce data movement, cutting the operand registers by 2×. Two novel techniques, input-layer extension and polynomial factorization, are proposed to mitigate the under-utilization issue found in existing STA architectures. Additionally, the Tensor Processing Elements (TPEs) are fused into DSP units to reduce the Look-Up Table (LUT) and Flip-Flop (FF) consumption of the multipliers. On top of that, a novel memory-efficient factorization technique is proposed to allow computation of polynomial convolution on the same STA. Experimental results show that Cryptensor achieves 22.3% better throughput for a VGG-16 implementation on the XC7Z020 FPGA and uses 95.0% fewer LUTs on the XC7Z045 compared to state-of-the-art results. Cryptensor can also flexibly support multiple security levels of the NTRU scheme with no additional hardware. The proposed hardware unifies the computation of two domains that are critical for IoT applications, greatly reducing hardware consumption on edge nodes.
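The unification rests on a standard observation that both workloads reduce to matrix multiplication: a CNN convolution can be lowered to one GEMM via im2col, and NTRU-style polynomial multiplication in Z_q[x]/(x^N − 1) is a circulant matrix-vector product. The NumPy sketch below illustrates this shared-primitive idea only; the function names, shapes, and the use of dense matrices are illustrative assumptions, not the paper's STA implementation.

```python
import numpy as np

def im2col_conv(x, w):
    # Valid 2-D convolution (CNN-style cross-correlation) lowered to a
    # single GEMM: unroll each kxk input patch into a column, then
    # multiply by the flattened kernel.
    H, W = x.shape
    k = w.shape[0]
    oh, ow = H - k + 1, W - k + 1
    cols = np.empty((k * k, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + k, j:j + k].ravel()
    return (w.ravel() @ cols).reshape(oh, ow)  # the GEMM step

def ntru_polymul(a, b, q):
    # Polynomial multiplication in Z_q[x]/(x^N - 1): build the circulant
    # matrix of a, so the product becomes the same GEMM primitive.
    N = len(a)
    C = np.empty((N, N), dtype=np.int64)
    for i in range(N):
        C[:, i] = np.roll(a, i)  # column i is a cyclically shifted by i
    return (C @ np.asarray(b, dtype=np.int64)) % q
```

Because both routines bottom out in the same matrix product, a single GEMM accelerator can, in principle, time-share the two workloads; supporting a different NTRU security level only changes N, i.e. the matrix dimensions, not the datapath.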