Abstract:
Computing-in-memory (CIM) is a promising paradigm for energy- and area-efficient execution of the heavy general matrix multiplication (GEMM) workloads that dominate evolving deep learning algorithms. Although existing CIM macros have demonstrated remarkable energy/area efficiency, the corresponding metrics of system-level CIM chips degrade due to the peripheral components, …