deep neural network (DNN)

A 28-nm 18.7 TOPS/mm² 89.4-to-234.6 TOPS/W 8b Single-Finger eDRAM Compute-in-Memory Macro With Bit-Wise Sparsity Aware and Kernel-Wise Weight Update/Refresh

Abstract:

This article reports a high-density 3T1C single-finger (SF) embedded dynamic random access memory (eDRAM) compute-in-memory (CIM) macro. It features several techniques that enhance the memory density, the energy efficiency, and the throughput density, namely: 1) a high-density 3T1C SF-eDRAM cell with low-leakage retention (LLR) to improve the memory density …
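The abstract is truncated before it reaches the bit-wise sparsity scheme named in the title, but bit-wise sparsity awareness in compute-in-memory macros generally refers to skipping work for weight bit-planes that contain no set bits during bit-serial multiply-accumulate. The Python sketch below is only a functional model of that general idea under that assumption, not the macro's actual circuit; `bit_serial_mac` and its parameters are hypothetical names.

```python
import numpy as np

def bit_serial_mac(weights, activations, bits=8):
    """Functional model of a bit-serial multiply-accumulate that skips
    all-zero weight bit-planes (the general idea behind bit-wise
    sparsity-aware CIM; hypothetical, not the macro's circuit)."""
    weights = np.asarray(weights, dtype=np.int64)       # unsigned 8b weights
    activations = np.asarray(activations, dtype=np.int64)
    acc = 0
    skipped = 0
    for b in range(bits):
        bit_plane = (weights >> b) & 1                  # one bit of every weight
        if not bit_plane.any():                         # sparsity: skip an empty plane
            skipped += 1
            continue
        acc += (bit_plane * activations).sum() << b     # partial sum, shifted by bit weight
    return acc, skipped

# Quick check against a direct dot product.
w = np.array([3, 0, 12, 7], dtype=np.int64)
a = np.array([5, 9, 2, 1], dtype=np.int64)
result, skipped_planes = bit_serial_mac(w, a)
assert result == int(np.dot(w, a))
print(result, "skipped bit-planes:", skipped_planes)
```

In hardware, the skipped bit-planes would translate into saved array activations and ADC conversions; the model above only counts them.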

View on IEEE Xplore

EPU: An Energy-Efficient Explainable AI Accelerator With Sparsity-Free Computation and Heat Map Compression/Pruning

Abstract:

Deep neural networks (DNNs) have recently gained significant prominence in various real-world applications such as image recognition, natural language processing, and autonomous vehicles. However, due to their black-box nature, the underlying mechanisms behind DNN inference results remain opaque to users. To address this challenge, …

View on IEEE Xplore