SRAM cells

A 14-nm Nonvolatile-Volatile-Fused Compute-In-Memory Macro Based on Logic-Compatible Flash for Plastic Neural Networks


Abstract:

Designing computing-in-memory (CIM) chips with synaptic plasticity can potentially support energy-efficient on-chip learning in edge devices for rapid local task adaptation. Silicon implementation is challenging, however, because it requires hybridizing nonvolatile memory and volatile memory (VM) along with customized computational operations. In this work, we propose a plastic CIM (P-CIM) macro featuring: 1) …
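As a rough illustration of what "plastic" means at the synapse level (not the P-CIM macro's actual circuit or learning rule, which the abstract does not describe), a minimal Hebbian-style local weight update in software might look like this:

```python
import numpy as np

# Illustrative sketch only: a Hebbian-style local update of the kind a plastic
# synapse array could implement on-chip. The function name, learning rate, and
# decay term are assumptions for illustration, not the P-CIM macro's scheme.
def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """weights: (n_pre, n_post); pre/post: activity vectors."""
    weights += lr * np.outer(pre, post)  # strengthen co-active connections
    weights -= decay * weights           # slow decay keeps weights bounded
    return weights

pre = np.random.rand(4)
post = np.random.rand(3)
w = np.zeros((4, 3))
w = hebbian_update(w, pre, post)
print(w)
```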

View on IEEE Xplore

A 3-nm FinFET 563-kbit 35.5-Mbit/mm2 Dual-Rail SRAM With 3.89-pJ/Access High Energy Efficient and 27.5-μW/Mbit One-Cycle Latency Low-Leakage Mode


Abstract:

This article presents a high-density (HD) 6T SRAM macro designed in 3-nm FinFET technology with an extended dual-rail (XDR) architecture, addressing active energy and leakage for mobile applications. Two key innovations are introduced: the delayed-wordline in write operation (DEWL) technique and a one-cycle latency low-leakage access mode (1-CLM). The XDR …
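As a rough sanity check on how the headline figures relate (assuming the per-Mbit leakage number simply scales with the 563-kbit capacity, which the abstract does not state):

```python
# Rough sanity check of the headline figures; the linear scaling of the
# per-Mbit leakage with macro capacity is an assumption for illustration.
capacity_mbit = 563 / 1000        # 563 kbit expressed in Mbit
leakage_uw_per_mbit = 27.5        # low-leakage-mode figure from the title
access_energy_pj = 3.89           # energy per access from the title

print(f"macro leakage ~ {capacity_mbit * leakage_uw_per_mbit:.1f} uW in low-leakage mode")
print(f"access energy  = {access_energy_pj} pJ per access")
```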

View on IEEE Xplore

An Investigation of Minimum Supply Voltage of 5-nm SRAM From 300 K Down to 10 K


Abstract:

In this article, we present a comprehensive study of the impact of cryogenic temperatures on the minimum operating voltage ($V_{\min}$) of 5-nm fin field-effect transistor (FinFET)-based static random access memory (SRAM) cells. To perform the SRAM $V_{\min}$ evaluation, we have measured FinFETs fabricated using a commercial 5…

View on IEEE Xplore

Binarized Neural-Network Parallel-Processing Accelerator Macro Designed for an Energy Efficiency Higher Than 100 TOPS/W


Abstract:

A binarized neural-network (BNN) accelerator macro is developed based on a processing-in-memory (PIM) architecture capable of eight-parallel multiply-accumulate (MAC) processing. The parallel-processing PIM macro, referred to as a PPIM macro, is designed to perform parallel processing without multiport SRAM cells and to achieve the …
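For context, a binarized MAC reduces to a bitwise XNOR followed by a popcount; the sketch below illustrates that equivalence in software (the vector length and ±1 encoding are assumptions for illustration, not the PPIM macro's implementation):

```python
import numpy as np

# Illustrative sketch: a binarized MAC computed as XNOR + popcount.
def bnn_mac(acts, weights):
    """acts, weights: arrays of +1/-1 values. Returns their dot product."""
    a_bits = acts > 0                                 # encode +1 as True, -1 as False
    w_bits = weights > 0
    matches = np.count_nonzero(~(a_bits ^ w_bits))    # XNOR, then popcount
    return 2 * matches - len(acts)                    # map match count to a +/-1 dot product

acts = np.array([1, -1, 1, 1, -1, 1, -1, 1])
weights = np.array([1, 1, -1, 1, -1, -1, -1, 1])
assert bnn_mac(acts, weights) == int(acts @ weights)  # agrees with the ordinary dot product
print(bnn_mac(acts, weights))
```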

View on IEEE Xplore