transformer

A Multicore Programmable Variable-Precision Near-Memory Accelerator for CNN and Transformer Models


Abstract:

Convolutional neural networks (CNNs) and transformers are the most popular neural network models in computer vision (CV) and natural language processing (NLP). It is quite common to use both models in multimodal scenarios, such as text-to-image generation. However, the two models have very different memory mappings, dataflows and …

View on IEEE Xplore

A “No Gain” Direct-Conversion IQ RF-to-Bits Receiver Without Active Linear Amplification


Abstract:

This work describes a direct-conversion IQ receiver (RX) that does not utilize any active linear (power) amplification, covering its design considerations, prototype implementation, and measurement verification. Only RLC components, MOS transistor (MOST) switches, and comparators are used, leading to several unique design challenges. Key among these is the fact that …

View on IEEE Xplore

Integrating Atomistic Insights With Circuit Simulations via Transformer-Driven Symbolic Regression


Abstract:

This article introduces a framework that establishes a cohesive link between first-principles simulations and circuit-level analyses using a machine learning-based compact modeling platform. Starting with atomistic simulations, the framework examines the microscopic details of material behavior, forming the foundation for later stages. The generated datasets, with molecular insights, …

View on IEEE Xplore

DPIM: A 2T1C eDRAM Transformer-in-Memory Chip With Sparsity-Aware Quantization and Heterogeneous Dense–Sparse Core


Abstract:

Transformer models have revolutionized artificial intelligence (AI) applications across various domains, but their increasing complexity poses significant challenges in terms of computational and memory demands. While processing-in-memory (PIM) paradigms have been adopted to address these limitations, existing PIM-based transformer accelerators still face hurdles, such as: 1) focusing solely on optimizing attention …

View on IEEE Xplore