Special Issue on Advanced Circuits, Architectures, and Systems for Next-Generation AI Processors
Aim and Scope
As AI workloads span from always-on edge inference to hyperscale training in data centers, next-generation processors and hardware accelerators must unite circuits, architectures, and system-level integration to overcome compute–memory and energy constraints. Progress hinges on hardware–algorithm co-design and design automation, delivering energy-efficient dataflows, on-chip networks, and memory hierarchies tuned to AI workloads, including Large Language Models and other foundation models. Reliability, security, and safety must be first-class design goals, alongside emerging paradigms such as bio-inspired, neuromorphic, and probabilistic computing. Advances in compute-in-memory and near-memory AI hardware, efficient data movement, and high-density integration—chiplets, 2.5D/3D packaging, and domain-specific interconnects—are critical to scaling performance and capacity. These capabilities also enable autonomous, always-on inference under tight power and thermal envelopes, charting a path to versatile, high-performance AI processing across edge and cloud environments.
This special issue aims to highlight the latest innovations in AI processors and hardware accelerator designs, encompassing circuits, architectures, and system-level integration with application-oriented perspectives. We particularly welcome contributions that advance energy efficiency, performance, reliability, and scalability, with a strong focus on efficient and high-performance compute, memory, and interconnect architectures for AI processing.

Topics of Interest
Authors are invited to submit papers following the IEEE Open Journal of the Solid-State Circuits Society (OJ-SSCS) guidelines, within the remit of this Special Section call. Topics include (but are not limited to):
- Processor and hardware accelerator design for AI applications (training and inference) at the edge and in data centers
- Circuits, architectures, and system-level integration for AI accelerators
- Hardware–algorithm co-design/optimization
- AI processor designs with reliability, security, and safety considerations
- Bio-inspired, neuromorphic, and probabilistic AI computing
- Compute-in-memory and near-memory AI hardware
- Efficient architectures for data movement and memory hierarchy
- High-density integration (chiplets, 2.5D/3D packaging, interconnects)
- Design automation and optimization for AI processors
- Energy-efficient AI accelerator architectures
- On-chip network and memory hierarchy optimized for AI workloads
- Chiplet-based AI processor integration and interconnect technologies
- AI processors and hardware accelerators for Large Language Models and other foundation models
- Always-on and autonomous AI inference

Submission Guidelines
All submitted manuscripts are strongly encouraged to
- conform to OJ-SSCS’s standard formatting requirements and page-count limits;
- validate principal claims with experimental results;
- be submitted online at: https://mc.manuscriptcentral.com/oj-sscs
Please note that you must select “Advanced Circuits, Architectures, and Systems for Next-Generation AI Processors” when submitting a paper to this Special Section.

Deadlines
- Special Section Open for Submissions: October 15, 2025
- Paper Submission Deadline: December 14, 2025
- First Notification: January 29, 2026
- Revision Submission: February 19, 2026
- Final Decision: March 12, 2026
- Publication Online: March 24, 2026