Upcoming Webinars
SSCS Webinar Series - Professional Development, Networking, and Career Growth
We are proud to offer leading experts and advanced topics exclusively to our SSCS members.
All past webinars can be found here.
SSCS November Technical Webinar: Opportunities and Challenges for In- and Near-Memory Computing with RRAM, Presented by Arijit Raychowdhury
- Date
- 2023-11-30
- Time
- 10:00 AM ET
- Location
- Webinar - Online
- Contact
- Aeisha VanBuskirk – a.vanbuskirk@ieee.org
- Web site
- https://ieee.webex.com/weblink/register/rd01c42c6b94971b17e3dd7422f4f7bc5
- Presenter
- Arijit Raychowdhury
- Description
Abstract: Machine learning (ML) with deep neural networks (DNNs) is a driving technology for innovation broadly across signal, image, and speech processing. For processors, application accelerators, and server computers executing any of a variety of modern data-intensive computing tasks, including machine learning inference, moving data from large memories into processing elements can be a limiting factor for energy efficiency and/or throughput (performance). The high cost of off-chip memory accesses has led to the extensive use of large, dense on-chip memories, which can store part (or all) of the data required by the application, such as the weights used to compute forward inference on an input signal.

The potentially large leakage energy and constrained density of on-chip volatile memory (typically static random-access memory, SRAM) have underscored the need for a new generation of embedded memory technologies, including emerging logic-compatible embedded nonvolatile memories (eNVMs) such as resistive random-access memory (RRAM) and phase-change memory (PCM), as well as on-die, back-end-of-line (BEOL) or near-die dynamic random-access memory (eDRAM/DRAM). The cost of accessing data stored in these large on-chip arrays has also motivated the re-emergence of the analog compute-in-memory (CIM) paradigm, in which the stored states of multiple memory cells are selectively summed inside the memory array so that the read-out value represents a multiply-accumulate (MAC) result. This potentially improves access bandwidth and efficiency and/or reduces MAC computation area and energy.

However, analog CIM confronts several challenges: systematic and random variation affecting the memory cell current, other non-idealities in the CIM operation (such as IR drop) that further hinder accuracy, and the need for expensive readout circuitry. This talk will discuss current progress toward overcoming these challenges for improved current-summing CIM with foundry RRAM. It will also cover the advantages of RRAM beyond CIM, including near-memory computing implementations that exploit the reduced leakage and improved density offered by this technology.
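To make the current-summing CIM idea concrete, the following is a minimal numerical sketch (not from the talk): weights are treated as cell conductances, the input vector as row drive, and the summed bitline current as the MAC result, with simple stand-ins for the non-idealities the abstract mentions. The function name cim_mac and the parameters sigma_rel and ir_drop are illustrative assumptions, not part of any real CIM toolchain.

```python
import numpy as np

def cim_mac(x, weights, sigma_rel=0.05, ir_drop=0.02, rng=None):
    """Sketch of one analog CIM multiply-accumulate (MAC) column.

    Weights are encoded as cell conductances; driving the rows with
    inputs x sums the resulting cell currents on the shared bitline,
    so the read-out current approximates dot(x, weights).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Cell-to-cell conductance variation (systematic and random effects
    # lumped into one relative Gaussian term for illustration)
    g = weights * (1.0 + sigma_rel * rng.standard_normal(weights.shape))
    # Ideal current summing on the bitline
    i_bl = np.dot(x, g)
    # IR drop along the bitline attenuates the summed current
    return i_bl * (1.0 - ir_drop)

# Example: ideal digital MAC vs. the non-ideal analog CIM read-out
rng = np.random.default_rng(0)
x = rng.random(64)   # input activations (row drive)
w = rng.random(64)   # stored weights (cell conductances)
print("ideal MAC:", np.dot(x, w))
print("CIM MAC:  ", cim_mac(x, w, rng=rng))
```

Running this shows the analog read-out landing near, but not exactly on, the ideal dot product, which is why variation-tolerant design and careful readout circuitry are central challenges for the approach.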
Bio: Arijit Raychowdhury is the Steve W. Chaddick Chair of the School of Electrical and Computer Engineering, Georgia Institute of Technology, where he was previously the Motorola Foundation Professor. From 2013 to July 2019, he was an Associate Professor and held the ON Semiconductor Junior Professorship. His industry experience includes five years as a Staff Scientist with the Circuits Research Lab, Intel Corporation, and two years as an Analog Circuit Researcher with Texas Instruments Inc. His research interests include low-power digital and mixed-signal circuit design and exploring interactions of circuits with device technologies. He has authored over 300 articles in journals and refereed conferences and holds 27 U.S. and international patents. Dr. Raychowdhury’s students have won several prestigious fellowships and 16 best paper awards over the years. He currently serves on the technical program committee of ISSCC and the steering committee of CICC.