What's New

Welcome new Members-at-Large!

We welcome the following six new members of the society's Administrative Committee (AdCom). The first five were elected by members to three-year Member-at-Large terms beginning on January 1, 2024, while the sixth was elected by the AdCom to fill a vacated Member-at-Large seat for a term that expires on December 31, 2024:

  • Jung-Hwan Choi

  • Nadine Collaert

  • Pieter Harpe

  • Tim Piessens

  • Rabia Yazicigil

  • Woogeun Rhee

We also thank the Members-at-Large whose terms expire on December 31, 2023, for their three years of service to the society in that position:

  • Ichiro Fujimori

  • Rikky Muller

  • Kazuko Nishimura

  • Esther Rodriguez-Villegas

  • Hoi-Jun Yoo

Feature Article

SSCS Women in Circuits presents Family Care at ISSCC 2024!

The SSCS children's program will be provided by KiddieCorp. Our goal is to offer your children a program they want to attend, while giving you that critical "peace of mind" so you can attend sessions and events at the IEEE International Solid-State Circuits Conference (ISSCC).

KiddieCorp is in its thirty-eighth year of providing high-quality children’s programs and youth services to conventions, trade shows, and special events. We take caring for your children very seriously. KiddieCorp has enjoyed a long-time partnership with the American Academy of Pediatrics, which has helped to establish KiddieCorp as a premier provider of event children’s program services.

The program is for children ages 6 months through 12 years old. KiddieCorp will offer child care services on February 18-21, 2024, at the San Francisco Marriott Marquis in San Francisco, California.

For more details, click here!

Technology Spotlight

SSCS November Technical Webinar

Opportunities and Challenges for In- and Near-Memory Computing with RRAM

Virtual Webinar presented by Arijit Raychowdhury

Register Here!

Join us on Thursday, November 30th at 10 AM ET

Machine learning (ML) with deep neural networks (DNNs) is a driving technology for innovation broadly across signal, image, and speech processing. For processors, application accelerators, and server computers executing any of a variety of modern data-intensive computing tasks, including machine learning inference, moving data from large memories into processing elements can be a limiting factor for energy efficiency and/or throughput (performance).

The high cost of off-chip memory accesses has led to the extensive use of large, dense on-chip memories. These memories can store part (or all) of the data required by the application, such as the weights used to compute forward inference based on an input signal. The potentially large leakage energy and constrained density of on-chip volatile memory (typically static random-access memory, SRAM) have underscored the need for a new generation of embedded memory technologies, including emerging logic-compatible embedded nonvolatile memory technologies (eNVMs) such as resistive random-access memory (RRAM) or phase-change memory (PCM), and on-die, back-end-of-line (BEOL) or near-die dynamic random-access memory (eDRAM/DRAM).

The cost of accessing data stored in these large on-chip arrays has also motivated the re-emergence of the analog compute-in-memory (CIM) paradigm, in which the stored states in multiple memory cells are selectively added together inside the memory array so that the read-out value represents a multiply-accumulate (MAC) operation result. This potentially allows improved access bandwidth and efficiency and/or reduced MAC computation area and energy. However, analog CIM confronts many challenges: systematic and random variation affecting the memory cell current, other non-idealities in the CIM operation such as IR drop that further hinder accuracy, and the need for expensive readout circuitry.
This talk will discuss current progress toward overcoming these challenges and improving current-summing CIM with foundry RRAM. It will also cover the advantages of RRAM beyond CIM, including near-memory computing implementations that exploit the reduced leakage and improved density offered by this technology.
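The current-summing idea in the abstract can be illustrated with a minimal numerical sketch: weights stored as cell conductances, inputs applied as voltages, and the bit-line current summing the per-cell currents into a MAC result. This is an illustrative model only, not from the talk; the function name `cim_mac` and the variation parameter `sigma` are assumptions chosen to show how device variation perturbs the analog result.

```python
import numpy as np

rng = np.random.default_rng(0)

def cim_mac(weights, inputs, sigma=0.05):
    """Toy model of one analog current-summing CIM column.

    Each weight is stored as a cell conductance G; the inputs drive the
    word lines as voltages V, so each cell contributes I = G * V and the
    shared bit line sums the currents. Gaussian conductance variation
    (sigma, a hypothetical 5% mismatch) models device-to-device variation.
    """
    g = weights * (1 + rng.normal(0.0, sigma, size=weights.shape))
    return float(np.dot(g, inputs))  # summed bit-line current ~= MAC result

weights = np.array([0.2, 0.5, 0.3])
inputs = np.array([1.0, 0.0, 1.0])
ideal = float(np.dot(weights, inputs))   # exact MAC: 0.5
noisy = cim_mac(weights, inputs)         # analog result with variation
```

The gap between `ideal` and `noisy` is the accuracy cost the abstract refers to; reducing that gap (or compensating for it) is one of the challenges the talk addresses.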