Distinguished Lecturer Roster

Terms through 31 December 2024

Department of Electrical and Systems Engineering, Associate Professor
Firooz Aflatouni (Senior Member, IEEE) received the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, CA, USA, in 2011. In 1999, he co-founded Pardis Bargh Company, where he served as the CTO for five years, working on the design and manufacturing of inclined-orbit satellite tracking systems. From 2004 to 2006, he was a Design Engineer with MediaWorks Integrated Circuits Inc., Irvine, CA. He was a Post-Doctoral Scholar with the Department of Electrical Engineering, California Institute of Technology, Pasadena, CA. He joined the University of Pennsylvania, Philadelphia, PA, USA, in 2014, where he is currently an Associate Professor with the Department of Electrical and Systems Engineering. His research interests include electronic–photonic co-design and low-power RF and millimeter-wave integrated circuits. Dr. Aflatouni received the 2020 Bell Labs Prize, the Young Investigator Program (YIP) Award from the Office of Naval Research in 2019, the NASA Early Stage Innovation Award in 2019, and the 2015 IEEE Benjamin Franklin Key Award. He is a Distinguished Lecturer of the IEEE Solid-State Circuits Society and has served on several IEEE program committees (ISSCC, CICC, and IMS). He is an Associate Editor of the IEEE Open Journal of the Solid-State Circuits Society and currently serves as the Chair of the IEEE Solid-State Circuits Society (SSCS) Philadelphia Chapter.
Electronic-photonic co-design: from imaging to optical phase control
Integrated electronic-photonic co-design can profoundly impact both fields, resulting in advances in several areas such as energy-efficient communication, computation, signal processing, imaging, and sensing. Examples of integrated electronic-photonic co-design may be categorized into two groups: (a) electronic-assisted photonics, where integrated analog, RF, mm-wave, and THz circuits are employed to improve the performance of photonic systems, and (b) photonic-assisted electronics, where photonic systems and devices are used to improve the performance of integrated RF, mm-wave, and THz systems. In this talk, examples of electronic-photonic co-design such as photonic-assisted near-field imaging, photonic-mmWave deep networks, and low-power laser stabilization and linewidth reduction will be presented.
Integrated photonic deep networks for image classification
The typical hardware platform for neural networks operates on clocked computation and consists of advanced parallel graphics processing units (GPUs) and/or application-specific integrated circuits (ASICs), which are reconfigurable, multi-purpose, and robust. However, for such platforms the input data often needs to be converted to the electrical domain, digitized, and stored. Furthermore, a clocked computation system typically has high power consumption, suffers from limited speed, and requires a large data storage device. To address the ever-increasing demand for more sophisticated and complex AI-based systems, deeper neural networks with large numbers of layers and neurons are required, which result in even higher power consumption and longer computation time. Photonic deep networks could address some of these challenges by utilizing the large bandwidth available around the optical carrier and the low propagation loss of CMOS-compatible photonic devices and blocks. In this talk, a low-cost, highly scalable integrated photonic architecture for the implementation of deep neural networks for image/video/signal classification is presented, where the input images are taken using an array of pixels and directly processed in the optical domain. The implemented system performs computation by propagation and, as such, is several orders of magnitude faster than state-of-the-art clocked systems while operating at significantly lower power consumption. This system, which is scalable to networks with a large number of layers, performs in-domain processing (i.e., processing in the optical domain); as a result, opto-electronic conversion, analog-to-digital conversion, and the need for a large memory module are eliminated.
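The computation-by-propagation idea can be illustrated with a minimal numerical sketch (a toy model, not the implemented system): each layer is modeled as a complex transmission matrix acting on optical field amplitudes, with photodetection supplying an intensity nonlinearity. All names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_layer(fields, T):
    """One 'layer' modeled as propagation through a complex transmission
    matrix T, followed by intensity detection (a simple stand-in for an
    optical nonlinearity)."""
    out = T @ fields              # linear propagation is a matrix-vector product
    return np.abs(out) ** 2       # photodetection yields optical intensity

# Toy 3-layer network acting on an 8-pixel input "image".
pixels = rng.random(8)            # field amplitudes from a pixel array
layers = [rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
          for _ in range(3)]

x = pixels
for T in layers:
    x = photonic_layer(x, T)      # the "computation" is just propagation
```

Because each layer is evaluated by letting light traverse a passive structure, there is no clock: the latency is set by propagation time rather than by a sequence of clocked operations.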
IBM Research, Research Staff Member
Sudipto Chakraborty received the B.Tech. degree from the Indian Institute of Technology Kharagpur in 1998 and the Ph.D. degree in electrical engineering from the Georgia Institute of Technology in 2002. He worked as a researcher at the Georgia Electronic Design Center (GEDC) until 2004. From 2004 to 2016, he was a Senior Member of Technical Staff at Texas Instruments, where he contributed to low-power integrated circuit design in more than 10 product families in the areas of automotive, wireless, medical, and microcontrollers. Since 2017, he has been with the IBM T. J. Watson Research Center, where he leads low-power circuit design for next-generation quantum computing applications using nanoscale CMOS technology nodes. He has authored or co-authored more than 75 papers and two books, and holds 76 US patents. He has served on the technical program committees of various conferences, including CICC, RFIC, and IMS, and was named an IBM Master Inventor in 2022 for his contributions.
Current mode design techniques for low power transceivers
This talk will cover the principles and application of current-mode design techniques for ultra-low-power transceivers and signal generators. Current-mode design techniques have become popular in recent times due to the emergence of beamforming and AI/ML applications. In this talk, a few novel constructs for current-mode design will be presented, implemented in 14 nm CMOS FinFET technology for a qubit state controller (QSC) used in next-generation quantum computing applications. The QSC includes an augmented general-purpose digital processor that supports waveform generation and phase rotation operations, combined with a low-power current-mode single-sideband upconversion I/Q mixer-based RF arbitrary waveform generator (AWG). Implemented in 14 nm CMOS FinFET technology, the QSC generates control signals in its target 4.5 GHz to 5.5 GHz frequency range, achieving an SFDR > 50 dB for a signal bandwidth of 500 MHz. With the controller operating in the 4 K stage of a cryostat and connected to a transmon qubit in the cryostat's millikelvin stage, measured transmon T1 and T2 coherence times were 75.7 μs and 73 μs, respectively, in each case comparable to results achieved using conventional room-temperature controls. In further tests with transmons, a qubit-limited error rate of 7.76×10^-4 per Clifford gate is achieved, again comparable to results achieved using room-temperature controls. The QSC's maximum RF output power is -18 dBm, and power dissipation per qubit under active control is 23 mW.
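The single-sideband upconversion at the heart of such an I/Q mixer-based AWG can be checked numerically (a textbook I/Q mixing model, not the QSC implementation; all frequencies are illustrative): mixing a quadrature baseband pair with quadrature LO phases produces only the upper sideband and suppresses the image.

```python
import numpy as np

fs, n = 1_000_000.0, 1000           # sample rate and record length (illustrative)
t = np.arange(n) / fs
f_lo, f_bb = 200e3, 30e3            # LO and baseband tones, on exact FFT bins

i_bb = np.cos(2 * np.pi * f_bb * t)             # in-phase baseband
q_bb = np.sin(2 * np.pi * f_bb * t)             # quadrature baseband
# Single-sideband mix: I*cos(LO) - Q*sin(LO) = cos((f_lo + f_bb)*2*pi*t)
rf = i_bb * np.cos(2 * np.pi * f_lo * t) - q_bb * np.sin(2 * np.pi * f_lo * t)

spec = np.abs(np.fft.rfft(rf)) / (n / 2)        # normalized amplitude spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)
upper = spec[np.argmin(np.abs(freqs - (f_lo + f_bb)))]   # wanted sideband, 230 kHz
image = spec[np.argmin(np.abs(freqs - (f_lo - f_bb)))]   # image sideband, 170 kHz
```

With ideal quadrature the image vanishes identically; in hardware, gain and phase mismatch between the I and Q paths set the achievable image rejection and contribute to the SFDR figure quoted above.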
Low power cryo-CMOS design for quantum computing applications
This talk will cover practical challenges of cryogenic CMOS design for next-generation quantum computing. Starting from the system level, it will detail the design considerations for a non-multiplexed, semi-autonomous, transmon qubit state controller (QSC) implemented in 14 nm CMOS FinFET technology. The QSC includes an augmented general-purpose digital processor that supports waveform generation and phase rotation operations, combined with a low-power current-mode single-sideband upconversion I/Q mixer-based RF arbitrary waveform generator (AWG). The QSC generates control signals in its target 4.5 GHz to 5.5 GHz frequency range, achieving an SFDR > 50 dB for a signal bandwidth of 500 MHz. With the controller operating in the 4 K stage of a cryostat and connected to a transmon qubit in the cryostat's millikelvin stage, measured transmon T1 and T2 coherence times were 75.7 μs and 73 μs, respectively, in each case comparable to results achieved using conventional room-temperature controls. In further tests with transmons, a qubit-limited error rate of 7.76×10^-4 per Clifford gate is achieved, again comparable to results achieved using room-temperature controls. The QSC's maximum RF output power is -18 dBm, and power dissipation per qubit under active control is 23 mW.
Intel Fellow, Director of New Memory Technologies
Fatih Hamzaoglu (SM'11) received his Ph.D. degree in Electrical Engineering from the University of Virginia, Charlottesville, in 2002. After finishing the Ph.D., he joined the Technology Development group at Intel Corporation, and since then he has been working on memory technology development, including SRAM, eDRAM, MRAM, and RRAM. Currently, he is an Intel Fellow and Director of New In-Package Memory Technologies, IP Design and Product Integration. He is the author or co-author of more than 40 papers and inventor/co-inventor of more than 30 patents. Dr. Hamzaoglu served on both the VLSI Symposium Circuits Committee and the ISSCC Memory Subcommittee between 2013 and 2019.
Journey through the Memory Tunnel: SRAM, (e)DRAM, MRAM and RRAM Array Designs and Applications
Besides the classic PC and server applications, the ever-growing AI and HPC applications are driving even more data processing and, hence, memory growth. SRAM and DRAM have been, and will continue to be, the main workhorses of data-processing memory. But there are opportunities to invent new memories to close the performance-density gaps in the memory hierarchy. This talk will go through the memory hierarchy and touch upon different memory types and their applications. It will also analyze emerging memories such as MRAM, RRAM, and FeRAM, and how they could replace eFlash for embedded non-volatile memory.
Associate Professor, University of Alberta
Masum Hossain (M'11) received the B.Sc. degree from the Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2002, the M.Sc. degree from Queen's University, Kingston, ON, Canada, in 2005, and the Ph.D. degree from the University of Toronto, Toronto, ON, in 2010. From 2007 to 2013, he worked in product development and industrial research, focusing on high-speed link design in multiple organizations, including Gennum and Rambus. In 2013, he joined the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada. In 2023, he joined Carleton University in Ottawa, Canada. Dr. Hossain received the Best Student Paper Award at the 2008 IEEE Custom Integrated Circuits Conference and the Analog Devices Outstanding Student Designer Award in 2010. In 2021, he received the IEEE Electronics Packaging Society (EPS) Best Paper Award for a paper in the IEEE Transactions on Components, Packaging and Manufacturing Technology.
Digital equalization for Multilevel signaling in high-speed SerDes
Multilevel signaling has extended the lifeline of wireline signaling beyond 100 Gb/s, but its SNR penalty has mandated much more sophisticated equalization that is better suited to digital implementation. This presentation aims at bridging the gap between well-understood analog/mixed-signal solutions and today's DSP-based solutions. Starting from traditional analog architectures, this talk will walk through the evolution toward today's DSP-based equalization and provide the background for tomorrow's sequence decoding.
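The core of DSP-based equalization can be sketched with a generic textbook LMS feed-forward equalizer (not any specific product architecture; the channel, tap count, and step size below are illustrative): FIR taps are adapted to undo intersymbol interference on PAM-4 symbols.

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=20_000)   # PAM-4 levels
channel = np.array([0.1, 1.0, 0.3])                          # illustrative ISI channel
rx = np.convolve(symbols, channel)[1:1 + symbols.size]       # main cursor aligned

taps = np.zeros(5)       # 5-tap feed-forward equalizer, cursor in the middle
mu = 1e-3                # LMS step size
errs = []
for k in range(2, symbols.size - 2):
    window = rx[k - 2:k + 3][::-1]        # received samples around the cursor
    y = taps @ window                     # equalizer output
    e = symbols[k] - y                    # data-aided (training) error
    taps += mu * e * window               # LMS tap update
    errs.append(e)
```

After convergence the residual error is a small fraction of the PAM-4 eye spacing; in a real receiver the same update runs decision-directed, using sliced symbols instead of known training data.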
Evolution of the Timing Recovery techniques in High-speed Links
Timing recovery techniques have evolved significantly over the last 25 years of high-speed link design. In the first decade, this evolution was motivated by technology scaling and scalability, where it gradually moved to a fully digital implementation from an analog PLL-based approach. However, the evolution in the last decade is motivated by the adoption of multilevel signaling. The emergence of MMSE as an alternative to 2X oversampled solutions is an example of such recent developments. This talk aims to bring designers up to speed on the state-of-the-art ADC-DSP solutions, explain their motivation, and finally conclude with silicon results to validate the performance improvement achievable in these architectures.
Low-jitter flexible frequency generation for next-generation communication systems
Next-generation wireline and wireless systems promise wider bandwidth to enable a vast range of applications, including autonomous vehicles, virtual reality, and the Internet of Things. Such high data rates mandate precise clock generation to meet the timing budget. At the same time, flexibility to support multiple standards and scalability to meet higher integration density introduce additional dimensions to the clocking challenge. This talk will discuss recent circuit and architecture innovations to address these challenges. Starting from simple phase-locking concepts such as the PLL, DLL, and ILO, this talk will explain how combinations of these techniques are adopted in modern communication systems. It will also describe two example cases: (i) a 28 GHz frequency synthesizer for 5G LO-based beamforming, and (ii) a flexible clocking solution for 10 Gb/s to 112 Gb/s SerDes in 7 nm FinFET technology.
Associate Professor at the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
Ping-Hsuan Hsieh received the B.S. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2001, and the M.S. and Ph.D. degrees in electrical engineering from the University of California, Los Angeles, Los Angeles, CA, in 2004 and 2009, respectively. From 2009 to 2011, she was with the IBM T.J. Watson Research Center, Yorktown Heights, NY. In 2011, she joined the Electrical Engineering Department of National Tsing Hua University, Hsinchu, Taiwan, where she is currently an Associate Professor. Her research interests focus on mixed-signal integrated circuit designs for high-speed electrical data communications, clocking and synchronization systems, and energy-harvesting systems. Prof. Hsieh served on the Technical Program Committee of the IEEE International Solid-State Circuits Conference and is currently a member of the Technical Program Committees of the IEEE Asian Solid-State Circuits Conference and the IEEE Custom Integrated Circuits Conference. She served as an Associate Editor for the IEEE Internet of Things Journal from 2014 to 2018 and a Guest Editor for the IEEE Journal of Solid-State Circuits Special Issue in 2021, and is currently an Associate Editor for the IEEE Open Journal of Circuits and Systems and the IEEE Solid-State Circuits Letters.
An Overview on Interface Circuits and MPPT for Piezoelectric Energy Harvesting

Piezoelectric vibration-to-electricity conversion provides a feasible path to self-sustainability due to its relatively high power density, wide voltage range, and compatibility with IC technology. The past decade has seen a boom in interface circuits developed for piezoelectric energy harvesting. The drastic difference between the operating speed of integrated circuits and that of mechanical vibrations provides a perfect venue for performing nonlinear switching and control in the interface operation at low power, allowing orders-of-magnitude improvement in power-extraction capability.

This tutorial will cover a wide range of state-of-the-art interface designs and MPPT methods for piezoelectric energy harvesting, while emphasizing circuit implementation considerations. Specifically, after describing the basic full-bridge and half-bridge rectifiers, the Synchronized-Switch-Harvesting (SSH) technique that is the foundation of all modern nonlinear interface circuits will be introduced. Two major categories, namely the open-circuit and the short-circuit structures, are then discussed in detail. After that, common MPPT algorithms and implementations will be reviewed. This talk will also cover topics such as non-resonant operation and multiple-input piezoelectric energy harvesting systems.
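The perturb-and-observe idea behind many MPPT implementations can be sketched in a few lines (a generic algorithm with a made-up power curve, not a specific harvester model): nudge the operating point, and keep moving in whichever direction increased the harvested power.

```python
def mppt_perturb_observe(power_at, v_start=1.0, step=0.05, iters=200):
    """Perturb-and-observe MPPT: perturb the operating voltage and
    reverse direction whenever the measured power drops."""
    v = v_start
    p_prev = power_at(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Illustrative harvester power curve with its maximum at 3 V.
curve = lambda v: max(0.0, 10.0 - (v - 3.0) ** 2)
v_mpp = mppt_perturb_observe(curve)
```

At steady state the algorithm oscillates around the maximum power point within one perturbation step, which is the usual accuracy/tracking-speed trade-off set by the step size.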

Digitally-Enhanced Clock Generation and Distribution

Advancements in technology scaling have ushered in larger systems boasting enhanced functionality, increased operational speed, and expanded data bandwidth. However, these benefits come with more demanding clocking requirements, including extended distribution distances and heightened timing precision. Furthermore, technology scaling has rendered traditional analog design challenging. Wider PVT variations necessitate intensive calibration efforts, and increased integration levels call for resilience against external noise sources. Moreover, the fact that reference frequency and loop bandwidth do not scale at the same rate as technology leads to prohibitive costs for oversized loop filters. While pure analog implementations offer intuitive operation and elegant analysis, clocking circuits incorporating digital elements offer effective solutions to these challenges.

This presentation will cover how digital circuits can enhance clock generation and distribution through techniques like calibration and signal processing. Beginning with well-established methods that harness the mixed-signal nature of PLLs, such as delta-sigma modulation of the multi-modulus divider (MMD) in fractional-N PLLs, the presentation will shift toward digital-intensive architectures. It will focus on techniques that leverage digital implementations for error detection and enhance timing accuracy through either analog or digital correction. State-of-the-art designs featuring runtime calibration and power noise cancellation for clock generation and distribution will also be introduced. This talk will conclude with insights into future challenges and trends.
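The delta-sigma dithering of a fractional-N divider can be sketched with a first-order modulator (an illustrative model; practical PLLs typically use higher-order MASH modulators to push the quantization noise to higher offsets): the accumulator toggles the divider between N and N+1 so that the average division ratio equals N plus the fractional word.

```python
def sigma_delta_divider(n_int, frac, cycles=10_000):
    """First-order delta-sigma modulator dithering a multi-modulus
    divider between n_int and n_int + 1; the long-term average ratio
    is n_int + frac."""
    acc = 0.0
    ratios = []
    for _ in range(cycles):
        acc += frac             # accumulate the fractional word
        if acc >= 1.0:          # accumulator overflow: divide by N + 1
            acc -= 1.0
            ratios.append(n_int + 1)
        else:                   # otherwise divide by N
            ratios.append(n_int)
    return ratios

r = sigma_delta_divider(64, 0.25)
avg = sum(r) / len(r)           # converges to 64.25
```

The divider only ever takes integer values; the fractional ratio exists only on average, and the instantaneous error is what the loop filter (and noise shaping) must suppress.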

Associate Professor, Seoul National University, Seoul, Korea
Dongsuk Jeon received a B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2009 and a Ph.D. degree in electrical engineering from the University of Michigan, Ann Arbor, MI, USA, in 2014. From 2014 to 2015, he was a Post-doctoral Associate with the Massachusetts Institute of Technology, Cambridge, MA, USA. He is currently an Associate Professor with the Graduate School of Convergence Science and Technology, Seoul National University. His current research interests include hardware-oriented machine learning algorithms, hardware accelerators, and low-power circuits.
Designing an optimal hardware solution for deep neural network training
The size and complexity of recent deep learning models continue to increase exponentially, incurring serious hardware overhead for training those models. Contrary to inference-only hardware, neural network training is very sensitive to computation errors; hence, training processors must support high-precision computation to avoid a large performance drop, severely limiting their processing efficiency. This talk will introduce a comprehensive design approach for arriving at an optimal training processor design. More specifically, the talk will discuss in depth how we should make important design decisions for training processors, including (i) hardware-friendly training algorithms, (ii) optimal data formats, and (iii) processor architectures for high precision and utilization.
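The data-format question can be illustrated with a small numeric experiment (bfloat16 is emulated here by truncating float32 to its upper 16 bits, an assumption for illustration): float16's 5-bit exponent overflows on large gradient values that bfloat16, which keeps float32's 8-bit exponent, still represents, at the cost of mantissa precision.

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by truncating float32 to its upper 16 bits
    (same 8-bit exponent as float32, only 7 mantissa bits)."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

grad = np.float32(3.0e5)            # a large gradient value (illustrative)
fp16 = np.float16(grad)             # float16 overflows past ~65504 -> inf
bf16 = to_bfloat16(grad)            # bfloat16 keeps the float32 dynamic range
```

This is why training hardware often favors wide-exponent formats for gradients while spending mantissa bits elsewhere: dynamic range, not resolution, is usually what breaks training first.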
When circuits meet machine learning: circuit-based machine learning acceleration and machine learning-based circuit design

Pretrained deep learning models are robust to computation errors to some extent, and this has sparked numerous approaches to deep learning acceleration through circuit design techniques. Examples include time-domain computing, charge-domain computing, and feature extraction using analog circuits. However, the non-ideal characteristics of transistors unavoidably lower their accuracy compared to their digital counterparts. Even if small, this drop could limit their usage in real-world applications. This talk discusses how we can close this performance gap at each level of the design hierarchy.

On the other hand, deep learning is now actively employed for hardware design automation. Large-scale digital systems have greatly benefited from these efforts, and automating low-level circuit design might be just around the corner. This talk introduces recent advances in circuit design automation with machine learning, from topology generation to size optimization and layout generation.

Professor, School of Electrical Engineering, KAIST
Joo-Young Kim (Senior Member, IEEE) received the B.S., M.S., and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2005, 2007, and 2010, respectively. He is currently an Assistant Professor with the School of Electrical Engineering, KAIST. He is also the Director of the AI Semiconductor Systems Research Center, KAIST. His research interests span various aspects of hardware design, including VLSI design, computer architecture, field-programmable gate array (FPGA), domain-specific accelerators, hardware/software co-design, and agile hardware development. Before joining KAIST, he was a Senior Hardware Engineering Lead at Microsoft Azure, Redmond, WA, USA, working on hardware acceleration for its hyper-scale big data analytics platform named Azure Data Lake. He was also one of the initial members of Catapult project at Microsoft Research, Redmond, where he deployed a fabric of field-programmable gate arrays (FPGAs) in datacenters to accelerate critical cloud services, such as machine learning, data storage, and networking. Dr. Kim was a recipient of the 2016 IEEE Micro Top Picks Award, the 2014 IEEE Micro Top Picks Award, the 2010 DAC/ISSCC Student Design Contest Award, the 2008 DAC/ISSCC Student Design Contest Award, and the 2006 A-SSCC Student Design Contest Award. He has served as a Guest Editor for the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), a Guest Editor for the IEEE Journal of Solid-State Circuits (JSSC), and an Associate Editor for the IEEE Transactions on Circuits and Systems—I: Regular Papers (TCAS-I).
A Multi-Accelerator Appliance for Accelerating Inference of Hyperscale Transformer Models

Deep learning technology has made significant progress on various cognitive tasks, such as image classification, object detection, speech recognition, and natural language processing. However, the vast adoption of deep learning also highlights its shortcomings, such as limited generalizability and lack of interpretability, along with its need for manually annotated training samples and sophisticated learning schemes. Witnessing the performance saturation of early models such as the MLP, CNN, and RNN, one notable recent innovation in deep learning architecture is the transformer model introduced in 2017. It has two properties that favor it over conventional models on the path toward artificial general intelligence. First, the performance of transformer models continues to grow with their model sizes and training data. Second, transformers can be pre-trained with large amounts of unlabeled data through self-supervised learning and can be easily fine-tuned for each application.

In this talk, I will present a multi-FPGA acceleration appliance named DFX for accelerating hyperscale transformer-based AI models. Optimized for OpenAI’s GPT (Generative Pre-trained Transformer) models, it manages to execute an end-to-end inference with low latency and high throughput. DFX uses model parallelism and optimized dataflow that is model-and-hardware-aware for fast simultaneous workload execution among multiple devices. Its compute cores operate on custom instructions and support entire GPT operations including multi-head attentions, layer normalization, token embedding, and LM head. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs and utilize all of the channels of the high bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. Finally, DFX achieves 5.58× speedup and 3.99× energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21× more cost-effective than the GPU appliance, suggesting that it can be a promising alternative in cloud datacenters.
Processing-in-Memory for AI: From Circuits to Systems

Artificial intelligence (AI) and machine learning (ML) technologies are revolutionizing many fields of study as well as a wide range of industry sectors such as information technology, mobile communication, automotive, and manufacturing. As more industries adopt the technology, we face an ever-increasing demand for a new type of hardware that enables faster and more energy-efficient processing of AI workloads.

Traditional compute-centric computers such as the CPU and GPU, which fetch data from memory devices to on-chip processing cores, have rapidly improved their compute performance with the scaling of process technology. However, in the era of AI and ML, as most workloads involve simple but data-intensive processing between large-scale model parameters and activations, data transfer between the storage and compute devices becomes the bottleneck of the system (the von Neumann bottleneck). Memory-centric computing takes the opposite approach to solve this data movement problem. Instead of fetching data from storage to compute, the data stays in the memory while processing units are merged into it, so that computations can be done in place without moving any data.

In this talk, I will briefly summarize the challenges of the latest AI accelerators focusing on the data movement issue mentioned above. Then, I will go through various processing-in-memory (PIM) architectures that can improve the performance and energy efficiency of the AI accelerators. I will also describe several notable circuit techniques on how we merge the logic into the memory and how we accelerate desired computations. Finally, I will propose a holistic approach to bridge the gap between the architectures and circuits and to make a practical and feasible PIM based solution for AI hardware.

Professor of Electronics at Università di Pavia
Andrea Mazzanti (S'02–M'09–SM'13) received the Laurea and Ph.D. degrees in electrical engineering from the University of Modena and Reggio Emilia, Modena, Italy, in 2001 and 2005, respectively. During the summer of 2003, he was with Agere Systems, Allentown, PA, as an Intern. From 2006 to 2009, he was an Assistant Professor with the University of Modena and Reggio Emilia. In January 2010, he joined the University of Pavia, where he is now a Full Professor of Electronics. He has authored over 150 technical papers. His main research interests cover device modeling and IC design for high-speed communications, RF, and millimeter-wave systems. Dr. Mazzanti was a member of the Technical Program Committee of the IEEE Custom Integrated Circuits Conference (CICC) from 2008 to 2014, and of the IEEE European Solid-State Circuits Conference (ESSCIRC) and IEEE International Solid-State Circuits Conference (ISSCC) from 2014 to 2018. He was an Associate Editor for the IEEE Transactions on Circuits and Systems I from 2012 to 2015 and a Guest Editor for special issues of the IEEE Journal of Solid-State Circuits dedicated to CICC 2013-14 and ESSCIRC 2015. Since 2017, he has been serving as an Associate Editor for the IEEE Solid-State Circuits Letters.
Breaking the Phase-Noise Barrier with Multi-Core and Series-Resonance Harmonic Oscillators in BiCMOS Technology
The talk begins with a review of the fundamental and technological factors limiting the spectral purity of integrated RF oscillators, and then proposes circuit solutions to break the phase-noise barrier in silicon technology. Phase noise can be scaled down with the multi-core approach, provided mismatches among the multiple oscillators are carefully considered. As an example, a 16-core voltage-controlled oscillator demonstrates a minimum phase noise of -130 dBc/Hz at 1 MHz offset from a 20 GHz carrier. A more elegant and efficient approach is then introduced: leveraging the series resonance of the tank, its remarkably lower resistance considerably raises the active power in the tank, enabling a remarkable improvement in spectral purity. Two 10 GHz BiCMOS VCOs exploiting the concept are presented. The measured minimum phase noise is -138 dBc/Hz at 1 MHz offset with 600 mW from a 1.2 V supply. Experimental results demonstrate the lowest phase noise ever reported by fully integrated RF oscillators in a silicon technology.
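The multi-core scaling argument is easy to quantify: combining N nominally identical oscillators averages their uncorrelated noise, improving phase noise by 10·log10(N) dB in the ideal (mismatch-free) case. The single-core figure below is hypothetical, chosen so that 16 cores land near the -130 dBc/Hz result quoted above.

```python
import math

def multicore_phase_noise(pn_single_dbc, n_cores):
    """Ideal multi-core phase-noise scaling: coupling N identical
    oscillators lowers phase noise by 10*log10(N) dB."""
    return pn_single_dbc - 10 * math.log10(n_cores)

# Hypothetical single-core figure of -118 dBc/Hz at 1 MHz offset:
pn_16 = multicore_phase_noise(-118.0, 16)   # ~12 dB better with 16 cores
```

The diminishing return is also visible here: every doubling of core count (and of power and area) buys only 3 dB, which is what motivates the series-resonance alternative in the talk.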
System and Processor Architect at Renesas Electronics Corporation
Sugako Otani is a system and processor architect at Renesas Electronics Corporation. Her current research focuses on application-specific architectures, ranging from IoT devices to automotive. She joined Mitsubishi Electric Corporation, Japan, in 1995 after receiving an M.S. in physics from Waseda University, Tokyo. She received a Ph.D. in Electrical Engineering and Computer Science from Kanazawa University in 2015. From 2005 to 2006, she was a Visiting Scholar at Stanford University. She is a committee member of ISSCC, VLSI Symposium, ESSCIRC, and Cool Chips. Since 2019, she has been a Visiting Associate Professor at Nagoya University, Japan.
Automotive System Design
The automotive industry is in the midst of a significant transformation, with “CASE: Connected, Autonomous, Shared & Service, Electric” advocated as the trend. Along with this trend, automotive E/E (electrical/electronic) architecture will evolve from the current distributed architecture to a domain architecture and then to the future zone architecture of the autonomous driving era. The lecture introduces the requirements of automotive system design for in-vehicle devices and their key technologies, including processors for infotainment systems and advanced vehicle control. The lecture also covers automotive functional safety, security, and maintenance and upgrades with OTA (over-the-air) updates.
CTO of ICsense
Tim Piessens received the M.Sc. and Ph.D. degrees in electrical engineering from the Katholieke Universiteit Leuven, Leuven, Belgium, in 1998 and 2003, respectively. During his Ph.D., he focused on a new type of power amplifier/line driver for xDSL applications. In 2004, he co-founded ICsense, where he is the CTO and is responsible for the technical content of projects in the medical, automotive, and consumer fields. His current research interests include analog sensor readouts, non-linear system design, power management, high-voltage design, and low-power, low-noise analog front-end design. From 2014 until 2021, he was a member of the IEEE International Solid-State Circuits Conference (ISSCC) Technical Program Committee. He was a member of the ISSCC European leadership, the ISSCC executive committee, and the ISSCC vision committee from 2019 until 2021, and the ITPC European chair in 2021. Since 2020, he has been a member of the ESSDERC-ESSCIRC Steering Committee.
Challenges in Battery Monitoring Systems for Electrical Vehicles
Electric vehicles will become the standard in private transport in the coming decade. Since the battery is still the main component determining cost and driving range, good battery control is central to an EV system. A battery management system consists of two components: a current measurement system and a voltage monitoring system. Both have their own specific problems. We will first discuss the different techniques to measure current: using a shunt, a Rogowski coil, or a magnetic sensor. Next, the voltage measurement chain will be tackled, including techniques for dealing with the high voltages of a battery pack.
Design of Fully Integrated Charge Pumps
In this lecture we will discuss the basics of charge pump design. Starting with the fundamental observation that half of the energy is lost when charging a capacitor, the principle of quasi-static charging is explained. From these charge-balancing laws, the basic model of a charge pump can be derived, together with the slow-switching limit and the fast-switching limit. This in turn leads to the PFM control loop that regulates the output power. All of this is illustrated with several examples, such as the Dickson charge pump, the series-parallel converter, and the Fibonacci converter, and more advanced techniques such as multi-phase operation are touched upon. In the second part, a real-life design plan is shown in which, starting from the specifications, a topology is developed and simulated.
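The opening observation, that half of the energy is lost when charging a capacitor from a fixed source regardless of the series resistance, can be checked numerically (component values are illustrative):

```python
# Charge C through R from an ideal source V and tally the energy budget.
C, V, R = 1e-6, 1.0, 100.0      # 1 uF, 1 V, 100 ohm (illustrative values)
dt = R * C / 1e4                # time step much smaller than the RC constant
v_cap, e_source, e_resistor = 0.0, 0.0, 0.0
for _ in range(200_000):        # ~20 time constants: capacitor fully charged
    i = (V - v_cap) / R         # instantaneous charging current
    e_source += V * i * dt      # energy drawn from the source
    e_resistor += i * i * R * dt  # energy dissipated in the resistor
    v_cap += i * dt / C         # capacitor voltage update
e_cap = 0.5 * C * v_cap ** 2    # energy finally stored in the capacitor
```

The source delivers C·V² while the capacitor stores only C·V²/2; the other half is burned in R no matter how small R is. Quasi-static (multi-step) charging is precisely the trick that recovers most of this loss.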
Design of High Performance Readout Chains for MEMS Barometric Pressure Sensors
Barometric pressure sensors are indispensable features in wearable consumer devices. Modern designs can sense absolute height differences of less than 8.5 cm (1 Pa), improving indoor navigation significantly and enabling new applications such as activity tracking and crash detection. In this talk, we will take a deep dive into the design challenges of readout chains for capacitive pressure sensors. The main driving requirements are noise and power. To reach the demanding targets for wearable devices, heavy duty cycling and advanced analog front-end design are needed. But these are not the only challenges. Since they must be exposed to the atmosphere, pressure sensors in smartphones are often located in the outer case and have long connections to the main PCB. This leads to high demands on their PSRR and RF immunity. Both topics will be discussed, as well as several methods to improve the robustness of pressure sensor readout chains.
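The 8.5 cm-per-pascal figure follows from the hydrostatic relation dP = -ρ·g·dh near sea level. A quick sanity-check sketch (standard sea-level constants assumed):

```python
# Sanity check of the "1 Pa ~ 8.5 cm" figure using the hydrostatic relation
# dP = -rho * g * dh near sea level, i.e. dh = dP / (rho * g).
# Sea-level air density and g are standard approximate constants.

RHO_AIR = 1.2   # kg/m^3, approximate air density at sea level
G = 9.81        # m/s^2, gravitational acceleration

def height_per_pascal(dp=1.0):
    """Altitude change (m) corresponding to a pressure change dp (Pa)."""
    return dp / (RHO_AIR * G)

print(height_per_pascal())  # -> ~0.085 m, i.e. ~8.5 cm per pascal
```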
High-Performance, Low-Power 3D Magnetic Hall Sensor Design and Challenges
Magnetic sensors are everywhere, from accurate current measurement applications and wheel speed sensors to the magnetic switches that measure the angle of your laptop lid. In the field of magnetic sensors, the Hall effect sensor plays an important role thanks to its high linearity and because it can be fully integrated in almost any semiconductor technology. In this lecture we will cover the basics of a Hall plate sensor, starting with the 1D Hall plate and extending it to integrated 3D sensors. One of the drawbacks of a Hall plate is its high offset. Techniques for reducing this non-ideality, such as quadrature layout and current spinning, will be elaborated. A full analog readout chain will then be discussed, with an emphasis on the instrumentation amplifier, including best design practices and simulation techniques.
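The offset-cancellation idea behind spinning can be shown with a toy model: rotating the bias connections keeps the signal term S·B while flipping the sign of the plate offset, so averaging the phases removes it. The sensitivity and offset numbers below are made up for illustration:

```python
# Toy model (hypothetical numbers) of the spinning-current technique:
# rotating the Hall-plate bias keeps the magnetic signal S*B but flips the
# sign of the plate offset, so averaging the phases cancels the offset.

S = 0.05          # sensitivity, V per tesla (illustrative)
V_OFFSET = 0.002  # plate offset voltage (illustrative)

def measure(b_field, phase):
    """One spinning phase: signal plus an offset whose sign alternates."""
    sign = 1 if phase % 2 == 0 else -1
    return S * b_field + sign * V_OFFSET

def spun_output(b_field, n_phases=4):
    """Average over the spinning phases; the offset terms cancel."""
    return sum(measure(b_field, p) for p in range(n_phases)) / n_phases

print(spun_output(0.1))  # -> ~0.005 V: pure S*B, offset removed
```

A real spinning scheme also modulates the offset to a higher frequency where it can be filtered, but the sign-alternation above captures the core mechanism.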
Associate Professor, Columbia University
Mingoo Seok is an Associate Professor of Electrical Engineering at Columbia University. He received his B.S. from Seoul National University, South Korea, in 2005, and his M.S. and Ph.D. degrees from the University of Michigan in 2007 and 2011, respectively, all in electrical engineering. His research interests span various aspects of VLSI circuits and architecture, including ultra-low-power integrated systems, cognitive and machine-learning computing, adaptive techniques for process, voltage, and temperature variations and transistor wear-out, integrated power management circuits, event-driven controls, and hybrid continuous and discrete computing. He won the 2015 NSF CAREER Award and the 2019 Qualcomm Faculty Award. He is a technical program committee member for multiple conferences, including the IEEE International Solid-State Circuits Conference (ISSCC). In addition, he has been an IEEE SSCS Distinguished Lecturer for Feb. 2023-Feb. 2025, an associate editor for the IEEE Transactions on Circuits and Systems Part I (TCAS-I) (2014-2016), the IEEE Transactions on VLSI Systems (TVLSI) (2015-present), and the IEEE Solid-State Circuits Letters (SSCL) (2017-2022), and a guest associate editor for the IEEE Journal of Solid-State Circuits (JSSC) (2019).
Review, Survey, and Benchmark of Recent Digital LDO Voltage Regulators
In this seminar, we will present a thorough review of recent digital low-dropout voltage regulators (DLDOs). We review them in five aspects: control laws, triggering methods, power-FET circuit design, digital-analog hybridization, and single vs. distributed architectures. We then survey and benchmark over 50 DLDOs published in the last decade. In addition, we offer a new figure of merit (FoM) to address the shortcomings of previously proposed FoMs. The benchmark provides insights into which techniques contribute to better dynamic load regulation performance. The survey and benchmark results are uploaded to a public repository.
SRAM-based In-Memory Computing Hardware: Analog vs Digital and Macros to Microprocessors
In the last decade, SRAM-based in-memory computing (IMC) hardware has received significant research attention for its massive energy efficiency and performance boost. In this seminar, we will first introduce two very recent macro prototypes which achieve state-of-the-art performance and energy efficiency yet leverage very different computing mechanisms. Specifically, one adopts analog mixed-signal (AMS) computing mechanisms (capacitive coupling and charge sharing), whereas the other adopts a fully digital approach. After this macro-level introduction, we will present recent microprocessor prototypes that employ IMC-based accelerators, which can perform on-chip inference at very high energy efficiency and low latency.
Principal Engineer at Intel’s Programmable Solutions Group’s CTO Team
Dr. Farhana Sheikh is a Principal Engineer at Intel’s Programmable Solutions Group (PSG). She has over 15 years of experience in ASIC and DSP/communications research, including adaptive DSP, cryptography, graphics, quantum wireless control, and 5G+ wireless. Since joining PSG after 10+ years in Intel Labs, her research has focused on 2D and 3D chiplet + FPGA integration, with an emphasis on 3D heterogeneous integration for next-generation wireless and sensing applications. Farhana has published over 50 papers, filed 22 patents, and initiated the AIB-3D open-source specification for 3D chiplet heterogeneous integration. She was instrumental in enabling Intel 16 for Intel’s IDM 2.0 and is the co-creator of Intel’s University Shuttle Program. Outside of Intel, she volunteers for the IEEE Solid-State Circuits Society (SSCS) and chairs the SSCS Women in Circuits Committee. Farhana is a co-recipient of the 2020, 2019, and 2012 IEEE ISSCC Outstanding Paper Awards. In 2021, she was recognized for her mentorship of students and faculty by the Semiconductor Research Corporation (SRC), which awarded her the 2021 Mahboob Khan Outstanding Industry Liaison Award. She is an IEEE SSCS Member-at-Large for 2022-2024 and an IEEE SSCS Distinguished Lecturer for 2023 and 2024.
FPGA-Chiplet Architectures and Circuits for 2.5D/3D 6G Intelligent Radios
The number of connected devices is expected to reach 500 billion by 2030, 59 times larger than the expected world population. Objects will become the dominant users of next-generation communications and sensing at untethered, wireline-like broadband performance, bandwidths, and throughputs. This sub-terahertz 6G communication and sensing will integrate security and intelligence and will enable a 10x to 100x increase in peak data rates. FPGAs are well positioned to enable intelligent radios for 6G when coupled with high-performance chiplets incorporating RF circuits, data converters, and digital baseband circuits with machine learning and security. This talk presents the use of 2.5D and 3D heterogeneous integration of FPGAs with chiplets, leveraging Intel’s EMIB/Foveros technologies, with a focus on one emerging application driver: FPGA-based 6G sub-THz intelligent wireless systems. Nano-, micro-, and macro-3D heterogeneous integration is summarized, and previous research in 2.5D chiplet integration with FPGAs is leveraged to forge a path towards new 3D-FPGA-based 6G platforms. Challenges in antenna, packaging, power delivery, system architecture design, thermals, and integrated design methodologies/tools are briefly outlined. Opportunities to standardize die-to-die interfaces for modular integration of internal and external circuit IPs are also discussed.
Laying the Foundation for Intelligently Adaptive Radios
System adaptivity has been studied since the mid-60s, and recently there has been a surge of interest in self-adaptive systems, especially in the software engineering community, with its main application in cybernetics. This talk introduces the concept of a self-adaptive system as extended to wireless communication, where channel characteristics are exploited to intelligently modify the operation of the radio's physical layer to optimize energy consumption and connectivity. The architectural and circuit foundations required to realize a wide-band “learning-based” transceiver architecture are detailed through the design and implementation of configurable PHY subsystems that can be dynamically programmed to realize intelligent radios. Current advancements in the application of AI/ML to wireless systems, and how these may be leveraged at the PHY level, are briefly discussed, followed by an overview of the future research required to build intelligently adaptive radio systems.
Principal Research Scientist/Manager, IBM Thomas J. Watson Research Center
Alberto Valdes-Garcia is currently a Principal Research Scientist and Manager of the RF Circuits and Systems Group at the IBM T. J. Watson Research Center. In his current role, he leads a multi-disciplinary team that investigates and develops technologies that bridge the gap between antennas and edge-compute-based AI, enabling new millimeter-wave systems and applications for imaging and communications. Dr. Valdes-Garcia received the Ph.D. degree in Electrical Engineering from Texas A&M University in 2006. He holds >130 issued US patents and has authored >100 peer-reviewed publications. Recent awards include the 2017 Lewis Winner Award for Outstanding Paper, presented by the IEEE International Solid-State Circuits Conference, and the 2017 IEEE Journal of Solid-State Circuits Best Paper Award. In 2013, he was selected by the National Academy of Engineering for its Frontiers of Engineering Symposium. He currently serves on the Inaugural Editorial Board of the IEEE Journal of Microwaves and was the Chair of the IEEE MTT-S Microwave and Millimeter-Wave Integrated Circuits Committee in 2020-2021. Dr. Valdes-Garcia is a Senior Member of IEEE, was inducted into the IBM Academy of Technology in 2015, and was recognized as an IBM Master Inventor in 2016, 2019, and 2022.
3D millimeter-wave imaging and sensing with Si-based phased arrays, edge computing, and AI
The use of millimeter-wave frequencies in 5G networks has been a primary contributor to transitioning Si-based phased array technology from R&D to real-world deployments. While the commercial use of millimeter-wave sensing so far has been dominated by low-cost, compact MIMO radars for automotive and industrial applications, the ongoing wide deployment and advancement of Si-based phased arrays opens a new horizon of opportunities for sensing and event recognition. This talk will first cover the fundamentals and KPIs of 3D radar systems using phased arrays, including the associated key circuit and packaging design techniques. Examples of such 3D radar systems at 28 GHz, 60 GHz, and 94 GHz will be provided. Next, the presentation will describe how the full potential of such systems can be realized through synergistic co-design with algorithms and edge computing assets. Key examples of emerging applications based on these vertically integrated antennas-to-software/AI systems will be provided, including multi-spectral imaging, 5G mmWave joint sensing and communications, and AI-based recognition of human gestures and concealed objects.
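As a small companion to the radar-KPI discussion, one of the standard figures of merit is range resolution, ΔR = c/(2B) for a radar of RF bandwidth B; the example bandwidth below is illustrative, not from the talk:

```python
# Standard radar KPI: range resolution delta_R = c / (2 * B), where B is
# the RF bandwidth. Wider bandwidth (easier at mmWave) means finer range
# bins. The 1 GHz example bandwidth is illustrative.

C_LIGHT = 3e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Minimum resolvable separation (m) between two targets in range."""
    return C_LIGHT / (2 * bandwidth_hz)

# A 1 GHz-bandwidth mmWave radar resolves targets ~15 cm apart.
print(range_resolution(1e9))  # -> 0.15 m
```

This is one reason mmWave bands are attractive for imaging: multi-GHz of contiguous bandwidth translates directly into cm-scale range resolution.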
Packaging and module integration as a catalyst for innovation in Si-based millimeter-wave systems
Over the last two decades, advancements in Si-based electronics enabled the development of wireless systems operating at millimeter-wave frequencies. Along the way, advanced package designs and component integration technologies were crucial in enabling those systems to reach their full potential and, more importantly, achieve commercial impact. As we enter the maturing stage of 5G and begin to see the dawn of 6G on the horizon, mmWave systems are expected to play a growing role in high-throughput wireless communications and a large variety of sensing applications. This talk will first review key milestones in the history of mmWave module development, with a focus on phased arrays. The lessons learned from this journey will be described, with emphasis on antenna-in-package design and IC-package co-design. Then, emerging mmWave systems and applications will be described, including the software-defined phased array, a multi-spectral imaging platform, and AI-based feature extraction from 3D radar. The presentation will conclude by outlining the next set of challenges that IC, package, and module integration technologies need to address to enable the realization of those system concepts as compact “Antennas-to-AI” heterogeneous modules.
Associate Professor, Virginia Tech
Dr. Walling received the B.S. degree from the University of South Florida, Tampa, in 2000, and the M.S. and Ph.D. degrees from the University of Washington, Seattle, in 2005 and 2008, respectively. He was employed at Motorola, Plantation, FL, working on cellular handset development. He interned at Intel from 2006 to 2007, working on highly digital transmitters and CMOS PAs, and continued this research as a Postdoctoral Researcher at the University of Washington. He was an Associate Professor in the ECE Department at the University of Utah, Head of RF Transceivers in the Microelectronic Circuits Centre Ireland at the Tyndall National Institute in Ireland, a Principal Engineer at Qualcomm CR&D, and a Senior Principal Engineer at Skyworks Solutions, Inc. He is presently an Associate Professor in the Bradley ECE Department at Virginia Tech. His research focuses on solutions for the next generation of wireless communication. Dr. Walling has authored ~90 journal articles and conference papers and holds four patents with three pending. He gave a keynote address on digital power amplifiers at IEEE ESSCIRC in 2019, and he received the Outstanding Teaching Award at the University of Utah in 2015, the HKN Award for Excellence in Teaching in 2012, the Best Paper Award at MobiCom 2012, the Yang Award for outstanding graduate research from the EE Department at the University of Washington in 2008, an Intel Predoctoral Fellowship in 2007-2008, and the Analog Devices Outstanding Student Designer Award in 2006.
CMOS Power Amplifiers and Transmitters: The Evolution from 'Digital-Friendly' RF to 'Digital' RF
Abstract not yet available.
Digitally Friendly Transmitters for Next Generation Communications
Abstract not yet available.
Mixed-Mode Transceivers in CMOS
Abstract not yet available.
Associate Professor, University of Michigan, Ann Arbor
Zhengya Zhang received the B.A.Sc. degree from the University of Waterloo in Canada in 2003, and the M.S. and Ph.D. degrees from UC Berkeley in 2005 and 2009. He has been a faculty member at the University of Michigan, Ann Arbor since 2009. His research is in low-power and high-performance integrated circuits and systems for computing, communications, and signal processing. Dr. Zhang was a recipient of the NSF CAREER Award, the Intel Early Career Faculty Award, the Neil Van Eenam Memorial Award from the University of Michigan, and the David J. Sakrison Memorial Prize from UC Berkeley. He served on the program committees of the Symposia on VLSI Technology and Circuits and CICC, and the editorial board of the IEEE Transactions on VLSI Systems.
Machine Learning Hardware Design for Efficiency, Flexibility and Scalability
Machine learning (ML) is the driving application of next-generation computational hardware. How to design ML hardware that achieves high performance, efficiency, and flexibility to support fast-growing ML workloads is a key challenge. Besides dataflow-optimized systolic arrays and single-instruction, multiple-data (SIMD) engines, efficient ML accelerators have been designed to take advantage of static and dynamic data sparsity. To accommodate the fast-evolving ML workloads, matrix engines can be integrated with an FPGA to provide the efficiency of kernel computation and the flexibility of control. To support the increasing ML model complexity, modular chiplets can be tiled on a 2.5D interposer and stacked in a 3D package. We envision that a combination of these techniques will be required to address the needs of future ML applications.
The Challenges and Opportunities in the Path Towards Chipletization
The modular partition into chiplets and the integration of chiplets in 2.5D or 3D forms offer a promising path towards constructing large-scale systems that deliver performance comparable to single-chip integration, but without the high cost, risk, and effort associated with monolithic integration. Industry has shown us what is possible with chipletization. However, for the technology to truly take off, three key elements need to be in place: chiplets equipped with a standard interface, advanced bumping and packaging technology, and design automation. I will share our journey in conducting research on chipletization and developing chiplets, all equipped with a standard-conforming, sub-pJ/b, synthesizable I/O interface. Through collaborations, we demonstrated multi-chip packages by tiling homogeneous chiplets and integrating heterogeneous multi-functional chiplets to efficiently scale up systems and improve their performance and versatility. Our lessons taught us that the success of chipletization must rely on an ecosystem that lowers the entry barriers, and that the best practice in chipletization is to employ cross-domain endeavors spanning system design, IC design, packaging, and assembly.

Terms through 31 December 2025

Keith A. Bowman is a Principal Engineer and Manager in the System-on-Chip (SoC) Research Lab at Qualcomm Technologies, Inc. in Raleigh, NC, USA. He directs the research and development of circuit and system technologies to improve the performance, energy efficiency, yield, reliability, and security of Qualcomm processors. He pioneered the invention, design, and test of Qualcomm’s first commercially successful circuit for mitigating the adverse effects of supply voltage droops on processor performance, energy efficiency, and yield. He received the B.S. degree from North Carolina State University in 1994 and the M.S. and Ph.D. degrees from the Georgia Institute of Technology in 1995 and 2001, respectively, all in electrical engineering. From 2001 to 2013, he worked in the Technology Computer-Aided Design (CAD) Group and the Circuit Research Lab at Intel Corporation in Hillsboro, OR, USA. In 2013, he joined the Qualcomm Corporate Research and Development (CRD) Processor Research Team. Dr. Bowman has published 90+ technical papers in refereed conferences and journals, authored one book chapter, received 30+ US patents and 50+ international patents, and presented 50+ tutorials on variation-tolerant circuit designs. He received the 2016 Qualcomm CRD Distinguished Contributor Award for Technical Contributions, representing CRD’s highest recognition, for the pioneering invention of the auto-calibrating adaptive clock distribution circuit, which significantly enhances processor performance, energy efficiency, and yield and is integral to the success of the Qualcomm® Snapdragon™ 820 and future processors. He received the 2022 Qualcomm IP Achievement Award for high-quality inventions, leading to strong processor performance and energy-efficiency improvements and differentiated products. Since 2018, he has served on the Qualcomm Low-Power Circuit Design Patent Review Board. In 2019 and 2020, he was an IEEE SSCS Distinguished Lecturer (DL).
He is currently serving a second two-year term as an IEEE SSCS DL. From 2020 to 2023, he served as an IEEE SSCS Mentor. He was the International Technical Program Committee (ITPC) Chair and the General Conference Chair for ISQED in 2012 and 2013, respectively, and for ICICDT in 2014 and 2015, respectively. He served on the ISSCC ITPC as a member of the Digital Circuits (DCT) Subcommittee from 2016 to 2020 and as the DCT Chair from 2020 to 2024. He currently serves as the ISSCC Program Vice Chair. He is a Fellow of the IEEE.
Adaptive Processor Designs
System-on-chip (SoC) processors across a wide range of market segments, including Internet of Things (IoT), mobile, laptop, automotive, and datacenter, experience dynamic device, circuit, and system parameter variations during their operational lifetime. These dynamic parameter variations, including supply voltage droops, temperature changes, transistor aging, and workload fluctuations, degrade processor performance, energy efficiency, yield, and reliability. This lecture introduces the primary variation sources and the negative impact of these variations across voltage and clock frequency operating conditions. Then, it presents adaptive processor designs that mitigate the adverse effects of dynamic parameter variations, highlighting the key trade-offs and considerations for product deployment.
Tsinghua University
Wei Deng received the B.S. and M.S. degrees from the University of Electronic Science and Technology of China (UESTC), China, in 2006 and 2009, respectively, and the Ph.D. degree from the Tokyo Institute of Technology, Japan, in 2013. He was with Apple Inc., Cupertino, CA, USA, working on RF, mm-wave, and mixed-signal IC design for wireless transceivers and Apple A-series processors. Currently, he is with Tsinghua University, Beijing, China, as an Associate Professor. His research interests include RF, mm-wave, terahertz, and mixed-signal integrated circuits and systems for wireless communications and radar systems. He has authored or co-authored more than 160 IEEE journal and conference articles. Dr. Deng is a Technical Program Committee (TPC) Member of ISSCC, VLSI, A-SSCC, CICC, and ESSCIRC. He has been an Associate Editor and a Guest Editor of the IEEE Solid-State Circuits Letters (SSC-L), and a Guest Editor of the IEEE Journal of Solid-State Circuits (JSSC).
High-Performance PLLs: Evolution, Challenges, and Future Directions
High-performance phase-locked loops (PLLs) are key building blocks of both communication and radar systems, making them a cutting-edge topic in the field of integrated circuit and system design. The area spans several research directions, such as mixed-signal circuit design, digital algorithms, and system-level architecture. This lecture will discuss the evolution of high-performance PLL circuits and architectures, review the latest research progress, and discuss future development trends, with particular emphasis on ultra-low-jitter PLLs approaching 10 fs rms and FMCW PLLs with ultra-fast, linearized chirps in CMOS technology.
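For context on the 10 fs rms target, PLLs are commonly compared using the jitter-power figure of merit FoM = 10·log10((σt/1 s)²·(P/1 mW)). A small sketch of that formula (the 10 mW power number below is hypothetical):

```python
import math

# Standard jitter-power figure of merit used to compare PLLs:
#   FoM = 10 * log10( (sigma_t / 1 s)^2 * (P / 1 mW) )   [dB]
# Lower (more negative) is better. The 10 mW example power is hypothetical.

def pll_fom_db(jitter_rms_s, power_w):
    """Jitter-power FoM in dB from rms jitter (s) and power (W)."""
    return 10 * math.log10((jitter_rms_s ** 2) * (power_w / 1e-3))

# A 10 fs-rms PLL burning 10 mW would reach a -270 dB FoM.
print(pll_fom_db(10e-15, 10e-3))  # -> -270.0
```

The quadratic jitter term means every halving of rms jitter is worth 6 dB of FoM, which is why the push toward 10 fs is so demanding.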
Joint Radar-communication CMOS Transceiver: From System Architecture to Circuit Design
In recent years, millimeter-wave and terahertz radar systems for sensing and radio systems for communication have attracted substantial attention from both academia and industry. In addition, there is an increasing demand for fusing the hardware platform and frequency band of the radar and radio systems, which offers advantages in energy efficiency, performance optimization, spectrum sharing/efficiency, compact size, interference management, and overall cost, as compared to assembling two distinct systems. This lecture will introduce the current and future trends in the emerging joint radar-communication CMOS transceiver, from system architecture to circuit design.
IBM T.J. Watson Research Center
Timothy O. (Tod) Dickson received dual B.Sc. degrees in electrical and computer engineering with highest honors from the University of Florida in 1999. He completed the M.Eng. degree at the University of Florida in 2002 and the Ph.D. degree at the University of Toronto in 2006, both in electrical engineering. His Ph.D. work was in the area of serial transceivers operating up to 80 Gb/s in SiGe BiCMOS technologies, focusing on the development of low-noise and low-power design methodologies. In 2006, he joined the IBM T.J. Watson Research Center in Yorktown Heights, NY, where he is currently a Principal Research Scientist. His research focuses on the design of high-speed, low-power serial transceivers for electrical and optical links. Since 2014, he has served on the Technical Advisory Board of the Semiconductor Research Corporation Analog-Mixed Signal Circuits, Systems, and Devices (AMS-CSD) thrust. He is also an Adjunct Professor at Columbia University, where he has taught graduate-level courses in analog and mixed-signal integrated circuit design since 2007. Dr. Dickson has been a recipient or co-recipient of several best paper awards, including the Best Paper Award for the 2009 IEEE Journal of Solid-State Circuits, the Beatrice Winner Award for Editorial Excellence at the 2009 ISSCC, the Best Paper Award at the 2015 IEEE Custom Integrated Circuits Conference (CICC), and the Best Student Paper Award at the 2004 Symposium on VLSI Circuits. He was a member of the Technical Program Committee (TPC) of the IEEE Compound Semiconductor Integrated Circuit Symposium from 2007-2009, and of the IEEE CICC from 2017-2023, where he chaired the wireline subcommittee. He was a guest editor of the October 2010 issue of the IEEE Journal of Solid-State Circuits. From 2018-2023, he was an Associate Editor for the IEEE Solid-State Circuits Letters. Since 2024, he has been an Associate Editor for the IEEE Open Journal of the Solid-State Circuits Society. He is an IEEE Senior Member.
High-Speed DACs for 100+ Gb/s Wireline Links
Digital-to-analog converters (DACs) operating above 50 GS/s are critical components of modern transmitters for wireline applications. These circuits permit data modulation and equalization to be moved from the analog domain (as was common in links operating below 50 Gb/s) to the digital domain, thereby enabling today’s serial links operating at 100-200 Gb/s. This lecture explores DAC design for wireline applications. Driver and multiplexer design techniques will be introduced, including those used for current-mode (CML) and voltage-mode (SST) drivers found in state-of-the-art serial links. As systems explore more sophisticated modulation formats, such as higher-order time-domain pulse amplitude modulation (e.g., PAM6 or PAM8) or frequency-domain modulation (e.g., OFDM), DACs with higher linearity than those employed in existing PAM4 systems will be required. Techniques for adaptive calibration of DAC static linearity will be discussed. Designs of two different 8b DACs operating at 56 and 72 GS/s in 7nm and 4nm FinFET technologies will be described as case studies.
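As background on the modulation formats mentioned, PAM4 maps two bits to four equally spaced amplitude levels, and each additional bit per symbol halves the level spacing, which is why PAM6/PAM8 demand more DAC linearity. A toy sketch of a Gray-coded PAM4 mapping (a common convention, used here for illustration):

```python
# Toy sketch of PAM4 modulation: 2 bits -> 4 equally spaced levels.
# Gray coding (adjacent levels differ by one bit) is a common convention
# for PAM4 links and is assumed here for illustration.

PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    """Map a flat bit list (MSB first within each 2-bit symbol) to levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [PAM4_LEVELS[p] for p in pairs]

print(pam4_modulate([0, 0, 0, 1, 1, 1, 1, 0]))  # -> [-3, -1, 1, 3]
```

With the full swing fixed, the eye opening between adjacent PAM4 levels is one third of the NRZ eye; PAM6/PAM8 shrink it further, tightening the DAC's static and dynamic linearity budget.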
IBM T. J. Watson Research Center
Dr. Daniel Friedman is currently a Distinguished Research Scientist and Senior Manager of the Communication Circuits and Systems department, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; he is also an IEEE Fellow. He received his doctorate from Harvard University and subsequently completed post-doctoral work at Harvard and consulting work at MIT Lincoln Laboratory. At IBM, he initially developed field-powered RFID tags before turning to high data rate wireline and wireless communication. His current research interests include accelerator designs for AI, high-speed I/O design, phase-locked-loop design, millimeter-wave circuits and systems, and circuit/system approaches to enabling new computing paradigms, the latter including cryogenic electronics for use in quantum computing systems. He holds more than 90 patents and has authored or coauthored more than 85 publications. He was a co-recipient of the Beatrice Winner Award for Editorial Excellence at the 2009 International Solid-State Circuits Conference (ISSCC), the 2009 Journal of Solid-State Circuits (JSSC) Best Paper Award (given in 2011), the 2017 ISSCC Lewis Winner Outstanding Paper Award, and the 2017 JSSC Best Paper Award (given in 2019). He has served on the technical program committees of the Bipolar Circuits and Technology Meeting (2003-2008) and of the ISSCC (2008-2016); since 2016, he has served as the ISSCC Short Course chair. He served as a member-at-large of the IEEE Solid-State Circuits Society (SSCS) AdCom from 2018 to 2020, as the SSCS Distinguished Lecturer chair from 2020 to 2021, and as an Associate Editor of the JSSC from 2019 to 2023. He is the current Vice President of the SSCS.
AI accelerators and the chiplet paradigm
The growth in the application of machine learning and artificial intelligence technology to problems across virtually all spheres of endeavor has been, and is expected to remain, extraordinary. Hardware acceleration for machine learning tasks is a critical vector that has enabled this exceptionally rapid growth. Further accelerator advances are necessary to drive everything from improved efficiency for inference, to support for ever-growing network sizes, to improvements in support for network training, to broadening ML deployments across platforms with a wide range of power and performance envelopes. The emerging chiplet paradigm will drive not only the scaling of compute density in AI solutions, but also promises to enable a proliferation of customized AI solutions for a range of workloads. In this presentation, we will describe example AI accelerator designs in the context of a solution framework, explain how communication advances are linked to AI accelerator advancement, and discuss approaches to accelerate the emergence of a chiplet ecosystem, including how this emergence might drive new accelerator implementation opportunities.
Cryogenic CMOS for future scaled quantum computing systems
Quantum computing represents a new paradigm that has the potential to transform problems that are computationally intractable today into solvable problems in the future.  Significant advances in the last decade have lent support to the idea that quantum computers can be implemented, and further that the goal of demonstrating true performance advantages over traditional computing techniques on one or more problems may be achieved in the not so distant future. Delivering on this promise is expected to require quantum error correction solutions, in turn demanding large qubit counts that pose significant challenges for quantum computer implementations, especially in the area of qubit interface electronics. An active area of research to address this challenge is the use of integrated cryogenic CMOS designs.  In this presentation, we will present a superconducting qubit-based quantum computing system framework, opportunities for cryogenic CMOS introduction into future systems, example cryogenic CMOS implementations and results, and next challenges that must be met to enable cryogenic CMOS adoption.
The University of Tokyo, Tokyo, Japan
Makoto Ikeda received the B.E., M.E., and Ph.D. degrees in electrical engineering from the University of Tokyo, Tokyo, Japan, in 1991, 1993, and 1996, respectively. He joined the University of Tokyo as a research associate in 1996 and is now a Professor and the Director of the Systems Design Lab (d.lab), the University of Tokyo. He has also been involved in the activities of VDEC (VLSI Design and Education Center, the University of Tokyo) to promote VLSI design education and research in Japanese academia. He has worked on hardware security, asynchronous circuit design, smart image sensors for 3-D range finding, and time-domain signal processing. He has served in various positions at international conferences, including ISSCC ITPC Chair (ISSCC 2021), IMMD Subcommittee Chair (ISSCC 2015-2018), A-SSCC 2015 TPC Chair, and VLSI Circuits Symposium PC Chair (2017) and Symposium Chair (2019). He is a senior member of IEEE and IEICE Japan, and a member of IPSJ and ACM.
Acceleration of Encryption Algorithms, Elliptic Curve, Pairing, Post Quantum Cryptoalgorithm (PQC), and Fully Homomorphic Encryption (FHE)
This lecture will cover the basics of public-key encryption and example design optimizations of elliptic-curve-based encryption algorithms, including pairing operations and their security measures. It will then extend the design-optimization discussion to lattice-based encryption algorithms, including the post-quantum crypto-algorithms CRYSTALS-Kyber/Dilithium, isogeny-based encryption algorithms, and fully homomorphic encryption algorithms.
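As background for the elliptic-curve portion (an illustrative sketch with toy parameters, not material from the lecture), the group law that these accelerators implement in hardware can be written in a few lines of Python; the curve and base point below are hypothetical choices for demonstration only:

```python
# Minimal sketch: affine point addition/doubling on y^2 = x^3 + ax + b over GF(p).
# Toy parameters for illustration only -- real designs use standardized curves.
p, a, b = 97, 2, 3          # hypothetical toy curve
O = None                    # point at infinity (group identity)

def ec_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O            # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    # Double-and-add scalar multiplication, the workhorse that hardware
    # accelerators (and pairing computations) spend most of their time in.
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

The modular inversion inside each addition is exactly the kind of costly field operation that the design optimizations discussed in the lecture (projective coordinates, pipelined modular multipliers) aim to eliminate or amortize.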
Basics of Asynchronous circuits design
This lecture will survey the basics and the variety of asynchronous control, from the viewpoint of its advantages under low-voltage and variation-rich conditions. It takes two extreme examples of complete-completion-detection asynchronous designs and demonstrates their operation and performance in detail. In addition, this talk will cover a recent trial of a design flow for random logic using self-synchronous circuits.
Low-voltage design with autonomous control by gate-level hand-shaking
This lecture will cover the basics of asynchronous control, including complete-completion-detection control, and demonstrate autonomous gate-level power gating to reduce energy consumption at the energy-minimum operating point, using an asynchronous FPGA as the example. The lecture will also cover low-voltage operation and tolerance to power-supply bounce as well as aging. In addition, this talk will cover a recent trial of a design flow for random logic using self-synchronous circuits.
ETH Zürich
Taekwang Jang (S’06-M’13-SM’19) received his B.S. and M.S. in electrical engineering from KAIST, Korea, in 2006 and 2008, respectively. From 2008 to 2013, he worked at Samsung Electronics Company Ltd., Yongin, Korea, focusing on mixed-signal circuit design, including analog and all-digital phase-locked loops for communication systems and mobile processors. In 2017, he received his Ph.D. from the University of Michigan and worked as a post-doctoral research fellow at the same institution. In 2018, he joined ETH Zürich as an assistant professor, where he leads the Energy-Efficient Circuits and Intelligent Systems group. He is also a member of the Competence Center for Rehabilitation Engineering and Science, and the chair of the IEEE Solid-State Circuits Society, Switzerland chapter. His research focuses on circuits and systems for highly energy-constrained applications such as wireless sensor nodes and biomedical interfaces. Essential building blocks such as sensor interfaces, energy harvesters, power converters, communication transceivers, frequency synthesizers, and data converters are his primary interests. He holds 15 patents and has (co)authored more than 80 peer-reviewed conference and journal articles. He is the recipient of the 2024 IEEE Solid-State Circuits Society New Frontier Award, the SNSF Starting Grant, the IEEE ISSCC 2021 and 2022 Jan Van Vessem Award for Outstanding European Paper, the IEEE ISSCC 2022 Outstanding Forum Speaker Award, and the 2009 IEEE CAS Guillemin-Cauer Best Paper Award. Since 2022, he has been a TPC member of the IEEE International Solid-State Circuits Conference (ISSCC), IMMD Subcommittee, and the IEEE Asian Solid-State Circuits Conference (A-SSCC), Analog Subcommittee. He also chaired the 2022 IEEE International Symposium on Radio-Frequency Integration Technology (RFIT), Frequency Generation Subcommittee.
Since 2023, he has been serving as an Associate Editor for the IEEE Journal of Solid-State Circuits (JSSC), and he was appointed a Distinguished Lecturer of the Solid-State Circuits Society in 2024.
Energy-Efficient Sensor Interface
In the IoT era, miniaturized sensor systems serve as key leaf nodes by collecting environmental signals and bio-potentials. However, due to the small form factor and limited battery capacity, the energy efficiency of analog and mixed-signal circuits is a critical concern for the long-term operation of a sensor system. In particular, it poses a crucial challenge for sensor interface circuits, whose power consumption must be minimized while maintaining the accuracy and bandwidth of the acquired signal. This short course discusses various sensor interface designs with improved noise and power efficiency.
Fully integrated DC-DC Conversion
Power management integrated circuits are essential building blocks of consumer electronics for the Internet of Things. Among various architectures, fully integrated power management circuits are promising candidates to provide small form factors and meet the high power density demand of modern computing platforms. However, several characteristics of on-chip passive components limit the performance of fully integrated DC-DC converters, such as the small inductance and Q-factor of on-chip inductors and the large parasitic bottom-plate capacitance or low density of on-chip capacitors. This short course will introduce the fundamentals of on-chip DC-DC converter design as well as the latest designs with improved performance.
Low Power Frequency Generation
Miniaturization and interactive communication have been the two main topics dominating recent research on the Internet of Things. The high demand for continuous monitoring of environmental and biomedical information has accelerated sensor technologies as well as circuit innovations. Simultaneously, advances in communication methods and the widespread use of cellular and local data links have enabled the networking of miniaturized sensor systems. In such systems, the reduction of sleep power is critical to making them sustainable on limited battery capacity or harvested energy. This makes the ultra-low-power wake-up timer a critical building block that must be designed within a stringent power budget. At the same time, precise frequency accuracy is essential to maintaining synchronization for data communication. This short course will present fundamentals and recent innovations in ultra-low-power frequency reference circuits for miniaturized IoT systems. The two commonly adopted architectures, on-chip RC oscillators and crystal oscillators, are introduced and discussed in terms of power consumption, noise, temperature sensitivity, line sensitivity, and calibration methods. Finally, a summary of state-of-the-art designs and their remaining challenges will be presented.
KAIST, Daejeon, Korea
Hyun-Sik Kim is currently an Associate Professor of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea. He received his B.S. degree (Hons.) in electronic engineering from Hanyang University, Seoul, South Korea, in 2009, and his M.S. and Ph.D. degrees in electrical engineering from KAIST, in 2011 and 2014, respectively. His research interests include CMOS analog integrated circuit design, with an emphasis on display drivers, power management, and sensory readout chips. Prof. Kim was a recipient of two Gold Prizes in the 18th and 19th Samsung Human-Tech Paper Awards in 2012 and 2013, respectively, the IEEE SSCS Pre-Doctoral Achievement Award in 2014, the IEEE SSCS Seoul Chapter Best Student JSSC Paper Award in 2014, and the KAIST Technology Innovation Award in 2022. He served as a guest editor of the IEEE Solid-State Circuits Letters (SSC-L) and is currently serving on the Technical Program Committees (TPC) for the IEEE International Solid-State Circuits Conference (ISSCC), the IEEE Asian Solid-State Circuits Conference (A-SSCC), and the IEEE Custom Integrated Circuits Conference (CICC).
Current-Shared Cluster of Multiple Integrated Voltage Regulators (IVRs)

Ganging multiple integrated voltage regulators (IVRs) is crucial for delivering sufficient power in system-on-chip (SoC) applications, particularly those with heavy workloads. However, the use of ganged voltage regulators on a shared power grid often faces challenges due to supply-current imbalances among the regulators, resulting in larger voltage ripples and thermal hotspots that compromise reliability. The issue is further intensified in inductive switching IVRs by their tightly spaced on-chip inductors and high switching frequencies, which exacerbate current imbalances.

In this talk, we will explore the state-of-the-art current-sharing techniques for multi-phase IVRs and distributed digital LDOs. We will closely investigate two chip design cases: a 6-phase IVR using bond-wire inductors and a 4-phase IVR with on-chip spiral inductors, focusing on flying-capacitor-based topological solutions for inter-inductor current balancing. Additionally, this talk will include cost-effective solutions to enhance current-sharing accuracy in distributed digital LDO systems without the need for complicated global control mechanisms.

Display Driver ICs – From Basics to Recent Design Challenges

Display driver ICs (DDIs), tasked with digital-to-analog conversion (DAC) and signal drive into pixels, drastically impact image quality in OLED and LED displays. The growing demand for higher visual realism, even in mobile displays, necessitates integrating more source channels into each DDI. Given that the DAC occupies a significant portion of the die area, boosting area efficiency becomes imperative in DDI design to accommodate more channels on a single chip without sacrificing color depth. Moreover, the trend towards high-frame-rate displays (beyond 60Hz) aims to enhance user experience, but faces limitations due to the output buffer’s slew rate in DDIs. Additionally, the emergence of VR/AR micro-displays is introducing new design challenges for CMOS driver and pixel circuits.

This talk will provide a comprehensive investigation of DDI designs, covering foundational principles, technical challenges, and the latest innovations. We will start with an overview of DDIs, assessing their performance across key metrics such as data resolution, die area per channel, linearity, inter-channel deviation, conversion speed, drivability, and power consumption. This talk will then pivot to strategies for balancing some of the performance metrics in DDI design and showcase advanced architectural solutions. It will also cover the progress and hurdles in micro-display drivers, particularly for OLED-on-Silicon and μLED-on-Silicon technologies. In addition, DDIs embedding pixel-current sensing capabilities will be introduced, examining their potential for preemptively addressing burn-in issues.

Exploring Ways to Minimize Dropout Voltage for Energy-Efficient LDO Regulators

Low-dropout (LDO) regulators are ideal off- and on-chip solutions for powering noise-sensitive loads due to their ripple-less output. LDOs also have many benefits over switch-mode dc-dc converters, such as rapid transient response, excellent power supply rejection (PSR), and a compact footprint. Unfortunately, they suffer from an inescapable disadvantage: poor power efficiency, primarily caused by a considerable dropout voltage (VDO). Reducing VDO to improve efficiency often leads to a significant drop in the LDO's regulation performance. Because of this, most LDOs are designed with a large VDO, making them perceived as energy-consuming components of power management systems.

This talk will delve into effective ways to aggressively minimize the dropout voltage without compromising performance, aiming for energy-efficient LDO regulators. We will begin with a thorough investigation of operational principles, analyses, and strategies, exploring the trade-offs among key performance metrics. Next, several promising approaches to realizing energy-efficient LDO regulators will be examined, including traditional digital LDOs, a dual-rail analog/digital-hybrid LDO, a triode-region LDO, and a voltage/current-hybrid (VIH) LDO. Finally, the technical merits and flaws of each high-efficiency LDO topology will be compared. In this talk, I will also share insights from my experience developing the VIH LDO regulator, which achieves 98.6% efficiency and a -75dB PSR at 30kHz.
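The dropout trade-off can be made concrete with a back-of-the-envelope calculation (the numbers below are hypothetical, not figures from the talk): with the input rail at Vout + VDO, an LDO's efficiency is capped at roughly Iload·Vout / ((Iload + Iq)·Vin), so shrinking VDO directly raises the efficiency ceiling.

```python
# Back-of-envelope LDO efficiency: eta = (Iload*Vout) / ((Iload+Iq)*Vin),
# with Vin = Vout + Vdo. All numbers here are illustrative assumptions.
def ldo_efficiency(vout, vdo, iload, iq):
    vin = vout + vdo                  # input rail sits one dropout above Vout
    return (iload * vout) / ((iload + iq) * vin)

for vdo in (0.30, 0.10, 0.02):        # dropout voltage in volts
    eta = ldo_efficiency(vout=1.0, vdo=vdo, iload=10e-3, iq=10e-6)
    print(f"Vdo = {vdo*1000:4.0f} mV -> efficiency ~ {eta*100:.1f} %")
```

Running the sweep shows why a tens-of-millivolts dropout is the only route to efficiencies in the high-90% range, and why regulation performance at such small VDO becomes the central design challenge.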

Boston University
Rabia Yazicigil is an Assistant Professor in the ECE Department at Boston University and Network Faculty at Sabanci University. She was a Postdoctoral Associate at MIT and received her Ph.D. degree from Columbia University in 2016. Her research interests lie at the interface of integrated circuits, bio-sensing, signal processing, security, and wireless communications to innovate system-level solutions for future energy-constrained applications. She has received numerous awards, including the NSF CAREER Award (2024), the Early Career Excellence in Research Award of the Boston University College of Engineering (2024), the Catalyst Foundation Award (2021), the Boston University ENG Dean Catalyst Award (2021), and the "Electrical Engineering Collaborative Research Award" for her Ph.D. research (2016). Dr. Yazicigil is an active member of the Solid-State Circuits Society (SSCS) Women-in-Circuits committee and a member of the 2015 MIT EECS Rising Stars cohort. She was recently selected as an IEEE SSCS Distinguished Lecturer and elected to the IEEE SSCS AdCom as a Member-at-Large. Lastly, she serves as an Associate Editor of the IEEE Transactions on Circuits and Systems-I (TCAS-I) and on the IEEE ISSCC, RFIC, ESSCIRC, and DAC Technical Program Committees.
All-In-One Data Decoders Using GRAND

In 1948, Shannon stated that the best error correction performance comes at longer code lengths. In 1978, Berlekamp, McEliece, and van Tilborg established that optimally accurate decoding of linear codes is NP-complete in code length, so there is no optimally accurate universal decoder at long code lengths. Forward error-correction decoding has traditionally been a code-specific endeavor. Since the design of conventional decoders is tightly coupled to the code structure, one needs a distinct implementation for each code. The standard co-design paradigm either leads to significantly increased hardware complexity and silicon area to decode various codes or to restrictive code standardization to limit hardware footprint. An innovative recent alternative is noise-centric guessing random additive noise decoding (GRAND). This approach uses modern developments in the analysis of guesswork to create a universal algorithm in which the effect of noise is guessed according to statistical knowledge of the noise behavior or through phenomenological observation. Because of its universal nature, GRAND allows efficient decoding of a variety of different codes and rates in a single hardware instantiation. The exploration of the use of different codes, including heretofore undecodable ones, e.g., Random Linear Codes (RLCs), is an interesting facet of GRAND. This talk will introduce universal hard-detection and soft-detection decoders using GRAND, which enable low-latency, energy-efficient, secure wireless communications in a manner that is future-proof, since they accommodate any type of code.

This work is joint with Muriel Medard (MIT) and Ken Duffy (Northeastern University).

Cyber-Secure Biological Systems (CSBS)
This talk will introduce Cyber-Secure Biological Systems, leveraging living sensors constructed from engineered biological entities seamlessly integrated with solid-state circuits. This unique synergy harnesses the advantages of biology while incorporating the reliability and communication infrastructure of electronics, offering a unique solution to societal challenges in healthcare and environmental monitoring. In this talk, examples of Cyber-Secure Biological Systems, such as miniaturized ingestible bioelectronic capsules for gastrointestinal tract monitoring and hybrid microfluidic-bioelectronic systems for environmental monitoring, will be presented.
Physical-Layer Security for Latency- and Energy-Constrained Integrated Systems
The boom of connected IoT nodes and ubiquity of wireless communications are projected to increase wireless data traffic by several orders of magnitude in the near future. While these future scalable networks support increasing numbers of wireless devices utilizing the EM spectrum, ensuring the security of wireless communications and sensing is also a critical requirement under tight resource constraints. The physical layer has increasingly become the target of attacks by exploiting hardware weaknesses, e.g., side-channel attacks, and signal properties, e.g., time, frequency, and modulation characteristics. This talk introduces common security vulnerabilities within wireless systems such as jamming, eavesdropping, counterfeiting, and spoofing, followed by physical-layer countermeasures, while assessing the trade-offs between performance and security. It examines recent research directions, e.g., secure spatio-temporal modulated arrays, temporal swapping of decomposed constellations, RF fingerprinting, and bit-level frequency hopping, and finally discusses research opportunities looking forward.
Kobe University
Makoto Nagata (Senior Member, IEEE) received the B.S. and M.S. degrees in physics from Gakushuin University, Tokyo, Japan, in 1991 and 1993, respectively, and the Ph.D. degree in electronics engineering from Hiroshima University, Hiroshima, Japan, in 2001. He was a Research Associate at Hiroshima University from 1994 to 2002 and an Associate Professor at Kobe University, Kobe, Japan, from 2002 to 2009, where he was promoted to Full Professor in 2009. His research interests include design techniques targeting high-performance mixed analog, RF, and digital VLSI systems, with particular emphasis on power/signal/substrate integrity and electromagnetic compatibility, testing and diagnosis, 2.5D and 3D system integration, as well as their applications to hardware security and hardware safety, and cryogenic electronics for quantum computing. Dr. Nagata is a Senior Member of IEICE. He has been a member of a variety of technical program committees of international conferences, such as the Symposium on VLSI Circuits (2002-2009), the Custom Integrated Circuits Conference (2007-2009), the Asian Solid-State Circuits Conference (2005-2009), the International Solid-State Circuits Conference (2014-2022), the European Solid-State Circuits Conference (since 2020), and many others. He chaired the Technology Directions Subcommittee of the International Solid-State Circuits Conference (2018-2022) and has served as an Executive Committee Member (2023-present). He was the Technical Program Chair (2010-2011), the Symposium Chair (2012-2013), and an Executive Committee Member (2014-2015) of the Symposium on VLSI Circuits. He was an IEEE Solid-State Circuits Society (SSCS) AdCom member (2020-2022) and a Distinguished Lecturer (2020-2021 and 2024-present), and currently serves as the Chapters Vice Chair (since 2022) of the society. He has been an Associate Editor of the IEEE Transactions on VLSI Systems since 2015.
Hardware Security and Safety of IC Chips and Systems
IC chips are key enablers of a smartly networked society and need to be increasingly compliant with security and safety requirements. For example, semiconductor solutions for autonomous vehicles must meet stringent regulations and requirements. While designers develop circuits and systems to meet the performance and functionality targets of such products, countermeasures are proactively implemented in silicon to protect against harmful disturbances and even intentional adversarial attacks. This talk will start with electromagnetic compatibility (EMC) techniques applied to IC chips for safety, to motivate EMC-aware design, analysis, and implementation. It will then discuss IC design challenges in achieving higher levels of hardware security (HWS). Crypto-based secure IC chips are investigated to avoid the risks of side-channel leakage and side-channel attacks, corroborated with silicon demonstrations of analog techniques that protect digital functionality. The EMC and HWS disciplines, both derived from electromagnetic principles, are key to establishing IC design principles for security and safety.
IC Chip and Packaging Interactions for Performance Improvements and Security Protections
Interactions between IC chips and packaging structures differentiate the electronic performance of traditional 2D chips and advanced 2.5D and 3D technologies. This presentation starts with their impacts on signal integrity (SI), power integrity (PI), electromagnetic compatibility (EMC), and electrostatic discharge (ESD) protection, through in-depth Si experiments with in-place noise measurements as well as full-chip and system-level noise simulation. Additionally, the backside of an integrated circuit (IC) chip, more precisely the backside surface of its Si substrate, provides open areas both for circuit performance improvements and for adversarial security attacks; these two uses can conflict and must be traded off when designing for performance and security. The talk also explores security threats on the Si-substrate backside from both passive and active side-channel attack viewpoints and then discusses countermeasure principles.
RF Noise Coupling -- Understanding, Mitigation and Impacts on Wireless Communication Performance of IC Chips and Systems
Noise coupling in RF system-on-chip integration is studied by on-chip and on-board measurements as well as by full-chip and system-level simulation. Power and substrate integrity simulation uses combined chip-package-system-board models and verifies noise coupling in mixed RF and analog-digital integration. Wireless system-level simulation evaluates the impacts of coupled RF noise on wireless performance through quantitative metrics (e.g., communication throughput and minimum receivable power) for various wireless systems such as 4G, 5G, and GPS. In addition, post-silicon techniques at the packaging and assembly stages are potential options to mitigate RF noise coupling problems. The presentation will also include experimental test cases of wireless power transfer modules and unmanned aerial vehicles (drones).
Secure Packaging, Tamper Resistance, and Supply Chain Security of IC Chips
Semiconductor products are potentially compromised for theft, falsification, or invalidation by adversarial attempts and even by unexpected disturbances. This talk provides an overview of physical security threats to semiconductors and then discusses a broad range of countermeasure techniques. Secure packaging exploits vertical structures using post-wafer-process technologies such as through-Si vias, Si backside membranes, and Si interposers for proactive prevention of destructive or nondestructive intrusions. Tamper resistance is achieved at the IC level with analog techniques that protect digital functionality. Supply chain security uses hardware-Trojan-free design verification as well as authentication strategies. Silicon examples will be demonstrated.
University of Ulm
Maurits Ortmanns (Senior Member, IEEE) received the Dr.-Ing. degree from the University of Freiburg, Germany, in 2004. From 2004 to 2005, he worked at Sci-Worx GmbH, Hannover, Germany, in the area of mixed-signal circuits for biomedical implants. In 2006, he joined the University of Freiburg as an assistant professor. Since 2008, he has been a full professor at the University of Ulm, Germany, where he directs the Institute of Microelectronics. He is the author of the textbook Continuous-Time Sigma-Delta A/D Conversion, author or co-author of several other book chapters, and of over 350 IEEE journal articles and conference papers. He holds many patents. He has served as a program committee member of ISSCC, ESSCIRC, DATE, and ECCTD, as ISSCC EU regional chair, and as ISSCC analog subcommittee chair. He has served as an Associate Editor for TCAS, Guest Editor for JSSC, and as a Distinguished Lecturer for SSCS. His research interests include mixed-signal integrated circuit design, with special emphasis on data converters and biomedical applications.
Efficient High Resolution Incremental ADCs
High-resolution, high-efficiency converters are dominated by noise-shaping and oversampling architectures. However, in applications where true Nyquist-rate conversion is required, such as single-shot conversion, multiplexing, or time-interleaving, neither oversampling nor noise shaping can be used: the very concepts that allow these architectures to combine efficiency with performance introduce memory into the system and thus prevent sample-to-sample operation. This talk presents approaches to get around this limitation, i.e., to combine Nyquist-rate conversion with high power efficiency through innovative architecture and circuit design, with a focus on incremental delta-sigma ADCs.
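The sample-to-sample property of an incremental converter can be seen in a behavioral sketch (an idealized first-order model of my own, not a design from the talk): resetting the integrator before each conversion removes the memory that a free-running delta-sigma modulator carries between samples.

```python
# Behavioral model of a first-order incremental delta-sigma ADC.
# The integrator state is reset at the start of every conversion, so each
# result depends only on the current sample (memoryless, Nyquist-like).
# Idealized: no thermal noise, comparator offset, or finite opamp gain.
def incremental_adc(vin, osr=1024, vref=1.0):
    integ, fb, ones = 0.0, 0.0, 0     # fresh integrator each conversion
    for _ in range(osr):              # one conversion takes OSR clock cycles
        integ += vin - fb             # accumulate input minus DAC feedback
        bit = integ >= 0              # 1-bit comparator decision
        fb = vref if bit else -vref   # 1-bit feedback DAC
        ones += bit
    return (2 * ones / osr - 1) * vref  # simple counting decimation
```

With plain counting decimation the error shrinks only as roughly 2·vref/OSR per conversion; reaching high resolution in far fewer cycles is exactly where the architectural innovations covered in the talk come in.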
Implantable Integrated Circuits and Systems for Neurostimulation and Neuromodulation
Implantable medical devices (IMD) are widely used today to restore function to people with disabilities such as deafness, blindness, heart failure, incontinence, neurological disorders, and many others. Such implantable systems become increasingly challenging when a large number of sensing or stimulating sites need to be realized - space and power budget, safety issues, high bidirectional data rates, as well as the large number of electrical interfaces make the electronic circuit design a complex task of research and development. This talk will highlight some of the recent progress towards the realization of high channel count implantable neural interfaces, covering applications and system examples of neural modulators with high efficiency frontends.
Indian Institute of Technology
Shanthi Pavan received the B.Tech. degree in electronics and communication engineering from IIT Madras, Chennai, India, in 1995, and the M.S. and D.Sc. degrees from Columbia University, New York, NY, USA, in 1997 and 1999, respectively. From 1997 to 2000, he was with Texas Instruments, Warren, NJ, USA, where he worked on high-speed analog filters and data converters. From 2000 to June 2002, he worked on microwave ICs for data communication at Bigbear Networks, Sunnyvale, CA, USA. Since July 2002, he has been with IIT Madras, where he is currently the NT Alexander Institute Chair Professor of Electrical Engineering. Prof. Pavan is the author of Understanding Delta-Sigma Data Converters (second edition, with Richard Schreier and Gabor Temes), which received the Wiley-IEEE Press Professional Book Award for the year 2020. His research interests are in the areas of high-speed analog circuit design and signal processing. Dr. Pavan is a fellow of the Indian National Academy of Engineering and the Indian National Science Academy, and the recipient of several awards, including the IEEE Circuits and Systems Society Darlington Best Paper Award in 2009. He has served as the Editor-in-Chief of the IEEE Transactions on Circuits and Systems—I: Regular Papers and on the Technical Program Committee of the International Solid-State Circuits Conference (ISSCC). He has served as a Distinguished Lecturer of the IEEE Circuits and Systems Society and is a two-term Distinguished Lecturer of the Solid-State Circuits Society. He currently serves as the Vice-President of Publications of the IEEE Solid-State Circuits Society and on the editorial board of the IEEE Journal of Solid-State Circuits. He is an IEEE Fellow.
Continuous-Time Pipelined Analog-to-Digital Converters - Where Filtering Meets Analog-to-Digital Conversion
If someone told you that the power, noise, distortion, and area of a mixed-signal block could all be reduced at the same time, you'd probably think that this was a lie. It turns out that it is indeed possible sometimes, and this talk will present an example called the continuous-time pipeline (CTP) ADC. The CTP is an emerging technique that combines filtering with analog-to-digital conversion. Like a continuous-time delta-sigma modulator (CTDSM), a CTP has a "nice" input impedance that is easy to drive and has inherent anti-aliasing. However, unlike a CTDSM, a CTP does not require a high-speed feedback loop to be closed. As a result, it can achieve significantly higher bandwidth (like a Nyquist ADC). After discussing the operating principles behind the CTP, we describe its fundamental benefits over a conventional signal chain that incorporates an anti-alias filter and a Nyquist-rate converter. We will then show design details and measurement results from a 12-bit-ENOB, 100-MHz-bandwidth, 800-MS/s CTP designed in a 65-nm CMOS process.
Design Challenges in Precision Continuous-Time Delta Sigma Data Conversion
Energy-efficient, high-resolution continuous-time delta-sigma modulators need to overcome several issues that are typically neglected in the design of data converters that target more modest in-band noise spectral densities. Examples of such problems include flicker noise, interconnect resistance and DAC inter-symbol-interference. This talk aims to provide some insight into these issues and describe techniques that can be used to address the formidable challenge of designing such converters. To place things in perspective, the techniques will be discussed in the context of single- and multi-bit CTDSMs that achieve about 105dB SNDR in a 250kHz bandwidth designed in a 180nm CMOS process.
Elmore Family School of Electrical & Computer Engineering, Purdue University
Shreyas Sen is an Elmore Associate Professor of ECE & BME, Purdue University. His current research interests span mixed-signal circuits/systems and electromagnetics for the Internet of Bodies (IoB) and Hardware Security. He has authored/co-authored 3 book chapters and over 200 journal and conference papers, and has 25 patents granted/pending. Dr. Sen serves as the Director of the Center for Internet of Bodies (C-IoB) at Purdue. Dr. Sen is the inventor of the Electro-Quasistatic Human Body Communication (EQS-HBC), or Body as a Wire, technology, for which he received the MIT Technology Review top-10 Indian Inventor Worldwide under 35 (MIT TR35 India) Award in 2018 and the Georgia Tech 40 Under 40 Award in 2022. To commercialize this invention, Dr. Sen founded Ixana, where he serves as Chairman and CTO, and led Ixana to awards such as 2x CES Innovation Award 2024, EE Times Silicon 100, and the Indiana Startup of the Year Mira Award 2023, among others. His work has been covered by 250+ news releases worldwide, with invited appearances on TEDx Indianapolis, NASDAQ live Trade Talks at CES 2023, the Indian National Television CNBC TV18 Young Turks Program, NPR subsidiary Lakeshore Public Radio, and the CyberWire podcast. Dr. Sen is a recipient of the NSF CAREER Award 2020, AFOSR Young Investigator Award 2016, NSF CISE CRII Award 2017, Intel Outstanding Researcher Award 2020, Google Faculty Research Award 2017, Purdue CoE Early Career Research Award 2021, Intel Labs Quality Award 2012 for industrywide impact on USB-C type, Intel Ph.D. Fellowship 2010, IEEE Microwave Fellowship 2008, GSRC Margarida Jacome Best Research Award 2007, and nine best paper awards, including IEEE CICC 2019 and 2021 and IEEE HOST 2017-2020, for four consecutive years. Dr. Sen's work was chosen as one of the top-10 papers in the Hardware Security field (TopPicks 2019).
He serves or has served as an Associate Editor for IEEE Solid-State Circuits Letters (SSC-L), Nature Scientific Reports, Frontiers in Electronics, and IEEE Design & Test, as an Executive Committee member of the IEEE Central Indiana Section, and as a Technical Program Committee member of ISSCC, CICC, DAC, CCS, IMS, DATE, ISLPED, ICCAD, ITC, and VLSI Design. Dr. Sen is a Senior Member of IEEE.
Recent Circuit Advances for Resilience to Side-Channel Attacks
Computationally secure cryptographic algorithms, when implemented on physical hardware, leak correlated physical signatures (e.g., power supply current, electromagnetic radiation, acoustic, thermal) which can be exploited to break the crypto engine. Physical-layer countermeasures, guided by an understanding of the physical leakage, including circuit-level and layout-level countermeasures, promise strong resilience by reducing the leakage at its source. The past decade has seen significant advances in circuit-level countermeasures to side-channel attacks. In this talk, we will cover the fundamentals of these leakages and how each countermeasure increases resilience, diving into the working mechanism of each and comparing the pros and cons of the techniques. The talk concludes by highlighting the open problems and future needs of this field.
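As a minimal illustration of the leakage these countermeasures suppress (a self-contained simulation of my own, using a hypothetical stand-in S-box and a Hamming-weight leakage model, not an attack presented in the talk), a correlation power analysis recovers a secret key byte simply by ranking key guesses against noisy "power traces":

```python
# Simulated correlation power analysis (CPA) against a hypothetical leaky
# device: each trace is the Hamming weight of sbox(plaintext ^ key) plus
# Gaussian noise. The S-box is a random stand-in, NOT the real AES S-box.
import random

rng = random.Random(0)                        # fixed seed for repeatability
SBOX = list(range(256)); rng.shuffle(SBOX)    # hypothetical substitution box
HW = [bin(v).count("1") for v in range(256)]  # Hamming-weight lookup table

SECRET_KEY = 0x3C                             # what the attacker will recover

def capture(plaintext):
    """One noisy 'power measurement' from the simulated device."""
    return HW[SBOX[plaintext ^ SECRET_KEY]] + rng.gauss(0, 0.5)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def cpa_attack(n_traces=500):
    pts = [rng.randrange(256) for _ in range(n_traces)]
    traces = [capture(p) for p in pts]
    # Rank all 256 key guesses by |correlation| of model vs. measurement;
    # only the correct guess predicts the leakage consistently.
    return max(range(256),
               key=lambda g: abs(pearson([HW[SBOX[p ^ g]] for p in pts],
                                         traces)))
```

The circuit-level countermeasures discussed in the talk work precisely by destroying this correlation at the source, e.g., by flattening or randomizing the current signature, so that vastly more traces are needed for the same attack.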
Secure and Efficient Internet of Bodies using Electro-Quasistatic Human Body Communication

Radiative communication using electromagnetic (EM) fields is the state of the art for connecting wearable and implantable devices, enabling prime applications in connected healthcare, electroceuticals, neuroscience, augmented and virtual reality (AR/VR), and human-computer interaction (HCI), forming a subset of the Internet of Things called the Internet of Bodies (IoB). However, owing to the radiative nature of traditional wireless communication, EM signals propagate in all directions, inadvertently allowing an eavesdropper to intercept the information. Moreover, because only a fraction of the energy is picked up by the intended device, and because the carrier frequency is high compared to the information content, wireless communication tends to suffer from poor energy efficiency (>nJ/bit). Noting that all IoB devices share a common medium, i.e., the human body, utilizing the conductivity of the human body allows low-loss transmission, termed human body communication (HBC), and improves energy efficiency. Conventional HBC implementations still suffer from significant radiation, compromising physical security and efficiency. Our recent work has developed Electro-Quasistatic Human Body Communication (EQS-HBC), a method for localizing signals within the body using low-frequency transmission, thereby making it extremely difficult for a nearby eavesdropper to intercept critical private data. This produces a covert communication channel, i.e., the human body as a ‘wire’, while also reducing interference.

In this talk, I will explore the fundamentals of radio communication around the human body, leading to the evolution of EQS-HBC, and show recent advancements in the field, which holds strong promise to become the future of the Body Area Network (BAN). I will show the theoretical development of the first Bio-Physical Model of EQS-HBC and how it was leveraged to develop the world’s lowest-energy (<10 pJ/b) and the world’s first sub-μW Physically and Mathematically Secure IoB Communication SoC, with >100x improvement in energy efficiency over Bluetooth. Finally, I will highlight the possibilities and applications in the fields of HCI, medical device communication, and neuroscience, including a few video demonstrations. We will also highlight how such low-power communication, in combination with in-sensor intelligence, is paving the way for Secure and Efficient IoB Sensor Nodes.

Associate Professor, University of Macau
Sai-Weng Sin (Terry) (Senior Member, IEEE) received the B.S., M.S., and Ph.D. degrees in electrical and electronics engineering from the University of Macau, Macao, China, in 2001, 2003, and 2008, respectively. He is currently an Associate Professor in the Faculty of Science and Technology, University of Macau, and the Deputy Director of the State Key Laboratory of Analog and Mixed-Signal VLSI, University of Macau, Macao, China. He has published the book “Generalized Low-Voltage Circuit Techniques for Very High-Speed Time-Interleaved Analog-to-Digital Converters” (Springer), holds 12 patents, and has authored over 170 technical journal and conference papers in the field of high-performance data converters and analog mixed-signal integrated circuits. Dr. Sin currently serves as the Student Demonstration Program Chair in the Technical Program Committee of the IEEE Asian Solid-State Circuits Conference (A-SSCC) and as a subcommittee chair of the International Conference on Integrated Circuits, Technologies and Applications (ICTA). He served as a Review Committee Member of the International Symposium on Circuits and Systems (ISCAS). He is currently an Associate Editor-in-Chief (Digital Communications) of the IEEE Transactions on Circuits and Systems II: Express Briefs, and an Associate Editor of IEEE Access and the Journal of Semiconductors. He is an IEEE SSCS Distinguished Lecturer for 2024 and 2025. He was a co-recipient of the 2011 ISSCC Silk Road Award, the Student Design Contest Award at A-SSCC 2011, and the 2011 State Science and Technology Progress Award (second class), China.
The Historical Development of Data Converters – ADCs that last from 1954 to 2024
Data converters are among the key building blocks, and often the performance bottleneck, in the many applications of integrated circuits in our daily lives. Their development is a fundamental driving force behind modern smart mobile devices built on sensing, communication, and artificial intelligence. Data converters, e.g., SAR ADCs, also have a long history: they have been key to advancing electronics technology in the distant past, remain so in the modern era, and will continue to be in the foreseeable future. This talk will present the historical development of data converters, review the key development trends of the current era, and discuss what we can do next.
Weightings in Incremental ADCs – How the weights can break and make the Incremental ADCs
Incremental delta-sigma analog-to-digital converters (IADCs) are widely used in modern high-fidelity audio, sensor, and low-power IoT applications. Over the past years, techniques for implementing high-resolution IADCs have improved significantly, for example in handling the weighting problems inside IADCs to overcome thermal noise and DAC mismatch issues. This talk offers a comprehensive review of weighting considerations in IADCs. The influence of weightings on thermal noise and DAC mismatch is analyzed, and the use of weighting in the conversion algorithms is described in detail. Advanced architectures that take advantage of the weightings, based on recent academic achievements, are presented, with design examples illustrating successful practical implementations.
Massachusetts Institute of Technology
Vivienne Sze is an Associate Professor in the Electrical Engineering and Computer Science Department at MIT. She works on computing systems that enable energy-efficient machine learning, computer vision, and video compression/processing for a wide range of applications, including autonomous navigation, digital health, and the Internet of Things. She is widely recognized for her leading work in these areas and has received awards including faculty awards from Google, Facebook, and Qualcomm, the Symposium on VLSI Circuits Best Student Paper Award, the IEEE Custom Integrated Circuits Conference Outstanding Invited Paper Award, and the IEEE Micro Top Picks Award. As a member of the Joint Collaborative Team on Video Coding, she received the Primetime Engineering Emmy Award for the development of the High Efficiency Video Coding (HEVC) video compression standard. She is a co-editor of High Efficiency Video Coding (HEVC): Algorithms and Architectures (Springer, 2014) and co-author of Efficient Processing of Deep Neural Networks (Synthesis Lectures on Computer Architecture, Morgan & Claypool, 2020). For more information about Prof. Sze’s research, please visit: http://sze.mit.edu
Efficient Computing for AI and Robotics: From Hardware Accelerators to Algorithm Design
The compute demands of AI and robotics continue to rise due to the rapidly growing volume of data to be processed, the increasingly complex algorithms for higher quality of results, and the demands for energy efficiency and real-time performance. In this talk, we will discuss the design of efficient hardware accelerators and the co-design of algorithms and hardware that reduce energy consumption while delivering real-time and robust performance for applications including deep neural networks, data analytics with sparse tensor algebra, and autonomous navigation. We will also discuss our recent work that balances flexibility and efficiency for domain-specific accelerators and reduces the cost of analog-to-digital converters for processing-in-memory accelerators. Throughout the talk, we will highlight important design principles, methodologies, and tools that can facilitate an effective design process.
Efficient Computing for Autonomy and Navigation

A broad range of next-generation applications will be enabled by low-energy autonomous vehicles, including insect-size flapping-wing robots that can help with search and rescue, chip-size satellites that can explore nearby stars, and blimps that can stay in the air for years to provide communication services in remote locations. Autonomy capabilities for these vehicles will be unlocked by building their computers from the ground up, and by co-designing the algorithms and hardware for autonomy and navigation. In this talk, I will present various methods, algorithms, and computing hardware that deliver significant improvements in energy consumption and processing speed for tasks such as visual-inertial navigation, depth estimation, motion planning, mutual-information-based exploration, and deep neural networks for robot perception. We will also discuss the importance of efficient computing in reducing the carbon footprint for sustainable large-scale deployment of autonomous vehicles.

Much of the work presented in this talk was developed in the Low-Energy Autonomy and Navigation (LEAN) interdisciplinary group at MIT (http://lean.mit.edu), which is co-directed by Vivienne Sze and Sertac Karaman.

Seoul National University
Jerald Yoo (S’05-M’10-SM’15) received the B.S., M.S., and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2002, 2007, and 2010, respectively. From 2010 to 2016, he was with the Department of Electrical Engineering and Computer Science, Masdar Institute, Abu Dhabi, United Arab Emirates, where he was an Associate Professor. From 2010 to 2011, he was also with the Microsystems Technology Laboratories (MTL), Massachusetts Institute of Technology (MIT), as a visiting scholar. Between 2017 and 2024, he was with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore, as an Associate Professor. Since 2024, he has been with the Department of Electrical and Computer Engineering, Seoul National University, where he is currently an Associate Professor. He has pioneered research on Body-Area Network (BAN) transceivers for communication/powering and wearable body sensor networks using the planar-fashionable circuit board for continuous health monitoring systems. He authored book chapters in Biomedical CMOS ICs (Springer, 2010), Enabling the Internet of Things—From Circuits to Networks (Springer, 2017), The IoT Physical Layer (Chapter 8, Springer, 2019), and the Handbook of Biochips (Biphasic Current Stimulator for Retinal Prosthesis, Springer, 2021). His current research interests include low-energy circuit technology for wearable bio-signal sensors, flexible circuit board platforms, BAN for communication and powering, ASICs for piezoelectric Micromachined Ultrasonic Transducers (pMUT), and System-on-Chip (SoC) design to system realization for wearable healthcare applications. Dr. Yoo is an IEEE Solid-State Circuits Society (SSCS) Distinguished Lecturer (2024-2025 and 2017-2018). He also served as an IEEE Circuits and Systems Society (CASS) Distinguished Lecturer (2019-2021).
He is the recipient or a co-recipient of several awards: the IEEE International Solid-State Circuits Conference (ISSCC) 2020 and 2022 Demonstration Session Awards (Certificate of Recognition), the IEEE International Symposium on Circuits and Systems (ISCAS) 2015 Best Paper Award (BioCAS Track), the ISCAS 2015 Runner-Up Best Student Paper Award, the Masdar Institute Best Research Award in 2015, and the IEEE Asian Solid-State Circuits Conference (A-SSCC) Outstanding Design Award (2005). He was the founding vice-chair of the IEEE SSCS United Arab Emirates (UAE) Chapter and is the chair of the IEEE SSCS Singapore Chapter. Currently, he serves as an Executive Committee member and a Technical Program Committee member of the IEEE International Solid-State Circuits Conference (ISSCC), chair of the ISSCC Student Research Preview, chair of the A-SSCC Emerging Technologies and Applications Subcommittee, and a Steering Committee member of the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS). He is also a member of the Analog Signal Processing Technical Committee of the IEEE Circuits and Systems Society and was an Associate Editor of the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS) and the IEEE Open Journal of the Solid-State Circuits Society (OJ-SSCS).
Body Area Network – Connecting and powering things together around the body

Body Area Network (BAN) is an attractive means for continuous and pervasive health monitoring, providing connectivity and power to sensors around the human body. Yet its unique and harsh environment presents circuit designers with many challenges. As the human body absorbs the majority of RF energy around the GHz band, existing RF radios may not be ideal for communication between on-body sensors, and neither is RF wireless power transfer. When it comes to energy harvesting, the harvesting location is often not aligned with the sensor location (a.k.a. location mismatch).

To address these issues, this talk presents the Body Coupled Communication (BCC)-based BAN. BCC BAN utilizes the human body itself as a communication medium, which has orders of magnitude lower path loss than RF in the BAN environment. We will begin with channel characteristics, followed by design considerations and transceiver implementation examples. We will then look into what circuit designers should consider in such non-conventional environments. Low-energy circuit techniques to overcome the associated limitations will also be addressed. Lastly, we will discuss various system aspects of the BAN, including powering up wearables using the wearable BAN.

Low-power, Low-noise Sensor Interface Circuits for Biomedical Applications

Biomedical and healthcare applications provide attractive opportunities for the semiconductor sector. In both fields, the target is to gather data from multiple sensor nodes with minimal power consumption while maintaining low-noise operation. However, designing a sensor interface circuit for such applications is challenging due to the harsh environment. In addition, the trade-off between available resources and performance among the components, in both the analog front end and the digital back end, is crucial.

This talk will cover design strategies for sensor interface circuits. Starting from a basic op-amp, we will first explore the difficulties, limitations, and potential pitfalls in sensor interfaces, and strategies to overcome them. The need for low-noise operation leads to two dynamic offset compensation techniques: auto-zeroing and chopper stabilization. After that, system-level considerations for improving key metrics such as energy efficiency will be introduced. Several state-of-the-art instrumentation amplifiers that emphasize different parameters will also be discussed. We will then see how the signal analysis part impacts analog sensor interface circuit design. The lecture will conclude with interesting aspects and opportunities that lie ahead.

On-Chip Epilepsy Detection: Where Machine Learning Meets Patient-Specific Wearable Healthcare

Epilepsy is a severe, chronic neurological disorder that affects over 65 million people worldwide. Yet current seizure/epilepsy detection and treatment rely mainly on a physician interviewing the subject, which is not effective for infants and children. Moreover, patient-to-patient and age-to-age variations in seizure patterns make detection particularly challenging. To expand the beneficiary group even to infants and to adapt effectively to each patient, a wearable-form-factor, patient-specific system with machine learning is crucial. However, the wearable environment is challenging for circuit designers due to the unstable skin-electrode interface, large mismatch, and static/dynamic offsets.

This lecture will cover design strategies for a patient-specific epilepsy detection System-on-Chip (SoC). We will first explore the difficulties, limitations, and potential pitfalls in wearable interface circuit design and strategies to overcome them. Starting from a one-op-amp instrumentation amplifier (IA), we will cover various IA circuit topologies and their key metrics for offset compensation. Several state-of-the-art instrumentation amplifiers that emphasize different parameters will also be discussed. Moving on, we will cover feature extraction and patient-specific and patient-independent classification using machine learning techniques. Finally, an on-chip epilepsy detection and recording sensor SoC will be presented, which integrates all the components covered during the lecture. The lecture will conclude with interesting aspects and opportunities that lie ahead.

Towards Monolithic Mobile Ultrasound Imaging System for Medical and Drone Applications

The Ultrasound Imaging System (UIS) has been widely used in medical imaging thanks to its non-invasive, non-destructive monitoring nature, but so far the UIS has had a large form factor, making it difficult to integrate into mobile devices. For drone and robotic vision and navigation, low-power 3-D depth sensing that operates robustly under strong/weak light and various weather conditions is crucial. CMOS image sensors (CIS) and light detection and ranging (LiDAR) can provide high-fidelity imaging. However, CIS lacks depth sensing and has difficulty in low-light conditions, while LiDAR is expensive and struggles with strong direct interference sources. UIS, on the other hand, is robust to various weather and light conditions and is cost-effective. However, in the air channel, it often suffers from long image-reconstruction latency and low frame rate.

To address these issues, this talk introduces UIS ASICs for medical and drone applications. The medical UIS ASIC is designed to transmit pulses and receive echoes through a 36-channel 2-D piezoelectric Micromachined Ultrasound Transducer (pMUT) array. The 36-channel ASIC integrates a transmitter (TX), a receiver (RX), and an analog-to-digital converter (ADC) within the 250-μm channel pitch while consuming low power and supporting calibration to compensate for the process variation of the pMUT. With its small form factor, Intravascular Ultrasound (IVUS) and Intracardiac Echocardiography (ICE) become viable applications. The ASIC, in 0.18-μm 1P6M standard CMOS, is verified with both electrical and acoustic experiments using a 6×6 pMUT array. The ASIC for drone applications generates 28-Vpp pulses in standard CMOS, and its digital back-end (DBE) achieves 9.83M-focal-point/s throughput to enable real-time 3-D image streaming at 24 frames/s. With an 8×8 bulk piezo transducer array, the UIS ASIC is installed on an entry-level consumer drone to demonstrate 7-m range detection while the drone is flying. The talk will conclude with interesting research directions lying ahead in UIS.