2023 CICC Keynote Speakers

Keynote 1 

Monday, April 24 8:20 am-9:10 am

Daniel Cooley, Chief Technology Officer, Silicon Labs, USA

Title: Charting the Connected Future

Bio: Daniel Cooley joined Silicon Labs in 2005 and has served in various leadership roles throughout his tenure. He previously served as chief strategy officer, responsible for Silicon Labs’ overall strategy, corporate development, M&A, emerging markets, and security. Cooley joined Silicon Labs as a chip design engineer developing broadcast products such as AM/FM radios and short-range wireless devices. He has a Master of Science degree in Electrical Engineering from Stanford University and a Bachelor of Science degree in Electrical Engineering from The University of Texas at Austin. He holds four patents in radio and low-power technology. Cooley currently serves on the board of directors for the Thinkery, Texas 4000, and the Texas Crew Foundation.

Abstract:

Cloud connectivity transforms devices. We saw it first with PCs, then phones, and now, embedded devices and the IoT. Silicon Labs CTO and Senior Vice President Daniel Cooley will discuss the final steps needed to achieve the full potential of cloud-connected embedded computing. The IoT is building towards an authenticated software model critical to establishing privacy and trust in the data being transferred to and from the cloud. Designing for highly constrained embedded computing is challenging but satisfying as engineers unlock more applications for the billions of devices being deployed. Silicon Labs is charting the course for cloud-connected embedded computing and, together with its customers and partners, is building a smarter, more connected world.

Luncheon Keynote

Tuesday, April 25 12:00 pm-1:30 pm

*Registration Required

Kenneth O, Professor of Electrical Engineering and Texas Instruments Distinguished University Chair, The University of Texas at Dallas, USA

Title: Terahertz CMOS Going Anywhere?

Bio: Kenneth O received his S.B., S.M., and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology, Cambridge, MA, in 1984, 1984, and 1989, respectively. From 1989 to 1994, Dr. O worked at Analog Devices Inc. developing sub-micron CMOS processes for mixed-signal applications as well as high-speed bipolar and BiCMOS processes. He was a professor at the University of Florida, Gainesville from 1994 to 2009. He is currently the Director of the Texas Analog Center of Excellence and the Texas Instruments Distinguished University Chair Professor of Analog Circuits and Systems at the University of Texas at Dallas. His research group is developing circuits and components required to implement analog and digital systems operating at frequencies up to 40 THz using silicon IC technologies. Dr. O was the President of the IEEE Solid-State Circuits Society in 2020 and 2021. He has authored and co-authored 290 journal and conference publications and holds 15 patents. He received the 2014 Semiconductor Research Corporation University Researcher Award and is an IEEE Fellow.

Abstract: Terahertz operation of CMOS circuits, which once appeared to be the wishful hope of a few, has become a reality. Signal generation up to 1.33 THz, coherent detection up to 1.2 THz, and incoherent detection up to ~10 THz have been demonstrated using CMOS integrated circuits. Furthermore, highly integrated transceivers operating at frequencies up to ~400 GHz have been demonstrated, along with affordable approaches for packaging and testing terahertz CMOS circuits. The performance of these CMOS circuits is sufficient, or close to sufficient, to support electronic smelling using rotational spectroscopy that can detect and quantify the concentrations of a wide variety of gases; imaging that can enable operation in a wide range of visually impaired conditions; and high-bandwidth communication. Despite this progress, wide deployment of terahertz CMOS circuits and systems is not imminent. This talk will review the state of the art in terahertz CMOS circuit and system performance and discuss the applications that this performance profile can support. It will also examine potential advances, technologies, and research efforts that can broaden the application areas and more rapidly enable large-scale commercialization of the technology.

Keynote 2

Wednesday, April 26 8:00 am-8:50 am

Bill Dally, Chief Scientist, NVIDIA, USA

Title: Directions in Deep Learning Hardware

Bio: Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing, and synchronization technology found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low-overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes Award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He was a cofounder of Velio Communications and Stream Processors.

Abstract: 

The current resurgence of artificial intelligence is due to advances in deep learning. Systems based on deep learning now exceed human capability in speech recognition, object classification, and playing games like Go. Deep learning has been enabled by powerful, efficient computing hardware. The algorithms used have been around since the 1980s, but it is only in the last decade, when powerful GPUs became available to train networks, that the technology has become practical. Advances in deep learning are now gated by hardware performance. In the last decade, the efficiency of deep learning inference on GPUs has improved by 1000x. Much of this gain was due to improvements in data representation, starting with FP32 in the Kepler generation of GPUs and scaling down to Int8 and FP8 in the Hopper generation. This talk will review this history and discuss further improvements in number representation, including logarithmic representation, optimal clipping, and per-vector quantization. It will also discuss sparsity, memory organization, optimized circuits, and analog computation.
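As a rough, illustrative sketch of the per-vector quantization and clipping ideas named in the abstract (not NVIDIA's or the speaker's implementation), the Python snippet below quantizes each small block of weights to INT8 with its own scale factor, optionally clipping each block at a percentile of its magnitude so that rare outliers do not dominate the rounding error; the block size, percentile, and function names are hypothetical choices for the example.

# Illustrative sketch only: simple per-vector (per-block) symmetric INT8
# quantization. Each block of values gets its own scale factor, so an
# outlier in one block does not inflate the quantization error of others.
import numpy as np

def quantize_per_vector(x, block=64, clip_pct=None):
    """Quantize a 1-D float array to int8 with one scale per block.

    clip_pct optionally clips each block at a percentile of |x| before
    scaling: a crude stand-in for the 'optimal clipping' idea of trading
    some clipping error for lower rounding error on the bulk of the values.
    """
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block
    xp = np.pad(x, (0, pad)).reshape(-1, block)       # split into short vectors
    amax = np.abs(xp).max(axis=1, keepdims=True)
    if clip_pct is not None:
        amax = np.percentile(np.abs(xp), clip_pct, axis=1, keepdims=True)
    scale = np.where(amax > 0, amax / 127.0, 1.0)     # one scale per vector
    q = np.clip(np.round(xp / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q, scale, n):
    return (q.astype(np.float32) * scale).reshape(-1)[:n]

# Usage: per-vector scales track local dynamic range better than a single
# global scale would.
w = np.random.randn(1024).astype(np.float32)
q, s = quantize_per_vector(w, block=64, clip_pct=99.5)
print(np.abs(dequantize(q, s, len(w)) - w).mean())

Giving each short vector its own scale lets the quantizer follow the local dynamic range of the data, which is the kind of representation-level gain the abstract attributes to moving from FP32 toward Int8 and FP8.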