Professor Hai (Helen) Li,
Duke University, USA
Hai “Helen” Li is the Clare Boothe Luce Professor and Chair of the Department of Electrical and Computer Engineering at Duke University. She received her B.S. and M.S. from Tsinghua University and her Ph.D. from Purdue University. Her research interests include neuromorphic circuits and systems for brain-inspired computing, machine learning acceleration and trustworthy AI, conventional and emerging memory design and architecture, and software/hardware co-design. Dr. Li has served as an Associate Editor for multiple IEEE and ACM journals, as General Chair or Technical Program Chair of multiple IEEE/ACM conferences, and as a Technical Program Committee member of over 30 international conference series. She was a Distinguished Lecturer of the IEEE CAS Society (2018-2019) and a Distinguished Speaker of the ACM (2017-2020). Dr. Li is a recipient of the NSF CAREER Award, the DARPA Young Faculty Award, the TUM-IAS Hans Fischer Fellowship from Germany, the ELATE Fellowship, nine best paper awards, and another nine best paper nominations. Dr. Li is a Fellow of the ACM and the IEEE.
As artificial intelligence (AI) transforms various industries, state-of-the-art models have grown exponentially in size and capability. However, previous optimization efforts have concentrated primarily on computation, often overlooking a significant bottleneck in AI system efficiency: the storage, retrieval, and orchestration of data. To address this challenge, we adopt a data-centric approach, performing collaborative optimization across the algorithm, system, architecture, and circuit layers. In this presentation, we will first discuss the memory capacity and bandwidth bottlenecks that have emerged with the advancement of AI models. We will then present our optimization efforts aimed at addressing these bottlenecks, which include compressing the AI model, tailoring the computation schedule, and customizing the memory hierarchy. Additionally, we explore the potential of compute-in-memory as a comprehensive solution to these challenges, presenting a holistic approach that integrates computation and memory for enhanced efficiency in AI systems. We will conclude by sharing our insights and vision for a data-centric approach to optimizing AI systems.
Professor David Atienza,
EPFL, Switzerland
David Atienza is a professor of Electrical and Computer Engineering and leads both the Embedded Systems Laboratory (ESL) and the new EcoCloud Sustainable Computing Center at EPFL, Switzerland. He received his M.Sc. and Ph.D. degrees in Computer Science and Engineering from UCM (Spain) and IMEC (Belgium). His research interests include system-level design methodologies for high-performance multi-processor systems-on-chip (MPSoCs) and low-power Internet-of-Things (IoT) systems, including edge AI architectures for wearables and IoT systems as well as sustainable computing approaches for many-core servers. He is a co-author of more than 400 papers and two books, and holds 14 licensed patents on these topics. He served as DATE General Chair and Program Chair, and is currently Editor-in-Chief of IEEE TCAD. Among other distinctions, Dr. Atienza has received the ICCAD 2020 10-Year Retrospective Most Influential Paper Award, the 2018 DAC Under-40 Innovators Award, and an ERC Consolidator Grant. He is a Fellow of IEEE and a Fellow of ACM, served as IEEE CEDA President (2018-2019) and on the IEEE CASS Board of Governors, and is currently the Chair of the European Design Automation Association (EDAA).
Edge computing is becoming an essential paradigm across multiple domains as our increasingly connected world moves toward the smart world vision. In addition, the new wave of Artificial Intelligence (AI), particularly complex Machine Learning (ML) and Deep Learning (DL) models, demands new computing paradigms and edge AI architectures beyond traditional general-purpose computing to make a sustainable smart world a viable reality. In this keynote, Prof. Atienza will discuss new approaches to effectively design the next generation of neuro-inspired edge AI computing architectures by taking inspiration from how the brain processes incoming information and adapts to changing conditions. These novel edge AI architectures rely on two key concepts. First, they exploit the idea of accepting computing inexactness at the system level while integrating multiple computing accelerators (such as in-memory computing or coarse-grained reconfigurable accelerators). Second, they can operate ensembles of neural networks to improve the robustness of ML/DL outputs at the system level, while minimizing the memory and computation resources required by the target application. These two concepts have enabled the new open-source eXtended and Heterogeneous Energy-Efficient Hardware Platform (X-HEEP). X-HEEP will be showcased in this presentation under complex real-life working conditions of edge AI systems in healthcare, toward our dreamed sustainable smart world.
Abu Dhabi, UAE
aicas2024@ku.ac.ae
xx xxx xxx xxx
©2024 Khalifa University. All Rights Reserved