Michelle Effros, California Institute of Technology
Title: On Practical, Optimal Random Access Communication
Abstract: Random access communication plays a central role in modern communication systems. For example, wireless devices that communicate through WiFi hotspots and cell phone towers access the network through a random access channel. The key challenge that distinguishes random access communication from other multiple access scenarios is that the number of transmitters can vary widely and unpredictably from one use to the next, and neither the transmitters nor the receivers know how many transmitters are in operation at any given time. Traditional methods for dealing with the resulting uncertainty either sacrifice performance for simplicity or pay a heavy price in overhead to eliminate transmitter-set uncertainty. This talk considers new methods for tackling random access communication, focusing on the competing goals of building practical codes and achieving the best possible performance. Central results include new coding strategies and bounds that capture some of their underlying complexity-performance tradeoffs.
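The talk does not tie its results to any particular protocol; purely as a point of reference, the minimal slotted-ALOHA-style simulation below (all parameters are illustrative assumptions, not from the talk) shows how an unknown, varying number of active transmitters drives collisions and throughput, which is exactly the uncertainty the abstract describes.

    # Minimal sketch (illustrative only): a slotted random access channel in
    # which the number of active transmitters varies randomly from slot to slot
    # and is unknown to everyone. A slot succeeds only when exactly one device
    # transmits; any overlap is a collision.
    import numpy as np

    rng = np.random.default_rng(0)
    slots = 100_000
    mean_active = 3.0   # average number of active devices per slot (assumed)
    tx_prob = 0.3       # per-device transmission probability (assumed)

    active = rng.poisson(mean_active, size=slots)    # unknown transmitter set
    transmitting = rng.binomial(active, tx_prob)     # how many actually send
    throughput = np.mean(transmitting == 1)          # success = exactly one sender
    print(f"empirical throughput: {throughput:.3f} packets/slot")

Sweeping tx_prob in such a toy model makes the simplicity-versus-performance tension concrete: a fixed, conservative transmission probability is easy to implement but wastes slots whenever the true number of active devices happens to be small.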
Bio: Michelle Effros received the B.S. degree with distinction in 1989, the M.S. degree in 1990, and the Ph.D. degree in 1994, all in electrical engineering from Stanford University. During the summers of 1988 and 1989 she worked at Hughes Aircraft Company, researching modulation schemes, real-time implementations of fast data rate error-correction schemes, and future applications for fiber optics in space technology.
She is currently Professor of Electrical Engineering at the California Institute of Technology, where she was Assistant Professor from 1994 to 2000 and Associate Professor from 2000 to 2005. Her research interests include information theory, data compression, communications, pattern recognition, speech recognition, and image processing.
Professor Effros received Stanford's Frederick Emmons Terman Engineering Scholastic Award (for excellence in engineering) in 1989, the Hughes Masters Full-Study Fellowship in 1989, the National Science Foundation Graduate Fellowship in 1990, the AT&T Ph.D. Scholarship in 1993, the NSF CAREER Award in 1995, the Charles Lee Powell Foundation Award in 1997, and the Richard Feynman-Hughes Fellowship in 1997. She is a member of Tau Beta Pi, Phi Beta Kappa, Sigma Xi, and the IEEE Information Theory, Signal Processing, and Communications Societies. She served as Editor of the IEEE Information Theory Society Newsletter from 1995 to 1998, as Co-Chair of the NSF-sponsored Workshop on Joint Source-Channel Coding in 1999, and has been a Member of the Board of Governors of the IEEE Information Theory Society since 1998.
Yi Ma, University of California, Berkeley
Title: CTRL: Closed-Loop Data Transcription via Rate Reduction
Abstract: In this talk we introduce a principled computational framework for learning a compact, structured representation of real-world datasets that is both discriminative and generative. More specifically, we propose to learn a closed-loop transcription between the distribution of a high-dimensional multi-class dataset and an arrangement of multiple independent subspaces, known as a linear discriminative representation (LDR). We argue that the encoding and decoding mappings of the transcription naturally form a closed-loop sensing and control system. The optimality of the closed-loop transcription, in terms of parsimony and self-consistency, can be characterized in closed form by an information-theoretic measure known as the rate reduction. The optimal encoder and decoder can then be naturally sought through a two-player minimax game over this principled measure. To a large extent, this new framework unifies the concepts and benefits of auto-encoding and GANs, and generalizes them to the setting of learning a representation of multi-class visual data that is both discriminative and generative. This work opens many new mathematical problems regarding learning linearized representations of nonlinear submanifolds in high-dimensional spaces. More broadly and significantly, it may suggest potential computational mechanisms for how visual memory of multiple object classes could be formed jointly, incrementally, or without supervision through a purely internal closed-loop feedback process.
Related papers can be found at: https://jmlr.org/papers/v23/21-0631.html and https://www.mdpi.com/1099-4300/24/4/456/html.
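For readers who want a concrete handle on the rate-reduction measure the abstract refers to, the sketch below (written for this summary, not taken from the papers' code; the toy data and the distortion parameter eps are illustrative assumptions) computes Delta R = R(Z) - sum_j (n_j/n) R(Z_j): the coding rate of the whole feature set minus the weighted average coding rate of its class-conditional parts.

    # Illustrative rate-reduction computation on toy 2-D features.
    import numpy as np

    def coding_rate(Z, eps=0.5):
        """R(Z): rate needed to code features Z (d x n) up to distortion eps."""
        d, n = Z.shape
        _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T))
        return 0.5 * logdet

    def rate_reduction(Z, labels, eps=0.5):
        """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): whole minus weighted parts."""
        n = Z.shape[1]
        r_parts = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
                      for c in np.unique(labels))
        return coding_rate(Z, eps) - r_parts

    rng = np.random.default_rng(0)
    # Two tight clusters in 2-D stand in for class-conditional features.
    Z = np.hstack([rng.normal([3.0, 0.0], 0.1, (50, 2)).T,
                   rng.normal([0.0, 3.0], 0.1, (50, 2)).T])
    y = np.array([0] * 50 + [1] * 50)
    print(rate_reduction(Z, y), rate_reduction(Z, rng.permutation(y)))

Well-separated classes yield a larger Delta R than the same features with shuffled labels, which is the sense in which the measure rewards representations that are diverse across classes yet compact within each class.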
Bio: Yi Ma is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His research interests include computer vision, high-dimensional data analysis, and intelligent systems. He received his Bachelor's degrees in Automation and Applied Mathematics from Tsinghua University in 1995, two Master's degrees in EECS and Mathematics in 1997, and a PhD degree in EECS from UC Berkeley in 2000.
He was on the faculty of UIUC ECE from 2000 to 2011, was the principal researcher and manager of the Visual Computing group at Microsoft Research Asia from 2009 to 2014, and was the Executive Dean of the School of Information Science and Technology at ShanghaiTech University from 2014 to 2017. He joined the faculty of UC Berkeley EECS in 2018. He has published about 60 journal papers, 120 conference papers, and three textbooks in computer vision, generalized principal component analysis, and high-dimensional data analysis. He received the NSF CAREER Award in 2004 and the ONR Young Investigator Award in 2005. He also received the David Marr Prize in computer vision at ICCV 1999 and best paper awards at ECCV 2004 and ACCV 2009. He served as Program Chair for ICCV 2013 and General Chair for ICCV 2015. He is a Fellow of the IEEE, ACM, and SIAM.
Muriel Médard, Massachusetts Institute of Technology
Title: It's not the code, it's the noise.
Abstract: Code design is notoriously intricate, yet theory tells us it need not be. The underlying assumption has always been that codes must be carefully constructed so that they can be decoded in practice; indeed, machine learning has been deployed to aid in such design of encoders and decoders. In this talk, we argue that we should return to theory: codes need not be structured. At the physical layer, when we consider error correction, we show that Guessing Random Additive Noise Decoding (GRAND), joint work with Ken Duffy, is a universal decoder, already realized in hardware with Rabia Yazicigil, that guesses noise effects sequentially until a correct decoding is obtained. In this manner, any knowledge of the noise can be fruitfully applied to aid decoding. We show that almost any code, from a random linear code to a humble CRC, and even systems meant for cryptography rather than error correction, such as AES, performs equally well.
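As a rough illustration of the guessing idea (this sketch is written for this summary and is not the hardware-realized GRAND implementation; the (7,4) Hamming code, the weight-based ordering, and all parameters are illustrative assumptions), the snippet below guesses binary noise patterns from most to least likely on a binary symmetric channel and stops at the first guess whose removal yields a valid codeword.

    # Minimal sketch of the noise-guessing idea: order candidate noise patterns
    # from most to least likely (on a binary symmetric channel, by Hamming
    # weight), subtract each guess from the received word, and stop at the
    # first result that passes the codebook membership test (a syndrome check).
    import itertools
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; any linear code would do.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def grand_decode(y, H, max_weight=3):
        n = H.shape[1]
        for w in range(max_weight + 1):                    # most likely first
            for support in itertools.combinations(range(n), w):
                e = np.zeros(n, dtype=int)
                e[list(support)] = 1
                candidate = y ^ e                          # remove the guessed noise
                if not np.any((H @ candidate) % 2):        # is it a codeword?
                    return candidate, e
        return None, None                                  # abandon guessing

    # Example: flip one bit of the all-zero codeword and recover it.
    y = np.zeros(7, dtype=int)
    y[4] = 1
    codeword, noise = grand_decode(y, H)
    print("decoded:", codeword, "guessed noise:", noise)

Note that the decoder never exploits the code's structure beyond a membership test (here a syndrome check), which is what makes the approach code-agnostic: swapping in a different linear code only changes H.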
Bio: Muriel Médard is the Cecil H. and Ida Green Professor in the Electrical Engineering and Computer Science (EECS) Department at MIT, where she leads the Network Coding and Reliable Communications Group in the Research Laboratory of Electronics. She obtained three Bachelor's degrees (EECS 1989, Mathematics 1989, and Humanities 1991), as well as her M.S. (1991) and Sc.D. (1995), all from MIT. She is a Member of the US National Academy of Engineering (elected 2020), a Fellow of the US National Academy of Inventors (elected 2018) and of the American Academy of Arts and Sciences (elected 2021), and a Fellow of the Institute of Electrical and Electronics Engineers (elected 2008). She holds an Honorary Doctorate from the Technical University of Munich (2020).
She was co-winner of the MIT 2004 Harold E. Egerton Faculty Achievement Award and was named a Gilbreth Lecturer by the US National Academy of Engineering in 2007. She received the 2017 IEEE Communications Society Edwin Howard Armstrong Achievement Award and the 2016 IEEE Vehicular Technology James Evans Avant Garde Award. She received the 2019 Best Paper award for IEEE Transactions on Network Science and Engineering, the 2018 ACM SIGCOMM Test of Time Paper Award, the 2009 IEEE Communication Society and Information Theory Society Joint Paper Award, the 2009 William R. Bennett Prize in the Field of Communications Networking, the 2002 IEEE Leon K. Kirchmayer Prize Paper Award, as well as eight conference paper awards. Most of her prize papers are co-authored with students from her group.
She has served as technical program committee co-chair of ISIT (twice), CoNEXT, WiOpt, WCNC, and many workshops. She has chaired the IEEE Medals Committee and has served as member and chair of many committees, including as inaugural chair of the Millie Dresselhaus Medal committee. She was Editor-in-Chief of the IEEE Journal on Selected Areas in Communications and has served as editor or guest editor of many IEEE publications, including the IEEE Transactions on Information Theory, the IEEE Journal of Lightwave Technology, and the IEEE Transactions on Information Forensics and Security. She was a member of the inaugural steering committees for the IEEE Transactions on Network Science and Engineering and for the IEEE Journal on Selected Areas in Information Theory. She currently serves as Editor-in-Chief of the IEEE Transactions on Information Theory. She was elected President of the IEEE Information Theory Society in 2012 and serves on its Board of Governors, on which she had previously served for eleven years.
Muriel received the inaugural 2013 MIT EECS Graduate Student Association Mentor Award, voted on by the students. She established the Women in the Information Theory Society (WithITS) and the Information Theory Society Mentoring Program, for which she was recognized with the 2017 Aaron Wyner Distinguished Service Award. She served as undergraduate Faculty in Residence for seven years in two MIT dormitories (2002–2007). She was elected by the faculty and served as member and later chair of the MIT Faculty Committee on Student Life, and as inaugural chair of the MIT Faculty Committee on Campus Planning. She was chair of the Institute Committee on Student Life. She was recognized as a Siemens Outstanding Mentor (2004) for her work with high school students. She has served since 2015 on the Board of Trustees of the International School of Boston, for which she is treasurer.
She holds over fifty awarded US and international patents, the vast majority of which have been licensed or acquired. For technology transfer, she has co-founded CodeOn, for which she consults, and Steinwurf, for which she is Chief Scientist.
Wei Yu, University of Toronto
Title: Learn to Optimize for Wireless Communications
Abstract: Machine learning will have an important role to play in the optimization of future-generation physical-layer wireless communication systems, for the following two reasons. First, traditional wireless communication design relies on a channel model, but models are inherently only an approximation of reality. In wireless environments where the models are complex and the channels are costly to estimate, a machine learning based approach that performs system-level optimization without explicit channel estimation can significantly outperform traditional channel-estimation-based approaches. Second, modern wireless communication design often involves optimization problems that are high-dimensional, nonconvex, and difficult to solve efficiently. By exploiting the availability of training data, a neural network may be able to learn the solution of an optimization problem directly, leading to a more efficient way to solve nonconvex optimization problems. In this talk, I will use examples from optimizing a reconfigurable intelligent surface (RIS) system, precoding for a massive multiple-input multiple-output (MIMO) system, and active sensing for mmWave channel initial alignment to illustrate the benefit of learning-based physical-layer communication system design. I will also show that matching the neural network architecture to the problem structure is crucial for the success of learning-based approaches.
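As a minimal, hedged illustration of the learn-to-optimize idea (this sketch is not from the talk; the parallel-channel power-allocation problem, the network size, and all constants are assumptions made for illustration), the snippet below trains a small neural network to map channel gains directly to a power allocation, using the negative sum rate as the training loss so that no labeled optimal solutions are needed.

    # Illustrative "learn to optimize" sketch: the network learns the mapping
    # from channel gains to a power allocation by maximizing the system utility
    # itself, rather than by imitating solutions from a conventional solver.
    import torch
    import torch.nn as nn

    K, P_total, noise = 8, 10.0, 1.0
    net = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, K))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        g = torch.rand(256, K) * 2.0                 # random channel gains as training data
        p = torch.softmax(net(g), dim=1) * P_total   # allocation meeting the power budget
        rate = torch.log2(1.0 + p * g / noise).sum(dim=1).mean()
        loss = -rate                                 # maximize sum rate = minimize its negative
        opt.zero_grad()
        loss.backward()
        opt.step()

    # One forward pass now maps a new channel realization to an allocation.
    with torch.no_grad():
        print(torch.softmax(net(torch.rand(1, K) * 2.0), dim=1) * P_total)

Once trained, a single forward pass replaces an iterative solver for each new channel realization, which is the efficiency argument the abstract makes; matching the output parameterization to the problem structure (here, a softmax that enforces the total-power constraint) is part of what makes such approaches work.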
Bio: Wei Yu received the B.A.Sc. degree in Computer Engineering and Mathematics from the University of Waterloo, and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University. He has been with the Electrical and Computer Engineering Department at the University of Toronto since 2002, where he is now Professor and holds a Canada Research Chair (Tier 1) in Information Theory and Wireless Communications.
Prof. Wei Yu is a Fellow of the IEEE, a Fellow of the Canadian Academy of Engineering, and a member of the College of New Scholars, Artists and Scientists of the Royal Society of Canada. He received the Steacie Memorial Fellowship in 2015, the IEEE Marconi Prize Paper Award in Wireless Communications in 2019, the IEEE Communications Society Award for Advances in Communication in 2019, the IEEE Signal Processing Society Best Paper Award in 2008, 2017, and 2021, the Journal of Communications and Networks Best Paper Award in 2017, and the IEEE Communications Society Best Tutorial Paper Award in 2015. He was an IEEE Communications Society Distinguished Lecturer in 2015-16 and served as Chair of the Signal Processing for Communications and Networking Technical Committee of the IEEE Signal Processing Society in 2017-18. He was President of the IEEE Information Theory Society in 2021.