Day 2 of the Scientific Program was filled to the brim with laureate lectures. The morning session was a trifecta that began with ACM Turing Award recipient John Hopcroft, followed by Alexei Efros, ACM Prize in Computing, and Leslie Lamport, ACM Turing Award. After a brief interlude, the lectures resumed with Joseph Sifakis, ACM Turing Award, and then Jeff Dean, ACM Prize in Computing. Two afternoon lectures sealed the deal: Richard Stearns, ACM Turing Award, and Madhu Sudan, Nevanlinna Prize. Everything came to a close with the PhD Poster Flash, a rapid-fire breakdown of each presenter’s thesis.
All of the lectures are available to stream on the HLF video archive.
John E. Hopcroft
“Deep Learning Research”
This talk will cover the basics of machine learning and then discuss interesting directions in deep learning. Deep learning has become an important aspect of machine learning since it has been applied very successfully to many applied problems. The focus of the talk will be on directions related to understanding why deep learning works so well rather than on applications.
Alexei A. Efros
“Self-Supervised Visual Learning and Synthesis”
Computer vision has made impressive gains through the use of deep learning models, trained with large-scale labeled data. However, labels require expertise and curation and are expensive to collect. Can one discover useful visual representations without the use of explicitly curated labels? In this talk, I will present several case studies exploring the paradigm of self-supervised learning – using raw data as its own supervision. Several ways of defining objective functions in high-dimensional spaces will be discussed, including the use of Generative Adversarial Networks (GANs) to learn the objective function directly from the data. Applications in image synthesis will be shown, including automatic colorization, paired and unpaired image-to-image translation (aka pix2pix and cycleGAN), and, terrifyingly, #edges2cats.
Leslie Lamport
“How to Write a 21st Century Proof”
Mathematicians have made a lot of progress in the last 350 years, but not in writing proofs. The proofs they write today are just like the ones written by Newton. This makes it all too easy to prove things that aren’t true. I’ll describe a better way that I’ve been using for more than 25 years.
Joseph Sifakis
“How Hard Is System Design?”
The ICT revolution is dominated by the IoT vision, which promises increasingly interconnected smart objects providing autonomous services for the optimal management of resources and enhanced quality of life. These include smart grids, smart transport systems, smart health care services, automated banking services, smart factories, etc. Their coordination will be achieved using a unified network infrastructure, in particular to collect data and send them to the cloud, which in turn will provide, using data analytics, intelligent services to ensure global trustworthiness and performance. (Abbreviated abstract)
Jeffrey A. Dean
“Deep Learning and the Grand Engineering Challenges”
Over the past several years, Deep Learning has caused a significant revolution in the scope of what is possible with computing systems. These advances are having significant impact across many fields of computer science, as well as other fields of science, engineering, and human endeavor. For the past five years, the Google Brain team (g.co/brain) has conducted research on deep learning, on building large-scale computer systems for machine learning research, and, in collaboration with many teams at Google, on applying our research and systems to dozens of Google products. In this talk, I’ll describe some of the recent advances in machine learning and how they are applicable to many of the U.S. National Academy of Engineering’s Global Challenges for the 21st Century (http://engineeringchallenges.org/). I will also touch on some exciting areas of research that we are currently pursuing within our group. This talk describes joint work with many people at Google.
Richard E. Stearns
“Curious Facts About Nested Canalyzing Functions”
If f is a binary valued function of binary variables, one of its variables v is called a “canalyzing variable” if the function can be described as follows: IF v=a THEN f=b ELSE f is a function of the remaining variables. If the “function of the remaining variables” itself has a canalyzing variable and so forth, the function is called a “nested canalyzing function” or “NCF”. Because of the nesting, it is often the case that the solution to a computational problem for (n+1)-parameter NCFs is easy to obtain from the solution for n-parameter NCFs and by induction easy to solve for all NCFs. Because of this computational simplicity, it is easy to study NCFs experimentally by working out examples and looking for patterns. Analysis of the associated algorithms can then provide algebra for proving any resulting conjectures. In this talk, we discuss two such discoveries. One is the appearance of Fibonacci numbers when representing an NCF by a threshold gate. The other is a characterization of the NCFs with the worst average sensitivity and the asymmetry between odd numbered variables and even numbered variables.
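The IF/THEN/ELSE nesting in the definition above translates directly into code. Here is a minimal sketch of evaluating an NCF, where each layer is a rule (variable index, canalyzing value a, canalyzed output b); the list-of-rules representation and function name are my own illustration, not from the talk:

```python
def eval_ncf(layers, default, x):
    """Evaluate a nested canalyzing function on input vector x.

    layers  -- list of (var_index, a, b): IF x[var_index] == a THEN return b
    default -- output when no canalyzing rule fires (innermost ELSE)
    """
    for var, a, b in layers:
        if x[var] == a:  # canalyzing variable takes its canalyzing value
            return b
        # otherwise fall through to the function of the remaining variables
    return default

# Example: f = IF x0=1 THEN 1 ELSE (IF x1=0 THEN 0 ELSE 1)
layers = [(0, 1, 1), (1, 0, 0)]
print(eval_ncf(layers, 1, [0, 0]))  # 0
print(eval_ncf(layers, 1, [1, 0]))  # 1
print(eval_ncf(layers, 1, [0, 1]))  # 1
```

The inductive structure the abstract mentions is visible here: dropping the first rule leaves an NCF on one fewer variable, which is why results for n-parameter NCFs often extend to (n+1)-parameter NCFs.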
Madhu Sudan
“Mathematical Theories of Communication: Old and New”
Reliable and efficient digital communication today is possible largely due to some wonderful successes in mathematical modelling and analysis. A legendary figure in this space is Claude Shannon (1916-2001), who laid out the mathematical foundations of communication in his seminal 1948 treatise, where among other contributions he gave a mathematical definition of “entropy” and coined the now ubiquitous term “bit” (for binary digit). But Shannon is not the last word in communication. Communication extends to settings well beyond the carefully designed full information exchange model explored in Shannon’s work. In this talk, I will try to describe some of the many extensions that have been explored in the interim period, including communication complexity (Yao 1980), which explores how it might be possible to achieve effective communication without a full exchange; interactive communication (Schulman 1992), which explores how to cope with errors in an interactive setting; and some of our own work on uncertain communication, which explores how shared context can make communication more effective, even if the context is shared only loosely.
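For readers who have not met Shannon's quantity, the entropy of a discrete distribution measures the average number of bits needed per symbol. A minimal illustration (the function name is mine, not from the talk):

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))              # 1.0 — a fair coin flip carries one bit
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 — four equally likely outcomes
```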
After the lectures came the Poster Flash from the PhD students, where each participant had 2 minutes to explain their poster. Everyone was invited to ask more in-depth questions as the presenters stood in front of their work in the display area. Tuesday evening’s HLF Oktoberfest took place at Turf² in the neighboring city of Mannheim. Glasses were raised, the richness of Bavarian culture was on display, and the social atmosphere turned the volume up a notch.
Photos from the 5th HLF are available to download on the HLF Flickr gallery.