One of the highlights of the Summit will be the keynote address by Yong Jae Lee, associate professor at the University of Wisconsin-Madison.
Lee will present groundbreaking research on creating intelligent systems that can learn to understand our multimodal world with minimal human supervision.
He will focus on systems that comprehend both images and text, while also touching on those that use video, audio, and LiDAR.
Attendees will gain insight into how these emerging techniques can ease the data-labeling bottleneck in neural network training, enable new forms of multimodal machine perception, and open up a wide range of new applications.