João Carreira (University of California, Berkeley)
João Carreira is a Postdoctoral Scholar in Jitendra Malik’s group at the EECS department of the University of California, Berkeley, working on object reconstruction from a single image. He received his PhD from the University of Bonn, Germany, supervised by Cristian Sminchisescu. His thesis focused on sampling class-independent object segmentation proposals using the CPMC algorithm, and on applying them to object recognition and localization. Systems authored by him and colleagues won all four PASCAL VOC Segmentation challenges, 2009-2012. He is one of the main authors of several popular computer vision software packages, for object segmentation (CPMC), feature extraction with second-order pooling (O2P), and 3D reconstruction of object detection datasets (CARVI). His research interests lie at the intersection of recognition, segmentation, pose estimation and shape reconstruction of objects from a single image.
Talk: 3D Shape and Articulated Pose Recovery with Top-down Processing
Abstract: Feedforward feature extractors learned on large image datasets have brought much recent progress in image classification and object localization. However, for a biological agent or a robot to navigate and interact with the world, the ability to predict class labels and bounding boxes is not enough: it must also be capable of perceiving rough 3D surfaces, object poses and articulation. In this talk I will argue for the crucial role of memory and top-down feedback in solving these advanced tasks. I will present results using class-specific learned 3D shape models and self-correcting articulated human pose predictors that operate with unprecedented accuracy and can be deployed "in the wild".
David P. Luebke (NVIDIA)
David Luebke helped found NVIDIA Research in 2006 after eight years on the faculty of the University of Virginia. Luebke received his Ph.D. under Fred Brooks at the University of North Carolina in 1998. His principal research interests are real-time computer graphics and GPU computing. Luebke's honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award". Dr. Luebke has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, and dozens of papers, articles, chapters, and patents.
Talk: Computational Displays for Virtual & Augmented Reality
Abstract: Wearable displays — such as those used in the Oculus Rift, Google Glass, or more exotic systems like Tony Stark's fictional helmet in Iron Man 3 — all face a few fundamental challenges. One challenge is focus: how can we put a usable display as close to the eye as a pair of eyeglasses, where the human eye cannot bring it into focus? Another is field of view: how to expand beyond the "rear-view mirror" approach of Google Glass and fill the user's vision with displayed content. Still another challenge is resolution: how to fill that wide field of view with enough pixels, when human visual acuity (the limit of a so-called "retinal" display) would require displaying about 10,000x8,000 pixels per eye. A final challenge is bulk: displays should be unobtrusive and unencumbering, as light and forgettable as a pair of sunglasses, but the laws of optics mean that most VR displays are bulky boxes bigger than ski goggles. At NVIDIA we have been tackling these challenges using computational displays, which combine novel optics with computation that pre-processes the displayed content so that the optics can "demangle" it into the intended image. I will describe our recent work in the field: Near-eye light field displays replace the traditional lens of a virtual reality display like the Oculus Rift with a “bug’s eye” array of microlenses that make the display thin, light, able to display content at different focus depths, and able to accommodate user eyeglass prescriptions entirely in software. Pinlight displays use novel and very simple optics (containing no reflective, refractive, or diffractive elements) to provide the first see-through display that is both thin and wide field-of-view. Cascaded displays jointly optimize an image across two stacked, offset displays to effectively double the frame rate and quadruple the resolution possible with a single display.
All of these techniques exploit the computational horsepower of modern GPUs to enable unconventional optics to do something never done before in displays. I will close by highlighting some remaining challenges such as latency and power.
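The ~10,000x8,000 figure quoted above can be sanity-checked with a back-of-the-envelope calculation. The assumed numbers below are mine, not the talk's: peak human acuity of roughly one arcminute (about 60 pixels per degree) over an approximately 180 by 135 degree field of view.

```python
# Back-of-the-envelope check of the "retinal display" pixel count.
# Assumed values (illustrative, not from the talk): ~1 arcminute acuity,
# i.e. ~60 pixels per degree, over a ~180 x 135 degree field of view.
PIXELS_PER_DEGREE = 60           # one pixel per arcminute
H_FOV_DEG, V_FOV_DEG = 180, 135  # approximate full field of view

h_pixels = PIXELS_PER_DEGREE * H_FOV_DEG  # horizontal resolution per eye
v_pixels = PIXELS_PER_DEGREE * V_FOV_DEG  # vertical resolution per eye
print(f"{h_pixels} x {v_pixels} pixels per eye, "
      f"about {h_pixels * v_pixels / 1e6:.0f} megapixels")
```

This lands at 10800 x 8100, consistent with the rounded 10,000x8,000 in the abstract.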
Xin-She Yang (Middlesex University)
Xin-She Yang obtained his DPhil in Applied Mathematics from the University of Oxford. He then worked at Cambridge University and National Physical Laboratory (UK) as a Senior Research Scientist. Now he is Research Professor/Reader at Middlesex University London, Adjunct Professor at Reykjavik University (Iceland) and Guest Professor at Xi'an Polytechnic University (China). He is the IEEE CIS Chair for the Task Force on Business Intelligence and Knowledge Management, Director of International Consortium for Optimization and Modelling in Science and Industry (iCOMSI), and the Editor-in-Chief of International Journal of Mathematical Modelling and Numerical Optimisation (IJMMNO).
Talk: Nature-Inspired Algorithms and Computational Intelligence
Abstract: Nature-inspired optimization algorithms have become effective tools for design optimization and computational intelligence. Such swarm-intelligence-based algorithms have attracted much attention in the recent literature. This talk will review some recent developments, analyze the characteristics of these algorithms and highlight some key challenges. Some case studies in applications will also be discussed.
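As a concrete instance of the swarm-intelligence algorithms the talk surveys, here is a minimal particle swarm optimization sketch. This is a generic textbook formulation, not code from the speaker; the objective function, bounds and coefficients are illustrative choices.

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over a box via a basic particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + attraction to personal best + attraction to global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the 3D sphere function, whose global minimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

The three velocity terms (inertia, cognitive, social) are the signature of the swarm-intelligence family: each particle balances its own memory against the collective's.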
Diego Gutierrez (Universidad de Zaragoza)
Diego Gutierrez is an Associate Professor at the Universidad de Zaragoza, in Spain, where he is the founder and director of the Graphics and Imaging Lab. His research interests focus on global illumination, computational imaging and applied perception. He has worked as a visiting researcher at many institutions worldwide, such as UC San Diego, Yale and MIT (USA), or Tsinghua (China). His work has been published in top journals (ACM Transactions on Graphics, IEEE Transactions on Visualization and Computer Graphics, Computer Graphics Forum...) and conferences (SIGGRAPH, SIGGRAPH Asia, Eurographics...). He has served on many Program Committees, including the leading conferences in all his areas of interest. He has chaired many top events as well (such as the Eurographics Rendering Symposium or ACM Applied Perception in Graphics and Visualization). He is the current co-Editor-in-Chief of ACM Transactions on Applied Perception, and is also an Associate Editor of three other journals (ACM Transactions on Graphics, Presence, and Computers & Graphics).
Talk: Seeing the invisible
Abstract: The establishment of digital photography has meant a big revolution in the field; however, the process of capturing an image remains essentially the same as 150 years ago: light goes through an optical system and converges on a sensor, where the image is formed. Computational photography is a novel research field whose goal is to overcome the limitations of conventional cameras, introducing computation where hardware, electronics and even physics fall short. The question now is: what are the limits of what can be captured with a camera? In this talk we'll show application examples that recover information from badly blurred images, or detect light in motion by capturing information at a trillion frames per second. Based on the latter, we will also introduce recent advances in transient light transport simulation, which can help design novel imaging devices by means of an analysis-by-synthesis approach.
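Recovering information from blurred images, as mentioned above, is classically approached by deconvolution when the blur kernel is known. The sketch below is a generic textbook technique, not the speaker's method: a 1D signal is circularly blurred by a known kernel, then recovered by frequency-domain division with a small Wiener-style regularization term, using a hand-rolled DFT so the example stays self-contained.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for tiny demos)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

def circular_blur(signal, kernel):
    """Circular convolution of a signal with a short blur kernel."""
    n = len(signal)
    return [sum(signal[(i - k) % n] * kernel[k] for k in range(len(kernel)))
            for i in range(n)]

# A sharp test signal with two spikes, and an asymmetric blur kernel.
n = 16
sharp = [0.0] * n
sharp[4], sharp[10] = 1.0, 0.5
kernel = [0.5, 0.3, 0.2]

blurred = circular_blur(sharp, kernel)

# Deconvolve: divide spectra, damping near-zero frequencies (Wiener-style).
kpad = kernel + [0.0] * (n - len(kernel))
B, K = dft(blurred), dft(kpad)
eps = 1e-9  # regularization guarding against division by tiny |K|
recovered = idft([b * k.conjugate() / (abs(k) ** 2 + eps) for b, k in zip(B, K)])
```

Real deblurring is far harder (unknown, spatially varying kernels; noise; lost frequencies), which is precisely where the computational photography research described in the talk comes in.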