On the Confluence of Deep Learning and Proximal Methods

  • Date: Jun 1, 2018
  • Time: 14:00 - 15:00
  • Speaker: Daniel Cremers (TU München)
  • Bio: Daniel Cremers received Bachelor degrees in Mathematics (1994) and Physics (1994), and a Master's degree (Diplom) in Theoretical Physics (1997) from the University of Heidelberg. In 2002 he obtained a PhD in Computer Science from the University of Mannheim, Germany. Subsequently he spent two years as a postdoctoral researcher at the University of California at Los Angeles (UCLA) and one year as a permanent researcher at Siemens Corporate Research in Princeton, NJ. From 2005 until 2009 he was an associate professor at the University of Bonn, Germany. Since 2009 he has held the Chair for Computer Vision and Pattern Recognition at the Technical University of Munich. His publications have received numerous awards. For his pioneering research he received a Starting Grant (2009), a Proof of Concept Grant (2014) and a Consolidator Grant (2015) from the European Research Council. In December 2010 he was listed among "Germany's top 40 researchers below 40" (Capital). Prof. Cremers received the Gottfried Wilhelm Leibniz Award 2016, the most important research award in German academia.
  • Location: MPA
  • Room: Old Lecture Hall, 401
  • Host: MPA
Abstract: While numerous low-level computer vision problems such as denoising, deconvolution, or optical flow estimation were traditionally tackled with optimization approaches such as proximal methods, deep learning approaches trained on numerous examples have recently demonstrated impressive, and sometimes superior, performance on these tasks. In my presentation, I will discuss recent efforts to bring together these seemingly very different paradigms, showing how deep learning can profit from proximal methods and how proximal methods can profit from deep learning. This confluence makes it possible to boost deep learning approaches both in terms of drastically faster training times and in terms of substantially better generalization to novel problems that differ from the ones they were trained on (generalization / domain adaptation).
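The talk's slides are not reproduced here, but as a minimal illustration of the proximal methods the abstract refers to: the proximal operator of the scaled ℓ1 norm is the classic soft-thresholding map, prox_{λ‖·‖₁}(v) = sign(v)·max(|v|−λ, 0), which is the basic building block of proximal splitting schemes for sparse denoising. The sketch below (names and the example values are illustrative, not from the talk) shows it in NumPy:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding).

    Shrinks each entry of v toward zero by lam; entries with
    magnitude below lam are set exactly to zero, which is what
    produces sparsity in proximal denoising schemes.
    """
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Illustrative example: small entries vanish, large ones shrink by lam.
noisy = np.array([0.3, -1.5, 0.05, 2.0])
print(soft_threshold(noisy, 0.5))  # [ 0.  -1.   0.   1.5]
```

In iterative schemes such as proximal gradient descent, this operator is applied after each gradient step on the data-fidelity term; the learned variants discussed in the talk replace or augment such hand-crafted operators with trained networks.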
