Speakers

Generating Content-aware Perspective Videos for Comfortable 360° Video Watching

Kuk-Jin Yoon
Gwangju Institute of Science and Technology
Dec. 1, 11:15–11:45

Abstract

To watch 360° videos on normal two-dimensional (2D) displays, we not only need to select the specific region we want to watch (the viewpoint selection step) but also need to project the selected part of the 360° image onto the 2D display plane. In this work, we propose a fully automated online framework for generating content-aware, comfortable perspective videos from 360° videos. During viewpoint selection, we estimate spatio-temporal visual saliency based on appearance and motion cues and choose an initial viewpoint that maximizes the saliency of the perspective video and captures semantically meaningful content. The viewpoint is then refined to yield a smooth path of viewpoints in spherical coordinates. Once the viewpoint is determined, the perspective image is generated by our content-aware projection method, which exploits the salient content of the video (e.g., linear structures and objects) obtained during the viewpoint selection process. To generate a comfortable perspective video, we enforce temporal consistency in both the viewpoint selection and content-aware projection steps. Our method requires no user interaction and is much faster than previous content-preserving methods. Quantitative and qualitative experiments on various 360° videos show the superiority of our perspective video generation framework.
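The temporal-consistency idea behind the viewpoint refinement step can be illustrated with a minimal sketch. The snippet below is not the authors' method (which optimizes the viewpoint path in spherical coordinates); it is a simple stand-in, assuming viewpoints are given as unit direction vectors, that smooths the path with a moving average and re-projects onto the sphere.

```python
import numpy as np

def smooth_viewpoint_path(viewpoints, window=5):
    """Illustrative smoothing of a sequence of viewing directions.

    viewpoints: (T, 3) array of unit vectors on the sphere, one per frame.
    Returns a (T, 3) array of smoothed unit vectors. This is only a
    moving-average stand-in for the paper's viewpoint-path refinement.
    """
    v = np.asarray(viewpoints, dtype=float)  # shape (T, 3)
    kernel = np.ones(window) / window
    # Pad with edge values so the smoothed path keeps length T.
    pad = (window // 2, window - 1 - window // 2)
    padded = np.pad(v, (pad, (0, 0)), mode="edge")
    # Average each coordinate over the temporal window.
    smoothed = np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
    # Project the averaged vectors back onto the unit sphere.
    return smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)
```

Averaging neighboring directions suppresses frame-to-frame jitter in the selected viewpoint, which is one source of the discomfort the framework aims to avoid.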

 

Biography

Kuk-Jin Yoon received the B.S., M.S., and Ph.D. degrees in Electrical Engineering and Computer Science from the Korea Advanced Institute of Science and Technology (KAIST) in 1998, 2000, and 2006, respectively. He was a post-doctoral fellow in the PERCEPTION team at INRIA Grenoble, France, from 2006 to 2008, and joined the School of Electrical Engineering and Computer Science at the Gwangju Institute of Science and Technology (GIST), Korea, as an assistant professor in 2008. He is currently an associate professor and the director of the Computer Vision Laboratory at GIST. His research interests include stereo vision, visual object tracking, SLAM, structure from motion, 3D reconstruction, and vision-based ADAS.

 

Education

  • 2006, KAIST (Ph.D. – Electrical Engineering and Computer Science)
  • 2000, KAIST (M.S. – Electrical Engineering and Computer Science)
  • 1998, KAIST (B.S. – Electrical Engineering and Computer Science)

 

Professional Career

  • 2014–Present, Gwangju Institute of Science and Technology – Associate Professor
  • 2008–2014, Gwangju Institute of Science and Technology – Assistant Professor
  • 2006–2008, INRIA Rhône-Alpes, France – Post-Doctoral Fellow
  • 2006, KAIST – Post-Doctoral Researcher

 

Awards and Honors

  • 2017, Silver Prize: Samsung HumanTech Paper Award
  • 2017, Best Poster Presentation Award: FCV
  • 2016, Outstanding Reviewer: ECCV
  • 2016, Bronze Prize: Samsung HumanTech Paper Award
  • 2015, Silver Prize: Samsung HumanTech Paper Award
  • 2015, Participation Prize: Samsung HumanTech Paper Award
  • 1st Place: The 1st Multi-Object Tracking Challenge (MOT Competition sponsored by Daimler)
  • 2014, Silver Prize: Samsung HumanTech Paper Award
  • 2014, Bronze Prize: Samsung HumanTech Paper Award
  • 2012, Silver Prize: Samsung HumanTech Paper Award
  • 2006, Grants to Post-Doctoral Fellows by INRIA
  • 2006, Government Grant to Post-Doctoral Fellows by the Korea Research Foundation
  • 2006, Silver Prize: Samsung HumanTech Paper Award
  • 2005, Top 10% among Accepted Papers: ICIP
  • 2005, Bronze Prize: Samsung HumanTech Paper Award
  • 2003, Research Prize: The Fifth Korean Intelligent Robot Contest
  • 2001, 3rd Place, Best Poster Award: Photonics Boston