
Item Details


Released

Report

Manipulating Attributes of Natural Scenes via Hallucination

MPS-Authors
/persons/resource/persons127761

Akata,  Zeynep
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:1808.07413.pdf
(Preprint), 9MB

Supplementary Material (public)

There is no public supplementary material available
Citation

Karacan, L., Akata, Z., Erdem, A., & Erdem, E. (2018). Manipulating Attributes of Natural Scenes via Hallucination. Retrieved from http://arxiv.org/abs/1808.07413.


Cite as: https://hdl.handle.net/21.11116/0000-0002-1823-C
Abstract
In this study, we explore building a two-stage framework that enables users to directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network that can hallucinate images of a scene as if they were taken in a different season (e.g., during winter), weather condition (e.g., on a cloudy day), or time of day (e.g., at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is transferred to the input image while keeping the semantic details intact, yielding a photo-realistic manipulation result. Because the proposed framework hallucinates what the scene will look like, it does not require a reference style image, as is commonly used in most appearance or style transfer approaches. Moreover, it allows a given scene to be manipulated simultaneously according to a diverse set of transient attributes within a single model, eliminating the need to train a separate network for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against competing methods.
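The two-stage pipeline the abstract describes can be sketched in miniature: a generator hallucinates a scene conditioned on transient attributes, and a transfer step moves that look onto the input image. The toy functions below are assumptions for illustration only, not the paper's architecture; in particular, the transfer step here is simple per-channel mean/std matching (AdaIN-style), a common stand-in for look transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

def hallucinate(attributes, shape=(8, 8, 3)):
    # Stage 1 stand-in for the deep generative network: produce a
    # synthetic "hallucinated" scene whose statistics depend on the
    # attribute vector. (Hypothetical toy generator.)
    base = rng.standard_normal(shape)
    return base * (1.0 + attributes.mean()) + attributes.sum()

def transfer_look(content, style):
    # Stage 2 stand-in for the look-transfer step: match per-channel
    # mean/std of the input image to those of the hallucinated image
    # (AdaIN-style statistic matching, assumed here for illustration).
    c_mean, c_std = content.mean(axis=(0, 1)), content.std(axis=(0, 1))
    s_mean, s_std = style.mean(axis=(0, 1)), style.std(axis=(0, 1))
    return (content - c_mean) / (c_std + 1e-8) * s_std + s_mean

# Usage: push an input scene toward a hypothetical attribute vector
# (e.g., "winter" and "sunset" strengths), no reference image needed.
input_image = rng.standard_normal((8, 8, 3))
attributes = np.array([0.9, 0.7])
hallucinated = hallucinate(attributes)
result = transfer_look(input_image, hallucinated)
```

Because the hallucinated image is generated rather than retrieved, varying only `attributes` manipulates the same input within a single model, mirroring the single-network claim in the abstract.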