Supervisor: Dr Yi-Zhe Song
Sketching comes naturally to humans. With the proliferation of touchscreens, we can now sketch effortlessly and ubiquitously by sweeping a finger across phones, tablets and smart watches. Studying free-hand sketches has thus become increasingly popular in recent years, with a wide spectrum of work addressing sketch recognition, sketch-based image retrieval, and sketching style and abstraction. While computers are approaching human-level performance at recognizing free-hand sketches, their capability to synthesize sketches, especially free-hand sketches, has not been fully explored. The main existing works on sketch synthesis are engineered specifically and exclusively for a single category: human faces. In this project, going beyond a single object category, we aim to introduce a generative, data-driven model for free-hand sketch synthesis across diverse object categories. In contrast with prior art, (i) the model should capture structural and appearance variations without handcrafted structural priors, (ii) it should require no purpose-built datasets to learn from, instead utilizing publicly available datasets of free-hand sketches, and (iii) it should optimally fit free-hand strokes to an image via a detection process, thus capturing the specific structural and appearance variations of that image and performing synthesis in free-hand sketch style.