DeepStyle: Multimodal Search Engine for Fashion and Interior Design

Ivona Tautkute , Tomasz Trzciński , Aleksander Skorupa , Lukasz Brocki , Krzysztof Marasek

Abstract

In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items that are aesthetically similar to the query from a multimedia database. The goal of our engine is to enable intuitive retrieval of merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image and do not correspond to the real-life scenario where the user looks for "the same shirt but of denim". Our novel method, dubbed DeepStyle, mitigates those shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We prove the robustness of this approach on two different challenging datasets of fashion items and furniture, where our DeepStyle engine outperforms baseline methods by more than 20% on the tested datasets. Our search engine is commercially deployed and available through a Web-based application.
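The retrieval scheme described in the abstract can be illustrated with a minimal sketch: embed the visual and textual cues, fuse them into a joint representation, and rank database items by cosine similarity. This is an assumption-laden toy example, not the paper's actual architecture; the fusion here is plain concatenation with random placeholder embeddings, whereas DeepStyle learns the joint space with a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Scale vectors to unit length so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fuse(visual, textual):
    # Hypothetical fusion: concatenate the two modality embeddings into one
    # joint vector (DeepStyle instead learns this mapping with a network).
    return l2_normalize(np.concatenate([visual, textual], axis=-1))

# Toy database: 100 items, each with a 128-d visual and 32-d textual embedding.
db_visual = rng.standard_normal((100, 128))
db_textual = rng.standard_normal((100, 32))
database = fuse(db_visual, db_textual)

# Query: an image embedding plus a text modifier
# (e.g. "the same shirt but of denim").
query = fuse(rng.standard_normal(128), rng.standard_normal(32))

# Retrieve the 5 items most similar to the query by cosine similarity.
scores = database @ query
top5 = np.argsort(scores)[::-1][:5]
print(top5)
```

In a real system the placeholder embeddings would come from pretrained image and text encoders, and the fused vectors would be indexed with an approximate nearest-neighbour structure rather than scored by a dense matrix product.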
Authors: Ivona Tautkute; Tomasz Trzciński (FEIT / IN), The Institute of Computer Science; Aleksander Skorupa; Lukasz Brocki; Krzysztof Marasek
Journal series: IEEE Access, e-ISSN 2169-3536
Issue year: 2019
Pages: 1-1
Publication size in sheets: 0.3
Keywords in English: Multimedia computing, Multi-layer neural network, Multimodal Search, Machine Learning
DOI: 10.1109/ACCESS.2019.2923552
URL: https://ieeexplore.ieee.org/document/8737943
Language: English (en)
Score (nominal): 5
Score: Ministerial score = 5.0, 30-06-2019, ArticleFromJournal
Citation count*: 6 (2019-07-11)
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.