Connecting Modalities: Semi-supervised Segmentation and Annotation of Images Using Unaligned Text Corpora
- Download: paper
We propose a semi-supervised model which segments and annotates images using very few labeled images and a large unaligned text corpus to relate image regions to text labels. Given photos of a sports event, all that is needed to produce a pixel-level labeling of objects and background is a set of newspaper articles about this sport and one to five labeled images. Our model is motivated by the observation that words in text corpora share certain context and feature similarities with visual objects. We describe images using visual words, a new region-based representation. The proposed model is based on kernelized canonical correlation analysis, which finds a mapping between visual and textual words by projecting them into a latent meaning space. Kernels are derived from context and adjective features inside the respective visual and textual domains. We apply our method to a challenging dataset, using articles from the New York Times for textual features. Our model outperforms the state-of-the-art in annotation. In segmentation it compares favorably with methods that use significantly more labeled training data.
- Code + Data (preprocessed): connectingModalities+data.zip (25mb)
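To illustrate the core machinery behind the model, here is a minimal sketch of regularized kernel canonical correlation analysis in NumPy. This is not the paper's implementation: the RBF kernel, the function names, and the regularization constant are illustrative stand-ins for the paper's context- and adjective-based kernels.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Illustrative RBF kernel; the paper instead builds kernels from
    # context and adjective features in each domain.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    # Double-centering: removes the feature-space mean.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(Kx, Ky, reg=1e-3):
    """Regularized kernel CCA between two views given their (centered)
    kernel matrices. Returns canonical correlations and the dual
    coefficients alpha (X view) and beta (Y view, up to scaling)."""
    n = Kx.shape[0]
    I = np.eye(n)
    # Eigenproblem: (Kx + reg I)^-1 Ky (Ky + reg I)^-1 Kx alpha = rho^2 alpha
    A = np.linalg.solve(Kx + reg * I, Ky)
    B = np.linalg.solve(Ky + reg * I, Kx)
    vals, alphas = np.linalg.eig(A @ B)
    order = np.argsort(-vals.real)
    rhos = np.sqrt(np.clip(vals.real[order], 0.0, None))
    alphas = alphas.real[:, order]
    betas = B @ alphas  # Y-view dual coefficients, up to scaling
    return rhos, alphas, betas
```

Projections into the shared latent meaning space are then `Kx @ alphas` and `Ky @ betas`; visual and textual words can be related by comparing their coordinates there.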
@inproceedings{socher2010connecting,
  author    = "Richard Socher and Li Fei-Fei",
  title     = "Connecting Modalities: Semi-supervised Segmentation and Annotation of Images Using Unaligned Text Corpora",
  booktitle = "CVPR",
  year      = "2010",
  month     = "June",
  address   = "San Francisco, CA"
}
Remarks, critical comments, and other thoughts on the paper are welcome.