Parsing Natural Scenes and Natural Language with Recursive Neural Networks
Recursive structure is commonly found in inputs from different modalities, such as natural scene images or natural language sentences. Discovering this recursive structure helps us not only to identify the units that an image or sentence contains but also to understand how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images and in sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1%). The features from the image parse tree outperform Gist descriptors for scene classification by 4%.
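To make the core idea concrete, here is a minimal NumPy sketch of the recursive composition and greedy parsing step described above, for the sequential (sentence) case. All dimensions, initializations, and helper names are illustrative, not the paper's actual implementation: in the paper the parameters are trained with a max-margin objective, and for images "adjacent" means superpixels sharing a boundary rather than neighbors in a sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each leaf (word or image segment) is an
# n-dimensional feature vector.
n = 8
W = rng.standard_normal((n, 2 * n)) * 0.1  # composition matrix (trained in the paper)
b = np.zeros(n)                            # composition bias
w_score = rng.standard_normal(n) * 0.1     # scoring vector (trained in the paper)

def merge(c1, c2):
    """Compose two child vectors into a parent vector and score the merge."""
    parent = np.tanh(W @ np.concatenate([c1, c2]) + b)
    return parent, float(w_score @ parent)

def greedy_parse(leaves):
    """Greedily merge the highest-scoring pair of adjacent nodes
    until a single root covers the whole input."""
    nodes = [(vec, (i,)) for i, vec in enumerate(leaves)]
    while len(nodes) > 1:
        best = None
        for i in range(len(nodes) - 1):
            parent, score = merge(nodes[i][0], nodes[i + 1][0])
            if best is None or score > best[0]:
                best = (score, i, parent)
        _, i, parent = best
        span = nodes[i][1] + nodes[i + 1][1]  # record which leaves the parent covers
        nodes[i:i + 2] = [(parent, span)]
    return nodes[0]

leaves = [rng.standard_normal(n) for _ in range(5)]
root_vec, root_span = greedy_parse(leaves)
```

Because the same `merge` is applied at every level, the parent vector lives in the same space as its children, which is what lets the network recurse over arbitrarily deep trees.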
ICML Distinguished Application Paper Award
Video of Talk
Download Paper
Download Vision Code
Download Language Parser
Download Data Set
- We use the Stanford Vision library for computing segment features for the Stanford background dataset.
- In the file below you can download all the pre-processed data which should work out-of-the-box with the code provided above. The file includes:
- original images
- superpixels / oversegmentation
- features for each superpixel
- ground truth labels for pixels and superpixels
- superpixel adjacency matrix
- Download code and dataset: codeAndDataSocherICML2011.zip (748 MB!)
- See included README.txt for more information.
- We thank Stephen Gould and Tianshi Gao for letting us re-distribute their dataset and helping us with the feature computation.
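As a toy illustration of the data layout listed above (variable names and shapes here are hypothetical, not the actual contents of the distributed files): each image comes with per-superpixel feature vectors and a symmetric adjacency matrix, and the neighboring pairs in that matrix are exactly the candidate merges the scene parser considers.

```python
import numpy as np

# Hypothetical sizes: K superpixels, d features each.
K, d = 4, 6
rng = np.random.default_rng(1)
features = rng.standard_normal((K, d))  # one feature row per superpixel

# Symmetric boolean adjacency matrix: True where two superpixels
# share a boundary in the image.
adjacency = np.zeros((K, K), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3), (0, 2)]:
    adjacency[i, j] = adjacency[j, i] = True

# Candidate merges = unordered pairs of neighboring superpixels.
candidates = [(i, j) for i in range(K) for j in range(i + 1, K) if adjacency[i, j]]
```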
Results
- Here are some scene image segmentation results (from the paper)
- Here are some more results (not in the paper)
Bibtex
- If you use the code, please cite:
@InProceedings{SocherEtAl2011:RNN,
author = {Richard Socher and Cliff C. Lin and Andrew Y. Ng and Christopher D. Manning},
title = {{Parsing Natural Scenes and Natural Language with Recursive Neural Networks}},
booktitle = {Proceedings of the 28th International Conference on Machine Learning (ICML)},
year = 2011
}
Comments
For remarks, critical comments or other thoughts on the paper.
To prevent spammers: save your text before posting, type in the password and post (nothing will appear to happen), then paste your text and post again.
I came across this on the Facebook group "Strong Artificial Intelligence"
Thank you for the source code and all the explanations. I'd like to reference this page on the website of the hard science fiction novel 'Memories with Maya'.
You'll understand my enthusiasm, given the creative spin on AI in that story...
“I granted the AI access to the cameras on your Wizer,” he said. “Remember when I told you about frames and how the AI could take snapshots of your environment, then run image and feature recognition?”
“Yeah…” was all I could manage.
“That's what It just did. The AI takes a snapshot every few seconds when you're wearing the Wizer and creates a frame. The name of the bridge on the sign above is one of the prominent symbols to be recognized.”
“The AI also gets help from GPS data. It then cross references the current frame with its database of memories… stored frames, and if it finds a match, [It] tells you.
Memories, that's all they are, but if they begin to fade, [It'll] remind you,” he said.