Scene Graph Parser

Scene graphs are a graph-based semantic representation of image content. They encode the objects in an image, their attributes, and the relationships between objects. This system takes a single-sentence image description and parses it into a scene graph as described in the paper:

Sebastian Schuster, Ranjay Krishna, Angel Chang, Li Fei-Fei, and Christopher D. Manning. 2015. Generating Semantically Precise Scene Graphs from Textual Descriptions for Improved Image Retrieval. In Proceedings of the Fourth Workshop on Vision and Language (VL15).

The system requires Java 1.8+ to be installed. The current version of SceneGraph is included in Stanford CoreNLP.

The system is licensed under the GNU General Public License (v2 or later). Source is included. The package includes components for command-line invocation, and a Java API.


To run the code, you need the CoreNLP jar and the CoreNLP models jar as well as the Scene Graph Parser jar in your classpath.

This version is updated to work with CoreNLP 4.2.0: Download the Scene Graph Parser [0.2 MB]

Older versions

This is the original version, which works with CoreNLP 3.6.0 [404 MB]: Download the Scene Graph Parser [0.2 MB]

The source code for the Scene Graph Parser is included in the jar file.


You can either run the parser programmatically or in interactive mode through the command line.

To parse sentences interactively, put all the jar files from the CoreNLP distribution and the Scene Graph Parser jar into one directory and then run the following command from this directory.

java -mx2g -cp "*" edu.stanford.nlp.scenegraph.RuleBasedParser
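Instead of typing sentences interactively, you can also pipe input into the same command (run from the directory containing the jars; that the parser treats piped standard input like interactive input is an assumption here):

```shell
echo "A brown fox chases a white rabbit." | \
  java -mx2g -cp "*" edu.stanford.nlp.scenegraph.RuleBasedParser
```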

Alternatively, you can also run the parser programmatically as follows.

import edu.stanford.nlp.scenegraph.RuleBasedParser;
import edu.stanford.nlp.scenegraph.SceneGraph;

String sentence = "A brown fox chases a white rabbit.";

RuleBasedParser parser = new RuleBasedParser();
SceneGraph sg = parser.parse(sentence);

// printing the scene graph in a readable format
System.out.println(sg.toReadableString());

// printing the scene graph in JSON form
// (toJSON takes an image id, an image URL, and the described phrase;
//  the id and URL below are placeholder values)
System.out.println(sg.toJSON(1, "http://example.com/fox.jpg", sentence));


Please email Sebastian Schuster if you have any questions.


Once downloaded, the code can be invoked either programmatically or through the command line. Running the following command will read lines from standard input and produce relation triples in a tab-separated format: (confidence; subject; relation; object).

  java -mx1g -cp stanford-openie.jar:stanford-openie-models.jar edu.stanford.nlp.naturalli.OpenIE
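Since the program reads from standard input, you can pipe a sentence straight into it (jar names as above; the output is the tab-separated triple format described earlier):

```shell
echo "Obama was born in Hawaii." | \
  java -mx1g -cp stanford-openie.jar:stanford-openie-models.jar edu.stanford.nlp.naturalli.OpenIE
```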

To process files, simply pass them in as arguments to the program. For example,

  java -mx1g -cp stanford-openie.jar:stanford-openie-models.jar edu.stanford.nlp.naturalli.OpenIE  /path/to/file1  /path/to/file2 

In addition, there are a number of flags you can set to tweak the behavior of the program.

-format {reverb, ollie, default}
    Change the output format of the program. The default format produces tab-separated columns for the confidence, subject, relation, and object of a relation. ReVerb will output a TSV in the ReVerb format. Ollie will output relations in the default format returned by Ollie.

-filelist /path/to/filelist
    A path to a file which contains the files to annotate. Each file should be on its own line. If this option is set, only these files are annotated and the files passed via bare arguments are ignored.

-threads integer
    The number of threads to run on. By default, this is the number of threads on the system.

-max_entailments_per_clause integer
    The maximum number of entailments to produce for each clause extracted in the sentence. The larger this value is, the slower the system will run, but the more relations it can potentially extract. Setting this below 100 is not recommended; setting it above 1000 is likewise not recommended.

-resolve_coref boolean
    If true, resolve pronouns to their canonical antecedent. This option requires additional CoreNLP annotators not included in the distribution, and therefore only works if used with the CoreNLP OpenIE annotator, or invoked via the command line from the CoreNLP jar.

-ignore_affinity boolean
    Ignore the affinity model for prepositional attachments.

-affinity_probability_cap double
    The affinity value above which the confidence of the extraction is taken to be 1.0. The default is 1/3.

-triple.strict boolean
    If true (the default), extract triples only if they consume the entire fragment. This is useful for ensuring that only logically warranted triples are extracted, but puts more burden on the entailment system to find minimal phrases (see -max_entailments_per_clause).

-triple.all_nominals boolean
    If true, always extract nominal relations, not only when a named entity tag warrants it. This greatly overproduces such triples, but can be useful in certain situations.

-splitter.model /path/to/model.ser.gz
    [rare] You can override the default location of the clause splitting model with this option.

-splitter.nomodel
    [rare] Run without a clause splitting model; that is, split on every clause.

-splitter.disable
    [rare] Don't split clauses at all, and only extract relations centered around the root verb.

-affinity_model /path/to/model_dir
    [rare] A custom location to read the affinity models from.
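Putting a few of these flags together, an invocation might look like the following (the input file path is a placeholder):

```shell
java -mx1g -cp stanford-openie.jar:stanford-openie-models.jar edu.stanford.nlp.naturalli.OpenIE \
  -format reverb -threads 4 /path/to/file1
```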

The code can also be invoked programmatically, using Stanford CoreNLP. For this, simply include the annotators natlog and openie in the annotators property, and add any of the flags described above to the properties file, prepended with the string “openie.”. Note that openie depends on the annotators “tokenize,ssplit,pos,depparse”. An example working code snippet is provided below. This snippet will annotate the text “Obama was born in Hawaii. He is our president.” and print out each extraction from the document to the console.
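A minimal sketch of such a snippet, following the standard CoreNLP OpenIE demo (the class and method names reflect the CoreNLP 3.6+ API; the CoreNLP jar and models jar must be on the classpath for this to run):

```java
import java.util.Collection;
import java.util.Properties;

import edu.stanford.nlp.ie.util.RelationTriple;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.naturalli.NaturalLogicAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class OpenIEDemo {
  public static void main(String[] args) {
    // Set up a CoreNLP pipeline; openie depends on tokenize, ssplit, pos, depparse, and natlog.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,depparse,natlog,openie");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    // Annotate the example document.
    Annotation doc = new Annotation("Obama was born in Hawaii. He is our president.");
    pipeline.annotate(doc);

    // Print each extracted triple as: confidence <TAB> subject <TAB> relation <TAB> object
    for (CoreMap sentence : doc.get(CoreAnnotations.SentencesAnnotation.class)) {
      Collection<RelationTriple> triples =
          sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class);
      for (RelationTriple triple : triples) {
        System.out.println(triple.confidence + "\t"
            + triple.subjectLemmaGloss() + "\t"
            + triple.relationLemmaGloss() + "\t"
            + triple.objectLemmaGloss());
      }
    }
  }
}
```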


  1. java-nlp-user This is the best list to post to in order to send feature requests, make announcements, or for discussion among JavaNLP users. (Please ask support questions on Stack Overflow using the stanford-nlp tag.) You have to subscribe to be able to use this list. Join the list via this webpage or by emailing (Leave the subject and message body empty.) You can also look at the list archives.

  2. java-nlp-announce This list will be used only to announce new versions of Stanford JavaNLP tools. So it will be very low volume (expect 1-3 messages a year). Join the list via this webpage or by emailing (Leave the subject and message body empty.)

  3. java-nlp-support This list goes only to the software maintainers. It’s a good address for licensing questions, etc. For general use and support questions, you’re better off joining and using java-nlp-user. You cannot join java-nlp-support, but you can mail questions to


Release History

3.6.0    2015-12-09    First release    code / models / source