Data Objects and Annotations

This page describes the data objects and annotations used in Stanza, and how they interact with each other.

Document

A Document object holds the annotation of an entire document, and is automatically generated when a string is annotated by the Pipeline. It contains a collection of Sentences and entities (which are represented as Spans), and can be seamlessly translated into a native Python object.

Document contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `text` | `str` | The raw text of the document. |
| `sentences` | `List[Sentence]` | The list of sentences in this document. |
| `entities` (`ents`) | `List[Span]` | The list of entities in this document. |
| `num_tokens` | `int` | The total number of tokens in this document. |
| `num_words` | `int` | The total number of words in this document. |

Document also contains the following method(s):

| Method | Return Type | Description |
| --- | --- | --- |
| `to_dict` | `List[List[Dict]]` | Dumps the whole document into a list of lists of dictionaries, where each dictionary represents a token and tokens are grouped by the sentences of the document. |
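The nested structure returned by `to_dict` can be sketched with plain Python data. The values below are hand-written for illustration, not actual model output:

```python
# Illustrative shape of Document.to_dict(): a list of sentences, each a list
# of token dictionaries. Values are hand-written, not real model output.
doc_dict = [
    [  # sentence 1
        {"id": 1, "text": "Stanza", "upos": "PROPN", "head": 2, "deprel": "nsubj"},
        {"id": 2, "text": "rocks", "upos": "VERB", "head": 0, "deprel": "root"},
        {"id": 3, "text": ".", "upos": "PUNCT", "head": 2, "deprel": "punct"},
    ],
]

# num_words on a real Document would count these entries across all sentences.
num_words = sum(len(sentence) for sentence in doc_dict)
print(num_words)  # 3
```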

Sentence

A Sentence object represents a sentence (as is segmented by the TokenizeProcessor or provided by the user), and contains a list of Tokens in the sentence, a list of all its Words, as well as a list of entities in the sentence (represented as Spans).

Sentence contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `doc` | `Document` | A “back pointer” to the parent doc of this sentence. |
| `text` | `str` | The raw text for this sentence. |
| `dependencies` | `List[(Word, str, Word)]` | The list of dependencies for this sentence, where each item contains the head `Word` of the dependency relation, the type of dependency relation, and the dependent `Word` in that relation. |
| `tokens` | `List[Token]` | The list of tokens in this sentence. |
| `words` | `List[Word]` | The list of words in this sentence. |
| `entities` (`ents`) | `List[Span]` | The list of entities in this sentence. |

Sentence also contains the following methods:

| Method | Return Type | Description |
| --- | --- | --- |
| `to_dict` | `List[Dict]` | Dumps the sentence into a list of dictionaries, where each dictionary represents a token in the sentence. |
| `print_dependencies` | `None` | Print the syntactic dependencies for this sentence. |
| `print_tokens` | `None` | Print the tokens for this sentence. |
| `print_words` | `None` | Print the words for this sentence. |
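The `(head Word, relation, dependent Word)` triples in the `dependencies` property can be sketched as follows. Plain dictionaries stand in for real `Word` objects here, and the parse is hand-written for illustration:

```python
# Illustrative shape of Sentence.dependencies: a list of
# (head Word, relation, dependent Word) triples. Plain dicts stand in for
# stanza Word objects; the parse itself is hand-written.
root = {"id": 0, "text": "ROOT"}
stanza_w = {"id": 1, "text": "Stanza"}
rocks_w = {"id": 2, "text": "rocks"}

dependencies = [
    (rocks_w, "nsubj", stanza_w),  # "Stanza" is the subject of "rocks"
    (root, "root", rocks_w),       # "rocks" heads the sentence
]

for head, deprel, dependent in dependencies:
    print(f'{dependent["text"]} --{deprel}--> {head["text"]}')
# Stanza --nsubj--> rocks
# rocks --root--> ROOT
```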

Token

A Token object holds a token, and a list of its underlying syntactic Words. In the event that the token is a multi-word token (e.g., French au = à le), the token will have a range id as described in the CoNLL-U format specifications (e.g., 3-4), with its words property containing the underlying Words corresponding to those ids. In other cases, the Token object will function as a simple wrapper around one Word object, where its words property is a singleton.

Token contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `id` | `Tuple[int]` | The index of this token in the sentence, 1-based. This index contains two elements (e.g., `(1, 2)`) when the corresponding token is a multi-word token, otherwise it contains just a single element (e.g., `(1, )`). |
| `text` | `str` | The text of this token. Example: ‘The’. |
| `misc` | `str` | Miscellaneous annotations with regard to this token. Used in the pipeline to store whether a token is a multi-word token, for instance. |
| `words` | `List[Word]` | The list of syntactic words underlying this token. |
| `start_char` | `int` | The start character index for this token in the raw text of the document. Particularly useful if you want to detokenize at some point, or apply annotations back to the raw text. |
| `end_char` | `int` | The end character index for this token in the raw text of the document. Particularly useful if you want to detokenize at some point, or apply annotations back to the raw text. |
| `ner` | `str` | The NER tag of this token, in BIOES format. Example: ‘B-ORG’. |

Token also contains the following methods:

| Method | Return Type | Description |
| --- | --- | --- |
| `to_dict` | `List[Dict]` | Dumps the token into a list of dictionaries, each dictionary representing one of the words underlying this token. |
| `pretty_print` | `str` | Print this token with the words it expands into in one line. |
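The multi-word-token case can be sketched with plain data. The entries below mimic the shape of a `Token` for the French contraction *au* and its underlying words; the values are hand-written for illustration, not actual `to_dict` output:

```python
# Illustrative records for the French multi-word token "au" (= "à" + "le").
# The surface token carries the range id (3, 4); the underlying syntactic
# words carry single-element ids. Hand-written values, not model output.
token_dict = [
    {"id": (3, 4), "text": "au"},
    {"id": (3,), "text": "à", "upos": "ADP"},
    {"id": (4,), "text": "le", "upos": "DET"},
]

surface = token_dict[0]
words = token_dict[1:]
is_mwt = len(surface["id"]) == 2  # a range id marks a multi-word token
print(is_mwt, [w["text"] for w in words])  # True ['à', 'le']
```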

Word

A Word object holds a syntactic word and all of its word-level annotations. In the case of multi-word tokens (MWT), words are generated as a result of applying the MWTProcessor, and are used in all downstream syntactic analyses such as tagging, lemmatization, and parsing. If a Word is the result of an MWT expansion, its text will usually not be found in the raw input text. Aside from multi-word tokens, Words should be similar to the familiar “tokens” one would see elsewhere.

Word contains these properties:

| Property | Type | Description |
| --- | --- | --- |
| `id` | `int` | The index of this word in the sentence, 1-based (index 0 is reserved for an artificial symbol that represents the root of the syntactic tree). |
| `text` | `str` | The text of this word. Example: ‘The’. |
| `lemma` | `str` | The lemma of this word. |
| `upos` (`pos`) | `str` | The universal part-of-speech of this word. Example: ‘NOUN’. |
| `xpos` | `str` | The treebank-specific part-of-speech of this word. Example: ‘NNP’. |
| `feats` | `str` | The morphological features of this word. Example: ‘Gender=Fem\|Person=3’. |
| `head` | `int` | The id of the syntactic head of this word in the sentence, 1-based for actual words in the sentence (0 is reserved for an artificial symbol that represents the root of the syntactic tree). |
| `deprel` | `str` | The dependency relation between this word and its syntactic head. Example: ‘nmod’. |
| `deps` | `str` | The combination of head and deprel that captures all syntactic dependency information. Seen in CoNLL-U files released from Universal Dependencies, not predicted by our Pipeline. |
| `misc` | `str` | Miscellaneous annotations with regard to this word. The pipeline uses this field to store character offset information internally, for instance. |
| `parent` | `Token` | A “back pointer” to the parent token that this word is a part of. In the case of a multi-word token, a token can be the parent of multiple words. |

Word also contains the following methods:

| Method | Return Type | Description |
| --- | --- | --- |
| `to_dict` | `Dict` | Dumps the word into a dictionary with all its information. |
| `pretty_print` | `str` | Prints the word in one line with all its information. |
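The id/head convention above can be sketched with a plain word-level record. The values are hand-written for illustration; `head == 0` marks attachment to the artificial root:

```python
# Illustrative word-level record mirroring the properties above. id and head
# are 1-based; head == 0 means this word attaches to the artificial root.
word = {
    "id": 2, "text": "rocks", "lemma": "rock",
    "upos": "VERB", "head": 0, "deprel": "root",
}

attaches_to_root = word["head"] == 0
print(attaches_to_root)  # True
```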

Span

A Span object stores attributes of a contiguous span of text. A range of objects (e.g., named entities) can be represented as a Span.

Span contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `doc` | `Document` | A “back pointer” to the parent document of this span. |
| `text` | `str` | The text of this span. |
| `tokens` | `List[Token]` | The list of tokens that correspond to this span. |
| `words` | `List[Word]` | The list of words that correspond to this span. |
| `type` | `str` | The entity type of this span. Example: ‘PERSON’. |
| `start_char` | `int` | The start character offset of this span in the document. |
| `end_char` | `int` | The end character offset of this span in the document. |

Span also contains the following methods:

| Method | Return Type | Description |
| --- | --- | --- |
| `to_dict` | `Dict` | Dumps the span into a dictionary containing all its information. |
| `pretty_print` | `str` | Prints the span in one line with all its information. |
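The `start_char`/`end_char` offsets make it possible to map a span back onto the raw document text. A minimal sketch, with hand-computed offsets for a toy sentence:

```python
# Recovering a span's surface text from its document-level character offsets.
# The offsets here are hand-computed for this toy text, not model output.
raw_text = "Barack Obama was born in Hawaii."
span = {"type": "PERSON", "start_char": 0, "end_char": 12}

print(raw_text[span["start_char"]:span["end_char"]])  # Barack Obama
```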

Adding new properties to Stanza data objects

New in v1.1

All Stanza data objects can be extended easily should you need to attach new annotations of interest to them, either through a new Processor you are developing, or from some custom code you’re writing.

To add a new annotation or property to a Stanza object, say a Document, simply call

```python
Document.add_property('char_count', default=0, getter=lambda self: len(self.text), setter=None)
```

You should then be able to access the char_count property on all instances of the Document class. The interface should be familiar if you have used class properties in Python or other object-oriented languages: the first (and only mandatory) argument is the name of the property you wish to create, followed by default for the default value of the property, getter for reading its value, and setter for setting it.

By default, all created properties are read-only, unless you explicitly assign a setter. The underlying variable for the new property is named _{property_name}, so in our example above, Stanza will automatically create a class variable named _char_count to store the value of this property should it be necessary. This is the variable your getter and setter functions should use, if needed.