Dependency Parsing
NLP
A syntactic analysis task that identifies grammatical relationships between words in a sentence, producing a directed dependency tree.
Dependency parsing analyzes the grammatical structure of a sentence by establishing directed relationships between words. Each word (except the root) has exactly one head word it depends on, and each relationship is labeled with a grammatical function such as nominal subject (nsubj), direct object (dobj, or obj in Universal Dependencies), adjectival modifier (amod), or clausal complement (ccomp). The result is a tree rooted at the sentence's main predicate, typically the main verb.
For example, in 'The quick fox jumped', 'jumped' is the root, 'fox' is its subject (nsubj), 'quick' modifies 'fox' (amod), and 'The' is the determiner of 'fox' (det). These typed dependency relations encode who does what to whom, which is essential for extracting semantic content from text.
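The parse above can be sketched as plain data: a head index and a relation label per word. This representation is illustrative only (it is not tied to any particular parser's API), but it shows the core tree property that every word except the root has exactly one head.

```python
# Dependency parse of 'The quick fox jumped' as parallel arrays.
# heads[i] is the index of word i's head; the root points to itself here.
words = ["The", "quick", "fox", "jumped"]
heads = [2, 2, 3, 3]
labels = ["det", "amod", "nsubj", "root"]

def children(head_index):
    """Return the (word, relation) pairs that depend directly on the given head."""
    return [(words[i], labels[i])
            for i in range(len(words))
            if heads[i] == head_index and i != head_index]

# Dependents of 'jumped' (index 3): its subject.
print(children(3))   # [('fox', 'nsubj')]
# Dependents of 'fox' (index 2): its modifiers.
print(children(2))   # [('The', 'det'), ('quick', 'amod')]
```

Walking `children` from the root recovers the whole tree, which is exactly how downstream tasks like relation extraction traverse a parse to find who did what to whom.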
Dependency parsing is used in information extraction, question answering, relation extraction, and machine translation. Modern parsers use neural models — typically biaffine attention over transformer representations — and achieve high accuracy on standard treebanks. Libraries like spaCy, Stanford CoreNLP, and Stanza expose dependency parsing as a core pipeline component.
Last updated: March 6, 2026