
This page pertains to UD version 2.

UD for Beja

Tokenization and Word Segmentation

The dependencies in the Universal Dependencies framework are based on a lexical approach to syntax, so the first step of the processing chain is naturally tokenization. The goal is to extract the syntactic information carried by the words in the discourse chain through this segmentation.
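As a minimal illustration of this segmentation step, the sketch below splits a sentence on whitespace and numbers the resulting tokens as in the first column of a CoNLL-U file. The example string is hypothetical and the function is not part of any treebank pipeline; it only shows the basic word-per-token principle.

```python
def tokenize(sentence):
    # Split on whitespace and pair each token with its 1-based index,
    # mirroring the ID column of a CoNLL-U token line.
    return [(i + 1, tok) for i, tok in enumerate(sentence.split())]

# Hypothetical transliterated input (not drawn from the Beja treebank).
for idx, tok in tokenize("ani tak rhan"):
    print(idx, tok)
```

Real tokenization for the treebank also has to handle punctuation and clitics, which a plain whitespace split does not capture.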


This is an overview only. For more detailed discussion and examples, see the list of Beja POS tags and Beja features.





There is 1 Beja UD treebank: