Chinese and Arabic Segmenter

This software is for “tokenizing” or “segmenting” the words of Chinese or Arabic text. Tokenization of raw text is a standard pre-processing step for many NLP tasks. For English, tokenization usually involves punctuation splitting and separation of some affixes like possessives. Other languages require more extensive token pre-processing, which is usually called segmentation.

The Stanford Word Segmenter currently supports Arabic and Chinese. (The Stanford Tokenizer can be used for English, French, and Spanish.) The provided segmentation schemes have been found to work well for a variety of applications.

The system requires Java 1.8+ to be installed. We recommend at least 1G of memory for documents that contain long sentences. For files with shorter sentences (e.g., around 20 tokens), you can decrease the memory requirement by lowering the -mx1g option of the java command in the run scripts.
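For example, the java command inside a run script looks roughly like the line below (the jar name and trailing arguments are illustrative and vary by version; check the scripts in your download):

    java -mx1g -cp stanford-segmenter.jar edu.stanford.nlp.ie.crf.CRFClassifier ...

Changing -mx1g to, say, -mx512m halves the maximum heap the JVM will allocate.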

Arabic

Arabic is a root-and-template language with abundant bound clitics, including possessives, pronouns, and discourse connectives. The Arabic segmenter splits these clitics off from the words they attach to (and does nothing more). For example, وبيته (“and his house”) is split into the conjunction و, the noun بيت, and the possessive pronoun ه. Segmenting clitics attached to words reduces lexical sparsity and simplifies syntactic analysis.

The Arabic segmenter model processes raw text according to the Penn Arabic Treebank 3 (ATB) standard. It is an implementation of the segmenter described in:

Will Monroe, Spence Green, and Christopher D. Manning. 2014. Word Segmentation of Informal Arabic with Domain Adaptation. In ACL.
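As a minimal sketch, the Arabic segmenter can be driven from Java roughly as follows. The class and method names follow the ArabicSegmenter API bundled with recent releases, but the model path is illustrative and the exact loading call may differ by version; see the demo code in the download.

    import java.util.Properties;
    import edu.stanford.nlp.international.arabic.process.ArabicSegmenter;

    public class ArabicSegDemo {
      public static void main(String[] args) {
        // An empty Properties object accepts the default segmentation options.
        ArabicSegmenter segmenter = new ArabicSegmenter(new Properties());
        // Illustrative path: the trained ATB model shipped in the data/ directory.
        segmenter.loadSegmenter("data/arabic-segmenter-atb+bn+arztrain.ser.gz");
        // Clitics are split off as separate tokens, e.g. "وبيته" -> "و بيت ه".
        System.out.println(segmenter.segmentString("وبيته قريب من المدرسة"));
      }
    }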

Chinese

Chinese is standardly written without spaces between words (as are some other languages). This software will split Chinese text into a sequence of words, defined according to some word segmentation standard. It is a Java implementation of the CRF-based Chinese Word Segmenter described in:

Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky and Christopher Manning. 2005. A Conditional Random Field Word Segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing.

Two models with two different segmentation standards are included: the Chinese Penn Treebank (CTB) standard and the Peking University (PKU) standard.
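As a rough sketch of the Java API (adapted from the SegDemo program included in the download; paths assume the data/ directory of the distribution):

    import java.util.List;
    import java.util.Properties;
    import edu.stanford.nlp.ie.crf.CRFClassifier;
    import edu.stanford.nlp.ling.CoreLabel;

    public class ChineseSegDemo {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("sighanCorporaDict", "data");
        // The external lexicon described below; omit to run without lexicon features.
        props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
        props.setProperty("inputEncoding", "UTF-8");
        props.setProperty("sighanPostProcessing", "true");
        CRFClassifier<CoreLabel> segmenter = new CRFClassifier<>(props);
        // Load data/ctb.gz for the CTB standard or data/pku.gz for the PKU standard.
        segmenter.loadClassifierNoExceptions("data/ctb.gz", props);
        List<String> words = segmenter.segmentString("这是一个测试");
        System.out.println(words);
      }
    }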

On May 21, 2008, we released a version that makes use of lexicon features. With external lexicon features, the segmenter segments more consistently and also achieves a higher F measure when we train and test on the bakeoff data. This version is close to the CRF-Lex segmenter described in:

Pi-Chuan Chang, Michel Galley and Chris Manning. 2008. Optimizing Chinese Word Segmentation for Machine Translation Performance. In WMT.

The older version (2006-05-11) without external lexicon features is still available for download, but we recommend using the latest version.

Another feature of recent releases is that the segmenter can output the k-best segmentations instead of only the single best one. An example of how to train the segmenter is also available.
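For instance, with the segment.sh script from the download, the final argument selects the size of the k-best list (the exact arguments may vary by version; see the usage notes bundled with the package):

    ./segment.sh ctb test.simp.utf8 UTF-8 5

This prints the 5 best segmentations of each sentence under the CTB standard; passing 0 outputs only the single best segmentation.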

Tutorials

Download

The Chinese and Arabic versions of CoreNLP use the segmenter for tokenization, and the segmenter package is available in all recent versions of CoreNLP. Bugfixes are primarily released through CoreNLP.
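For example, in recent CoreNLP versions, loading the bundled Chinese properties configures the tokenize annotator to run the segmenter. A minimal sketch, assuming the Chinese models jar is on the classpath:

    import edu.stanford.nlp.pipeline.CoreDocument;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;

    public class CoreNLPSegExample {
      public static void main(String[] args) {
        // Loads the Chinese defaults shipped inside the Chinese models jar,
        // which select the segmenter-based tokenizer.
        StanfordCoreNLP pipeline =
            new StanfordCoreNLP("StanfordCoreNLP-chinese.properties");
        CoreDocument doc = new CoreDocument("这是一个测试。");
        pipeline.annotate(doc);
        // Each token is one segmented word.
        doc.tokens().forEach(token -> System.out.println(token.word()));
      }
    }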

Previous versions of the segmenter are also available for download, licensed under the GNU General Public License (v2 or later). Source is included. The package includes components for command-line invocation and a Java API. The segmenter code is dual-licensed (in a similar manner to MySQL, etc.). Open source licensing is under the full GPL, which allows many free uses. For distributors of proprietary software, commercial licensing is available. If you don’t need a commercial license, but would like to support maintenance of these tools, we welcome gift funding.

The download is a zip file consisting of model files, compiled code, and source files. If you unpack the zip file, you should have everything needed. Simple scripts are included to invoke the segmenter.
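For example, after unpacking, running the Chinese segmenter on the included test file looks like this (as in the k-best example above, the arguments follow the usage notes bundled with the download):

    ./segment.sh ctb test.simp.utf8 UTF-8 0

The arguments are the segmentation standard (ctb or pku), the input file, its character encoding, and the size of the k-best list (0 for just the best segmentation).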

Download Stanford Word Segmenter version 4.2.0

Mailing Lists

We have 3 mailing lists for the Stanford Word Segmenter, all of which are shared with other JavaNLP tools (with the exclusion of the parser). Each address is at @lists.stanford.edu:

  1. java-nlp-user This is the best list to post to in order to send feature requests, make announcements, or for discussion among JavaNLP users. (Please ask support questions on Stack Overflow using the stanford-nlp tag.)

You have to subscribe to be able to use this list. Join the list via this webpage or by emailing java-nlp-user-join@lists.stanford.edu. (Leave the subject and message body empty.) You can also look at the list archives.

  2. java-nlp-announce This list is used only to announce new versions of Stanford JavaNLP tools, so it will be very low volume (expect 2-4 messages a year). Join the list via this webpage or by emailing java-nlp-announce-join@lists.stanford.edu. (Leave the subject and message body empty.)

  3. java-nlp-support This list goes only to the software maintainers. It’s a good address for licensing questions, etc. For general use and support questions, you’re better off using Stack Overflow or joining and using java-nlp-user. You cannot join java-nlp-support, but you can mail questions to java-nlp-support@lists.stanford.edu.

Extensions: Packages by others using Stanford Word Segmenter

Release History

Version   Date         Description
4.2.0     2020-11-17   Update for compatibility
4.0.0     2020-04-19   New Chinese segmenter trained off of CTB 9.0
3.9.2     2018-10-16   Updated for compatibility
3.9.1     2018-02-27   Updated for compatibility
3.8.0     2017-06-09   Update for compatibility
3.7.0     2016-10-31   Update for compatibility
3.6.0     2015-12-09   Updated for compatibility
3.5.2     2015-04-20   Updated for compatibility
3.5.1     2015-01-29   Updated for compatibility
3.5.0     2014-10-26   Upgrade to Java 8
3.4.1     2014-08-27   Updated for compatibility
3.4       2014-06-16   Updated Arabic model
3.3.1     2014-01-04   Bugfix release
3.3.0     2013-11-12   Updated for compatibility
3.2.0     2013-06-20   Improved line by line handling
1.6.8     2013-04-04   ctb7 model, -nthreads option
1.6.7     2012-11-11   Bugfixes for both Arabic and Chinese; Chinese segmenter can now load data from a jar file
1.6.6     2012-07-09   Improved Arabic model
1.6.5     2012-05-22   Fixed encoding problems; supports stdin for Chinese segmenter
1.6.4     2012-05-07   Included Arabic model
1.6.3     2012-01-08   Minor bug fixes
1.6.2     2011-09-14   Improved thread safety
1.6.1     2011-06-19   Fixed empty document bug when training new models
1.6       2011-05-15   Models updated to be slightly more accurate; code correctly released so it now builds; updated for compatibility with other Stanford releases
1.5       2008-05-21   With external lexicon features; able to output k-best segmentations
1.0       2006-05-11   Initial release