Installation
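tokenizers is distributed on CRAN, so the standard installation route applies. A minimal sketch (the GitHub variant assumes you have the `remotes` package installed):

```r
# Install the released version from CRAN
install.packages("tokenizers")

# Or install the development version from GitHub
# (assumes the 'remotes' package is available)
# remotes::install_github("ropensci/tokenizers")
```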
About
Convert natural language text into tokens. The package includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, Penn Treebank tokens, and regular expressions, along with functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents of equal word counts. The tokenizers share a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization of 'UTF-8' text.
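The consistent interface mentioned above means each `tokenize_*` function takes a character vector and returns a list of character vectors, one element per input document. A brief sketch using functions from the package's documented API (no expected outputs shown, since defaults such as lowercasing affect the results):

```r
library(tokenizers)

text <- "The quick brown fox jumps over the lazy dog."

# Word tokens (lowercased and punctuation-stripped by default)
tokenize_words(text)

# Shingled word trigrams
tokenize_ngrams(text, n = 3)

# Individual characters
tokenize_characters(text)

# Counting helper: number of words per input document
count_words(text)
```

Because every tokenizer accepts a character vector, the same call works unchanged on a whole corpus, e.g. `tokenize_words(c(doc1, doc2))`.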
Citation | tokenizers citation info |
Documentation | docs.ropensci.org/tokenizers/ |
Source code | github.com/ropensci/tokenizers |
Bug report | File report |
Key Metrics
Downloads
Yesterday | 1,330 +4% |
Last 7 days | 7,867 -12% |
Last 30 days | 32,666 -3% |
Last 90 days | 92,267 -2% |
Last 365 days | 393,107 -22% |
Depends
R | ≥ 3.1.3 |