Tagged and Cleaned Wikipedia (TC Wikipedia) and its Ngram
Wikipedia is a relatively large and consistent resource for NLP researchers to work with. However, it is not straightforward even to extract the meaningful sentences and portions that are useful for research. To spare others this laborious effort, we are making our "Tagged and Cleaned Wikipedia" (TC Wikipedia) available to the community.
Data (see some samples below)
Static Wikipedia data : Wikipedia as of 18:12, June 8, 2008. The static data was downloaded in November 2008.
List of headwords : List of headwords with ID numbers (used in the other data), filenames, categories, and a redirection list.
articles.lst.gz (134MB)
Infobox data : Infobox data in HTML format (html) and in text format separated by newlines and tabs (txt).
infoboxes.html.gz (551MB)
infoboxes.txt.gz (207MB)
Sentences tagged by NLP tools : All sentences that are considered content are tagged by NLP tools (currently with tokenization, POS, NE, and link URL information).
wikipedia-tagged2_1.txt.gz (12GB, get through FTP)
This new version includes two POS, one chunk, and two NE annotations.
The old version (1.1) includes only one POS (Stanford), one chunk, and one NE (Stanford) annotation. (SEE BELOW)
TWO BUGS in version 1.1
There are several empty or junk lines outside of the #s-doc and #e-doc tags. Use these tags to extract the meaningful text content, rather than just excluding the lines starting with "#[se]-(label)".
Some NE labels have "0" (zero) rather than "O" (oh). Please replace them with "O".
These bugs will be fixed in the next distribution (which includes more annotations). Sorry for the inconvenience.
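The two workarounds above can be sketched as follows. This is a minimal, hypothetical reader for the version 1.1 file: it assumes documents are delimited by lines starting with "#s-doc" and "#e-doc" and that token lines are tab-separated; the exact column layout is not documented here, so the "0" → "O" fix is applied naively to whole fields (a real fix should target only the NE column once its position is known).

```python
def extract_docs(lines):
    """Yield one list of token rows per document found between
    #s-doc and #e-doc markers (assumed delimiter names).

    Applies the two documented version-1.1 workarounds:
      1. Ignore everything outside #s-doc/#e-doc, including junk lines.
      2. Replace a bare "0" (zero) field with "O" (oh) -- simplistic:
         it touches any field that is exactly "0", not just the NE column.
    """
    doc, inside = [], False
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#s-doc"):
            doc, inside = [], True
        elif line.startswith("#e-doc"):
            if inside:
                yield doc
            inside = False
        elif inside and line.strip():  # drop empty/junk lines inside a doc
            doc.append(["O" if f == "0" else f for f in line.split("\t")])
```

For example, a junk line before "#s-doc" is dropped, and a token row ending in "0" comes back with "O" as its label.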
Ngram : ngram data of the corpus (no frequency threshold, no filtering of any type of token, no normalization of numbers).
README
1gram (31MB)
2gram (447MB)
3gram (1.9GB)
4gram (4.3GB)
5gram (7.1GB)
6gram (10GB)
7gram (13GB)
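The counting policy stated above (no frequency threshold, no token filtering, no number normalization) amounts to plain sliding-window counts over tokenized sentences. A minimal sketch, assuming the input is an iterable of token lists (the actual on-disk ngram file format is not specified here):

```python
from collections import Counter


def count_ngrams(sentences, n):
    """Count n-grams over tokenized sentences with the same policy as
    this release: every token kept, every count kept, numbers untouched.

    `sentences` is an iterable of token lists; returns a Counter mapping
    n-gram tuples to frequencies.
    """
    counts = Counter()
    for tokens in sentences:
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts
```

For instance, `count_ngrams([["the", "cat", "sat"]], 2)` yields a count of 1 each for ("the", "cat") and ("cat", "sat").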
Description: Contains 4,400,000 articles and 1.9 billion words; can be used for language modeling.
You can download the dataset from the official site; I have also shared a copy on Baidu Netdisk. Follow my WeChat public account and reply "2020092501" to get the download link.