Update README.md

Duzeyao
2019-11-07 12:09:48 +08:00
parent c4288cdba5
commit a0e35b924b


@@ -3,7 +3,7 @@
## Description
- Chinese version of GPT2 training code, using BERT tokenizer or BPE tokenizer. It is based on the extremely awesome repository from HuggingFace team [Transformers](https://github.com/huggingface/transformers). It can write poems, news, or novels, or train general language models. Supports char level, word level, and BPE level. Supports large training corpora.
-- Chinese GPT-2 training code, using BERT's tokenizer, GPT-2's own BPE tokenizer, or a Sentencepiece BPE model (thanks to [kangzhonghua](https://github.com/kangzhonghua) for the contribution). It can write poems, news, or novels, or train general language models. Supports char level, word level, and BPE level. Supports large training corpora.
+- Chinese GPT-2 training code, using BERT's tokenizer, GPT-2's own BPE tokenizer, or a Sentencepiece BPE model (thanks to [kangzhonghua](https://github.com/kangzhonghua) for the contribution; using BPE mode requires slight modifications to the code of train.py). It can write poems, news, or novels, or train general language models. Supports char level, word level, and BPE level (requires slight modifications to the code of train.py). Supports large training corpora.
- For the WeChat discussion group, see the first Issue.
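
To make the "char level" mode mentioned above concrete, here is a minimal self-contained sketch of character-level tokenization of Chinese text. This is an illustration only, not the repository's actual tokenizer (the project uses BERT's tokenizer or a BPE tokenizer from HuggingFace Transformers); the helper names are hypothetical.

```python
# Hypothetical sketch: char-level tokenization for Chinese text.
# The real project delegates this to BERT's tokenizer / a BPE tokenizer.

def char_tokenize(text):
    """Split a string into single characters (char-level mode)."""
    return list(text)

def build_vocab(corpus, specials=("[PAD]", "[UNK]")):
    """Assign an integer id to every character seen in the corpus."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for ch in sorted(set("".join(corpus))):
        vocab.setdefault(ch, len(vocab))
    return vocab

def encode(text, vocab):
    """Map characters to ids, falling back to [UNK] for unseen ones."""
    unk = vocab["[UNK]"]
    return [vocab.get(ch, unk) for ch in char_tokenize(text)]

corpus = ["你好世界", "世界很大"]
vocab = build_vocab(corpus)
ids = encode("你好", vocab)
```

Word-level mode would segment the text into multi-character words first (e.g. with a Chinese word segmenter) before the same id-mapping step, and BPE learns subword merges from the corpus instead of using a fixed unit.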
## UPDATE 10.25