  • Blog (11)
  • Resources (7)

Original: How to fine-tune the BERT language model for Chinese sentiment classification, and how to use the fine-tuned model

To fine-tune the BERT language model for Chinese sentiment classification, you need BERT's open-source code, the chinese_L-12_H-768_A-12 model downloaded from BERT's released checkpoints, and Chinese sentiment data in the format (label_id\tsentence). If you lack the BERT code or the Chinese sentiment data, you can download both from my shared resources. Once you have all three, the steps below complete the fine-tuning and show how to use the fine-tuned model. In run_classifier.py, find the proces...
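
For illustration, each line of such a training file pairs a label id with a sentence, separated by a tab; the two rows below are hypothetical examples of the format, not taken from the dataset:

    0	这部电影太精彩了,强烈推荐
    1	服务态度很差,再也不会来了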

2019-05-21 16:58:47 7283 15

Original: Fixing zsh: command not found: activate/conda and similar errors

These commands worked before installing zsh but stopped working afterwards. The fix is a single command: source ~/.bash_profile. (To make the fix persist across sessions, the same line can be added to ~/.zshrc.)

2019-05-21 16:24:47 4205

Original: sort: Illegal byte sequence error (an encoding problem; Bash shell, Mac)

The command:

    cat dvd_positive_1000.txt | sort -R > shuffle_dvd_positive.txt

fails with:

    sort: Illegal byte sequence

Fix 1: enter LANG=C in the terminal, then rerun the command. If that does not help, try fix 2: enter LC_ALL=C in the terminal, then rerun the command...

2019-05-18 22:45:33 4335

Original: Garbled Chinese in the Mac terminal; garbled Chinese when opening CSV files in Vim

The fix first: open the terminal and run the following, then source the profile:

    echo "export LANGUAGE=en_US.UTF-8
    export LANG=en_US.UTF-8
    export LC_ALL=en_US.UTF-8" >> ~/.bash_profile
    source ~/.bash_profile

This also silences Perl's locale warning (perl: w...

2019-05-18 16:57:17 1005

Original: UnicodeDecodeError when reading a file: 'utf-8' codec can't decode byte 0xbd in position 1326: invalid start byte

The fix is to ignore the undecodable bytes when opening the file:

    file = open(open_file, 'r', encoding='utf-8', errors='ignore')
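
Note that errors='ignore' silently drops bytes. If the file is actually in another encoding, which is common for Chinese text, decoding with that encoding preserves the content instead; a sketch with GBK as an assumed candidate, not something the post verifies:

    open_file = 'data.txt'  # placeholder path
    try:
        with open(open_file, 'r', encoding='utf-8') as f:
            text = f.read()
    except UnicodeDecodeError:
        # 0xbd is not valid UTF-8 on its own but occurs in GBK-encoded Chinese
        # text, so trying GBK may recover the content rather than dropping bytes
        with open(open_file, 'r', encoding='gbk') as f:
            text = f.read()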

2019-05-18 12:34:35 3604 1

Original: Keras Bi-LSTM error AttributeError: 'Tensor' object has no attribute 'get_config', and how to implement a bidirectional LSTM in Keras

A unidirectional LSTM can be built with these two lines:

    x = Embedding(max_features, embedding_dims, input_length=maxlen)(input_layer)
    lstm_layer = LSTM(128)(x)

To swap the LSTM for a Bi-LSTM, you can call the wrapper directly: from keras.layers import Bidir...
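
A minimal runnable sketch of that wrapper; the hyperparameter values are placeholders, not the post's:

    from keras.layers import Input, Embedding, LSTM, Bidirectional, Dense
    from keras.models import Model

    maxlen, max_features, embedding_dims = 100, 20000, 128   # placeholder hyperparameters
    input_layer = Input(shape=(maxlen,))
    x = Embedding(max_features, embedding_dims, input_length=maxlen)(input_layer)
    lstm_layer = Bidirectional(LSTM(128))(x)   # runs the LSTM forwards and backwards
    output = Dense(1, activation='sigmoid')(lstm_layer)
    model = Model(inputs=input_layer, outputs=output)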

2019-05-16 14:04:20 2971 2

Original: Getting recall in Keras (classification metrics can't handle a mix of multi-label-indicator targets) with model.predict

The program originally used model.evaluate to get the loss and accuracy:

    score, acc = model.evaluate(X_test, y_test, batch_size=batch_size)

I then wanted recall as well, but after much searching found no way to make model.evaluate return it, so I switched to another function: y_pred=model.predict(X_test, batch_size=...
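
A sketch of the usual route from model.predict to recall, continuing from the excerpt's model, X_test, y_test, and batch_size, and assuming one-hot targets (mixing one-hot targets with class-index predictions is what triggers the multi-label-indicator error in the title):

    import numpy as np
    from sklearn.metrics import recall_score

    y_prob = model.predict(X_test, batch_size=batch_size)
    y_pred = np.argmax(y_prob, axis=1)   # collapse class probabilities to indices
    y_true = np.argmax(y_test, axis=1)   # collapse one-hot targets the same way
    print(recall_score(y_true, y_pred, average='macro'))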

2019-05-15 19:54:42 47132 23

Original: tf.reshape() and tf.tile(): Contents: [Dimension(None), 1]. Consider casting elements to a supported type

How to fix the Cannot convert unknown dimension (None) and Contents: [Dimension(None), 1]. Consider casting elements to a supported type errors caused by batch_size being variable in TensorFlow/Keras and therefore initialized to None at graph-construction time. While building a graph in Keras I wanted to use tf.t...
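
A sketch of the standard workaround (not necessarily the post's exact fix): read the batch dimension with tf.shape at run time instead of using the static Dimension(None):

    import tensorflow as tf   # TF 1.x graph mode, matching the post's era

    x = tf.placeholder(tf.float32, [None, 128])   # batch dimension unknown at graph time
    batch_size = tf.shape(x)[0]                   # dynamic shape, resolved at run time
    v = tf.ones([1, 128])
    tiled = tf.tile(v, [batch_size, 1])           # multiples may be a tensor, so None is fine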

2019-05-04 13:07:41 3626

Original: How to add a trainable variable as a weight in Keras and use it to weight an existing layer

This small problem held me up for a whole day, so after fixing it I decided to write it down. The post follows the order in which I found, located, and solved the problem; if anyone has a better solution, please point it out. If you just want working code, skip to the end. My project needs a trainable vector of shape (1, hidden_num) that performs an elementwise product against every row of a (batch_size, hidden_num) matrix. In code, the goal is the following: semantic...
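
One standard way to get such a trainable vector is a custom layer built on add_weight; a sketch under my own naming (ScaleVector is hypothetical, not the post's code):

    from keras.engine.topology import Layer   # keras.layers.Layer in newer Keras

    class ScaleVector(Layer):
        """Learns a (1, hidden_num) vector and multiplies it into each input row."""
        def build(self, input_shape):
            self.w = self.add_weight(name='w',
                                     shape=(1, int(input_shape[-1])),
                                     initializer='ones',
                                     trainable=True)
            super(ScaleVector, self).build(input_shape)

        def call(self, inputs):
            return inputs * self.w   # broadcasts over the batch dimension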

2019-05-04 12:50:40 14213 9

Original: A VAE handwritten-digit project with detailed comments: an accessible take on the Variational Autoencoder (VAE) through a small project

The project and code come from https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/variational_autoencoder.py. Before reading the code it helps to have a basic grasp of VAE concepts; I recommend this Zhihu article: https://zhuanlan.zhihu.com/p/55557709, plus...
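
At the heart of that example is the reparameterization trick; a self-contained sketch in TF 1.x style, with placeholder layer sizes rather than the repository's values:

    import tensorflow as tf

    latent_dim = 2
    h = tf.placeholder(tf.float32, [None, 512])   # stand-in for encoder activations
    z_mean = tf.layers.dense(h, latent_dim)
    z_log_var = tf.layers.dense(h, latent_dim)
    eps = tf.random_normal(tf.shape(z_mean))      # noise independent of the parameters
    z = z_mean + tf.exp(0.5 * z_log_var) * eps    # sampling stays differentiable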

2019-05-01 03:23:21 2322 2

Original: A small, directly runnable GAN project (detailed comments): understanding how GANs work through the code

First read the introduction to this small GAN project's origin at the bottom, then the code; you can copy it and run it directly. The Chinese comments are ones I added while working through it, which should be friendly to people who don't yet understand GANs but want hands-on practice; if any comment is off, corrections are welcome. The project is a quick way to get started with GANs and understand how they work. #-*-coding= utf-8 -*- import tensorflow as tf import numpy as np #i...
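
For orientation, a self-contained sketch of the two adversarial objectives such a project wires up; the helper, sizes, and names are my own placeholders, not the post's code:

    import tensorflow as tf

    def mlp(x, out_dim, scope):
        # tiny two-layer network used for both generator and discriminator
        with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
            h = tf.layers.dense(x, 128, activation=tf.nn.relu)
            return tf.layers.dense(h, out_dim)

    noise = tf.placeholder(tf.float32, [None, 100])
    real = tf.placeholder(tf.float32, [None, 784])
    fake = mlp(noise, 784, 'G')        # generator output
    d_real = mlp(real, 1, 'D')         # discriminator logits on real data
    d_fake = mlp(fake, 1, 'D')         # same discriminator weights via AUTO_REUSE
    xent = tf.nn.sigmoid_cross_entropy_with_logits
    d_loss = tf.reduce_mean(xent(logits=d_real, labels=tf.ones_like(d_real)) +
                            xent(logits=d_fake, labels=tf.zeros_like(d_fake)))
    g_loss = tf.reduce_mean(xent(logits=d_fake, labels=tf.ones_like(d_fake)))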

2019-05-01 02:39:42 2890 1

中科院高级人工智能符号主义知识点总结.pdf (study notes for the symbolism part of the CAS Advanced Artificial Intelligence course)

A summary, with proofs, of all examinable points from the symbolism part (Prof. Luo Ping's lectures) of the Advanced Artificial Intelligence course at the University of Chinese Academy of Sciences. It follows a complete line of reasoning, covering and working through every point, e.g. the completeness of the resolution principle.

2020-01-04

ag_news_csv.tgz

496,835 news articles from the AG news corpus, drawn from over 2,000 news sources across the 4 largest categories; the dataset uses only the title and description fields. Each category has 30,000 training samples and 1,900 test samples.

README: AG's News Topic Classification Dataset, Version 3, updated 09/09/2015.

ORIGIN: AG is a collection of more than 1 million news articles, gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. For more information, please refer to http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html. The AG's news topic classification dataset is constructed by Xiang Zhang ([email protected]) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).

DESCRIPTION: The AG's news topic classification dataset is constructed by choosing the 4 largest classes from the original corpus. Each class contains 30,000 training samples and 1,900 testing samples; in total there are 120,000 training samples and 7,600 testing samples. The file classes.txt contains a list of classes corresponding to each label. The files train.csv and test.csv contain all the samples as comma-separated values. There are 3 columns, corresponding to class index (1 to 4), title, and description. The title and description are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, i.e. "\n".
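
A sketch of reading train.csv exactly as described above; Python's csv module already handles the doubled-quote escaping, so only the newline escape needs undoing:

    import csv

    with open('train.csv', newline='', encoding='utf-8') as f:
        for class_idx, title, desc in csv.reader(f):
            desc = desc.replace('\\n', '\n')   # undo the dataset's newline escaping
            print(class_idx, title)
            break   # just show the first row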

2019-05-28

ag_news dataset

Same dataset and README as ag_news_csv.tgz above; see that entry for the full description.

2019-05-28

fine_tuning_data.zip — Chinese emotion data ready for BERT fine-tuning

For detailed usage see my blog post: https://blog.csdn.net/weixin_40015791/article/details/90410083. A short version follows. In run_classifier.py of the open-source BERT code, find the processors dict and add one entry:

    processors = {
        "cola": ColaProcessor,
        "mnli": MnliProcessor,
        "mrpc": MrpcProcessor,
        "xnli": XnliProcessor,
        "intentdetection": IntentDetectionProcessor,
        "emotion": EmotionProcessor,  # add this line
    }

Then add a class to the same file (the file names passed to _read_tsv must match the data-set file names in your data folder):

    class EmotionProcessor(DataProcessor):
      """Processor for the emotion data set."""

      def get_train_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "fine_tuning_train_data.tsv")), "train")

      def get_dev_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "fine_tuning_val_data.tsv")), "dev")

      def get_test_examples(self, data_dir):
        """See base class."""
        return self._create_examples(
            self._read_tsv(os.path.join(data_dir, "fine_tuning_test_data.tsv")), "test")

      def get_labels(self):
        """See base class."""
        return ["0", "1", "2", "3", "4", "5", "6"]  # labels 0 to 6 for seven classes

      def _create_examples(self, lines, set_type):
        """Creates examples for the training and dev sets."""
        examples = []
        for (i, line) in enumerate(lines):
          if i == 0:
            continue
          guid = "%s-%s" % (set_type, i)
          if set_type == "test":
            label = "0"
            text_a = tokenization.convert_to_unicode(line[0])
          else:
            label = tokenization.convert_to_unicode(line[0])
            text_a = tokenization.convert_to_unicode(line[1])
          examples.append(
              InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
        return examples

Finally run the fine-tuning. Here data is the folder the data was unzipped into (a sibling of run_classifier.py), chinese_L-12_H-768_A-12 is the original Chinese BERT model to fine-tune, and output is where the generated files go:

    python run_classifier.py \
      --task_name=emotion \
      --do_train=true \
      --do_eval=true \
      --data_dir=data \
      --vocab_file=chinese_L-12_H-768_A-12/vocab.txt \
      --bert_config_file=chinese_L-12_H-768_A-12/bert_config.json \
      --init_checkpoint=chinese_L-12_H-768_A-12/bert_model.ckpt \
      --max_seq_length=128 \
      --train_batch_size=32 \
      --learning_rate=2e-5 \
      --num_train_epochs=3.0 \
      --output_dir=output

Training takes about 9 hours. The output folder then holds three checkpoint files with suffixes index, meta, and data-00000-of-00001; rename them to bert_model.ckpt.index, bert_model.ckpt.meta, and bert_model.ckpt.data-00000-of-00001, then copy vocab.txt and bert_config.json from chinese_L-12_H-768_A-12 into the same folder, so that it contains five files. You can now point tools at this folder just as you would at chinese_L-12_H-768_A-12, e.g.:

    bert-serving-start -model_dir output -num_worker=3

serves the fine-tuned model as a general-purpose language model.

2019-05-21

中文情感分析7类情感.zip — Chinese sentiment analysis with 7 emotion classes, from NLPCC 2013, parsed into txt files; imbalanced classes

From NLPCC 2013. After parsing, each line is emotion\tsentence. There are seven emotion classes with an imbalanced distribution. After splitting into training, test, and validation sets, the file sizes (in lines) are:

    anger:     1488 anger_data.txt      186 anger_test.txt      186 anger_val.txt      (8:1:1)
    disgust:   2459 disgust_data.txt    307 disgust_test.txt    307 disgust_val.txt    (8:1:1)
    fear:       201 fear_data.txt        50 fear_test.txt        50 fear_val.txt       (4:1:1)
    happiness: 2298 happiness_data.txt  287 happiness_test.txt  287 happiness_val.txt  (8:1:1)
    like:      3286 like_data.txt       410 like_test.txt       410 like_val.txt       (8:1:1)
    sadness:   1917 sadness_data.txt    239 sadness_test.txt    239 sadness_val.txt    (8:1:1)
    surprise:   626 surprise_data.txt    78 surprise_test.txt    78 surprise_val.txt   (8:1:1)

2019-05-17

semantic_data.zip — Multi-Domain Sentiment Dataset, English sentiment data with positive/negative labels

The Multi-Domain Sentiment Dataset parsed into txt files, keeping only the text and its label. Binary positive/negative classification. Covers four domains (dvd, kitchen, books, electronics), each with 1,000 positive and 1,000 negative examples. Each line is label\tSentence. For details see https://www.cs.jhu.edu/~mdredze/publications/sentiment_acl07.pdf

2019-05-17

CapsuleNetwork: understanding capsule networks through a TensorFlow reimplementation (Dynamic Routing Between Capsules)

Understanding capsule networks through a TensorFlow reimplementation of Dynamic Routing Between Capsules. Paper: https://arxiv.org/abs/1710.09829. TensorFlow reimplementation: https://github.com/naturomics/CapsNet-Tensorflow

2018-10-17
