Stanford Sentiment Treebank dataset

 

datasetSentences.txt format: sentence index and the sentence itself, tab-separated

datasetSplit.txt format: sentence index and the split it belongs to, comma-separated (1 = train, 2 = test, 3 = dev)

The train split has 8,544 sentences, dev has 1,101, and test has 2,210.

dictionary.txt format: sentence (or phrase) | phrase index

sentiment_labels.txt format: phrase index | sentiment value

There are 239,232 sentences and phrases in total.

The sentiment values map to classes as follows: [0, 0.2], (0.2, 0.4], (0.4, 0.6], (0.6, 0.8], (0.8, 1.0] correspond to the five sentiment classes; a small helper for this binning is sketched below.
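A minimal sketch of that mapping (the function name score_to_class is illustrative, not part of the dataset's tooling):

def score_to_class(score):
    """Map a sentiment value in [0, 1] to a class id in 0..4.

    Bins: [0, 0.2] -> 0, (0.2, 0.4] -> 1, (0.4, 0.6] -> 2,
          (0.6, 0.8] -> 3, (0.8, 1.0] -> 4.
    """
    for cls, upper in enumerate([0.2, 0.4, 0.6, 0.8, 1.0]):
        if score <= upper:
            return cls
    return 4  # guard against values slightly above 1.0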

 

The goal is to turn this into one sentence per line with its sentiment score, split into a training set, a validation set, and a test set. The result differs slightly from the original splits: each of them ends up a few sentences short, because some sentences in datasetSentences.txt contain names with special characters that do not match their entries in dictionary.txt. You can also patch these by hand; a hedged sketch of such a patch follows.
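For example, assuming (this is an assumption, not verified against every mismatch) that the failures come from accented characters stored as mojibake in datasetSentences.txt while dictionary.txt stores the correct characters, a small normalization pass before matching might look like this; the replacement table is illustrative and may need extending for your copy of the data:

# Assumed mojibake fixes: some accented names in datasetSentences.txt appear
# as UTF-8 bytes decoded as Latin-1 (e.g. "Ã©" instead of "é"), while
# dictionary.txt has the correct characters. Extend this table as needed.
MOJIBAKE_FIXES = {
    "Ã©": "é",
    "Ã¨": "è",
    "Ã¯": "ï",
    "Ã³": "ó",
    "Ã±": "ñ",
}

def fix_mojibake(sentence):
    for bad, good in MOJIBAKE_FIXES.items():
        sentence = sentence.replace(bad, good)
    return sentence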

The Python code is as follows:

# Copyright 2018 lww. All Rights Reserved.
# coding: utf-8
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import

def delblankline(infile1, infile2, trainfile, validfile, testfile):
    """Split datasetSentences.txt into train/valid/test files using datasetSplit.txt."""
    with open(infile1, 'r') as info1, open(infile2, 'r') as info2, \
            open(trainfile, 'w') as train, open(validfile, 'w') as valid, open(testfile, 'w') as test:
        lines1 = info1.readlines()  # datasetSentences.txt: sentence_index<TAB>sentence
        lines2 = info2.readlines()  # datasetSplit.txt: sentence_index,splitset_label
        for i in range(1, len(lines1)):  # start at 1 to skip the header line
            # Restore the treebank bracket tokens to plain parentheses.
            t1 = lines1[i].replace("-LRB-", "(")
            t2 = t1.replace("-RRB-", ")")
            k = lines2[i].strip().split(",")  # k[1] is the split label
            t = t2.strip().split('\t')        # t[1] is the sentence text
            if k[1] == '1':    # 1 = train
                train.write(t[1] + "\n")
            elif k[1] == '2':  # 2 = test
                test.write(t[1] + "\n")
            elif k[1] == '3':  # 3 = dev
                valid.write(t[1] + "\n")
        print("end")

def tag_sentiment(infile, infile0, infile1, infile2):
    """Attach the sentiment value from sentiment_labels.txt to every sentence in infile1.

    Example call: tag_sentiment("sentiment_labels.txt", "dictionary.txt", "train.txt", "train_final.txt")
    """
    with open(infile, 'r') as info, open(infile0, 'r') as info0, open(infile1, 'r') as info1, \
            open(infile2, 'w') as info2:
        lines = info.readlines()    # sentiment_labels.txt: phrase id|sentiment value
        lines0 = info0.readlines()  # dictionary.txt: phrase|phrase id
        lines1 = info1.readlines()  # one sentence per line (train/valid/test)

        # Map phrase text -> phrase id.
        text2id = {}
        for line0 in lines0:
            s = line0.strip().split("|")
            text2id[s[0]] = s[1]

        # Map phrase id -> sentiment value.
        id2sentiment = {}
        for line in lines:
            s = line.strip().split("|")
            id2sentiment[s[0]] = s[1]

        for line in lines1:
            if line.strip() not in text2id:
                # Caused by special-character mismatches between
                # datasetSentences.txt and dictionary.txt; skip these sentences.
                print(line.strip())
                continue
            text_id = text2id[line.strip()]
            sentiment_score = id2sentiment[text_id]
            info2.write(line.strip() + "\n" + sentiment_score + "\n")
        print("end")


delblankline("datasetSentences.txt", "datasetSplit.txt", "train.txt", "valid.txt", "test.txt")
# 获取原始的训练集,测试集,验证集
# train有8544条,dev有1101条,test有 2210条
tag_sentiment("sentiment_labels.txt", "dictionary.txt", "train.txt","train_final.txt")
tag_sentiment("sentiment_labels.txt", "dictionary.txt", "test.txt","test_final.txt")
tag_sentiment("sentiment_labels.txt", "dictionary.txt", "valid.txt","valid_final.txt")
# 获取训练集,测试集,验证集句子对应的情感值
# 由于文本里面的特殊字符造成的不匹配,训练集,测试集,验证集会相对于上一步少几条

The processed data end up in train_final.txt, test_final.txt, and valid_final.txt, with each sentence on one line and its sentiment value on the next.
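A minimal sketch of reading one of these files back and converting the score into a five-way label, reusing the score_to_class helper from above (load_final_file is an illustrative name, not part of the script):

def load_final_file(path):
    """Read a *_final.txt file where sentence and score alternate line by line."""
    examples = []
    with open(path, 'r') as f:
        lines = [l.rstrip("\n") for l in f]
    for i in range(0, len(lines) - 1, 2):
        sentence = lines[i]
        score = float(lines[i + 1])
        examples.append((sentence, score_to_class(score)))
    return examples

# e.g. train_examples = load_final_file("train_final.txt")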

 
