Shandong University RISC-V Public Open Platform Development Log 6

This post describes how to crawl RISC-V related entities and their relations from Wikipedia and Wikidata to build a knowledge graph, covering entity recognition, relation crawling, and storage. It focuses on merging relations with their Chinese labels and on crawling between entities to construct triples.


RISC-V Knowledge Graph

To build the RISC-V knowledge graph, we use a crawler to collect relevant knowledge from specific websites and to establish relations between entities, construct triples from the entities and relations, and store the triples in the Neo4j graph database.
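
As a rough sketch of that last step (assuming triples of the form (entity1, relation, entity2) and the official neo4j Python driver; the URI and credentials below are placeholders):

from neo4j import GraphDatabase

# Placeholder connection details; adjust to the actual deployment.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_triple(tx, entity1, relation, entity2):
	# MERGE keeps nodes and edges unique even if a triple is written twice
	tx.run(
		"MERGE (a:Entity {name: $e1}) "
		"MERGE (b:Entity {name: $e2}) "
		"MERGE (a)-[:RELATION {name: $rel}]->(b)",
		e1=entity1, rel=relation, e2=entity2,
	)

with driver.session() as session:
	session.execute_write(store_triple, "RISC-V", "instance of", "instruction set architecture")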

Wikidata maintains a summary page of all the relations (properties) it defines, which makes it convenient to hop between entities along relations. wikidataCrawler scrapes every relation listed on that page, together with its corresponding Chinese label, and stores the result in JSON format.

* relation.json contains: the relation's ID, its top-level category, its subcategory, the corresponding link, and the relation's English label.

* chrmention.json contains: the relation's ID and its Chinese label (entries without a Chinese label are left unprocessed for now).

To merge the data from relation.json and chrmention.json, run mergeChrmentionToRelation.ipynb; the merged records are stored in result.json, and records that fail to match are written to fail.json.
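
The notebook itself is not reproduced here, so the following is only a sketch of what the merge presumably does; the field names (id, chrmention) follow the descriptions above and are assumptions:

import json

# Load the Chinese labels keyed by relation id (one JSON object per line,
# e.g. {"id": "P31", "chrmention": "..."}; format assumed)
chrmention = {}
with open("chrmention.json", "r", encoding="utf-8") as fr:
	for line in fr:
		item = json.loads(line)
		chrmention[item["id"]] = item["chrmention"]

# Attach the Chinese label to each relation record; matched records go to
# result.json, unmatched ones to fail.json
with open("relation.json", "r", encoding="utf-8") as fr, \
	 open("result.json", "w", encoding="utf-8") as fok, \
	 open("fail.json", "w", encoding="utf-8") as ffail:
	for line in fr:
		rel = json.loads(line)
		if rel["id"] in chrmention:
			rel["chrmention"] = chrmention[rel["id"]]
			fok.write(json.dumps(rel, ensure_ascii=False) + "\n")
		else:
			ffail.write(json.dumps(rel, ensure_ascii=False) + "\n")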

Next, the content entities are crawled and stored in JSON format. The fragment below shows the core of that spider.

		# Fragment of the entity spider's crawl logic. The surrounding method
		# definition and the open() call for f are omitted in the original;
		# requests, HTTPAdapter and WikientitiesItem are assumed to be
		# imported elsewhere in the spider module.
		entityList = []
		entityNumberList = []
		entityCount = 0
		# Each line of the entity file holds an entity name and, optionally,
		# an entity number separated by a space
		for line in f:
			entity = line.split(" ")[0]
			print(entity)
			if len(line.split(" ")) >= 2:
				entityNumber = line.split(" ")[1][0:-1]  # drop the trailing newline
			else:
				entityNumber = "999"  # placeholder number
			entityList.append(entity)
			entityNumberList.append(entityNumber)
			entityCount += 1

		# Build one wbsearchentities query per entity, searching in Chinese
		# for Chinese names and in English otherwise
		url_list = []
		for entity in entityList:
			language = "zh" if self.containChinese(entity) else "en"
			url_list.append("https://www.wikidata.org/w/api.php?action=wbsearchentities&search="
							+ entity + "&language=" + language + "&format=json")

		proxies = {'https': 'http://127.0.0.1:1080'}
		headers = {
			"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36",
			"accept-language": "zh-CN,zh;q=0.9,en;q=0.8",
			"keep_alive": "False"
		}

		jsonItemList = []
		count = 0
		for url in url_list:
			print(1.0 * count / entityCount)  # crude progress indicator
			httpRequest = requests.session()
			# Retry transient connection failures up to 30 times
			httpRequest.mount('https://', HTTPAdapter(max_retries=30))
			entityjson = httpRequest.get(url, headers=headers, proxies=proxies, verify=False).json()
			httpRequest.close()
			entityNumber = entityNumberList[count] if len(entityNumberList) > count else "999"
			entityOriginName = entityList[count] if len(entityList) > count else "NULL"
			# Only yield entities for which the search returned at least one match
			if len(entityjson['search']) != 0:
				tmp = WikientitiesItem()
				tmp['jsonItem'] = entityjson
				tmp['jsonNumber'] = entityNumber
				tmp['entityOriginName'] = entityOriginName
				yield tmp
				jsonItemList.append(tmp)
			count += 1
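
For reference, this is roughly what the spider consumes from the API; a quick standalone check (proxies omitted for brevity) shows the search array the code above indexes into:

import requests

# Standalone sanity check of the wbsearchentities response shape
resp = requests.get(
	"https://www.wikidata.org/w/api.php",
	params={"action": "wbsearchentities", "search": "RISC-V",
			"language": "en", "format": "json"},
	headers={"user-agent": "Mozilla/5.0"},
).json()

# Each hit carries an id (a Q-number), a url and usually a label;
# the spider stores the whole response in the item's jsonItem field
for hit in resp["search"]:
	print(hit["id"], hit.get("label"), hit.get("url"))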

Next, crawl the relation triples between entities and return them.

Wikidata is an open, general-domain knowledge base that contains a large number of entities and the relations between them. The figure below shows a Wikidata entity page.

(Figure: a Wikidata entity page; original image: https://raw.githubusercontent.com/CrisJk/SomePicture/master/blog_picture/wikidataPage.png)

As the figure shows, a Wikidata entity page contains a description of the entity together with the related entities and their corresponding relations.
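
The spider below extracts these statements from the rendered HTML page. For comparison, the same data is also available in structured form through the wbgetentities API; a minimal sketch (the entity id is a placeholder) could read:

import requests

# Fetch the structured statements ("claims") for one entity
entity_id = "Q42"  # placeholder id; substitute the entity of interest
resp = requests.get(
	"https://www.wikidata.org/w/api.php",
	params={"action": "wbgetentities", "ids": entity_id, "format": "json"},
).json()

# claims maps each property id (a P-number) to a list of statements; a
# statement whose mainsnak points at another item (a Q-number) is exactly
# one (entity1, relation, entity2) triple of the kind scraped from HTML below
claims = resp["entities"][entity_id]["claims"]
for prop, statements in claims.items():
	for st in statements:
		snak = st["mainsnak"]
		if snak.get("datavalue", {}).get("type") == "wikibase-entityid":
			print(entity_id, prop, snak["datavalue"]["value"]["id"])

The HTML-scraping spider used in this project is shown below.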

import os
import json
import scrapy
import requests
from requests.adapters import HTTPAdapter

from wikidataRelation.items import WikidatarelationItem  # import path assumed


class entityRelationSpider(scrapy.spiders.Spider):
	name = "entityRelation"
	allowed_domains = ["wikidata.org"]
	start_urls = [
		"http://www.wikidata.org/w/api.php?action=wbsearchentities&search=abc&language=en"
	]

	def parse(self, response):
		# Load the relations and their Chinese labels
		relationName = dict()
		filePath = os.path.abspath(os.path.join(os.getcwd(), ".."))
		# Collect the triples crawled so far, to avoid crawling them again
		alreadyGet = []
		if os.path.exists(os.path.join(filePath, "entity1_entity2.json")):
			with open(os.path.join(filePath, "entity1_entity2.json"), 'r') as fr:
				for line in fr:
					entityIds = json.loads(line)
					alreadyGet.append(entityIds['entity1'] + entityIds['relatedEntityId'])
		with open(filePath + "/wikidataRelation/relationResult.json", "r", encoding='utf-8') as fr:
			for line in fr.readlines():
				relationJson = json.loads(line)
				relation = relationJson['rmention']
				relationName[relation] = relationJson['chrmention']

		# Issue one request per entity page listed in readytoCrawl.json
		count = 0
		with open(filePath + "/wikidataRelation/readytoCrawl.json", "r", encoding='utf-8') as fr:
			for line in fr.readlines():
				count += 1
				print(1.0 * count / 33355)  # 33355 is the hard-coded total entity count
				entityJson = json.loads(line)
				link = "http:" + entityJson['entity']['url']
				entityName = entityJson['entityOriginName']
				entity = scrapy.Request(link, callback=self.parseEntity)
				entity.meta['entityName'] = entityName
				entity.meta['link'] = link
				entity.meta['alreadyGet'] = alreadyGet
				yield entity

	def parseEntity(self, response):
		print("=======================")
		entity1 = response.meta['entityName']
		alreadyGet = response.meta['alreadyGet']
		entityRelation = WikidatarelationItem()
		headers = {
			"user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36",
			"accept-language": "zh-CN,zh;q=0.9,en;q=0.8",
			"keep_alive": "False"
		}
		proxies = {'https': 'http://127.0.0.1:1080'}
		for section in response.xpath('//h2[contains(@class,"wb-section-heading")]//span/text()'):
			title = section.extract()
			flag = 0
			# Only the "Statements" section of the entity page holds the triples
			if title == "Statements":
				flag = 1
				for statement in response.xpath('.//div[@class="wikibase-statementgroupview"]'):
					relationItem = statement.xpath('.//div[@class="wikibase-statementlistview"]')
					relationName = statement.xpath('.//div[contains(@class,"wikibase-statementgroupview-property-label")]//a[contains(@title,"P")]/text()').extract()
					if len(relationName) > 0:
						relationName = relationName[0]
					else:
						continue
					# Every entity link in the statement group's main snaks yields one triple
					for relatedEntity in relationItem.xpath(
							'.//div[contains(@class,"wikibase-statementview-mainsnak")]'
							'//div[contains(@class,"wikibase-statementview-mainsnak")]'
							'//div[contains(@class,"wikibase-snakview-value-container")]'
							'//div[contains(@class,"wikibase-snakview-body")]'
							'//div[contains(@class,"wikibase-snakview-value")]'
							'//a[contains(@title,"Q")]'):
						entityId = relatedEntity.xpath('./@title').extract()
						if len(entityId) == 0:
							continue
						relatedEntityId = entityId[0]
						entityIdRelatedEntityId = entity1 + relatedEntityId
						# Skip triples already collected in an earlier run
						if entityIdRelatedEntityId in alreadyGet:
							print(entityIdRelatedEntityId)
							continue

						httpRequest = requests.session()
						httpRequest.mount('http://', HTTPAdapter(max_retries=30))
						# Resolve the related entity's label, preferring Chinese over English
						url = "http://www.wikidata.org/w/api.php?action=wbgetentities&ids=" + relatedEntityId + "&format=json"
						relatedEntityJson = httpRequest.get(url, headers=headers, proxies=proxies).json()
						httpRequest.close()
						labels = relatedEntityJson['entities'][relatedEntityId]['labels']
						if 'zh' in labels:
							entity2 = labels['zh']['value']
						elif 'en' in labels:
							entity2 = labels['en']['value']
						else:
							continue
						entityRelation['entity1'] = entity1
						entityRelation['relation'] = relationName
						entityRelation['entity2'] = entity2
						entityRelation['relatedEntityId'] = relatedEntityId
						yield entityRelation

			# Stop once the Statements section has been processed
			if flag:
				break
		print("\n========================")

