Extracting Google Scholar Results with Python (or R)

I'd like to use Python to scrape Google Scholar search results. I found two different scripts to do that: one is gscholar.py and the other is scholar.py (can that one be used as a Python library?).

Now, I should maybe say that I'm totally new to python, so sorry if I miss the obvious!

The problem is that when I use gscholar.py as explained in the README file, I get the following result:

query() takes at least 2 arguments (1 given).

Even when I specify another argument (e.g. gscholar.query("my query", allresults=True)), I get

query() takes at least 2 arguments (2 given).

This puzzles me. I also tried to specify the third possible argument (outformat=4, which is the BibTeX format), but this gives me a list of function errors. A colleague advised me to import BeautifulSoup before running the query, but that doesn't change the problem either.
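An error of the form "takes at least 2 arguments (2 given)" usually means the installed version of query() has a second *required positional* parameter (likely the output format), so supplying only a keyword argument that doesn't cover it still fails. A minimal sketch reproducing the mechanics with a stand-in function — the signature here is an assumption about that version of gscholar.py, not its documented API:

```python
def query(searchstr, outformat, allresults=False):
    """Stand-in with the calling convention the error message suggests:
    two required positional parameters plus one optional keyword."""
    return (searchstr, outformat, allresults)

# query("my query")                   raises TypeError: outformat is missing
# query("my query", allresults=True)  still raises TypeError: outformat unfilled
result = query("my query", 4)  # supplying both required arguments works
print(result)
```

If this is the cause, passing the output format positionally (e.g. `query("my query", 4)`) should make the error go away.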

To scrape Google Scholar results, you can use Python's requests, BeautifulSoup, and re libraries. The following simple example fetches the HTML of a Scholar search results page and extracts each result's title, authors, snippet, and link:

```python
import requests
from bs4 import BeautifulSoup
import re

# Search keywords
query = 'python web scraping'

# Build the query string
params = {'q': query}

# Request headers (Scholar tends to block requests without a browser User-Agent)
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/58.0.3029.110 Safari/537.3'}

# Send the GET request and fetch the response
response = requests.get('https://scholar.google.com/scholar',
                        params=params, headers=headers)

# Parse the HTML
soup = BeautifulSoup(response.text, 'html.parser')

# Extract each result entry
articles = soup.find_all('div', {'class': 'gs_ri'})
for article in articles:
    # Title
    title = article.find('h3', {'class': 'gs_rt'}).text.strip()
    # Author line (it also carries venue and year, separated by ' - ')
    authors = article.find('div', {'class': 'gs_a'}).text.strip()
    authors = re.sub(r'\xa0', '', authors)
    authors = re.split(' - ', authors)
    # Abstract / snippet
    abstract = article.find('div', {'class': 'gs_rs'}).text.strip()
    # Link (citation-only entries have no <a> tag, so guard against that)
    link_tag = article.find('h3', {'class': 'gs_rt'}).find('a')
    link = link_tag['href'] if link_tag else None
    # Print the result
    print('Title:', title)
    print('Authors:', authors)
    print('Abstract:', abstract)
    print('Link:', link)
    print('-------------------')
```

This prints the title, authors, snippet, and link for each result. You can adapt the code to extract more or less information as needed.
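Google Scholar pages through results with a `start` offset in the query string, stepping by the number of hits per page. A small helper that builds the GET parameters for a given page — the `q` and `start` parameter names match what Scholar URLs use, and 10 results per page is an assumption about the default page size:

```python
def scholar_params(query, page=0, per_page=10):
    """Build the GET parameters for one results page.

    page 0 -> no offset; page 1 -> start=10; page 2 -> start=20; and so on.
    """
    params = {'q': query}
    if page > 0:
        params['start'] = page * per_page
    return params

print(scholar_params('python web scraping', page=2))
# {'q': 'python web scraping', 'start': 20}
```

Passing the returned dict as `params=` to `requests.get` retrieves the corresponding page. Keep in mind that rapid repeated requests are likely to trigger Scholar's bot detection, so pause between pages.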
