Datawhale Data Analysis Study: Academic Frontier Trend Analysis, Task 1

Preface

This is the first check-in of the Datawhale data analysis study; everything here is based on the course material provided by the Datawhale organization. While working through it I ran into a few gaps in my knowledge, and after consulting references and reading answers from classmates in the study group, I have added some supplementary notes to the material.

Competition Background

The competition is framed as a data analysis task: contestants must carry out the required analyses on the public arXiv paper dataset. Unlike earlier data mining competitions, this one asks contestants not only to model the data but also to produce concrete visual analyses from it.

Task 1: Paper Count Statistics

1.1 Task Description

  • Topic: paper count statistics, i.e. counting the number of papers in each computer science subfield over the whole of 2019;
  • Content: understanding the problem, reading the data with Pandas, and computing the statistics;
  • Outcome: learning basic Pandas operations;
  • Reference material: the Datawhale open-source joyful-pandas project

1.2 Dataset Introduction

  • Source: dataset link

  • Fields:

    • id: the arXiv ID, which can be used to access the paper;
    • submitter: who submitted the paper;
    • authors: the paper's authors;
    • title: the paper's title;
    • comments: page count, number of figures, and other notes;
    • journal-ref: information on the journal the paper was published in;
    • doi: digital object identifier, https://www.doi.org;
    • report-no: report number;
    • categories: the paper's categories (tags) in the arXiv system;
    • license: the article's license;
    • abstract: the paper's abstract;
    • versions: the paper's versions;
    • authors_parsed: parsed author information.
  • Sample record:

{
    "id": "0704.0001",
    "submitter": "Pavel Nadolsky",
    "authors": "C. Bal\'azs, E. L. Berger, P. M. Nadolsky, C.-P. Yuan",
    "title": "Calculation of prompt diphoton production cross sections at Tevatron and LHC energies",
    "comments": "37 pages, 15 figures; published version",
    "journal-ref": "Phys.Rev.D76:013009,2007",
    "doi": "10.1103/PhysRevD.76.013009",
    "report-no": "ANL-HEP-PR-07-12",
    "categories": "hep-ph",
    "license": null,
    "abstract": "  A fully differential calculation in perturbative quantum chromodynamics is presented for the production of massive photon pairs at hadron colliders. All next-to-leading order perturbative contributions from quark-antiquark, gluon-(anti)quark, and gluon-gluon subprocesses are included, as well as all-orders resummation of initial-state gluon radiation valid at next-to-next-to leading logarithmic accuracy. The region of phase space is specified in which the calculation is most reliable. Good agreement is demonstrated with data from the Fermilab Tevatron, and predictions are made for more detailed tests with CDF and DO data. Predictions are shown for distributions of diphoton pairs produced at the energy of the Large Hadron Collider (LHC). Distributions of the diphoton pairs from the decay of a Higgs boson are contrasted with those produced from QCD processes at the LHC, showing that enhanced sensitivity to the signal can be obtained with judicious selection of events.",
    "versions": [
        {"version": "v1", "created": "Mon, 2 Apr 2007 19:18:42 GMT"},
        {"version": "v2", "created": "Tue, 24 Jul 2007 20:10:27 GMT"}
    ],
    "update_date": "2008-11-26",
    "authors_parsed": [
        ["Balázs", "C.", ""],
        ["Berger", "E. L.", ""],
        ["Nadolsky", "P. M.", ""],
        ["Yuan", "C. -P.", ""]
    ]
}

1.3 arXiv Paper Categories

From the arXiv website we can look up the category names and their descriptions.

Links: the Subject Classifications part of section 5.3 of https://arxiv.org/help/api/user-manual, or https://arxiv.org/category_taxonomy. A sample of the paper categories:

'astro-ph': 'Astrophysics',
'astro-ph.CO': 'Cosmology and Nongalactic Astrophysics',
'astro-ph.EP': 'Earth and Planetary Astrophysics',
'astro-ph.GA': 'Astrophysics of Galaxies',
'cs.AI': 'Artificial Intelligence',
'cs.AR': 'Hardware Architecture',
'cs.CC': 'Computational Complexity',
'cs.CE': 'Computational Engineering, Finance, and Science',
'cs.CV': 'Computer Vision and Pattern Recognition',
'cs.CY': 'Computers and Society',
'cs.DB': 'Databases',
'cs.DC': 'Distributed, Parallel, and Cluster Computing',
'cs.DL': 'Digital Libraries',
'cs.NA': 'Numerical Analysis',
'cs.NE': 'Neural and Evolutionary Computing',
'cs.NI': 'Networking and Internet Architecture',
'cs.OH': 'Other Computer Science',
'cs.OS': 'Operating Systems',

1.4 Overall Approach

1) Import and read the data

  • Convert the data from JSON to a DataFrame (file reading, DataFrame);

2) Data preprocessing

  • Find the distinct paper categories (the set() function, list comprehensions, nested loops);
  • Filter papers from 2019 onward (datetime conversion, year extraction, row filtering);
  • Scrape the paper category information from the data source website as groundwork for the analysis (web scraping, regular expressions);

3) Data analysis and visualization

  • Data analysis (merging DataFrames on a shared column, sorting)
  • Data visualization (matplotlib, pie chart)

1.5 Code Implementation and Walkthrough

1.5.1 Import Packages and Read the Raw Data

# import the required packages
import seaborn as sns  # plotting
from bs4 import BeautifulSoup  # scraping the arXiv category data
import re  # regular expressions, for matching string patterns
import requests  # network requests, for fetching web pages
import json  # for reading our data, which is in JSON format
import pandas as pd  # data processing and analysis
import matplotlib.pyplot as plt  # plotting

The package versions used here are as follows (Python 3.7.4):

  • seaborn:0.9.0
  • BeautifulSoup:4.8.0
  • requests:2.22.0
  • json:0.8.5
  • pandas:0.25.1
  • matplotlib:3.1.1

To check the versions actually installed:

import bs4
import matplotlib
print(sns.__version__)
print(bs4.__version__)
print(requests.__version__)
print(json.__version__)
print(pd.__version__)
print(matplotlib.__version__)

Note: to query the BeautifulSoup and matplotlib versions you must first run import bs4 and import matplotlib; the imports above bind only BeautifulSoup and plt, so bs4.__version__ and matplotlib.__version__ raise a NameError without them.

Next we read in the raw data.

# read in the data

data  = [] # initialize
# with ensures the file handle is closed automatically, even if an exception occurs while reading
with open("arxiv-metadata-oai-2019.json", 'r') as f: 
    for line in f: 
        data.append(json.loads(line))
        
data = pd.DataFrame(data) # convert the list to a DataFrame so we can analyze it with pandas
data.shape # show the shape of the data
Output: (170618, 14)

Here 170618 is the total number of records and 14 is the number of features, matching the 14 paper fields described in section 1.2.
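As an aside, pandas can also parse line-delimited JSON directly with read_json(..., lines=True); a minimal sketch on two made-up records in the same shape as the dataset:

```python
import io

import pandas as pd

# Two made-up records in the same line-delimited JSON format as the dataset.
sample = io.StringIO(
    '{"id": "hep-ph/0704001", "categories": "hep-ph"}\n'
    '{"id": "quant-ph/9904032", "categories": "math.CO cs.CG"}\n'
)

# lines=True parses one JSON object per line, equivalent to the json.loads loop above
df = pd.read_json(sample, lines=True)
print(df.shape)  # (2, 2)
```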

data.head() # show the first five rows
(Output: the first five rows of the DataFrame, with all 14 columns described in section 1.2; row 0, for example, is paper 0704.0297 with categories astro-ph and update_date 2019-08-19.)

1.5.2 Data Preprocessing

First, a rough look at the paper category statistics:

data["categories"].describe()
count     170618
unique     15592
top        cs.CV
freq        5559
Name: categories, dtype: object

This shows 170618 records with 15592 distinct category strings (a paper may carry several categories, so a paper labeled cs.AI cs.MM and one labeled cs.AI cs.OS count as different strings here; this is only a rough count). The most frequent category is cs.CV, which appears 5559 times.

Since some papers have more than one category, we next determine how many distinct individual categories appear in this dataset.

# all the distinct categories

unique_categories = set([i for l in [x.split(' ') for x in data["categories"]] for i in l])
len(unique_categories)
unique_categories

Here split separates each multi-category string on the space character into a list; the nested for clauses flatten these lists into individual categories, and set removes the duplicates, yielding every distinct paper category.1

172

{'acc-phys', 'adap-org', 'alg-geom', 'astro-ph', 'astro-ph.CO', 'astro-ph.EP', 'astro-ph.GA', 'astro-ph.HE', 'astro-ph.IM', 'astro-ph.SR', 'chao-dyn', 'chem-ph', 'cmp-lg', 'comp-gas', 'cond-mat', 'cond-mat.dis-nn', 'cond-mat.mes-hall', 'cond-mat.mtrl-sci', 'cond-mat.other', 'cond-mat.quant-gas', 'cond-mat.soft', 'cond-mat.stat-mech', 'cond-mat.str-el', 'cond-mat.supr-con', 'cs.AI', 'cs.AR', 'cs.CC', 'cs.CE', 'cs.CG', 'cs.CL', 'cs.CR', 'cs.CV', 'cs.CY', 'cs.DB','cs.DC', 'cs.DL', 'cs.DM', 'cs.DS', 'cs.ET', 'cs.FL', 'cs.GL', 'cs.GR', 'cs.GT', 'cs.HC', 'cs.IR', 'cs.IT', 'cs.LG', 'cs.LO', 'cs.MA', 'cs.MM', 'cs.MS', 'cs.NA', 'cs.NE', 'cs.NI', 'cs.OH', 'cs.OS', 'cs.PF', 'cs.PL', 'cs.RO', 'cs.SC', 'cs.SD', 'cs.SE', 'cs.SI', 'cs.SY', 'dg-ga', 'econ.EM', 'econ.GN', 'econ.TH', 'eess.AS', 'eess.IV', 'eess.SP', 'eess.SY', 'funct-an', 'gr-qc', 'hep-ex', 'hep-lat', 'hep-ph', 'hep-th', 'math-ph', 'math.AC', 'math.AG', 'math.AP', 'math.AT', 'math.CA', 'math.CO', 'math.CT', 'math.CV', 'math.DG', 'math.DS', 'math.FA', 'math.GM', 'math.GN', 'math.GR', 'math.GT', 'math.HO', 'math.IT', 'math.KT', 'math.LO', 'math.MG', 'math.MP', 'math.NA', 'math.NT', 'math.OA', 'math.OC', 'math.PR', 'math.QA', 'math.RA', 'math.RT', 'math.SG', 'math.SP', 'math.ST', 'mtrl-th', 'nlin.AO', 'nlin.CD', 'nlin.CG', 'nlin.PS', 'nlin.SI', 'nucl-ex', 'nucl-th', 'patt-sol', 'physics.acc-ph', 'physics.ao-ph', 'physics.app-ph', 'physics.atm-clus', 'physics.atom-ph', 'physics.bio-ph', 'physics.chem-ph',
 'physics.class-ph', 'physics.comp-ph', 'physics.data-an', 'physics.ed-ph', 'physics.flu-dyn', 'physics.gen-ph', 'physics.geo-ph', 'physics.hist-ph', 'physics.ins-det', 'physics.med-ph', 'physics.optics', 'physics.plasm-ph', 'physics.pop-ph', 'physics.soc-ph', 'physics.space-ph', 'q-alg', 'q-bio', 'q-bio.BM', 'q-bio.CB', 'q-bio.GN', 'q-bio.MN', 'q-bio.NC', 'q-bio.OT', 'q-bio.PE', 'q-bio.QM', 'q-bio.SC', 'q-bio.TO', 'q-fin.CP', 'q-fin.EC', 'q-fin.GN', 'q-fin.MF', 'q-fin.PM', 'q-fin.PR', 'q-fin.RM', 'q-fin.ST', 'q-fin.TR', 'quant-ph', 'solv-int', 'stat.AP', 'stat.CO', 'stat.ME', 'stat.ML', 'stat.OT', 'stat.TH', 'supr-con'}

The result shows 172 distinct categories, more than the list obtained directly from the Subject Classifications part of section 5.3 of https://arxiv.org/help/api/user-manual or from https://arxiv.org/category_taxonomy. In other words, the data contains some categories that no longer appear on the official site, a small detail worth noting. This does not affect the computer science papers we care about: they still fall into the following 40 categories, and the categories extracted from the data correspond one-to-one with those from the official site.

'cs.AI': 'Artificial Intelligence',
'cs.AR': 'Hardware Architecture',
'cs.CC': 'Computational Complexity',
'cs.CE': 'Computational Engineering, Finance, and Science',
'cs.CG': 'Computational Geometry',
'cs.CL': 'Computation and Language',
'cs.CR': 'Cryptography and Security',
'cs.CV': 'Computer Vision and Pattern Recognition',
'cs.CY': 'Computers and Society',
'cs.DB': 'Databases',
'cs.DC': 'Distributed, Parallel, and Cluster Computing',
'cs.DL': 'Digital Libraries',
'cs.DM': 'Discrete Mathematics',
'cs.DS': 'Data Structures and Algorithms',
'cs.ET': 'Emerging Technologies',
'cs.FL': 'Formal Languages and Automata Theory',
'cs.GL': 'General Literature',
'cs.GR': 'Graphics',
'cs.GT': 'Computer Science and Game Theory',
'cs.HC': 'Human-Computer Interaction',
'cs.IR': 'Information Retrieval',
'cs.IT': 'Information Theory',
'cs.LG': 'Machine Learning',
'cs.LO': 'Logic in Computer Science',
'cs.MA': 'Multiagent Systems',
'cs.MM': 'Multimedia',
'cs.MS': 'Mathematical Software',
'cs.NA': 'Numerical Analysis',
'cs.NE': 'Neural and Evolutionary Computing',
'cs.NI': 'Networking and Internet Architecture',
'cs.OH': 'Other Computer Science',
'cs.OS': 'Operating Systems',
'cs.PF': 'Performance',
'cs.PL': 'Programming Languages',
'cs.RO': 'Robotics',
'cs.SC': 'Symbolic Computation',
'cs.SD': 'Sound',
'cs.SE': 'Software Engineering',
'cs.SI': 'Social and Information Networks',
'cs.SY': 'Systems and Control',

Our task asks us to analyze papers from 2019 onward, so we first preprocess the time feature to obtain all papers from 2019 on:

data["year"] = pd.to_datetime(data["update_date"]).dt.year # convert update_date from a string such as "2019-02-20" to datetime and extract the year
del data["update_date"] # drop update_date; it has served its purpose
data = data[data["year"] >= 2019] # keep only the rows whose year is 2019 or later
# data.groupby(['categories','year']) # group by categories, then by year within each category
data.reset_index(drop=True, inplace=True) # renumber the index
data.columns # inspect the columns after the changes
Index(['id', 'submitter', 'authors', 'title', 'comments', 'journal-ref', 'doi',
       'report-no', 'categories', 'license', 'abstract', 'versions',
       'authors_parsed', 'year'],
      dtype='object')
data # inspect the result

(Output: the filtered DataFrame, 170618 rows × 14 columns; update_date has been replaced by the year column, and every remaining row has year >= 2019.)
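The year-extraction and filtering steps above can be checked on a tiny frame with hypothetical update_date values:

```python
import pandas as pd

# Toy frame with hypothetical update_date values
df = pd.DataFrame({"id": ["a", "b", "c"],
                   "update_date": ["2018-05-01", "2019-08-19", "2020-01-02"]})

df["year"] = pd.to_datetime(df["update_date"]).dt.year  # string -> datetime -> year
del df["update_date"]
df = df[df["year"] >= 2019]          # keep only 2019 and later
df.reset_index(drop=True, inplace=True)
print(df["id"].tolist())  # ['b', 'c']
```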

We now have all the papers from 2019 onward; next we pick out all the articles in the computer science field:

# scrape all the categories
website_url = requests.get('https://arxiv.org/category_taxonomy').text # fetch the page's HTML
soup = BeautifulSoup(website_url,'lxml') # parse it; the lxml parser is used for speed
root = soup.find('div',{'id':'category_taxonomy_list'}) # locate the tag that holds the taxonomy
tags = root.find_all(["h2","h3","h4","p"], recursive=True) # collect the relevant tags

# initialize the str and list variables
level_1_name = "" # level-1 group full name
level_2_name = "" # level-2 archive full name
level_2_code = "" # level-2 archive code
level_3_name = "" # level-3 category full name
level_3_code = "" # level-3 category code
level_1_names = []
level_2_codes = []
level_2_names = []
level_3_codes = []
level_3_names = []
level_3_notes = [] # list of category descriptions

# walk the tags and collect the taxonomy entries
for t in tags:
    if t.name =="h2":
        #print(t.text)
        level_1_name = t.text
        level_2_code = t.text
        level_2_name = t.text
    elif t.name == "h3":
        raw = t.text
        level_2_code = re.sub(r"(.*)\((.*)\)",r"\2",raw)
        # re.sub: pattern (.*)\((.*)\); replacement "\2" (the capture group inside the parentheses); input string raw
        level_2_name = re.sub(r"(.*)\((.*)\)",r"\1",raw)
    elif t.name == "h4":
        raw = t.text
        level_3_code = re.sub(r"(.*) \((.*)\)",r"\1",raw)
        level_3_name = re.sub(r"(.*) \((.*)\)",r"\2",raw)
    elif t.name == "p":
        notes = t.text
        level_1_names.append(level_1_name)
        level_2_names.append(level_2_name)
        level_2_codes.append(level_2_code)
        level_3_names.append(level_3_name)
        level_3_codes.append(level_3_code)
        level_3_notes.append(notes)

# build a DataFrame from the collected lists
df_taxonomy = pd.DataFrame({
    'group_name' : level_1_names,
    'archive_name' : level_2_names,
    'archive_id' : level_2_codes,
    'category_name' : level_3_names,
    'categories' : level_3_codes,
    'category_description': level_3_notes
    
})

# group by "group_name", then by "archive_name" within each group
df_taxonomy.groupby(["group_name","archive_name"])
df_taxonomy
(Output: df_taxonomy, 155 rows × 6 columns. Row 0, for example, has group_name "Computer Science", category_name "Artificial Intelligence", categories "cs.AI", along with its description; the final rows cover the Statistics group, ending with Statistics Theory / stat.TH.)


re.sub(r"(.*)\((.*)\)",r"\2",raw)

For how this regular expression works, see Appendix 3.

1.5.3 Data Analysis and Visualization

First, the distribution of paper counts across the top-level groups:

_df =data.merge(df_taxonomy,on = "categories", how="left").drop_duplicates(["id","group_name"]).groupby("group_name").agg({"id":"count"}).sort_values(by="id",ascending=False).reset_index()
_df

We use merge to join the two DataFrames on their shared "categories" column, drop duplicate (id, group_name) pairs, count the records per "group_name" (the count lands in the "id" column), and sort in descending order.
The result is:

   group_name                                      id
0  Physics                                      38379
1  Mathematics                                  24495
2  Computer Science                             18087
3  Statistics                                    1802
4  Electrical Engineering and Systems Science    1371
5  Quantitative Biology                           886
6  Quantitative Finance                           352
7  Economics                                      173

Next we visualize this result with a pie chart:

fig = plt.figure(figsize=(15,12))
explode = (0, 0, 0, 0.2, 0.3, 0.3, 0.2, 0.1) # pull the smaller slices out so their labels stay legible
plt.pie(_df["id"], labels=_df["group_name"], autopct='%1.2f%%', startangle=170, explode=explode)
plt.tight_layout()
plt.show()

(Output: a pie chart of the per-group paper counts.)

Next we count the 2019+ papers in each computer science subfield:

group_name="Computer Science"
cats = data.merge(df_taxonomy, on="categories").query("group_name == @group_name")
cats.groupby(["year","category_name"]).count().reset_index().pivot(index="category_name", columns="year",values="id")

Again we use merge to join the two DataFrames on their shared categories column, then filter with query; grouping and pivoting the counts gives the following result:

year                                             2019
category_name
Artificial Intelligence                           558
Computation and Language                         2153
Computational Complexity                          131
Computational Engineering, Finance, and Science   108
Computational Geometry                            199
Computer Science and Game Theory                  281
Computer Vision and Pattern Recognition          5559
Computers and Society                             346
Cryptography and Security                        1067
Data Structures and Algorithms                    711
Databases                                         282
Digital Libraries                                 125
Discrete Mathematics                               84
Distributed, Parallel, and Cluster Computing      715
Emerging Technologies                             101
Formal Languages and Automata Theory              152
General Literature                                  5
Graphics                                          116
Hardware Architecture                              95
Human-Computer Interaction                        420
Information Retrieval                             245
Logic in Computer Science                         470
Machine Learning                                  177
Mathematical Software                              27
Multiagent Systems                                 85
Multimedia                                         76
Networking and Internet Architecture              864
Neural and Evolutionary Computing                 235
Numerical Analysis                                 40
Operating Systems                                  36
Other Computer Science                             67
Performance                                        45
Programming Languages                             268
Robotics                                          917
Social and Information Networks                   202
Software Engineering                              659
Sound                                               7
Symbolic Computation                               44
Systems and Control                               415
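The groupby-then-pivot step can be illustrated on a hypothetical toy version of the merged cats frame:

```python
import pandas as pd

# Hypothetical toy version of the merged cats frame
cats = pd.DataFrame({"id": ["p1", "p2", "p3"],
                     "year": [2019, 2019, 2019],
                     "category_name": ["Robotics", "Robotics", "Databases"]})

# count rows per (year, category), then pivot categories into rows and years into columns
table = (cats.groupby(["year", "category_name"]).count().reset_index()
             .pivot(index="category_name", columns="year", values="id"))
print(table)  # one column (2019) with the per-category counts
```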

From the result we can see that Computer Vision and Pattern Recognition is by far the largest CS subfield by paper count, well ahead of the other CS subfields; in addition, Computation and Language, Cryptography and Security, and Robotics each have 2019 paper counts above or close to 1000, which matches intuition.

Appendix: Further Study / Q&A

Appendix 1: Converting between JSON and DataFrame with pandas

1. JSON to DataFrame

  1. use pandas' built-in read_json();
  2. use the json library's loads plus pandas' json_normalize();

2. DataFrame to JSON

  1. orient="columns";
  2. orient="split";
  3. orient="records";
  4. orient="index";
  5. orient="values";

See the reference material for details.
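The five DataFrame-to-JSON layouts listed above correspond to the orient parameter of DataFrame.to_json; a quick sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# each orient lays the same frame out differently
print(df.to_json(orient="columns"))  # {"a":{"0":1,"1":2},"b":{"0":3,"1":4}}
print(df.to_json(orient="split"))    # columns / index / data as separate keys
print(df.to_json(orient="records"))  # [{"a":1,"b":3},{"a":2,"b":4}]
print(df.to_json(orient="index"))    # one object per row, keyed by index label
print(df.to_json(orient="values"))   # [[1,3],[2,4]]
```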


Appendix 2: Understanding the distinct-category code

Q: how should the nesting in unique_categories = set([i for l in [x.split(' ') for x in data["categories"]] for i in l]) be read?

Let temp = [x.split(' ') for x in data["categories"]]. The nesting is then equivalent to:

  1. each element of data["categories"] is split on spaces, which corresponds to one paper's multiple categories; the result is a list of lists, where the outer list runs over papers and each inner list holds one paper's categories;
  2. for l in temp iterates over the papers;
  3. for i in l iterates over the categories of a single paper;

Thanks to Datawhale teaching assistant Yang Yiyuan and group member Lingling for answering this question.
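The equivalence described above can be run directly on a few made-up category strings:

```python
# Made-up category strings standing in for data["categories"]
categories = ["cs.AI cs.MM", "cs.AI", "hep-ph"]

# the one-line version
unique_categories = set([i for l in [x.split(' ') for x in categories] for i in l])

# the expanded, equivalent loops
temp = [x.split(' ') for x in categories]  # step 1: split each paper's category string
result = set()
for l in temp:                             # step 2: one list per paper
    for i in l:                            # step 3: one category per iteration
        result.add(i)

print(sorted(unique_categories))  # ['cs.AI', 'cs.MM', 'hep-ph']
```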


Appendix 3: Understanding the regular expression used on the scraped data

Q: how should the pattern (.*) \((.*)\) in re.sub(r"(.*) \((.*)\)",r"\1",raw) be read?

. matches any single character;
* lets the preceding element match zero or more times;
\( matches a literal "(".
In this pattern, the first (.*) captures everything before the parentheses, and \((.*)\) captures what is inside them.
For example:
with the original string Astrophysics(astro-ph),
re.sub(r"(.*)\((.*)\)",r"\2",raw) gives astro-ph,
and re.sub(r"(.*)\((.*)\)",r"\1",raw) gives Astrophysics.

Here:
"\1" refers to the part captured before the parentheses;
"\2" refers to the part captured inside the parentheses.

The same parsing can be done with split instead of a regular expression:

raw1 = t.text
raw1_list = raw1.split("(")
level_2_name = raw1_list[0].strip()
level_2_code = raw1_list[1].split(")")[0].strip()

Thanks to Datawhale teaching assistant Yang Yiyuan and group members Xiaoji and wberica for answering this question.
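Both substitutions can be verified directly on the example string:

```python
import re

raw = "Astrophysics(astro-ph)"

# "\2" keeps the part inside the parentheses, "\1" the part before them
print(re.sub(r"(.*)\((.*)\)", r"\2", raw))  # astro-ph
print(re.sub(r"(.*)\((.*)\)", r"\1", raw))  # Astrophysics
```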


  1. See Appendix 2 ↩︎
