Handling one-to-many rows from merge/join/concat (essentially deduplication, but keeping the rows that satisfy a specific condition)

import pandas as pd  # left's keys are unique, right's keys contain duplicates: a many-to-one merge behaves normally
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'], 'key2': ['K0', 'K1', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
result = pd.merge(left, right, how='right', on=['key1', 'key2'])  # merge on the two key columns
print(result)
key1 key2 A B C D
0 K0 K0 A0 B0 C0 D0
1 K1 K0 A2 B2 C1 D1
2 K1 K0 A2 B2 C2 D2
3 K2 K0 NaN NaN C3 D3

import pandas as pd  # left's keys have duplicates and right's keys have duplicates: a many-to-many merge produces far more rows than expected
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
result = pd.merge(left, right, how='right', on=['key1', 'key2'])  # merge on the two key columns
print(result)

  key1 key2    A    B   C   D
0   K0   K0   A0   B0  C0  D0
1   K1   K0   A1   B1  C1  D1
2   K1   K0   A2   B2  C1  D1   # left matched a second time against C1/D1
3   K1   K0   A1   B1  C2  D2
4   K1   K0   A2   B2  C2  D2   # left matched a second time against C2/D2
5   K2   K0  NaN  NaN  C3  D3

#### When left has duplicated keys, the one-to-many case appears: the merge emits every possible combination, so each duplicated key adds that many extra rows. All of those extra rows then have to be fixed up according to how the duplicates should really pair.

###### Once a one-to-many match appears, the same rows get duplicated many times over, which is usually not what you want. ###
a.txt
a 1
a 2
a 3
a 4

b.txt
a the
a her
a this

Desired result:
a 1 the
a 2 her
a 3 this
a 4 NaN
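One way to get exactly the desired pairing above is to number the duplicates within each key using `groupby().cumcount()` and merge on the key plus that sequence number. A minimal sketch; the column names `key`, `val`, `word` are placeholders standing in for the contents of a.txt and b.txt:

```python
import pandas as pd

# a.txt and b.txt from the example above
a = pd.DataFrame({"key": ["a", "a", "a", "a"], "val": [1, 2, 3, 4]})
b = pd.DataFrame({"key": ["a", "a", "a"], "word": ["the", "her", "this"]})

# number the duplicate keys 0, 1, 2, ... within each group
a["seq"] = a.groupby("key").cumcount()
b["seq"] = b.groupby("key").cumcount()

# merging on (key, seq) pairs the i-th 'a' row with the i-th 'b' row;
# how="left" leaves NaN where b runs out of rows
result = a.merge(b, on=["key", "seq"], how="left").drop(columns="seq")
print(result)
```

This reproduces the `a 1 the` / `a 2 her` / `a 3 this` / `a 4 NaN` output without any duplicate explosion.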

### The essence is deduplication while keeping specific rows!

#### One fix is to pull out all the duplicated rows and handle them separately, using groupby (collapsing on the key columns), because identical keys like the repeated 'a' cannot otherwise be told apart; real data looks the same way:

pd.concat([df.drop_duplicates(["", ""]), df.drop_duplicates(keep=False)]).drop_duplicates(keep=False)

In fact, with the AND-style condition below this is unnecessary: first drop every row whose key column (e.g. col[0] == "a") is duplicated, keeping none of them, then use concat to add the specifically selected rows back in.
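A runnable sketch of the isolate-the-duplicates idea above; the frame `df`, the key column `key`, and the keep-condition `val == 3` are all illustrative:

```python
import pandas as pd

# toy frame: key 'a' is duplicated, key 'b' is not
df = pd.DataFrame({"key": ["a", "a", "a", "b"], "val": [1, 2, 3, 4]})

# split into duplicated and unique rows on the key column
dup_mask = df.duplicated("key", keep=False)
dupes, uniques = df[dup_mask], df[~dup_mask]

# handle the duplicated block separately, e.g. keep only the row with
# val == 3, then concat the pieces back together
kept = dupes[dupes["val"] == 3]
result = pd.concat([uniques, kept]).sort_index()
print(result)
```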

## A second approach is to deduplicate first and keep the row you want. Say we want to keep only the row `a 3`: select it explicitly with a combined condition, roughly df[(second column == 3) & (first column is duplicated)] (pseudocode).

## A third approach is to loop over the rows as tuples (e.g. with itertuples) and handle them one by one.

### A fourth approach is to use regular expressions.

lc.sort_values(["loan_amnt"])  # DataFrame.sort was removed from pandas; sort_values is the current API

lc.sort_values(["loan_amnt"], ascending=True)
lc.sort_values(["loan_amnt", "int_rate"], ascending=False)

lc.loc[(lc["grade"] == "B") & (lc["loan_amnt"] > 5000), ["member_id", "term", "loan_amnt", "grade", "sub_grade", "int_rate"]].head()

lc.loc[(lc["grade"] != "B") & (lc["loan_status"] != "Charged Off"), ["member_id", "term", "loan_amnt", "grade", "sub_grade", "loan_status"]].head()
lc.loc[(lc["grade"] == "B") | (lc["loan_amnt"] > 5000), ["member_id", "term", "loan_amnt", "grade", "sub_grade", "int_rate"]].head()
lc.loc[(lc["grade"] == "B") & (lc["loan_amnt"] > 5000), ["member_id", "term", "loan_amnt", "grade", "sub_grade", "int_rate"]].sort_values(["loan_amnt"])
lc.loc[lc["grade"] == "B"].loan_amnt.count()  # count the matching rows

lc.loc[lc["grade"] == "B"].loan_amnt.sum()  # loan_amnt is a column, so .sum() totals it

Example code:

d1 = data[(data['ENTITY'] == 'D:\\') & (data['TARGET_ID'] == 184)]  # note the escaped trailing backslash

To select the rows where one column equals a specific value:

dataframe[dataframe["column"] == "value"]
To select the rows where two columns each equal specific values:

search_se = dataframe.loc[dataframe["col1"] == "value1", "col2"] == "value2"

Get the index of the matching rows:

search_index = search_se[search_se == True].index

Get the filtered dataframe:

target_dataframe = dataframe.loc[search_index]

2. Drop/select the columns where a given row contains a particular value

# drop/select the columns whose first row contains the value 3
cols = [x for i, x in enumerate(df2.columns) if df2.iat[0, i] == 3]
# enumerate walks row 0 and collects the columns containing 3 into cols
print(cols)

# df2 = df2[cols]             # select the columns containing the value
df2 = df2.drop(cols, axis=1)  # drop the columns containing the value

# drop/select the rows where a given column contains a particular value
# df1 = df1[df1['A'].isin([1])]   # select the rows where column A contains 1

df1 = df1[~df1['A'].isin([1])]
# ~ negates the mask, selecting the rows that do NOT contain 1
print(df1)

Suppose we want to drop the rows of w1 whose date column is greater than 200 and save the result as w2.

Start with w2 = w1.drop()
Add the rows: w2 = w1.drop(w1[].index)
Add the condition: w2 = w1.drop(w1[w1.date > 200].index)
Done.

Alternatively, to drop the rows whose date is after 2020-06-15:

Start with d2 = d1[]
Filter the matching rows in parentheses: d2 = d1[(d1.date > '2020-06-15')]
Negate the mask with ~ to drop them: d2 = d1[~(d1.date > '2020-06-15')]
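The two step-by-step recipes above give the same result; a runnable sketch with an illustrative frame (`w1` and a numeric `date` column are placeholders):

```python
import pandas as pd

w1 = pd.DataFrame({"date": [100, 150, 250, 300], "val": ["a", "b", "c", "d"]})

# variant 1: drop the matching rows by their index
w2 = w1.drop(w1[w1.date > 200].index)

# variant 2: keep the complement via a negated boolean mask
w3 = w1[~(w1.date > 200)]

print(w2.equals(w3))  # True
```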

join() is a convenient method for combining the columns of two possibly differently-indexed DataFrames into a single result DataFrame.

join concatenates columns; it is primarily an index-based merge.

As long as the two tables have no overlapping column names, it works with no extra arguments.
If the two tables share column names (i.e. a column exists in both table a and table b), the lsuffix and rsuffix parameters must be given.
The default is a left outer join, how='left'.

import pandas as pd  # left's keys have duplicates and right's keys have duplicates: a plain merge would multiply the rows
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
right.join(left, lsuffix='_right', rsuffix='_left')  # pairs rows by index, which solves the problem nicely

result = right.join(left.set_index(['key1', 'key2']), on=['key1', 'key2'])  # for on= the other frame must be indexed by the keys
print(result)

left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'], 'key2': ['K0', 'K1', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
result = right.join(left.set_index(['key1', 'key2']), on=['key1', 'key2'])  # same fix: index the other frame by the keys
print(result)

import pandas as pd  # left's keys have duplicates and right's keys have duplicates
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})

result = pd.concat([left, right], keys=['x', 'y'], axis=1)  # label each piece
print(result)

     x                  y
  key1 key2   A   B  key1 key2   C   D
0   K0   K0  A0  B0    K0   K0  C0  D0
1   K1   K0  A1  B1    K1   K0  C1  D1
2   K1   K0  A2  B2    K1   K0  C2  D2
3   K2   K1  A3  B3    K2   K0  C3  D3

import pandas as pd  # same duplicated-key frames as above
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
right.join(left, lsuffix='_right', rsuffix='_left')
Out[234]:
  key1_right key2_right   C   D key1_left key2_left   A   B
0         K0         K0  C0  D0        K0        K0  A0  B0
1         K1         K0  C1  D1        K1        K0  A1  B1
2         K1         K0  C2  D2        K1        K0  A2  B2
3         K2         K0  C3  D3        K2        K1  A3  B3

import pandas as pd  # same duplicated-key frames as above
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
right.set_index(['key1', 'key2']).join(left.set_index(['key1', 'key2']))
Out[235]:
            C   D    A    B
key1 key2
K0   K0    C0  D0   A0   B0
K1   K0    C1  D1   A1   B1
     K0    C1  D1   A2   B2
     K0    C2  D2   A1   B1
     K0    C2  D2   A2   B2
K2   K0    C3  D3  NaN  NaN

import pandas as pd  # same duplicated-key frames as above
left = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'], 'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'], 'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'], 'D': ['D0', 'D1', 'D2', 'D3']})
right.join(left.set_index(['key1', 'key2']), on=['key1', 'key2'])
Out[236]:
  key1 key2   C   D    A    B
0   K0   K0  C0  D0   A0   B0
1   K1   K0  C1  D1   A1   B1
1   K1   K0  C1  D1   A2   B2
2   K1   K0  C2  D2   A1   B1
2   K1   K0  C2  D2   A2   B2
3   K2   K0  C3  D3  NaN  NaN

Dropping DataFrame rows on multiple conditions:

data = data.drop(index=data[(data["公司缩写"] == row[1]) & (data["发布时间"] == row[2]) & (data["文章综合情感"] == row[3])].index)

for row in merge_data12.itertuples():  # iterate over every row
    a = row[1].split(",")
    if len(a) > 1:
        for i in a:  # add each split-out company back as its own row
            merge_data12 = merge_data12.append({'公司缩写': i, '发布时间': row[2], '文章综合情感': row[3]}, ignore_index=True)
        # drop the original combined row (assign the result back: drop is not in-place)
        merge_data12 = merge_data12.drop(index=merge_data12[(merge_data12["公司缩写"] == row[1]) & (merge_data12["发布时间"] == row[2]) & (merge_data12["文章综合情感"] == row[3])].index)
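The same split-one-row-into-many idea can be sketched without row-by-row append, using `str.split` plus `DataFrame.explode` (available since pandas 0.25). The toy data below stands in for merge_data12:

```python
import pandas as pd

# toy version of merge_data12: the first row bundles two companies
df = pd.DataFrame({
    "公司缩写": ["AAA,BBB", "CCC"],
    "发布时间": ["2020-01-01", "2020-01-02"],
    "文章综合情感": [0.8, 0.3],
})

# split the comma-separated field into lists, then explode one element
# per row; the other columns are repeated automatically
df["公司缩写"] = df["公司缩写"].str.split(",")
df = df.explode("公司缩写").reset_index(drop=True)
print(df)
```

This avoids both the deprecated `append` pattern and the need to drop the original combined rows afterwards.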

Filtering and sorting data with Pandas: https://blog.csdn.net/butterfly1107/article/details/82756902?biz_id=102&utm_term=%E6%A0%B9%E6%8D%AE%E4%B8%A4%E5%88%97%E7%9A%84%E6%9D%A1%E4%BB%B6%E9%80%89%E6%8B%A9%E8%A1%8C&utm_medium=distribute.pc_search_result.none-task-blog-2allsobaiduweb~default-0-82756902&spm=1018.2118.3001.4187
