[Spark][Python] A small Spark join example

[training@localhost ~]$ hdfs dfs -cat people.json

{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}
[training@localhost ~]$

[training@localhost ~]$ hdfs dfs -cat pcodes.json

{"pcode":"10036","city":"New York","state":"NY"}
{"pcode:"87501","city":"Santa Fe","state":"NM"}
{"pcode":"94304","city":"Palo Alto","state":"CA"}
{"pcode":"94104","city":"San Francisco","state":"CA"}

from pyspark.sql import HiveContext   # not needed inside the pyspark shell, where sqlContext is pre-built

# One context is enough for both files
sqlContext = HiveContext(sc)

# Read each JSON Lines file into a DataFrame
peopleDF = sqlContext.read.json("people.json")
pcodesDF = sqlContext.read.json("pcodes.json")
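The join output further down has two extra columns that come straight from typos in the data: peopleDF ends up with a pcoe column because Carla's record spells the key as "pcoe", and pcodesDF ends up with a _corrupt_record column because of the malformed 87501 line. A quick way to see this, not part of the original session but just standard DataFrame calls, is to print both schemas and pull out the record that failed to parse:

# peopleDF picks up a stray pcoe column from Carla's misspelled key
peopleDF.printSchema()

# pcodesDF picks up a _corrupt_record column for the malformed 87501 line
pcodesDF.printSchema()

# The unparsable line is kept whole in _corrupt_record; its other fields are null
pcodesDF.where("_corrupt_record is not null").show()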

# Inner join (the default) on the shared pcode column
mydf001 = peopleDF.join(pcodesDF, "pcode")

mydf001.limit(5).show()

+-----+----+-------+----+---------------+-------------+-----+
|pcode| age|   name|pcoe|_corrupt_record|         city|state|
+-----+----+-------+----+---------------+-------------+-----+
|94304|null|  Alice|null|           null|    Palo Alto|   CA|
|94304|  30|Brayden|null|           null|    Palo Alto|   CA|
|94104|null|Etienne|null|           null|San Francisco|   CA|
+-----+----+-------+----+---------------+-------------+-----+
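
Only three of the five people survive the default inner join: Carla's postal code sits under the misspelled pcoe key, Diana has no pcode at all, and the malformed 87501 record never becomes a usable row on the pcodes side. To keep the unmatched people as well, a left outer join works; this is a small variant of the example above, assuming a Spark version whose DataFrame.join() accepts the join type as a third argument alongside the column name (1.6-era PySpark does):

# Keep every person; city and state stay null where no postal code matches
mydf002 = peopleDF.join(pcodesDF, "pcode", "left_outer")
mydf002.select("name", "pcode", "city", "state").show()

Carla and Diana then come back with null pcode, city and state, which makes the dropped rows easy to spot.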





This article is reposted from the cnblogs blog 健哥的数据花园. Original link: http://www.cnblogs.com/gaojian/p/7630003.html. Please contact the original author before reposting.
