Big Data - Playing with Data - Spark - RDD Programming Basics - Data Read/Write (Python Edition)

1. Reading and Writing Local Files
Reading a local file

>>> textFile = sc.textFile("file:///home/hadoop/temp/word.txt")
>>> textFile.first()
'Hadoop is good'                                                                
>>> 
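
Once the file is loaded as an RDD, the usual transformations apply. As a minimal sketch (assuming word.txt holds space-separated English words), a word count over the file looks like this:

>>> wordCount = textFile.flatMap(lambda line: line.split(" ")) \
...                     .map(lambda word: (word, 1)) \
...                     .reduceByKey(lambda a, b: a + b)
>>> wordCount.collect()   # returns a list of (word, count) pairs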

Writing to a local file

>>> textFile = sc.textFile("file:///home/hadoop/temp/word.txt")
>>> textFile.saveAsTextFile("file:///home/hadoop/temp/word_bak")
>>>

Output:

[root@hadoop1 word_bak]# pwd
/home/hadoop/temp/word_bak
[root@hadoop1 word_bak]# ls
part-00000  part-00001  _SUCCESS
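
saveAsTextFile writes one part-XXXXX file per RDD partition, which is why two part files appear above. If a single output file is preferred, repartition the RDD down to one partition before saving; a quick sketch (word_bak2 is just an example path):

>>> textFile.repartition(1).saveAsTextFile("file:///home/hadoop/temp/word_bak2")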

2. Reading and Writing Data on HDFS

[root@hadoop1 ~]# pyspark --master spark://hadoop1:7077
>>> textFile = sc.textFile("hdfs://192.168.80.2:9000/tmp/word.txt")
>>> textFile.saveAsTextFile("hdfs://192.168.80.2:9000/tmp/word_2")

Output:

[root@hadoop1 temp]# hadoop fs -ls /tmp/word_2
Found 3 items
-rw-r--r--   4 root supergroup          0 2022-01-08 19:23 /tmp/word_2/_SUCCESS
-rw-r--r--   4 root supergroup         29 2022-01-08 19:23 /tmp/word_2/part-00000
-rw-r--r--   4 root supergroup         16 2022-01-08 19:23 /tmp/word_2/part-00001
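
Note that saveAsTextFile produced a directory, not a single file. To read the data back, point sc.textFile at the whole directory and Spark loads every part file inside it:

>>> textFile2 = sc.textFile("hdfs://192.168.80.2:9000/tmp/word_2")
>>> textFile2.first()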

3. Reading and Writing HBase Data
HBase is a highly reliable, high-performance, column-oriented, scalable distributed database.
Make sure HDFS, ZooKeeper, and HBase are all running (environment variables already configured):

[root@hadoop1 temp]# zkServer.sh start
[root@hadoop1 temp]# start-hbase.sh 
[root@hadoop1 ~]# hbase shell

Create a table in HBase and insert some rows:

hbase(main):010:0> create 'student','info'
0 row(s) in 2.4010 seconds

=> Hbase::Table - student
hbase(main):011:0> put 'student','1','info:name','zhangsan'
0 row(s) in 0.0980 seconds

hbase(main):012:0> put 'student','1','info:gender','F'
0 row(s) in 0.0480 seconds


hbase(main):013:0> put 'student','1','info:age','23'
0 row(s) in 0.0370 seconds

hbase(main):014:0> get 'student','1'
COLUMN                                      CELL                                                                                                                           
 info:age                                   timestamp=1641643577275, value=23                                                                                              
 info:gender                                timestamp=1641643546974, value=F                                                                                               
 info:name                                  timestamp=1641643518594, value=zhangsan                                                                                        
1 row(s) in 0.1170 seconds

hbase(main):015:0> 

Configuring Spark
Copy the required jar packages from HBase's lib directory into Spark:

[root@hadoop1 jars]# mkdir hbase
[root@hadoop1 jars]# cd hbase
[root@hadoop1 hbase]# pwd
/home/hadoop/spark/jars/hbase
[root@hadoop1 hbase]# cp /home/hadoop/hbase/lib/hbase*.jar ./
[root@hadoop1 hbase]# cp /home/hadoop/hbase/lib/guava-12.0.1.jar ./
[root@hadoop1 hbase]# cp /home/hadoop/hbase/lib/htrace-core-3.1.0-incubating.jar ./
[root@hadoop1 hbase]# cp /home/hadoop/hbase/lib/protobuf-java-2.5.0.jar ./

Edit spark-env.sh under spark/conf:

[root@hadoop1 conf]# vi spark-env.sh
export SPARK_DIST_CLASSPATH=$(/home/hadoop/apps/hadoop-2.10.1/bin/hadoop classpath):$(/home/hadoop/hbase/bin/hbase classpath):/home/hadoop/spark/jars/hbase/*

Reading data from HBase

from pyspark import SparkContext, SparkConf

conf = SparkConf().setMaster("local").setAppName("ReadHBase")
sc = SparkContext(conf=conf)
print('SparkContext created')
host = 'localhost'   # ZooKeeper quorum; change to your ZooKeeper host, e.g. hadoop1
table = 'student'
conf = {"hbase.zookeeper.quorum": host, "hbase.mapreduce.inputtable": table}
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"
hbase_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)
hbase_rdd.cache()              # cache before counting so the table is scanned only once
count = hbase_rdd.count()
output = hbase_rdd.collect()
for (k, v) in output:
    print(k, v)
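
Two caveats. First, the pythonconverters classes above ship with Spark's examples code rather than Spark core, so the spark-examples jar (version matching your Spark build) may also need to be placed under spark/jars/hbase. Second, HBaseResultToStringConverter typically renders each row's cells as newline-separated JSON strings, so the collected values can be unpacked like this (a sketch, assuming that output format):

import json

for (rowkey, cells) in output:
    for cell in cells.split('\n'):
        data = json.loads(cell)   # keys include columnFamily, qualifier, value
        print(rowkey, data['columnFamily'], data['qualifier'], data['value'])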

Writing data to HBase

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WriteHBase").getOrCreate()
sc = spark.sparkContext   # the write API lives on the underlying SparkContext
print('Spark session created')
host = 'localhost'        # ZooKeeper quorum; change to your ZooKeeper host, e.g. hadoop1
table = 'student'
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
conf = {"hbase.zookeeper.quorum": host,
        "hbase.mapred.outputtable": table,
        "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
        "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

rawData = ['3,info,name,Rongcheng', '4,info,name,Guanhua']
# each record becomes (row key, [row key, column family, column name, value])
print('Writing data')
sc.parallelize(rawData) \
  .map(lambda x: (x[0], x.split(','))) \
  .saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)
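
Note that x[0] takes the first character of the record, which only works as a row key here because the keys ('3', '4') are single characters; for longer row keys, use x.split(',')[0] instead. The pair handed to StringListToPutConverter is shaped like this:

x = '3,info,name,Rongcheng'
print((x[0], x.split(',')))   # ('3', ['3', 'info', 'name', 'Rongcheng'])

After the job finishes, the new rows can be checked with get 'student','3' in the hbase shell, or by re-running the read script above.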