Spark Usage

1. Start HDFS and YARN
start-dfs.sh
start-yarn.sh
--------------------------------------------
(base) [root@hadoop1 ~]# jps
17252 Jps
(base) [root@hadoop1 ~]# start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /opt/hadoop/hadoop-2.6.5/logs/hadoop-root-namenode-hadoop1.out
hadoop2: starting datanode, logging to /opt/hadoop/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-hadoop1.out
(base) [root@hadoop1 ~]# jps
17536 NameNode
17907 SecondaryNameNode
19065 Jps
(base) [root@hadoop1 ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.6.5/logs/yarn-root-resourcemanager-hadoop1.out
hadoop2: starting nodemanager, logging to /opt/hadoop/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop2.out
(base) [root@hadoop1 ~]# jps
17536 NameNode
19570 Jps
17907 SecondaryNameNode
19209 ResourceManager
-----------------------------------
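Beyond jps, the cluster can be sanity-checked with the standard Hadoop report commands; this is an optional verification sketch, and the output will differ per cluster:

hdfs dfsadmin -report    # lists live DataNodes (hadoop2 should be reported)
yarn node -list          # lists NodeManagers registered with the ResourceManager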
2. Start the Hive metastore
[root@hadoop1 ~]# cd /opt/hive/apache-hive-1.2.1-bin/
[root@hadoop1 apache-hive-1.2.1-bin]# ./bin/hive --service metastore
---------------------------------------
(base) [root@hadoop1 ~]# cd /opt/hive/apache-hive-1.2.1-bin/
(base) [root@hadoop1 apache-hive-1.2.1-bin]# ./bin/hive --service metastore
ls: cannot access /opt/spark/spark-2.2.0-bin-hadoop2.6/lib/spark-assembly-*.jar: No such file or directory
Starting Hive Metastore Server

(base) [root@hadoop1 ~]# jps
17536 NameNode
17907 SecondaryNameNode
20483 RunJar
21095 Jps
19209 ResourceManager
-------------------------------------------------------------------
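The "ls: cannot access ... spark-assembly-*.jar" line is only a warning: Hive 1.2.1's launcher script looks for the Spark 1.x assembly jar, which Spark 2.x no longer ships, and the metastore starts regardless. Because hive --service metastore runs in the foreground and ties up the terminal, one option is to background it with nohup (a sketch; the log path is an arbitrary choice):

cd /opt/hive/apache-hive-1.2.1-bin
nohup ./bin/hive --service metastore > /tmp/metastore.log 2>&1 &
jps    # the metastore shows up as a RunJar process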
3. Start the Spark Thrift Server and connect with beeline (replace the IP with your own server's address)
[root@hadoop1 ~]# cd /opt/spark/spark-2.2.0-bin-hadoop2.6/
[root@hadoop1 spark-2.2.0-bin-hadoop2.6]# ./sbin/start-thriftserver.sh
[root@hadoop1 spark-2.2.0-bin-hadoop2.6]# ./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://192.168.58.111:10000
Connecting to jdbc:hive2://192.168.58.111:10000
Enter username for jdbc:hive2://192.168.58.111:10000: root
Enter password for jdbc:hive2://192.168.58.111:10000: ******
---------------------------------------------
(base) [root@hadoop1 ~]# cd /opt/spark/spark-2.2.0-bin-hadoop2.6/
(base) [root@hadoop1 spark-2.2.0-bin-hadoop2.6]# ./sbin/start-thriftserver.sh
starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to /opt/spark/spark-2.2.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-hadoop1.out
(base) [root@hadoop1 spark-2.2.0-bin-hadoop2.6]# jps
17536 NameNode
21856 Jps
17907 SecondaryNameNode
20483 RunJar
21683 SparkSubmit
19209 ResourceManager
(base) [root@hadoop1 spark-2.2.0-bin-hadoop2.6]# ./bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://10.196.83.15:10000
Connecting to jdbc:hive2://10.196.83.15:10000
Enter username for jdbc:hive2://10.196.83.15:10000: root
Enter password for jdbc:hive2://10.196.83.15:10000: ********
20/06/28 16:38:04 INFO jdbc.Utils: Supplied authorities: 10.196.83.15:10000
20/06/28 16:38:04 INFO jdbc.Utils: Resolved authority: 10.196.83.15:10000
20/06/28 16:38:04 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://10.196.83.15:10000
Connected to: Spark SQL (version 2.2.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://10.196.83.15:10000>
(base) [root@hadoop1 sbin]# jps
17536 NameNode
17907 SecondaryNameNode
20483 RunJar
21683 SparkSubmit
22552 BeeLine
19209 ResourceManager
23129 Jps
-----------------------------------------------------------------------
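beeline can also connect non-interactively, passing the JDBC URL, user, and password on the command line; the sketch below assumes the same host and port as above (replace <password> with the real one) and uses -e to run a quick smoke-test query:

./bin/beeline -u jdbc:hive2://10.196.83.15:10000 -n root -p <password> -e "show databases;"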


4. Shut everything down
Press Ctrl+C to exit ./bin/beeline, then stop the Thrift Server:
[root@hadoop1 sbin]# ./stop-thriftserver.sh
stopping org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

In the metastore window, press Ctrl+C to stop the Hive Metastore Server:
Starting Hive Metastore Server
^C[root@hadoop1 bin]#

[root@hadoop1 bin]# stop-yarn.sh

[root@hadoop1 bin]# stop-dfs.sh
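If the metastore was started in the background instead of a foreground window, there is no stop script for it; the usual approach is to find its RunJar PID with jps and kill it (a sketch using the PID from the jps output above; substitute your own):

jps | grep RunJar    # locate the Hive metastore process
kill 20483           # PID of RunJar from the jps output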





Reference
Step 2: Start the beeline client
In a new (cloned) terminal window, run the beeline command to enter the beeline client, then type:

beeline> !connect jdbc:hive2://hadoop05:10000

The !connect jdbc:hive2:// prefix is fixed; what follows is the node the Hive service runs on (here, hive is installed on hadoop05) and the port, 10000.

Enter the username for the hadoop05 node:

Enter username for jdbc:hive2://hadoop05:10000: hadoop

Enter the password for the hadoop05 node:

Enter password for jdbc:hive2://hadoop05:10000: hadoop

2. Hadoop HDFS

Reference:
https://blog.csdn.net/qq_33598343/article/details/83040864

[root@hadoop1 ~]# hdfs dfs -ls /
Found 2 items
drwxrwxr-x   - root supergroup          0 2020-02-29 21:59 /tmp
drwxr-xr-x   - root supergroup          0 2020-03-01 14:00 /user
[root@hadoop1 ~]# hdfs dfs -mkdir /hd/
[root@hadoop1 ~]# hdfs dfs -chmod g+w /hd
[root@hadoop1 ~]# hdfs dfs -put /opt/wc.txt /hd
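To confirm the upload, list the target directory and print the file back (a quick check using the same path as above):

hdfs dfs -ls /hd          # should now contain wc.txt
hdfs dfs -cat /hd/wc.txt  # print the uploaded file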


