Before we start
- Spark3.2.0
- Flink1.13.6
- Hadoop3.1.4
- jdk1.8
- Sqoop1.4.6
- MySQL5.7
- Hive3.1.2
- Kafka0.11
- Flume1.9.0
- Zookeeper3.4.6
- Hbase2.4
- Redis6.2.0
- Dlink0.7.3
- Yarn3.1.4
- DolphinScheduler2.0.6
Hadoop cluster nodes: hadoop11, hadoop12, hadoop13
For any application or service whose shutdown command is not given explicitly in this article, stop it with kill <PID>.
1. Zookeeper
Default client port: 2181
**Note:** the following commands were tested on Zookeeper 3.4.6.
1.1 Starting Zookeeper
[root@hadoop10 ~]# zkServer.sh start
1.2 Entering the Zookeeper client
[root@hadoop10 ~]# zkCli.sh
Connecting to localhost:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
1.2.1 Inspecting the connection in zkCli
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, dolphinscheduler, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1]
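zkCli.sh can also point at a specific server and read individual znodes. A minimal sketch (hadoop11 is just one of this cluster's nodes, chosen for illustration):
[root@hadoop10 ~]# zkCli.sh -server hadoop11:2181
[zk: hadoop11:2181(CONNECTED) 0] ls /brokers/ids
[zk: hadoop11:2181(CONNECTED) 1] quit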
1.3 Checking server status
[root@hadoop10 ~]# zkServer.sh status
JMX enabled by default
Using config: /opt/installs/zookeeper3.4.6/zoo.cfg
Mode: standalone
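As another liveness check, ZooKeeper 3.4 answers the four-letter-word commands on the client port. A sketch, assuming nc (netcat) is installed:
[root@hadoop10 ~]# echo ruok | nc hadoop10 2181
imok
[root@hadoop10 ~]# echo stat | nc hadoop10 2181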
1.4 Stopping Zookeeper
[root@hadoop10 ~]# zkServer.sh stop
JMX enabled by default
Using config: /opt/installs/zookeeper3.4.6/zoo.cfg
Stopping zookeeper ... STOPPED
2. Kafka
Broker port: 9092
**Note:** the following commands target Kafka 0.11; they differ slightly between versions, which affects how they are used.
2.1 Starting Kafka
[root@hadoop10 ~]# kafka-server-start.sh -daemon /opt/installs/kafka0.11/config/server.properties
2.2 Stopping Kafka
Change into the Kafka directory and run the script from bin.
[root@hadoop10 ~]# cd /opt/installs/kafka0.11/
[root@hadoop10 kafka0.11]# bin/kafka-server-stop.sh stop
2.3 Creating and deleting a topic
2.3.1 Creating a topic
[root@hadoop10 kafka0.11]# kafka-topics.sh --create --zookeeper hadoop10:2181 --topic topic1 --partitions 1 --replication-factor 1
Created topic "topic1".
2.3.2 Deleting a topic
[root@hadoop10 kafka0.11]# kafka-topics.sh --delete --zookeeper hadoop10:2181 --topic topic1
Topic topic1 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
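To confirm the result, topics can also be listed and described against the same ZooKeeper address; a minimal sketch for Kafka 0.11:
[root@hadoop10 kafka0.11]# kafka-topics.sh --list --zookeeper hadoop10:2181
[root@hadoop10 kafka0.11]# kafka-topics.sh --describe --zookeeper hadoop10:2181 --topic topic1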
2.4 Producing to a topic from the console
Change into the Kafka directory and run the script from bin.
[root@hadoop10 ~]# cd /opt/installs/kafka0.11/
[root@hadoop10 kafka0.11]# bin/kafka-console-producer.sh --broker-list hadoop10:9092 --topic topic-car
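To read what the producer writes, a console consumer can run in another terminal; a sketch for Kafka 0.11 (--from-beginning replays the whole topic):
[root@hadoop10 kafka0.11]# bin/kafka-console-consumer.sh --bootstrap-server hadoop10:9092 --topic topic-car --from-beginning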
3. Hadoop
HDFS Web UI port: 9870
YARN ResourceManager Web UI (application logs): 8088
3.1 Starting the full cluster (HDFS and YARN)
[root@hadoop10 ~]# start-all.sh
3.2 Starting HDFS only
[root@hadoop10 dolphinscheduler2.0.6]# start-dfs.sh
3.3 Starting the job history server
[root@hadoop10 ~]# mr-jobhistory-daemon.sh start historyserver
On success, jps shows:
[root@hadoop10 ~]# jps
2400 SecondaryNameNode
100481 RunJar
100625 RunJar
62627 JobHistoryServer # Hadoop job history process
62691 Jps
2709 ResourceManager
2901 NodeManager
2172 DataNode
2029 NameNode
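The history server is stopped with the matching daemon command; a minimal sketch:
[root@hadoop10 ~]# mr-jobhistory-daemon.sh stop historyserver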
3.4 Fix for two standby NameNodes in the cluster
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn2
standby
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn1
standby
[root@hadoop11 ~]# hdfs haadmin -transitionToActive --forcemanual nn1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2023-09-28 16:08:26,544 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop12/192.168.200.12:8020
2023-09-28 16:08:26,787 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop11/192.168.200.11:8020
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn1
active
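To re-check both NameNodes in one go after the forced transition, a small loop works; a sketch assuming the logical names nn1 and nn2 used above:
[root@hadoop11 ~]# for nn in nn1 nn2; do echo -n "$nn: "; hdfs haadmin -getServiceState $nn; done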
4. Spark
The following Spark commands were tested in standalone mode.
Master port: 7077
Web UI: 8080
4.1 Starting the cluster
[root@hadoop10 ~]# cd /opt/installs/spark3.2.0/sbin/
[root@hadoop10 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/installs/spark3.2.0/logs/spark-root-org.apache.spark.deploy.master.Master-1-hadoop10.out
hadoop10: starting org.apache.spark.deploy.worker.Worker, logging to /opt/installs/spark3.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop10.out
4.2 Stopping the cluster
Because Spark's start/stop scripts share their names with Hadoop's (start-all.sh / stop-all.sh), run them via the absolute path.
[root@hadoop11 ~]# /opt/installs/spark3.2.0/sbin/stop-all.sh
[root@hadoop10 sbin]# sh /opt/installs/spark3.2.0/sbin/stop-all.sh
hadoop10: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master
4.3 Running the Spark example jobs
Run the SparkPi example:
[root@hadoop10 installs]# spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.0.jar
Output:
23/06/25 22:35:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Pi is roughly 3.142675713378567
Run the SparkPi example on YARN in the abc queue:
[root@hadoop10 installs]# spark-submit --queue abc --master yarn --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.0.jar
Output:
23/06/25 22:41:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/06/25 22:42:02 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Pi is roughly 3.1404757023785117
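If the job needs explicit resources, spark-submit accepts the usual sizing flags; a sketch (the numbers are placeholders, not tuned values):
[root@hadoop10 installs]# spark-submit --master yarn --deploy-mode cluster --queue abc --num-executors 2 --executor-memory 1g --executor-cores 1 --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.0.jar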
4.4 Starting the history server
[root@hadoop10 ~]# cd /opt/installs/spark3.2.0/
[root@hadoop10 spark3.2.0]# sbin/start-history-server.sh
History server Web UI: 18080
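For the history server to show completed jobs, event logging must be enabled in conf/spark-defaults.conf. A minimal sketch, assuming an HDFS directory /spark-logs has been created for this purpose:
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-logs
spark.history.fs.logDirectory    hdfs:///spark-logs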
5. Flink
Web UI: 8081
5.1 Starting the cluster
[root@hadoop10 ~]# start-cluster.sh
5.2 Stopping the cluster
[root@hadoop10 ~]# stop-cluster.sh
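As a quick check of the session cluster, a bundled example job can be submitted and listed from the CLI; a sketch for Flink 1.13, assuming FLINK_HOME is set (like SPARK_HOME above) and <JobID> is taken from the list output:
[root@hadoop10 ~]# flink run -d $FLINK_HOME/examples/streaming/WordCount.jar
[root@hadoop10 ~]# flink list
[root@hadoop10 ~]# flink cancel <JobID>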
6. Dinky
Web UI port: 8888
6.1 Starting Dinky
[root@hadoop10 ~]# cd /opt/installs/dlink0.7.3/
[root@hadoop10 dlink0.7.3]# sh auto.sh start
FLINK VERSION : 1.14
........................................Start Dinky Successfully........................................
Check whether CPU and memory usage have gone up, and confirm that a Dlink process shows up in jps.
[root@hadoop10 dlink0.7.3]# jps
49632 Dlink
1859 NameNode
64052 QuorumPeerMain
2006 DataNode
2214 SecondaryNameNode
2679 NodeManager
48775 StandaloneSessionClusterEntrypoint
49082 TaskManagerRunner
2523 ResourceManager
50653 Jps
6.2 Stopping Dinky
[root@hadoop10 ~]# cd /opt/installs/dlink0.7.3/
[root@hadoop10 dlink0.7.3]# sh auto.sh stop
........................................Stop Dinky Successfully.....................................
7. HBase
**Note:** before starting HBase, make sure the ZooKeeper connection has been established and HBase has registered in ZooKeeper; otherwise go back to section 1.1 Starting Zookeeper.
7.1 Running HBase in the background
7.1.1 Background start
[root@hadoop10 ~]# start-hbase.sh
...
7.1.2 Background stop
[root@hadoop10 ~]# stop-hbase.sh
stopping hbase...............
7.2 Starting the HBase shell
After starting it loads for a moment, then HBase operations can be run in the shell.
[root@hadoop10 ~]# hbase shell
...
hbase:001:0> list
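A few basic shell operations make a quick smoke test; a minimal sketch using a hypothetical student table:
hbase:002:0> create 'student','info'
hbase:003:0> put 'student','1001','info:name','zhangsan'
hbase:004:0> scan 'student'
hbase:005:0> get 'student','1001'
hbase:006:0> disable 'student'
hbase:007:0> drop 'student'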
8. Hive
8.1 Starting Hive
[root@hadoop11 ~]# hive
which: no hbase in ...
8.2 Basic database operations
Create a database and list the database names.
create database test_hive;
show databases;
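Building on that, a short sketch of creating and querying a table in the new database (the table and rows are hypothetical):
use test_hive;
create table t_user(id int, name string);
insert into t_user values(1, 'tom');
select * from t_user;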
8.3 Background start with the metastore
[root@hadoop10 ~]# nohup hive --service metastore > /tmp/metastore.log 2>&1 &
[root@hadoop10 ~]# hiveserver2
2023-06-25 21:06:59: Starting HiveServer2
...
[root@hadoop11 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[1] 7273
[root@hadoop11 ~]# tail -f /tmp/hiveserver2.log
nohup: ignoring input
8.4 Enabling Hive local mode
set hive.exec.mode.local.auto=true;
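Local mode only kicks in for small jobs; the thresholds can be tuned with the related settings below (the values shown are Hive's defaults, listed here as a sketch):
set hive.exec.mode.local.auto.inputbytes.max=134217728;
set hive.exec.mode.local.auto.input.files.max=4;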
8.5 Adding and dropping table partitions with metadata updates
alter table t_name add partition(dt='xxxxxxx');
alter table t_name drop partition(dt='xxxxxxx');
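To verify partitions, or to re-sync metadata after partition directories were written to HDFS directly, the following statements help; a sketch (t_name remains the placeholder table name used above):
show partitions t_name;
msck repair table t_name;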
8.6 Connecting with Beeline
[root@hadoop11 ~]# beeline
Beeline version 2.3.7 by Apache Hive
beeline> !connect jdbc:hive2://hadoop11:10000
Connecting to jdbc:hive2://hadoop11:10000
Enter username for jdbc:hive2://hadoop11:10000: root
Enter password for jdbc:hive2://hadoop11:10000: ****
2023-10-09 14:18:39,146 INFO jdbc.Utils: Supplied authorities: hadoop11:10000
2023-10-09 14:18:39,149 INFO jdbc.Utils: Resolved authority: hadoop11:10000
Connected to: Apache Hive (version 3.1.2)
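The same connection can also be opened in one line instead of interactively; a minimal sketch:
[root@hadoop11 ~]# beeline -u jdbc:hive2://hadoop11:10000 -n root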
9. Yarn
9.1 Refreshing queues
[root@hadoop10 ~]# yarn rmadmin -refreshQueues
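To confirm the new queue settings took effect and to inspect running jobs, the yarn CLI can be used; a sketch (abc is the queue from the Spark example above):
[root@hadoop10 ~]# yarn queue -status abc
[root@hadoop10 ~]# yarn application -list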
9.2 Start
[root@hadoop10 ~]# start-yarn.sh
9.3 Stop
[root@hadoop10 ~]# stop-yarn.sh
9.4 Job history server
[root@hadoop10 ~]# mapred --daemon start historyserver
[root@hadoop10 ~]# jps
100022 JobHistoryServer
10. DolphinScheduler
10.1 Prerequisites
Starting DolphinScheduler requires Zookeeper, HDFS, and Yarn to be running first.
zkServer.sh start     # start Zookeeper
start-dfs.sh          # start HDFS
start-yarn.sh         # start Yarn
10.2 Start
The following command has been tested; it works the same from both the extracted directory and the installed directory of dolphinscheduler2.0.6.
[root@hadoop10 dolphinscheduler2.0.6]# bin/start-all.sh
Web UI:http://hadoop10:12345/dolphinscheduler
If the URL above is not reachable, use: http://hadoop10:12345/dolphinscheduler/ui/view/login/index.html
Processes after a normal successful start:
[root@hadoop10 dolphinscheduler2.0.6]# jps
76706 MasterServer
74345 NodeManager
76937 PythonGatewayServer
73608 DataNode
77288 Jps
76843 AlertServer
73836 SecondaryNameNode
76748 WorkerServer
73455 NameNode
74193 ResourceManager
74833 QuorumPeerMain
76796 LoggerServer
76892 ApiApplicationServer
10.3 Stop
[root@hadoop10 dolphinscheduler2.0.6]# bin/stop-all.sh
10.4 One-click install (for use after updating the configuration files)
[root@hadoop10 ~]# cd /opt/installs/dolphinscheduler2.0.6/
[root@hadoop10 dolphinscheduler2.0.6]# sh install.sh
11. MySQL
11.1 Checking the MySQL service status
The command differs between MySQL versions.
[root@hadoop10 ~]# service mysqld status
11.2 Enabling a simple password policy
set global validate_password_policy=0;
set global validate_password_length=4;
flush privileges;
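With the relaxed policy in place, a short password can then be used when creating an account; a sketch with a hypothetical user (MySQL 5.7 syntax):
create user 'test'@'%' identified by '1234';
grant all privileges on *.* to 'test'@'%';
flush privileges;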
12. Redis
12.1 Starting Redis and checking the process
[root@hadoop10 ~]# redis-server /opt/installs/redis-6.2.0/redis.conf
[root@hadoop10 ~]# ps -ef | grep redis
root 80277 1 0 23:33 ? 00:00:00 redis-server hadoop10:6379
root 80350 79829 0 23:33 pts/1 00:00:00 grep --color=auto redis
12.2 Starting the Redis client
[root@hadoop10 ~]# redis-cli -h hadoop10 -p 6379
hadoop10:6379> auth 123
OK
12.3 Flushing all databases
hadoop10:6379> flushall
OK
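A few basic key commands serve as a quick check; a sketch with a hypothetical key:
hadoop10:6379> set k1 v1
OK
hadoop10:6379> get k1
"v1"
hadoop10:6379> del k1
hadoop10:6379> keys *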
13. MongoDB
13.1 Starting and stopping the service from the Windows shell
On Windows, the service can first be started manually from the Services panel:
- Open http://localhost:27017/ in a browser; it responds with:
It looks like you are trying to access MongoDB over HTTP on the native driver port.
- Then you can enter the shell:
PS C:\Users\Lenovo> mongo
- To shut down the MongoDB service, note that the command only works inside the admin database:
> db.shutdownServer()
shutdown command only works with the admin database; try 'use admin'
> use admin
switched to db admin
> db.shutdownServer()
server should be down...
- After the service is stopped, visiting port 27017 again gives:
This site can’t be reached
localhost refused to connect.
13.2 Creating a database
If the database does not exist, use creates it, but it will not appear in show dbs until data has been inserted.
> use TestDb2
switched to db TestDb2
> show dbs
TestDb1 0.000GB
admin   0.000GB
config  0.000GB
local   0.000GB
13.3 Insert methods
> db.testdemo.insert({name:"guoyachao",age:25})
WriteResult({ "nInserted" : 1 })
> db.testdemo.insertMany([{name:"guoyachao2",age:25},{name:"guoyachao3",age:"25"}])
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("650d08a71163e5c30f7eb223"),
ObjectId("650d08a71163e5c30f7eb224")
]
}
> db.testdemo.insertOne({name:"guoyachao4",age:25})
{
"acknowledged" : true,
"insertedId" : ObjectId("650d09131163e5c30f7eb225")
}
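The inserted documents can be read back with find(); a minimal sketch:
> db.testdemo.find({name:"guoyachao"}).pretty()
> db.testdemo.find().count()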
13.4 Dropping a database
First switch to the database you want to drop, then run the drop command.
> use TestDb2
switched to db TestDb2
> db.dropDatabase()
{ "ok" : 1 }
> show dbs
TestDb1 0.000GB
admin 0.000GB
config 0.000GB
local 0.000GB
14. Linux
14.1 Disk, memory, and CPU
- free -h: show system memory usage in human-readable units.
- df -h: show disk usage and file system information.
- lscpu: show information about the CPU and system architecture.
- wc -l *: show the line count of each file in the current directory (to count the files themselves, use ls | wc -l).
- du -sh *: show the size of each item in the current directory in human-readable units.
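These combine naturally with sort when hunting for large directories; a sketch against the install directory used throughout this article:
[root@hadoop10 ~]# du -sh /opt/installs/* | sort -h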
14.2 System time synchronization
The time may still look stale right after syncing; wait a few seconds before checking it again.
[root@hadoop10 ~]# date
Wed Oct  4 05:56:06 CST 2023
[root@hadoop10 ~]# systemctl restart chronyd
[root@hadoop10 ~]# date
Wed Oct  4 05:56:23 CST 2023
[root@hadoop10 ~]# date
Thu Oct  5 11:14:59 CST 2023
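Since chronyd is the time service restarted above, its synchronization status can also be checked directly; a sketch:
[root@hadoop10 ~]# chronyc sources -v
[root@hadoop10 ~]# timedatectl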
15. SecureCRT
15.1 Uploading and downloading files
Notes for first-time use:
- Configure the upload and download directories under Options -> Session Options -> X/Y/Zmodem.
- On the Linux side, install the transfer tools: yum -y install lrzsz
15.1.1 Download
sz: the "s" stands for send; from the server's point of view it sends a file, which is a download to your machine.
[root@hadoop11 data]# sz aaaa
rz
Starting zmodem transfer. Press Ctrl+C to cancel.
Transferring aaaa...
100% 8 bytes 8 bytes/sec 00:00:01 0 Errors
15.1.2 Upload
rz: the "r" stands for receive; from the server's point of view it receives a file, which is an upload.
After running the command and pressing Enter, a dialog pops up to choose the file to upload to the server.