Big Data Development Notes

Big Data Development Components

HDFS

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

http://hadoop102:9870/explorer.html#/
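
Once HDFS is up, basic file operations can be exercised from the shell. A minimal smoke test; the /input directory and word.txt file here are hypothetical:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /input

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -put word.txt /input

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /input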

Yarn

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/stop-yarn.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

http://hadoop103:8088/cluster
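
To confirm YARN is scheduling jobs, the example jar bundled with Hadoop can be submitted. A minimal sketch, assuming an existing /input directory and no /output directory on HDFS:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output

The running application then shows up on the ResourceManager page above.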

Zookeeper

[atguigu@hadoop102 zookeeper-3.5.7]$ zk.sh start
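
zk.sh is a custom cluster script; the stock tools can verify the ensemble is serving. A minimal check with the bundled server script and CLI:

[atguigu@hadoop102 zookeeper-3.5.7]$ bin/zkServer.sh status

[atguigu@hadoop102 zookeeper-3.5.7]$ bin/zkCli.sh -server hadoop102:2181

[zk: hadoop102:2181(CONNECTED) 0] ls /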

HA

[atguigu@hadoop102 ~]$ stop-dfs.sh

[atguigu@hadoop102 ~]$ zk.sh start

[atguigu@hadoop102 ~]$ start-dfs.sh

[atguigu@hadoop102 ~]$ start-yarn.sh

http://hadoop102:9870/explorer.html#/

http://hadoop104:8088/cluster

http://hadoop102:19888/jobhistory
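
After an HA startup, the active/standby state of each NameNode and ResourceManager can be queried. A minimal sketch, assuming service IDs nn1/nn2 and rm1/rm2 as configured in hdfs-site.xml and yarn-site.xml:

[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn1

[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn2

[atguigu@hadoop102 ~]$ yarn rmadmin -getServiceState rm1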

Hive

[atguigu@hadoop102 opt]$ mysql -uroot -p

[atguigu@hadoop102 hive]$ bin/hive

hive> show tables;
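
A quick smoke test in the Hive CLI; the test table and its row are hypothetical:

hive> create table test(id int, name string);

hive> insert into test values (1, 'atguigu');

hive> select * from test;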

[atguigu@hadoop102 hive]$ bin/hive --service hiveserver2

[atguigu@hadoop102 hive]$ bin/beeline -u jdbc:hive2://hadoop102:10000 -n atguigu

[atguigu@hadoop102 hive]$ nohup hive --service metastore 2>&1 &

[atguigu@hadoop102 hive]$ nohup hiveserver2 2>&1 &

[atguigu@hadoop102 hive]$ hiveservices.sh start

Connecting to the Hive database from IDEA
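
In IDEA this is a plain JDBC connection; roughly, add an Apache Hive data source in the Database tool window with URL jdbc:hive2://hadoop102:10000, user atguigu, empty password, and driver class org.apache.hive.jdbc.HiveDriver. HiveServer2 must already be running.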

Flume

[atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/nc-flume-log.conf -Dflume.root.logger=INFO,console

[atguigu@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f conf/nc-flume-log.conf -Dflume.root.logger=INFO,console

[atguigu@hadoop102 ~]$ nc localhost 44444

[atguigu@hadoop102 conf]$ vim taildir-flume-hdfs.conf

Add the following content:

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = TAILDIR

a1.sources.r1.filegroups = f1 f2

# Must resolve to files; a regex can be used to match multiple files

a1.sources.r1.filegroups.f1 = /opt/module/flume/files1/.*file.*

a1.sources.r1.filegroups.f2 = /opt/module/flume/files2/.*log.*

# Location of the position file used to resume tailing; if unset, a default location is used and resuming still works

a1.sources.r1.positionFile = /opt/module/flume/taildir_position.json

# Describe the sink

a1.sinks.k1.type = hdfs

a1.sinks.k1.hdfs.path = hdfs://hadoop102:8020/flume/%Y%m%d/%H

# Prefix for uploaded files

a1.sinks.k1.hdfs.filePrefix = log-

# Whether to use the local timestamp

a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS

a1.sinks.k1.hdfs.batchSize = 100

# File type; compression is supported

a1.sinks.k1.hdfs.fileType = DataStream

# Interval (seconds) before rolling a new file

a1.sinks.k1.hdfs.rollInterval = 30

# Roll each file at roughly 128 MB

a1.sinks.k1.hdfs.rollSize = 134217700

# File rolling is independent of the number of events

a1.sinks.k1.hdfs.rollCount = 0

# Use a channel which buffers events in memory

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1

[atguigu@hadoop102 flume]$ mkdir files1

[atguigu@hadoop102 flume]$ mkdir files2

[atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/taildir-flume-hdfs.conf

[atguigu@hadoop102 files1]$ echo hello >> file1.txt

[atguigu@hadoop102 files1]$ echo atguigu >> file2.txt

Check the data on HDFS.
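
For example, list today's output directory (the %Y%m%d escape in the sink path):

[atguigu@hadoop102 flume]$ hadoop fs -ls /flume/$(date +%Y%m%d)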

Kafka

Start the ZooKeeper cluster first, then start Kafka.

[atguigu@hadoop102 kafka]$ zk.sh start

[atguigu@hadoop102 kafka]$ kf.sh start

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --list

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --create --replication-factor 3 --partitions 1 --topic first

[atguigu@hadoop102 kafka]$ bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic first
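
To verify messages end to end, a console consumer can read the topic in another session; --from-beginning replays the topic from the start:

[atguigu@hadoop102 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic first --from-beginning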

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --describe --topic first

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --alter --topic first --partitions 6

HBase

[atguigu@hadoop102 hbase]$ bin/start-hbase.sh

[atguigu@hadoop102 hbase]$ bin/stop-hbase.sh

[atguigu@hadoop102 hbase]$ bin/hbase shell

hbase(main):002:0> create 'student','info'
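
A few basic shell operations against the new table; the rowkey and values are hypothetical:

hbase(main):003:0> put 'student','1001','info:name','zhangsan'

hbase(main):004:0> scan 'student'

hbase(main):005:0> get 'student','1001'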

http://hadoop102:16010

http://hadoop103:16010

http://hadoop104:16010

Flume Monitoring with Ganglia

[atguigu@hadoop102 flume]$ sudo service httpd start

[atguigu@hadoop102 flume]$ sudo service gmetad start

[atguigu@hadoop102 flume]$ sudo service gmond start

http://hadoop102/ganglia/
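
For metrics to appear in Ganglia, the Flume agent itself must be started with the monitoring properties. A minimal sketch, assuming gmond listens on its default port 8649 on hadoop102:

[atguigu@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f conf/nc-flume-log.conf -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649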

Kafka Monitoring with Kafka Eagle

ZooKeeper and Kafka must be started before launching Kafka Eagle.

[atguigu@hadoop102 eagle]$ bin/ke.sh start

http://hadoop102:8048/ke/
