Big Data Learning Notes: Building a (hadoop + spark + flink + hbase + mysql + hive + zookeeper + kafka + flume + sqoop) Cluster on CentOS 7 under VMware

Preparing the Environment

VMware Workstation,CentOS-7,hadoop-2.7.7, jdk-1.8, flink-1.9, flume-1.8,
hive-2.3, hbase-2.2, kafka-2.12, mysql-5.7, mysql-connector-java-5.1, scala-2.12,
spark-2.4, zookeeper-3.4, sqoop-1.4

System Installation: Software Selection

For the base environment I chose "Development and Creative Workstation", with the add-ons: Additional Development, Development Tools, Large Systems Performance, Platform Development, and Python.

Creating a User to Manage the Cluster

After installing the system, create the user that will manage the cluster:

adduser hadoop        # hadoop is the user name
passwd hadoop         # then set its password

Use the same user name on every node.

Switch to the root user and edit /etc/sudoers:

## Allow root to run any commands anywhere

root    ALL=(ALL)       ALL

hadoop  ALL=(ALL)       ALL       # hadoop is the newly created user

This grants the new user root (sudo) privileges.
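If you prefer not to edit /etc/sudoers directly, the same change can be made through visudo, which validates the syntax before saving (a small sketch of the session):

su - root
visudo
# add this line below the root entry:
hadoop  ALL=(ALL)       ALL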

Disabling the Firewall

The firewall is disabled so that the nodes can communicate with each other without interference; keep one outward-facing server with its firewall enabled and use that server to access the cluster.
Stop the firewall now: systemctl stop firewalld.service
Disable it permanently (at boot): systemctl disable firewalld.service
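A quick way to confirm the firewall is really off on each node (standard CentOS 7 commands):

systemctl status firewalld.service    # should report inactive (dead)
firewall-cmd --state                  # should print "not running"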

Time Synchronization Between Nodes

On all nodes:
Start the service: systemctl start ntpd.service
Enable it at boot: systemctl enable ntpd.service

On the master node (it acts as the NTP server and syncs with external time sources):
vi /etc/ntp.conf
On the line after restrict ::1, add: restrict 192.168.xxx.0 mask 255.255.255.0 (fill in xxx according to your own subnet)
Restart the ntpd service: systemctl restart ntpd
Check the status: ntpstat

On the worker nodes:
vi /etc/ntp.conf
Add one line pointing at the master node (adjust the IP to your own master) and comment out four lines:
server 192.168.50.152
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst

Restart the ntpd service: systemctl restart ntpd
Check the status: ntpstat
It takes around ten minutes; once ntpstat reports that the clock is synchronized against the master node, the setup is working.
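Besides ntpstat, ntpq can show which server a worker node is actually syncing against; with the setup above, the master's IP should eventually appear with a * in front of it (sample output is illustrative):

ntpq -p
#      remote          refid   st t when poll reach  delay  offset  jitter
# *192.168.50.152  ...          3 u   45   64  377   0.210   0.034   0.012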

Setting a Static IP Address

First check a few VMware settings.
Click Edit → Virtual Network Editor → VMnet8, set it to NAT mode, open DHCP Settings and note the subnet IP, the subnet mask, and the start and end of the DHCP address range.
Open NAT Settings and note the gateway IP.
Back in the virtual machines, on every node:
vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"          # changed from dhcp to static
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="a2a3e231-ee32-4871-acc2-1177ee9aec2b"
DEVICE="ens33"
ONBOOT="yes"
# added entries
IPADDR=192.168.50.152       # custom IP; pick an unused address between the DHCP start and end addresses
GATEWAY=192.168.50.2        # gateway IP
NETMASK=255.255.255.0       # subnet mask
DNS1=192.168.50.2           # same as the gateway IP
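For the new address to take effect without a full reboot, restarting the legacy network service is usually enough on CentOS 7:

sudo systemctl restart network
ip addr show ens33          # confirm the static address is now assigned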

Changing the Hostname and Mapping Hostnames to IP Addresses

Change the hostname:
vi /etc/hostname
Replace the contents with the desired hostname.

Add the hostname/IP mappings:
vi /etc/hosts
<IP address> <hostname>
<IP address> <hostname>
<IP address> <hostname>
Every node needs the entries for all nodes; an example follows.
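For example, assuming the three nodes and addresses used later in this guide (they reappear in the ZooKeeper section), /etc/hosts on every node would contain:

192.168.50.152  master-01
192.168.50.153  slave-01
192.168.50.154  slave-02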

Reboot the system: reboot

Setting Up Passwordless SSH Login

On all nodes:
Switch to the new user.
Generate a key pair: ssh-keygen -t rsa, pressing Enter at every prompt until it finishes.
Copy the public key to every node (including the local one): ssh-copy-id -i ~/.ssh/id_rsa.pub <hostname>; a convenience loop is sketched below.

Afterwards, test with: ssh <hostname>
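A small convenience loop for distributing the key from each node (hostnames as in the /etc/hosts example above):

for host in master-01 slave-01 slave-02; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$host
done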

Creating Directories for the Installation Packages

Create the base directory: mkdir ~/opt

Create the directory for the downloaded archives: mkdir ~/opt/file
Put all the downloaded archives into ~/opt/file.

Installing and Configuring Java

On all nodes.
Remove the bundled OpenJDK:

Switch to the root user.
List the installed JDK packages: rpm -qa | grep java
Remove every package whose name contains openjdk: rpm -e --nodeps <package name>

Switch back to the new user.
Create the Java directory: mkdir /home/hadoop/opt/java
Unpack the JDK: tar -zxvf /home/hadoop/opt/file/jdk-8u251-linux-x64.tar.gz
Move and rename the extracted directory: mv jdk1.8.0_251 /home/hadoop/opt/java/jdk-1.8

Configure the environment variables:
vi ~/.bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export PATH=.:${JAVA_HOME}/bin:$PATH

#PATH=$PATH:$HOME/.local/bin:$HOME/bin

#export PATH

Apply the changes:
source ~/.bash_profile

~/.bashrc also needs to be set up; otherwise source ~/.bash_profile would have to be run every time a new terminal is opened.
vi ~/.bashrc

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
source /etc/profile

Check that Java is installed: java -version
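With jdk-8u251 the output should look roughly like this (the exact build string may differ):

java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)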

Installing and Configuring Hadoop

Create the Hadoop directory: mkdir ~/opt/hadoop
Unpack: tar -zxvf ~/opt/file/hadoop-2.7.7.tar.gz
Move and rename: mv hadoop-2.7.7/ ~/opt/hadoop/hadoop-2.7

Configure the environment variables:
vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH

source ~/.bash_profile

Change into the Hadoop configuration directory and edit the configuration files:
cd ~/opt/hadoop/hadoop-2.7/etc/hadoop/
vi core-site.xml

<configuration>

<property>
        <name>fs.default.name</name>
        <value>hdfs://master-01:9000</value>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp_nds</value>
</property>
<property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
</property>

</configuration>

vi hadoop-env.sh

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

vi hdfs-site.xml

<configuration>

        <!-- where the NameNode stores its data -->
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/opt/hadoop/hadoop-2.7/dfs/name</value>
        </property>
        <!-- where the DataNode stores its data -->
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/opt/hadoop/hadoop-2.7/dfs/data</value>
        </property>
        <!-- HDFS replication factor -->
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>

        <!-- HTTP address of the secondary NameNode -->
        <property>
                <name>dfs.secondary.http.address</name>
                <value>master-01:50090</value>
        </property>
        <property>
        <!-- enable the WebHDFS interface -->
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>

</configuration>

cp mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>

        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <!-- JobHistory Server ============================================================== -->
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>master-01:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>master-01:19888</value>
        </property>

</configuration>

vi yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>master-01</value>
        </property>
        <property>
                <!-- IPC address of the ResourceManager -->
                <name>yarn.resourcemanager.address</name>
                <value>${yarn.resourcemanager.hostname}:8032</value>
        </property>
        <property>
                <!-- IPC address of the ResourceManager scheduler -->
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>${yarn.resourcemanager.hostname}:8030</value>
        </property>
        <property>
                <!-- HTTP address of the ResourceManager web UI -->
                <name>yarn.resourcemanager.webapp.address</name>
                <value>${yarn.resourcemanager.hostname}:8088</value>
        </property>
        <property>
                <description>The https address of the RM web application.</description>
                <name>yarn.resourcemanager.webapp.https.address</name>
                <value>${yarn.resourcemanager.hostname}:8090</value>
        </property>
        <property>
                <!-- address the NodeManagers use to report to the ResourceManager -->
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>${yarn.resourcemanager.hostname}:8031</value>
        </property>
        <property>
                <!-- admin IPC address of the ResourceManager -->
                <name>yarn.resourcemanager.admin.address</name>
                <value>${yarn.resourcemanager.hostname}:8033</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>2048</value>
                <description>Maximum memory a single container may request; the default is 8192 MB</description>
        </property>
        <property>
                <!-- Ratio of virtual to physical memory for containers; virtual memory usage
                     may exceed the allocated physical memory by this factor. The default is 2.1 -->
                <name>yarn.nodemanager.vmem-pmem-ratio</name>
                <value>2.1</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>

</configuration>

vi slaves

slave-01
slave-02

Copy the configured files to the other nodes:
scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/

scp -r ~/opt/hadoop/ hadoop@slave-01:~/opt/
scp -r ~/opt/hadoop/ hadoop@slave-02:~/opt/

On the other nodes:
source ~/.bash_profile

Check that the Hadoop environment is set up correctly:
hadoop version

Format the NameNode:
cd ~/opt/hadoop/hadoop-2.7/bin/
hdfs namenode -format

Start the cluster:
cd ~/opt/hadoop/hadoop-2.7/sbin/
start-all.sh

jps
Master node:
27702 ResourceManager
27528 SecondaryNameNode
27979 Jps
27276 NameNode

Worker nodes:
21953 DataNode
22260 Jps
22124 NodeManager

Check in a browser that Hadoop started successfully:
http://<master IP>:50070 and http://<master IP>:8088
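The same can be checked from the command line; with two workers running, HDFS should report two live datanodes (a quick sanity check):

hdfs dfsadmin -report | grep -E "Live datanodes|Name:"
yarn node -list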


Stop the cluster:
stop-all.sh

Installing Scala

mkdir ~/opt/scala
tar -zxvf ~/opt/file/scala-2.12.2.tgz
mv scala-2.12.2/ ~/opt/scala/scala-2.12

vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:$PATH

source ~/.bash_profile

scala -version

Installing Spark

mkdir ~/opt/spark
tar -zxvf ~/opt/file/spark-2.4.5-bin-hadoop2.7.tgz
mv spark-2.4.5-bin-hadoop2.7/ ~/opt/spark/spark-2.4

vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:$PATH

source ~/.bash_profile

cd ~/opt/spark/spark-2.4/conf/
cp spark-env.sh.template spark-env.sh

vi spark-env.sh

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12
export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8
export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7
export HADOOP_CONF_DIR=/home/hadoop/opt/hadoop/hadoop-2.7/etc/hadoop
export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4
export SPARK_MASTER_IP=192.168.50.152    # master node IP
export SPARK_EXECUTOR_MEMORY=2G

cp slaves.template slaves

vi slaves

slave-01
slave-02

Sync to the worker nodes:
scp -r ~/opt/scala hadoop@slave-01:~/opt/
scp -r ~/opt/scala hadoop@slave-02:~/opt/

scp -r ~/opt/spark hadoop@slave-01:~/opt/
scp -r ~/opt/spark hadoop@slave-02:~/opt/

scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/

On each worker node:
source ~/.bash_profile

Start Spark (Hadoop must already be running):
cd ~/opt/spark/spark-2.4/sbin/
start-all.sh
Check the web UI at http://<master IP>:8080; a quick test job is sketched below.
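To verify the standalone cluster beyond the web UI, you can submit the bundled SparkPi example (the exact examples jar name may differ slightly; check under examples/jars in your Spark directory):

spark-submit --master spark://master-01:7077 \
    --class org.apache.spark.examples.SparkPi \
    ~/opt/spark/spark-2.4/examples/jars/spark-examples_2.11-2.4.5.jar 100

The job should end with a line similar to "Pi is roughly 3.14..." and also appear under Completed Applications on the 8080 page.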
stop-all.sh

Installing ZooKeeper

mkdir ~/opt/zookeeper
tar -zxvf ~/opt/file/zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14/ ~/opt/zookeeper/zookeeper-3.4

vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4

export ZOOKEEPER_HOME=/home/hadoop/opt/zookeeper/zookeeper-3.4

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:$PATH

source ~/.bash_profile

mkdir ~/opt/zookeeper/data
mkdir ~/opt/zookeeper/datalog
cd ~/opt/zookeeper/data
touch myid

vi myid
Put 1 into myid on this (master) node; use 2, 3, and so on on the other nodes (worker node 1 gets 2, worker node 2 gets 3).

1

cd ~/opt/zookeeper/zookeeper-3.4/conf/
cp zoo_sample.cfg zoo.cfg

vi zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

# use absolute paths; zoo.cfg is read by Java, not the shell, so ~ is not expanded
dataDir=/home/hadoop/opt/zookeeper/data
dataLogDir=/home/hadoop/opt/zookeeper/datalog
server.1=192.168.50.152:2888:3888
server.2=192.168.50.153:2888:3888
server.3=192.168.50.154:2888:3888

Sync everything to the other nodes:
scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/
scp -r ~/opt/zookeeper hadoop@slave-01:~/opt/
scp -r ~/opt/zookeeper hadoop@slave-02:~/opt/
On the worker nodes:
source ~/.bash_profile
Edit the myid file (2 and 3 respectively).

Start ZooKeeper. On all nodes:
cd ~/opt/zookeeper/zookeeper-3.4/bin/

Start:
zkServer.sh start

Check the status (once all nodes are running there is one leader and the rest are followers):
zkServer.sh status

Stop ZooKeeper:
zkServer.sh stop

Installing Flink

mkdir ~/opt/flink
tar -zxvf ~/opt/file/flink-1.9.3-bin-scala_2.12.tgz
mv flink-1.9.3/ ~/opt/flink/flink-1.9/

cd ~/opt/flink/flink-1.9/conf
vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4

export ZOOKEEPER_HOME=/home/hadoop/opt/zookeeper/zookeeper-3.4

export FLINK_HOME=/home/hadoop/opt/flink/flink-1.9

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:$PATH

source ~/.bash_profile

vi masters

master-01:8081

vi slaves

slave-01
slave-02

vi flink-conf.yaml
Change:

taskmanager.numberOfTaskSlots: 2
jobmanager.rpc.address: master-01

Sync to the worker nodes:
scp -r ~/opt/flink/ hadoop@slave-01:~/opt/
scp -r ~/opt/flink/ hadoop@slave-02:~/opt/

scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/
On the worker nodes:
source ~/.bash_profile

cd ~/opt/flink/flink-1.9/bin/
Start Flink:
start-cluster.sh
jps
Master node:

38850 StandaloneSessionClusterEntrypoint
38919 Jps

Worker nodes:

30984 TaskManagerRunner
31048 Jps

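A quick functional test, using the batch WordCount example that ships with the Flink distribution (path as in this layout):

flink run ~/opt/flink/flink-1.9/examples/batch/WordCount.jar

Without arguments it runs on built-in sample data and prints the word counts to the console; the finished job also shows up in the web UI at http://master-01:8081.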

Stop Flink:
stop-cluster.sh

Installing HBase

mkdir ~/opt/hbase
tar -zxvf ~/opt/file/hbase-2.2.4-bin.tar.gz
mv hbase-2.2.4/ ~/opt/hbase/hbase-2.2

vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4

export ZOOKEEPER_HOME=/home/hadoop/opt/zookeeper/zookeeper-3.4

export FLINK_HOME=/home/hadoop/opt/flink/flink-1.9

export HBASE_HOME=/home/hadoop/opt/hbase/hbase-2.2

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:$PATH

source ~/.bash_profile

cd ~/opt/hbase/hbase-2.2/conf/

vi hbase-env.sh

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8
export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7
export HBASE_HOME=/home/hadoop/opt/hbase/hbase-2.2
export HBASE_CLASSPATH=/home/hadoop/opt/hadoop/hadoop-2.7/etc/hadoop
export HBASE_PID_DIR=/home/hadoop/opt/hbase/pids
export HBASE_MANAGES_ZK=false

vi hbase-site.xml

<configuration>

        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://master-01:9000/hbase</value>
                <description>The directory shared by region servers.</description>
        </property>
                <!-- ZooKeeper client port -->
        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
        </property>
                <!-- session timeout -->
        <property>
                <name>zookeeper.session.timeout</name>
                <value>120000</value>
        </property>
                <!-- tolerate clock skew between servers -->
        <property>
                <name>hbase.master.maxclockskew</name>
                <value>150000</value>
        </property>
                <!-- cluster hosts -->
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>master-01,slave-01,slave-02</value>
        </property>
                <!-- temporary data directory -->
        <property>
                <name>hbase.tmp.dir</name>
                <value>/home/hadoop/opt/hbase/tmp</value>
        </property>
                <!-- true means fully distributed -->
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
                <!-- HBase master -->
        <property>
                <name>hbase.master</name>
                <value>master-01:60000</value>
        </property>

</configuration>

vi regionservers

slave-01
slave-02

Sync the configuration:
scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/

scp -r ~/opt/hbase/ hadoop@slave-01:~/opt/
scp -r ~/opt/hbase/ hadoop@slave-02:~/opt/
On the worker nodes:
source ~/.bash_profile

Start Hadoop and ZooKeeper first, then start HBase:

cd ~/opt/hbase/hbase-2.2/bin/
start-hbase.sh
Check with jps.
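A short smoke test from the HBase shell (standard quick-start commands); the HBase 2.x master web UI listens on port 16010 by default:

hbase shell
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'
exit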
stop-hbase.sh

Installing MySQL (single node)

Remove the bundled MariaDB. Switch to the root user:

rpm -qa | grep mariadb
rpm -e --nodeps <package name>

Switch back to the new user:
mkdir ~/opt/mysql
tar -zxvf ~/opt/file/mysql-5.7.27-linux-glibc2.12-x86_64.tar.gz
mv mysql-5.7.27-linux-glibc2.12-x86_64/ ~/opt/mysql/mysql-5.7

vi ~/.bash_profile

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8

export HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7

export SCALA_HOME=/home/hadoop/opt/scala/scala-2.12

export SPARK_HOME=/home/hadoop/opt/spark/spark-2.4

export ZOOKEEPER_HOME=/home/hadoop/opt/zookeeper/zookeeper-3.4

export FLINK_HOME=/home/hadoop/opt/flink/flink-1.9

export HBASE_HOME=/home/hadoop/opt/hbase/hbase-2.2

export MYSQL_HOME=/home/hadoop/opt/mysql/mysql-5.7

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:${MYSQL_HOME}/bin:$PATH

source ~/.bash_profile

vi ~/opt/my.conf

[client]
# default character set for the mysql client
default-character-set=utf8
[mysqld]
# skip privilege-table checks (leave commented out)
#skip-grant-tables
skip-name-resolve
# listen on port 3306
port = 3306
# MySQL installation directory
basedir=/home/hadoop/opt/mysql/mysql-5.7
# MySQL data directory
datadir=/home/hadoop/opt/mysql/mysql-5.7/data
# maximum number of connections
max_connections=200
# server character set (the default would be the 8-bit latin1)
character-set-server=utf8
# default storage engine for new tables
default-storage-engine=INNODB
lower_case_table_names=1
max_allowed_packet=16M

mkdir ~/opt/mysql/mysql-5.7/data

Create the mysql system user and group:
cd ~/opt/mysql/mysql-5.7
sudo groupadd mysql
sudo useradd -r -g mysql mysql
sudo chown -R mysql:mysql /home/hadoop/opt/mysql/mysql-5.7/

Initialize the data directory:
cd ~/opt/mysql/mysql-5.7/bin/
./mysqld --initialize --user=mysql --basedir=/home/hadoop/opt/mysql/mysql-5.7 --datadir=/home/hadoop/opt/mysql/mysql-5.7/data
Note down the temporary root password printed at the end of the output; in this run it was:
!fucuF=7g&p5

sudo vi ~/opt/mysql/mysql-5.7/support-files/mysql.server

basedir=/home/hadoop/opt/mysql/mysql-5.7
datadir=/home/hadoop/opt/mysql/mysql-5.7/data

sudo cp -a ~/opt/mysql/mysql-5.7/support-files/mysql.server /etc/init.d/mysqld
cd ~/opt/mysql/mysql-5.7/support-files

Start MySQL:
mysql.server start

Log in to the database:
mysql -uroot -p
(enter the temporary password)

Change the password:
alter user user() identified by 'root';
Log out and log back in with the new password.

mysql> use mysql;
mysql> update user set host='%' where user='hadoop';    -- allow connections from any host
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;    -- allow remote connections
mysql> flush privileges;    -- reload the privilege tables

Enable start on boot:
sudo chkconfig --add mysqld
sudo chkconfig mysqld on

Installing Hive (single node)

mkdir ~/opt/hive
tar -zxvf ~/opt/file/apache-hive-2.3.7-bin.tar.gz
mv apache-hive-2.3.7-bin/ ~/opt/hive/hive-2.3

vi ~/.bash_profile

export HIVE_HOME=/home/hadoop/opt/hive/hive-2.3

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:${MYSQL_HOME}/bin:${HIVE_HOME}/bin:$PATH

source ~/.bash_profile

cd ~/opt/hive/hive-2.3/conf/
cp hive-env.sh.template hive-env.sh

vi hive-env.sh

JAVA_HOME=/home/hadoop/opt/java/jdk-1.8
HADOOP_HOME=/home/hadoop/opt/hadoop/hadoop-2.7
HIVE_HOME=/home/hadoop/opt/hive/hive-2.3
export HIVE_CONF_DIR=$HIVE_HOME/conf
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib

cp hive-default.xml.template hive-site.xml

Start Hadoop, then create the Hive directories on HDFS:
hdfs dfs -mkdir -p ~/user/hive/warehouse
hdfs dfs -chmod -R 777 ~/user/hive/warehouse
hdfs dfs -mkdir -p ~/tmp/hive
hdfs dfs -chmod -R 777 ~/tmp/hive

mkdir ~/opt/hive/hive-2.3/tmp
chmod -R 777 ~/opt/hive/hive-2.3/tmp

Copy the MySQL JDBC driver into Hive's lib directory:
cd ~/opt/file/
tar -zxvf mysql-connector-java-5.1.49.tar.gz
cp ~/opt/file/mysql-connector-java-5.1.49/mysql-connector-java-5.1.49-bin.jar ~/opt/hive/hive-2.3/lib/

cd ~/opt/hive/hive-2.3/conf/

vi hive-site.xml
There are quite a few settings to change; the original post showed them only as screenshots, which are not reproduced here. A sketch of the essential metastore-related properties follows.
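The values below are assumptions matching the MySQL instance and HDFS directories created earlier in this guide; adjust the user name, password and paths to your own setup (the file's ${system:java.io.tmpdir}/${system:user.name} placeholders are also commonly replaced with the local tmp directory created above):

<property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://master-01:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
<property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
</property>
<property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
</property>
<property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
</property>
<property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/home/hadoop/user/hive/warehouse</value>
</property>
<property>
        <name>hive.exec.scratchdir</name>
        <value>/home/hadoop/tmp/hive</value>
</property>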

Initialize the metastore schema:
cd ~/opt/hive/hive-2.3/bin
schematool -initSchema -dbType mysql

Start Hive:
hive

Installing Flume (with Hadoop integration)

mkdir ~/opt/flume
tar -zxvf ~/opt/file/apache-flume-1.8.0-bin.tar.gz
mv apache-flume-1.8.0-bin/ ~/opt/flume/flume-1.8

Copy the Hadoop jars that Flume needs in order to write to HDFS:
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/common/hadoop-common-2.7.7.jar ~/opt/flume/flume-1.8/lib/
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/common/lib/commons-configuration-1.6.jar ~/opt/flume/flume-1.8/lib/
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/common/lib/commons-io-2.4.jar ~/opt/flume/flume-1.8/lib/
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/common/lib/hadoop-auth-2.7.7.jar ~/opt/flume/flume-1.8/lib/
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/hdfs/hadoop-hdfs-2.7.7.jar ~/opt/flume/flume-1.8/lib/
cp ~/opt/hadoop/hadoop-2.7/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar ~/opt/flume/flume-1.8/lib/

vi ~/.bash_profile

export FLUME_HOME=/home/hadoop/opt/flume/flume-1.8

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:${MYSQL_HOME}/bin:${HIVE_HOME}/bin:${FLUME_HOME}/bin:$PATH
source ~/.bash_profile

cd ~/opt/flume/flume-1.8/conf/
cp flume-env.sh.template flume-env.sh

vi flume-env.sh

export JAVA_HOME=/home/hadoop/opt/java/jdk-1.8
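The jars copied above are what the HDFS sink needs. As a minimal illustration (not from the original post), the agent definition below reads from a netcat source and writes to HDFS; save it, for example, as ~/opt/flume/flume-1.8/conf/netcat-hdfs.conf:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# netcat source listening on the master node
a1.sources.r1.type = netcat
a1.sources.r1.bind = master-01
a1.sources.r1.port = 44444

# in-memory channel
a1.channels.c1.type = memory

# HDFS sink writing plain text files
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://master-01:9000/flume/events
a1.sinks.k1.hdfs.fileType = DataStream

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the agent with:
flume-ng agent --conf ~/opt/flume/flume-1.8/conf --conf-file ~/opt/flume/flume-1.8/conf/netcat-hdfs.conf --name a1 -Dflume.root.logger=INFO,console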

Installing Kafka

mkdir ~/opt/kafka
tar -zxvf ~/opt/file/kafka_2.12-2.4.1.tgz
mv kafka_2.12-2.4.1/ ~/opt/kafka/kafka-2.4

vi ~/.bash_profile

export KAFKA_HOME=/home/hadoop/opt/kafka/kafka-2.4

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:${MYSQL_HOME}/bin:${HIVE_HOME}/bin:${FLUME_HOME}/bin:${KAFKA_HOME}/bin:$PATH

source ~/.bash_profile

cd ~/opt/kafka/kafka-2.4/config/
vi server.properties

The edits were shown as screenshots in the original post; the key settings are sketched below. After the Kafka directory is synced to the worker nodes, broker.id must be changed there to 2 and 3 respectively (the master keeps 1).
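A sketch of the usual server.properties changes (the log directory below is an assumption; any local path works):

broker.id=1
listeners=PLAINTEXT://master-01:9092
log.dirs=/home/hadoop/opt/kafka/kafka-logs
zookeeper.connect=master-01:2181,slave-01:2181,slave-02:2181

On the workers, change broker.id (2 and 3) and the hostname in listeners accordingly.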

Sync to the worker nodes:
scp -r ~/opt/kafka hadoop@slave-01:~/opt/
scp -r ~/opt/kafka hadoop@slave-02:~/opt/

scp ~/.bash_profile hadoop@slave-01:~/
scp ~/.bash_profile hadoop@slave-02:~/

On the worker nodes:
source ~/.bash_profile

On all nodes:
cd ~/opt/kafka/kafka-2.4/
./bin/kafka-server-start.sh -daemon config/server.properties

jps
Master node:

4896 QuorumPeerMain
7891 Kafka
4388 SecondaryNameNode
4566 ResourceManager
7994 Jps
4127 NameNode

Worker nodes:
51248 Kafka
46551 QuorumPeerMain
46121 DataNode
51323 Jps
46255 NodeManager

Test. On the master node, create a topic:
./bin/kafka-topics.sh --create --zookeeper master-01:2181,slave-01:2181,slave-02:2181 --replication-factor 3 --partitions 3 --topic test-01
On a worker node, list the topics:
./bin/kafka-topics.sh --list --zookeeper localhost:2181
You should see the topic created on the master.
Start a console producer (on the master):

./bin/kafka-console-producer.sh --broker-list master-01:9092,slave-01:9092,slave-02:9092 --topic test-01
On another node, start a consumer:
./bin/kafka-console-consumer.sh --bootstrap-server master-01:9092,slave-01:9092,slave-02:9092 --from-beginning --topic test-01
Messages typed on the master node then show up on the other node.

Installing Sqoop

mkdir ~/opt/sqoop
tar -zxvf ~/opt/file/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
mv sqoop-1.4.7.bin__hadoop-2.6.0/ ~/opt/sqoop/sqoop-1.4

vi ~/.bash_profile

export SQOOP_HOME=/home/hadoop/opt/sqoop/sqoop-1.4

export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${ZOOKEEPER_HOME}/bin:${FLINK_HOME}/bin:${HBASE_HOME}/bin:${MYSQL_HOME}/bin:${HIVE_HOME}/bin:${FLUME_HOME}/bin:${KAFKA_HOME}/bin:${SQOOP_HOME}/bin:$PATH

source ~/.bash_profile

cd ~/opt/sqoop/sqoop-1.4/conf

cp sqoop-env-template.sh sqoop-env.sh
vi sqoop-env.sh

(The sqoop-env.sh edits were shown as a screenshot in the original post; a sketch of the usual exports follows.)
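These exports match the directory layout used throughout this guide:

export HADOOP_COMMON_HOME=/home/hadoop/opt/hadoop/hadoop-2.7
export HADOOP_MAPRED_HOME=/home/hadoop/opt/hadoop/hadoop-2.7
export HBASE_HOME=/home/hadoop/opt/hbase/hbase-2.2
export HIVE_HOME=/home/hadoop/opt/hive/hive-2.3
export ZOOCFGDIR=/home/hadoop/opt/zookeeper/zookeeper-3.4/conf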
Copy the mysql-connector jar into Sqoop's lib directory (extract the connector archive first if you have not already done so):
mv ~/opt/file/mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar ~/opt/sqoop/sqoop-1.4/lib

Check that Sqoop is installed:
sqoop version

Test the connection to MySQL:
cd ~/opt/sqoop/sqoop-1.4/bin

./sqoop-list-databases --connect jdbc:mysql://master-01:3306 --username root --password root
Here username and password are the MySQL user name and password.

If the MySQL databases are listed, everything works.
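As a next step, a table can be imported from MySQL into HDFS; the database and table names below are placeholders for illustration only:

./sqoop import --connect jdbc:mysql://master-01:3306/testdb \
    --username root --password root \
    --table demo_table --target-dir /user/hadoop/demo_table -m 1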
