Hadoop

I. Change the hostname

vim /etc/hostname     (a reboot is required for the change to take effect)

After editing, run hostname to check the new name.

II. Map IPs to hostnames

1. vim /etc/hosts

IP_address hostname

2. Or append directly (as root): echo "IP_address hostname" >> /etc/hosts

Check the machine's IP with ifconfig.
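The append in step 2 adds a duplicate entry every time it is re-run. A guarded sketch, run here against a scratch file so it is safe to try; the IP addresses and the add_host helper are illustrative, and for real use you would point HOSTS_FILE at /etc/hosts and run as root:

```shell
# Sketch of step 2 with a duplicate guard. HOSTS_FILE is a scratch stand-in
# for /etc/hosts; add_host is an illustrative helper, not a standard tool.
HOSTS_FILE=$(mktemp)
add_host() {                     # add_host <ip> <hostname>
  grep -q " $2\$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}
add_host 192.168.1.10 master     # example addresses, not from this guide
add_host 192.168.1.11 slave1
add_host 192.168.1.11 slave1     # second call is a no-op
cat "$HOSTS_FILE"
```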

III. Disable the firewall

systemctl stop firewalld

To keep it off across reboots, also run systemctl disable firewalld.

Check the firewall status after stopping it:

systemctl status firewalld

Active: inactive (dead) means the firewall is stopped.

IV. Install the JDK

1. Extract to the target directory:

tar -zxvf jdk-8u144-linux-x64.tar.gz -C /usr/local/src/

2. Rename (from inside /usr/local/src):

mv jdk1.8.0_144/ jdk

3. Set global environment variables:

sudo vim /etc/profile

Append at the end:

 # ENV 
    export JAVA_HOME=/usr/local/src/jdk
    export PATH=$PATH:$JAVA_HOME/bin

4. Reload the profile:

source /etc/profile

Run java -version to confirm the configuration:

java -version
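If java -version fails after reloading, the usual cause is a JAVA_HOME that does not point at a real JDK. A quick sanity check, as a sketch; check_java_home is an illustrative helper, not part of any standard tool:

```shell
# Sketch: sanity-check a JAVA_HOME candidate before committing it to
# /etc/profile. check_java_home is an illustrative helper.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "ok: $1"
  else
    echo "no java executable under $1"
  fi
}
check_java_home /usr/local/src/jdk   # the path used in this guide
```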

5. Make sure /usr/local/src/ and everything under it is owned by the ec2-user user and group.

Check the owner and group of src:

ll /usr/local

If they are ec2-user, continue to the next step.

If not, change the owner and group:

sudo chown -R ec2-user:ec2-user /usr/local/src
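The check-then-fix above can be scripted. A sketch; owned_by is an illustrative helper, and stat -c '%U:%G' is the GNU coreutils form:

```shell
# Sketch: scripted version of the ownership check. owned_by is an
# illustrative helper; stat -c '%U:%G' prints "user:group" (GNU coreutils).
owned_by() {
  [ "$(stat -c '%U:%G' "$1")" = "$2" ]
}
if owned_by /usr/local/src ec2-user:ec2-user; then
  echo "ownership ok"
else
  echo "fix with: sudo chown -R ec2-user:ec2-user /usr/local/src"
fi
```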

6. Distribute to the other nodes

This requires passwordless SSH. If it is not set up yet:

1. ssh-keygen
2. ssh-copy-id hostname            # the hostname configured above

If SSH is already set up, distribute directly:

scp [-r] source_dir [user@]hostname_or_IP:destination_path
scp -r jdk/ slave1:/usr/local/src 
scp -r jdk/ slave2:/usr/local/src 
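With more workers, the two scp lines generalize to a loop. A sketch, printed as a dry run so nothing is copied; drop the echo to distribute for real. The hostnames are the ones mapped in /etc/hosts earlier:

```shell
# Sketch: distribute the JDK to each worker in a loop. The echo makes this
# a dry run that only prints the commands; remove it to actually copy.
for host in slave1 slave2; do
  echo scp -r /usr/local/src/jdk "$host:/usr/local/src"
done
```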

7. Check the JDK path on the other two machines and set their environment variables.

Alternatively, wait until Hadoop is installed and configured, then distribute everything and set the environment variables in one pass.

V. Install Hadoop

1. Extract to the target directory /usr/local/src

2. Rename the extracted directory to hadoop

3. Configure environment variables:

sudo vim /etc/profile
export HADOOP_HOME=/usr/local/src/hadoop 
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

4. Reload the profile and run hadoop version to confirm the configuration:

source /etc/profile 

5. Edit Hadoop's core configuration files (under $HADOOP_HOME/etc/hadoop)

5.1 Edit hadoop-env.sh, mapred-env.sh, and yarn-env.sh, setting JAVA_HOME in each:

vim hadoop-env.sh  
export JAVA_HOME=/usr/local/src/jdk

5.2 Edit core-site.xml

        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://master:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/src/hadoop/data_tmp</value>
        </property>

5.3 Edit hdfs-site.xml
  

        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>master:50090</value>
        </property>

5.4 Edit mapred-site.xml (if only mapred-site.xml.template exists, copy it to mapred-site.xml first)

        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>

5.5 Edit yarn-site.xml

        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>

5.6 Edit the slaves file, listing the worker hostnames:

slave1
slave2

6. Distribute the configured hadoop directory to the other nodes with scp, as in step IV-6.

7. Format the NameNode (on master, first start only):

hdfs namenode -format
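Reformatting an already-running cluster wipes the HDFS metadata and leaves DataNodes with a mismatched clusterID, so it helps to guard the format step. A sketch as a dry run that only prints the command; format_namenode is an illustrative helper, and the directory checked is the hadoop.tmp.dir from core-site.xml above:

```shell
# Sketch: only format the NameNode if its metadata directory does not exist
# yet. format_namenode is an illustrative helper; this dry run prints the
# command instead of executing it.
format_namenode() {
  if [ -d "$1/dfs/name/current" ]; then
    echo "already formatted; skipping"
  else
    echo "would run: hdfs namenode -format"
  fi
}
format_namenode /usr/local/src/hadoop/data_tmp
```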

8. Start HDFS and YARN

start-dfs.sh

start-yarn.sh
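After both scripts finish, each node should show its daemons in jps: NameNode, SecondaryNameNode, and ResourceManager on master; DataNode and NodeManager on slave1 and slave2. A sketch of the check, using sample output as a stand-in; on a real node replace the sample with out=$(jps):

```shell
# Sketch: check jps output for the expected master daemons. The sample
# output below is a stand-in; on a real master use: out=$(jps)
out="2101 NameNode
2290 SecondaryNameNode
2455 ResourceManager"
for daemon in NameNode SecondaryNameNode ResourceManager; do
  echo "$out" | grep -q "$daemon" && echo "$daemon: running" || echo "$daemon: MISSING"
done
```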

VI. Install Hive

#1. Install MySQL, Hive's metastore database

1. Download the MySQL repository package:

sudo wget http://dev.mysql.com/get/mysql57-community-release-el7-8.noarch.rpm

2. Install the repository:

sudo yum localinstall mysql57-community-release-el7-8.noarch.rpm

3. Install MySQL:

sudo yum install mysql-community-server

4. Start the MySQL service:

sudo systemctl start mysqld

5. Find the initial root password:

[ec2-user@master ~]$ sudo grep "password" /var/log/mysqld.log
2021-03-19T07:56:41.030922Z 1 [Note] A temporary password is generated for root@localhost: v=OKXu0laSo;

Here v=OKXu0laSo; (the string after "root@localhost: ") is the password.

6. Change the MySQL root password

Copy the initial password and paste it at the password prompt to get a MySQL shell.

Open the shell:

sudo mysql -uroot -p

Change the password; this guide sets it to 1234.

MySQL rejects new passwords that are too simple, so relax the password policy first.

In the MySQL shell:

mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)

mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)

Then set the new password:

mysql> set password for 'root'@'localhost'=password('1234');
Query OK, 0 rows affected, 1 warning (0.00 sec)

7. Enable remote login

Exit MySQL first, then log back in with the new password:

[ec2-user@master ~]$ mysql -uroot -p1234
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.7.33 MySQL Community Server (GPL)

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

Create a user allowed to connect from the cluster's 172.* subnet:

mysql> create user 'root'@'172.%.%.%' identified by '1234';
Query OK, 0 rows affected (0.00 sec)

Grant it full privileges for remote access:

mysql> grant all privileges on *.* to 'root'@'172.%.%.%' with grant option;
Query OK, 0 rows affected (0.00 sec)

Reload the privilege tables:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MySQL installation is now complete.

#2. Install Hive (requires Hadoop)

1. Extract Hive to the target directory:

tar -zxvf hadoop/apache-hive-1.1.0-bin.tar.gz -C /usr/local/src/

2. Rename:

mv apache-hive-1.1.0-bin/ hive

3. Configure global environment variables:

sudo vim /etc/profile

export HIVE_HOME=/usr/local/src/hive

export PATH=$PATH:$HIVE_HOME/bin

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/src/hive/lib/*

Reload the profile:

source /etc/profile

4. Create hive-site.xml under /usr/local/src/hive/conf:

touch hive-site.xml

Add the following to hive-site.xml:

<configuration>
<property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
</property>

<property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>

<property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
</property>

<property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
</property>

<property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>1234</value>
</property>
</configuration>

Note: replace the MySQL password with the one you set.

5. Add the JDBC driver

Copy the MySQL driver into Hive's lib directory:

cp /home/ec2-user/hadoop/mysql-connector-java-5.1.44-bin.jar $HIVE_HOME/lib

6. Edit hive-env.sh (copy hive-env.sh.template to hive-env.sh first if it does not exist)

[ec2-user@master conf]$ vi hive-env.sh
# add the following settings
export HADOOP_HOME=/usr/local/src/hadoop
export HIVE_CONF_DIR=/usr/local/src/hive/conf

7. Start Hive

Make sure HDFS, YARN, and MySQL are all running.

If they are not:

start-all.sh

Initialize Hive's metastore schema in MySQL:

schematool -dbType mysql -initSchema

Start Hive:

hive

VII. Install Sqoop

1. Extract:

tar -zxvf hadoop/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /usr/local/src/

2. Rename to sqoop:

[ec2-user@master src]$ ls
hadoop  hive  jdk  sqoop-1.4.7.bin__hadoop-2.6.0
[ec2-user@master src]$ mv sqoop-1.4.7.bin__hadoop-2.6.0/ sqoop

3. Add environment variables:

[ec2-user@master src]$ sudo vi /etc/profile
# add the following lines
export SQOOP_HOME=/usr/local/src/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
# reload the profile
[ec2-user@master src]$ source /etc/profile

4. Edit sqoop-env.sh:

[ec2-user@master src]$ cd sqoop/conf/
[ec2-user@master conf]$ mv sqoop-env-template.sh sqoop-env.sh
[ec2-user@master conf]$ vi sqoop-env.sh

Adjust the following settings to match your environment:

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/usr/local/src/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/usr/local/src/hadoop

#Set the path to where bin/hive is available
export HIVE_HOME=/usr/local/src/hive

5. Copy the MySQL driver into Sqoop's lib directory

[ec2-user@master conf]$ cp /home/ec2-user/hadoop/mysql-connector-java-5.1.44-bin.jar $SQOOP_HOME/lib
[ec2-user@master conf]$ ls $SQOOP_HOME/lib/mysql-connector-java-5.1.44-bin.jar 
/usr/local/src/sqoop/lib/mysql-connector-java-5.1.44-bin.jar

6. Verify that Sqoop can connect to MySQL:

[ec2-user@master conf]$ sqoop help
Warning: /usr/local/src/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /usr/local/src/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
21/03/19 08:53:06 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

Then test the connection by listing databases (substitute your own password):

sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password 1234
