Big Data Study 3: Hadoop Compilation and Pseudo-Distributed Deployment

This post records the process of compiling Hadoop and deploying it in pseudo-distributed mode.
1. Check for a previous installation, leftover files, and the hosts configuration
ps -ef | grep hadoop
find / -name hadoop


[hadoop@hadoop001 ~]$ cat /etc/hosts 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.11 hadoop001
192.168.137.12 hadoop002
192.168.137.13 hadoop003


2. Create directories
Under /opt, create sourcecode and software:
[root@hadoop001 ~]# cd /opt
[root@hadoop001 opt]# mkdir sourcecode software
[root@hadoop001 opt]# cd /opt/sourcecode/
======================================Hadoop compilation below=======================================================
3. Upload the hadoop-2.8.1-src.tar.gz package and extract it
[root@hadoop001 sourcecode]# tar xzvf hadoop-2.8.1-src.tar.gz 
[root@hadoop001 sourcecode]# ll 
total 33720
drwxr-xr-x. 17 root root     4096 Jun  2 14:13 hadoop-2.8.1-src
-rw-r--r--.  1 root root 34523353 Aug 20  2017 hadoop-2.8.1-src.tar.gz


You can browse the Hadoop source on GitHub (search for hadoop).
The BUILDING.txt file tells us which software to install (a quick way to read it locally is sketched after this list):
java
maven
gcc and related build packages
protobuf
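To read the requirements locally instead of on GitHub, for example (path as extracted in step 3):
less /opt/sourcecode/hadoop-2.8.1-src/BUILDING.txt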


4. Download the JDK and configure environment variables
Download the JDK into /usr/java (create the directory if it does not exist):
[root@hadoop001 java]# tar -xzvf jdk-8u144-linux-x64.tar.gz 
Change the owner and group:
[root@hadoop001 java]# chown -R root.root jdk1.8.0_144/
[root@hadoop001 java]# ll 
total 181172
drwxr-xr-x. 8 root root      4096 Jul 22 13:11 jdk1.8.0_144


[root@hadoop001 java]# vi /etc/profile
Add at the bottom:
export JAVA_HOME=/usr/java/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH
Save the file, then source it:
[root@hadoop001 java]# source /etc/profile
[root@hadoop001 java]# java -version 
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)


5. Install Maven and configure environment variables
Install Maven 3.3.9 under /opt/software/.
[root@hadoop001 software]# ls 
[root@hadoop001 software]# rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
  100%    8415 KB 8415 KB/s 00:00:01       0 Errors
Unzip it:
[root@hadoop001 software]# unzip apache-maven-3.3.9-bin.zip 


Add the environment variable:
[root@hadoop001 software]# vi /etc/profile
export MAVEN_HOME=/opt/software/apache-maven-3.3.9
Add it to PATH, which afterwards reads:
export PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH


[root@hadoop001 software]# source /etc/profile
[root@hadoop001 software]# mvn -version 
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
Maven home: /opt/software/apache-maven-3.3.9
Java version: 1.8.0_144, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_144/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-431.el6.x86_64", arch: "amd64", family: "unix"


6. Install protobuf-2.5.0.tar.gz and configure environment variables
Upload it to /opt/software.
Extract it and set the owner:
[root@hadoop001 software]# tar xzvf protobuf-2.5.0.tar.gz 
[root@hadoop001 software]# chown -R root.root protobuf-2.5.0/


Install gcc, gcc-c++, make, cmake and the other build dependencies (note: on CentOS the g++ compiler is provided by the gcc-c++ package):
[root@hadoop001 software]# yum install -y gcc gcc-c++ make cmake
[root@hadoop001 software]# yum -y install autoconf automake libtool curl make g++ unzip


Enter the protobuf directory:
[root@hadoop001 software]# cd protobuf-2.5.0/
Build and install it:
[root@hadoop001 protobuf-2.5.0]# ./configure --prefix=/usr/local/protobuf
[root@hadoop001 protobuf-2.5.0]# make && make install


After make install, the directory appears under /usr/local:
[root@hadoop001 local]# ll -d protobuf/
drwxr-xr-x. 5 root root 4096 Aug 17 18:32 protobuf/


Configure the environment variable:
export PROTOC_HOME=/usr/local/protobuf
Add it to PATH, which afterwards reads:
export PATH=$PROTOC_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH


[root@hadoop001 protobuf-2.5.0]# source /etc/profile
[root@hadoop001 protobuf-2.5.0]# protoc  --version 
libprotoc 2.5.0


7. Install FindBugs and configure environment variables
Upload findbugs-1.3.9.zip to /opt/software and unzip it:
[root@hadoop001 software]# unzip findbugs-1.3.9.zip 


Configure the environment variable:
export FINDBUGS_HOME=/opt/software/findbugs-1.3.9
Add it to PATH, which afterwards reads:
export PATH=$FINDBUGS_HOME/bin:$PROTOC_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
[root@hadoop001 software]# source /etc/profile
[root@hadoop001 software]# findbugs -version 
1.3.9


8. Install the remaining dependency packages
yum install -y openssl openssl-devel svn ncurses-devel zlib-devel libtool
yum install -y snappy snappy-devel bzip2 bzip2-devel lzo lzo-devel lzop autoconf automake


Here the lzo, lzo-devel and lzop packages turned out to be on another CentOS 6.5 ISO, so I downloaded two of them manually:
[root@hadoop001 mnt]# rpm -ivh lzo-minilzo-2.03-3.1.el6.x86_64.rpm 
warning: lzo-minilzo-2.03-3.1.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 192a7d7d: NOKEY
Preparing...                ########################################### [100%]
   1:lzo-minilzo            ########################################### [100%]
[root@hadoop001 mnt]# yum install -y   lzo-devel-2.03-3.1.el6.x86_64.rpm 
[root@hadoop001 mnt]# yum install -y   lzo lzo-devel lzop 


Install once more to confirm that every package is present. Do not skip this step!
yum install -y snappy snappy-devel bzip2 bzip2-devel lzo lzo-devel lzop autoconf automake
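A quick sketch, assuming the package names from the yum lines above, to double-check that the build tools and libraries are all in place before starting the long compile:
rpm -qa | grep -E 'snappy|bzip2|lzo|openssl-devel|zlib-devel'
java -version && mvn -version && protoc --version && findbugs -version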


9. Compile
Make sure the machine can reach the Internet.
Go to /opt/sourcecode/hadoop-2.8.1-src:
[root@hadoop001 hadoop-2.8.1-src]# pwd 
/opt/sourcecode/hadoop-2.8.1-src
[root@hadoop001 hadoop-2.8.1-src]# mvn clean package -Pdist,native -DskipTests -Dtar
Depending on network speed this can take an hour or more.
Start: 21:00
End:


Notes:
1. Sometimes a download hangs for a long time during compilation because the connection to the remote repository stalls; press Ctrl+C and re-run the build command.
2. If the build reports a missing file, clean Maven first (mvn clean) and then recompile, as sketched below.
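A minimal retry sketch for note 2, reusing the same build command as in step 9:
cd /opt/sourcecode/hadoop-2.8.1-src
mvn clean
mvn package -Pdist,native -DskipTests -Dtar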


10. After packaging, the final tarball is in this directory:
 /opt/sourcecode/hadoop-2.8.1-src/hadoop-dist/target/hadoop-2.8.1.tar.gz
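Before installing, you can optionally confirm the native libraries really were built. A sketch, assuming the exploded build output sits next to the tarball (which is where the 2.8.1 build normally leaves it):
cd /opt/sourcecode/hadoop-2.8.1-src/hadoop-dist/target/hadoop-2.8.1
bin/hadoop checknative -a
Each library line (hadoop, zlib, snappy, lz4, bzip2, openssl) should report true; a false entry means that codec was not picked up during the build.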
======================================Hadoop installation below=======================================================
11. Create the hadoop user
[root@hadoop001 software]# useradd hadoop
[root@hadoop001 software]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: it is too simplistic/systematic
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@hadoop001 software]# vi /etc/sudoers
Add:
hadoop  ALL=(ALL)       ALL
Save with :wq! (the file is read-only, so the forced write is needed).
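Optionally, verify the entry as root; sudo -l -U lists the commands a user is allowed to run:
[root@hadoop001 software]# sudo -l -U hadoop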


12. Copy the compiled tarball to /opt/software
Extract it and create a symlink:
[root@hadoop001 software]# tar xzvf hadoop-2.8.1.tar.gz 
[root@hadoop001 software]# ln -s hadoop-2.8.1 hadoop 


lrwxrwxrwx.  1 root root   12 Aug 17 19:49 hadoop -> hadoop-2.8.1
drwxrwxr-x.  9  500  500 4096 Jun  2 14:24 hadoop-2.8.1
Change the owner and group:
[root@hadoop001 software]# chown -R hadoop.hadoop hadoop
[root@hadoop001 software]# chown -R hadoop.hadoop hadoop-2.8.1/
[root@hadoop001 software]# ll
total 16
drwxr-xr-x.  6 root   root   4096 Nov 10  2015 apache-maven-3.3.9
drwxr-xr-x.  7 root   root   4096 Aug 21  2009 findbugs-1.3.9
lrwxrwxrwx.  1 hadoop hadoop   12 Aug 17 19:49 hadoop -> hadoop-2.8.1
drwxrwxr-x.  9 hadoop hadoop 4096 Jun  2 14:24 hadoop-2.8.1
drwxr-xr-x. 10 root   root   4096 Aug 17 18:29 protobuf-2.5.0
[root@hadoop001 software]# 


13. Configure environment variables
Add the Hadoop bin and sbin directories to PATH:
[root@hadoop001 hadoop]# vi /etc/profile
export HADOOP_HOME=/opt/software/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$FINDBUGS_HOME/bin:$PROTOC_HOME/bin:$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH
[root@hadoop001 hadoop]# source /etc/profile


14. Check the basic services
[root@hadoop001 hadoop]# java -version 
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
[root@hadoop001 hadoop]# service sshd status 
openssh-daemon (pid  995) is running...




15. Set up passwordless SSH (mutual trust)
Switch to the hadoop user:
[root@hadoop001 hadoop]# su - hadoop
[hadoop@hadoop001 ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
86:80:23:4b:7c:97:17:72:f7:76:9a:16:90:74:e4:45 hadoop@hadoop001
The key's randomart image is:
+--[ RSA 2048]----+
|     . o.+oo.E   |
|.  .  + o.= .    |
|.oo..o .   = .   |
|.o......  . =    |
|.     . S  +     |
|       .  .      |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@hadoop001 ~]$ ssh-copy-id hadoop001
The authenticity of host 'hadoop001 (192.168.137.11)' can't be established.
RSA key fingerprint is 11:8b:fb:d2:04:04:d0:68:90:91:34:59:f8:f1:44:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,192.168.137.11' (RSA) to the list of known hosts.
hadoop@hadoop001's password: 
Now try logging into the machine, with "ssh 'hadoop001'", and check in:


  .ssh/authorized_keys


to make sure we haven't added extra keys that you weren't expecting.


Test:
[hadoop@hadoop001 ~]$ ssh hadoop001 date 
Thu Aug 17 19:59:03 CST 2017
[hadoop@hadoop001 ~]$ ssh 192.168.137.11 date 
Thu Aug 17 20:00:01 CST 2017


The commands above generate the following files:
[hadoop@hadoop001 ~]$ ll /home/hadoop/.ssh/
total 16
-rw-------. 1 hadoop hadoop  398 Aug 17 19:58 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Aug 17 19:58 id_rsa
-rw-r--r--. 1 hadoop hadoop  398 Aug 17 19:58 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop  795 Aug 17 19:59 known_hosts
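If ssh still prompts for a password at this point, the usual cause is permissions on ~/.ssh; the standard fix is (the listing above already shows the correct modes):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys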




16. Configure the parameters:
[hadoop@hadoop001 ~]$ cd /opt/software/hadoop/etc/hadoop
# core configuration file
[hadoop@hadoop001 hadoop]$ vi core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
</configuration>
# replication factor
[hadoop@hadoop001 hadoop]$ vi hdfs-site.xml 
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
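Optional: the format step below puts HDFS metadata under /tmp/hadoop-hadoop by default, and /tmp may be wiped on reboot. If you want the data to survive, the standard hadoop.tmp.dir property can be added to core-site.xml; the path here is only an example:
[hadoop@hadoop001 hadoop]$ mkdir -p /home/hadoop/tmp
[hadoop@hadoop001 hadoop]$ vi core-site.xml
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>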


17. Format and start
# set JAVA_HOME in hadoop-env.sh
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh 
export JAVA_HOME=/usr/java/jdk1.8.0_144


[hadoop@hadoop001 hadoop]$ which hdfs
/opt/software/hadoop/bin/hdfs
[hadoop@hadoop001 hadoop]$ hdfs namenode -format
........
Look for:
17/08/17 20:11:03 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
........


[hadoop@hadoop001 hadoop]$ which start-dfs.sh
/opt/software/hadoop/sbin/start-dfs.sh
[hadoop@hadoop001 hadoop]$ 
[hadoop@hadoop001 hadoop]$ start-dfs.sh
# you also have to answer yes for 0.0.0.0
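To avoid the interactive yes prompts entirely, the host keys can be pre-accepted with ssh-keyscan (a standard OpenSSH tool; the host list is just what this pseudo-distributed setup connects to):
[hadoop@hadoop001 hadoop]$ ssh-keyscan hadoop001 localhost 0.0.0.0 >> ~/.ssh/known_hosts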


18. Check that the services are running
[hadoop@hadoop001 ~]$ jps 
19170 DataNode
19045 NameNode
19438 Jps
19327 SecondaryNameNode


Then open the web UI in a browser:
http://192.168.137.11:50070
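As a final smoke test (optional; the directory name is only an example), check that HDFS accepts commands:
[hadoop@hadoop001 ~]$ hdfs dfs -mkdir -p /user/hadoop
[hadoop@hadoop001 ~]$ hdfs dfs -ls /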
Done.




-------------------------------------------------------------
ERROR:
1.Starting namenodes on [hadoop001]
hadoop001: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Fix: JAVA_HOME must be added by hand in hadoop-env.sh, otherwise the scripts cannot find it:
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh 
export JAVA_HOME=/usr/java/jdk1.8.0_144