Before building the hadoop+spark cluster, set up the base system environment.
All steps below are run as root on CentOS 6.5.
1 Disable SELinux: vi /etc/selinux/config and set SELINUX=disabled
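For reference, the relevant line in /etc/selinux/config should end up as follows (note the key is uppercase; a reboot is required for the change to take effect):

```
# /etc/selinux/config -- disable SELinux entirely
SELINUX=disabled
```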
2 Disable the firewall
service iptables stop
chkconfig iptables off
3 Network configuration
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Add: DNS1=114.114.114.114
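A typical static-IP ifcfg-eth0 on CentOS 6 might look like the sketch below; the addresses here are hypothetical placeholders, so keep your machine's actual values and only add the DNS1 line:

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.4       # hypothetical node address
NETMASK=255.255.255.0
GATEWAY=192.168.1.1      # hypothetical gateway
DNS1=114.114.114.114     # the line added in this step
```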
4 Restart the network service
service network restart
5 cat /etc/resolv.conf to verify that the DNS address was written
6 Update packages with yum
yum update
7 Install gcc if needed
yum install gcc
8 Install the time-synchronization service
yum -y install ntp
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Edit the configuration file: vi /etc/ntp.conf
Add:
(adjust the existing restrict 192.168.1.0 mask 255.255.255.0 ... line to match this reference machine's own network)
restrict 192.168.1.4 nomodify noquery    (the IP of the reference machine)
server 127.127.1.0
fudge 127.127.1.0 stratum 3
Start: service ntpd start
Verify: netstat -unlp
ntpstat
ntpq -p
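On the other cluster nodes, /etc/ntp.conf should point at the reference machine instead of the local clock. A minimal sketch, assuming 192.168.1.4 above is the reference machine's IP:

```
# /etc/ntp.conf on a client node (sketch)
# sync against the cluster's reference machine rather than the local clock
server 192.168.1.4
```

After editing, start ntpd on each client (service ntpd start) and check synchronization with ntpq -p.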
9 Install common dependency packages
yum install zlib zlib-devel
yum install lsof
yum install unzip zip
yum install gcc gcc-c++
yum install libjpeg
yum install pcre-devel
yum install libxml2-devel
10 Cluster host mapping
vi /etc/hosts
Add (host IP and hostname):
192.168.*.* data01.novalocal
192.168.*.* data02.novalocal
192.168.*.* data03.novalocal
192.168.*.* data04.novalocal
192.168.*.* data05.novalocal
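For the mapping to work, each node's own hostname should match its entry above. On CentOS 6 this is set in /etc/sysconfig/network; a sketch for the first node:

```
# /etc/sysconfig/network on the data01 node -- hostname must match /etc/hosts
NETWORKING=yes
HOSTNAME=data01.novalocal
```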