On a fresh minimal Linux VM, tools such as ifconfig and vim are not preinstalled; install them first:
yum install -y net-tools   # provides ifconfig
yum install -y vim
Change the IP address:
cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33
IPADDR="192.168.182.147"
NETMASK="255.255.255.0"
GATEWAY="192.168.182.2"
DNS1="8.8.8.8"
BOOTPROTO=static
Restart the network service: systemctl restart network
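Before restarting the network service, it can be worth sanity-checking the values typed into the ifcfg file. A minimal sketch (it writes a sample fragment to a temp file rather than touching the real /etc/sysconfig path):

```shell
# Sample ifcfg fragment standing in for /etc/sysconfig/network-scripts/ifcfg-ens33
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
BOOTPROTO=static
IPADDR="192.168.182.147"
NETMASK="255.255.255.0"
GATEWAY="192.168.182.2"
DNS1="8.8.8.8"
EOF
# Pull out the configured address, stripping the quotes
ip=$(grep '^IPADDR=' "$cfg" | cut -d= -f2 | tr -d '"')
echo "IPADDR is $ip"
rm -f "$cfg"
```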
Update yum:
yum update -y
Change the hostname (delete the old name and replace it):
vim /etc/hostname   # e.g. lqz
3. Edit the Linux domain-name mapping file and append a line at the end in the form "IP hostname":
vim /etc/hosts
192.168.231.130 lqz
192.168.182.147 hdp1
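The mapping added to /etc/hosts can be checked the same way the resolver reads it: the first whitespace-separated field is the IP, the second the hostname. A small sketch against a temp copy of the file:

```shell
# Temp stand-in for /etc/hosts with the two entries above
hosts=$(mktemp)
printf '192.168.231.130 lqz\n192.168.182.147 hdp1\n' > "$hosts"
# Find the IP registered for hostname hdp1
addr=$(awk -v h=hdp1 '$2 == h {print $1}' "$hosts")
echo "hdp1 -> $addr"
rm -f "$hosts"
```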
Install the JDK
Set the global environment variables:
vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/jdk
export PATH=$PATH:$JAVA_HOME/bin
Check the installation:
# java -version
# javac -version
Check the configuration:
echo $JAVA_HOME
Run:
vi ~/.bashrc
Append at the end:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
After saving the file, run the following command to make the JAVA_HOME environment variable take effect:
source ~/.bashrc
To verify that the Java environment is correctly configured and active, run:
# java -version
# $JAVA_HOME/bin/java -version
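A quick way to confirm the export lines had effect is to check that $JAVA_HOME/bin actually ended up on PATH. A sketch (the JAVA_HOME value matches the one used above):

```shell
# Reproduce the two export lines from the profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin
# Check membership of $JAVA_HOME/bin in the colon-separated PATH
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) java_on_path=yes ;;
  *)                    java_on_path=no  ;;
esac
echo "java on PATH: $java_on_path"
```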
1.2 Install Hadoop
This lab uses Hadoop 2.10, downloaded online with the wget tool:
yum -y install wget
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
Alternatively, copy the hadoop-2.10.1.tar.gz package provided by your instructor to /usr/local.
Install Hadoop under the /usr/local directory:
# cd /usr/local
# tar -zxvf hadoop-2.10.1.tar.gz
Rename the extracted directory to make later steps more convenient:
# mv ./hadoop-2.10.1/ ./hadoop
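The extract-and-rename steps can be rehearsed safely against a throwaway tarball before running them on the real download (same tar and mv invocations, just in a temp directory):

```shell
work=$(mktemp -d)
cd "$work"
# Build a stand-in hadoop-2.10.1.tar.gz containing one marker file
mkdir hadoop-2.10.1
echo 2.10.1 > hadoop-2.10.1/VERSION
tar -zcf hadoop-2.10.1.tar.gz hadoop-2.10.1
rm -rf hadoop-2.10.1
# The same steps as above
tar -zxf hadoop-2.10.1.tar.gz
mv ./hadoop-2.10.1/ ./hadoop
cat hadoop/VERSION
```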
Check that Hadoop is installed correctly:
/usr/local/hadoop/bin/hadoop version
If the Hadoop version information is printed, the installation succeeded.
Hadoop environment variables
Edit the Hadoop environment file:
vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find export JAVA_HOME=${JAVA_HOME} and change it to:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
System environment variables
vi ~/.bashrc and append the following at the end:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HIVE_HOME/bin
Make the Hadoop environment variables take effect:
source ~/.bashrc
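Since every HADOOP_* variable is derived from HADOOP_HOME, a short loop after `source ~/.bashrc` can confirm they all expand to the same tree. A sketch:

```shell
# Same derivations as in ~/.bashrc above
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
# Every derived variable should expand to the same path as HADOOP_HOME
status=ok
for v in "$HADOOP_MAPRED_HOME" "$HADOOP_COMMON_HOME" "$HADOOP_HDFS_HOME" "$YARN_HOME"; do
  [ "$v" = "$HADOOP_HOME" ] || status=mismatch
done
echo "hadoop env: $status"
```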
Hadoop configuration files
Hadoop's configuration files live under the installation directory at /usr/local/hadoop/etc/hadoop. Two of them need to be modified:
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/hdfs-site.xml
Edit core-site.xml and change its contents to the following:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>location to store temporary
files</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.231.133:9000</value>
</property>
</configuration>
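A dependency-free way to double-check what ended up in core-site.xml is to grep out the value that follows the fs.defaultFS name. A sketch against a temp copy of the file shown above:

```shell
# Temp copy standing in for /usr/local/hadoop/etc/hadoop/core-site.xml
core=$(mktemp)
cat > "$core" <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.231.133:9000</value>
</property>
</configuration>
EOF
# Take the <value> line immediately after the fs.defaultFS name
fsdefault=$(grep -A1 'fs.defaultFS' "$core" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "fs.defaultFS = $fsdefault"
rm -f "$core"
```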
Edit hdfs-site.xml and change its contents to the following:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>
Format the NameNode (run on one node only):
/usr/local/hadoop/bin/hdfs namenode -format
If the output contains the following message, formatting succeeded:
Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
Start and stop Hadoop
/usr/local/hadoop/sbin/start-dfs.sh
Check whether the NameNode and DataNode started correctly:
jps
If they started correctly, jps lists the NameNode, DataNode and SecondaryNameNode processes:
# jps
3689 SecondaryNameNode
3520 DataNode
3800 Jps
3393 NameNode
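The jps check can be scripted: given output like the above, verify that each expected daemon appears. A sketch using the sample output as a fixture:

```shell
# Sample jps output as shown above
jps_out='3689 SecondaryNameNode
3520 DataNode
3800 Jps
3393 NameNode'
missing=0
for d in NameNode DataNode SecondaryNameNode; do
  # -w matches whole words, so "NameNode" does not also match "SecondaryNameNode"
  if printf '%s\n' "$jps_out" | grep -qw "$d"; then
    echo "$d: running"
  else
    echo "$d: NOT running"
    missing=$((missing + 1))
  fi
done
echo "missing daemons: $missing"
```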
Stop command:
/usr/local/hadoop/sbin/stop-dfs.sh
To start it again, just run:
/usr/local/hadoop/sbin/start-dfs.sh
Install MySQL
1.1.1 Download and install MySQL:
# wget http://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
# yum -y install mysql57-community-release-el7-10.noarch.rpm
# yum -y install mysql-community-server
1.1.2 Start the MySQL service
systemctl start mysqld.service
1.1.3 Check the MySQL service status
systemctl status mysqld.service
MySQL is now running, but to log in you first need the temporary password MySQL generated for the root user. It can be found in the log file:
grep 'temporary password' /var/log/mysqld.log
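The temporary password sits at the end of the "[Note] A temporary password is generated" log line, so it can be cut out with sed. A sketch against a sample log line (the password shown is made up):

```shell
# Sample mysqld.log line; the real one comes from: grep 'temporary password' /var/log/mysqld.log
line='2021-09-01T02:10:45.6Z 1 [Note] A temporary password is generated for root@localhost: q;r5TPswd#Ab'
# Everything after "root@localhost: " is the password
tmp_pw=$(printf '%s\n' "$line" | sed 's/.*root@localhost: //')
echo "temporary password: $tmp_pw"
```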
To check for (and remove) any pre-existing MySQL installation:
1- rpm -qa | grep -i mysql
2- find / -name mysql
4. Set the username and password
Log in with the temporary password (mysql -uroot -p), then run:
set password = password('root');
grant all privileges on *.* to root@'lqz' identified by "root";
grant all privileges on *.* to root@'%' identified by "root";
flush privileges;
Log in to MySQL:
mysql -uroot -p
Restart MySQL:
service mysqld restart
MySQL installation and configuration is complete.
Install and configure Hive
2.1 Download Hive 2.3.9
wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.9/apache-hive-2.3.9-bin.tar.gz
Extract it and move the directory to /usr/local/hive:
# tar xzvf apache-hive-2.3.9-bin.tar.gz
# mv apache-hive-2.3.9-bin /usr/local/hive
2.2 Configure environment variables
vi ~/.bashrc
Add these two lines at the end of the file:
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
Activate the environment variables:
source ~/.bashrc
2.3 Copy the Hive configuration files
# cd /usr/local/hive/conf
# cp hive-env.sh.template hive-env.sh
# cp hive-default.xml.template hive-site.xml
2.3.1 Configure hive-env.sh to point at HADOOP_HOME
In hive-env.sh, find the HADOOP_HOME setting and change it to:
HADOOP_HOME=/usr/local/hadoop
2.3.2 Modify hive-site.xml
Specify the MySQL JDBC driver, database name, username and password. In hive-site.xml, locate the following properties (search by name) and modify their values:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
Note: inside XML, the & in the URL must be written as the entity &amp;, otherwise the file fails to parse.
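The JDBC URL contains a literal &, which must appear as &amp; inside the XML <value> element. The escaping can be sketched with sed:

```shell
url='jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&useSSL=false'
# Replace each bare & with the XML entity &amp;
# (in a sed replacement, & means "whole match", so it is backslash-escaped)
escaped=$(printf '%s' "$url" | sed 's/&/\&amp;/g')
echo "$escaped"
```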
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC
metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
<description>password to use against metastore database</description>
</property>
These must match the MySQL account created earlier (here root / root).
<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/scratchdir</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/resourcesdir</value>
<description>Temporary local directory for added resources in
the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/usr/local/hive/querylog</value>
<description>Location of Hive run time structured log
file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/usr/local/hive/operation_logs</value>
<description>Top level directory where operation logs are stored
if logging functionality is enabled</description>
</property>
2.3.3 Create the directories referenced above
# mkdir -p /usr/local/hive/scratchdir
# mkdir -p /usr/local/hive/resourcesdir
# mkdir -p /usr/local/hive/querylog
# mkdir -p /usr/local/hive/operation_logs
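Since all four directories share the /usr/local/hive prefix, they can be created in one loop, which avoids the risk of mistyping the prefix. A sketch against a temp base directory instead of /usr/local/hive:

```shell
# Temp stand-in for /usr/local/hive
base=$(mktemp -d)
for d in scratchdir resourcesdir querylog operation_logs; do
  mkdir -p "$base/$d"
done
ls "$base"
```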
2.3.4 Modify hive-config.sh under hive/bin
Set JAVA_HOME, HADOOP_HOME and HIVE_HOME (no space after the =):
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive
2.3.5 Download the MySQL JDBC driver (mysql-connector-java-5.1.38.jar)
and place it in the $HIVE_HOME/lib directory:
# wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
# mv mysql-connector-java-5.1.38.jar /usr/local/hive/lib
2.3.6 Initialize the Hive metastore database
/usr/local/hive/bin/schematool -dbType mysql -initSchema
2.3.7 Start Hive
/usr/local/hive/bin/hive