Compiling and Installing Hive from Source on a Cloud Host, Plus Hadoop Installation

1. Software to install
# Environment:
# jdk-7u80
# hadoop-2.6.0-cdh5.7.1 does not support JDK 1.8, so JDK 1.7 is kept here
# apache-maven-3.3.9
# mysql 5.1
# pseudo-distributed cluster already running
(If unzip fails with "-bash: unzip: command not found", install it first: yum install -y unzip zip)
2. Install the JDK
mkdir /usr/java && cd /usr/java/ 
tar -zxvf /tmp/server-jre-7u80-linux-x64.tar.gz 
chown -R root:root /usr/java/jdk1.7.0_80/ 
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_80' >> /etc/profile
source /etc/profile
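A quick sanity check that the JDK unpacked correctly (java is not on the PATH yet; the PATH entry is added in the Maven section, so call the binary by its full path):
/usr/java/jdk1.7.0_80/bin/java -version
# should report java version "1.7.0_80"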

3. Install Maven
cd /usr/local/ 
unzip /tmp/apache-maven-3.3.9-bin.zip 
chown root: /usr/local/apache-maven-3.3.9 -R 
echo 'export MAVEN_HOME=/usr/local/apache-maven-3.3.9' >> /etc/profile
echo 'export MAVEN_OPTS="-Xms256m -Xmx512m"' >> /etc/profile
echo 'export PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
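To confirm Maven sees both the install and the JDK (mvn -version prints the Maven version and the Java home it resolved):
mvn -version
# expect Apache Maven 3.3.9 and a Java home under /usr/java/jdk1.7.0_80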

4. Install MySQL
yum -y install mysql-server mysql 
/etc/init.d/mysqld start 
chkconfig mysqld on 
mysqladmin -u root password 123456 
mysql -uroot -p123456 
use mysql; 
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY '123456' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'i-2-1776-VM' IDENTIFIED BY '123456' WITH GRANT OPTION; (i-2-1776-VM is this cloud host's hostname)
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
update user set password=password('123456') where user='root';
delete from user where not (user='root');
delete from user where user='root' and password='';
drop database test;
DROP USER ''@'%';
flush privileges;
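As a quick check that the cleanup left only the intended accounts (a sketch; the password follows the value set above):
mysql -uroot -p123456 -e 'select user,host from mysql.user;'
# only the root entries granted above should remain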

5. Download the Hive source
# Download the Hive source package from:
http://archive.cloudera.com/cdh5/cdh/5/
# Pick the Hive package matching the CDH version:
# hive-1.1.0-cdh5.7.1-src.tar.gz
(wget http://archive.cloudera.com/cdh5/cdh/5/hive-1.1.0-cdh5.7.1-src.tar.gz)
# After unpacking, build it into an installable package with Maven

6. Compile Hive
cd /tmp/ 
tar -xf hive-1.1.0-cdh5.7.1-src.tar.gz 
cd /tmp/hive-1.1.0-cdh5.7.1 
mvn clean package -DskipTests -Phadoop-2 -Pdist 
# The compiled package ends up at:
# packaging/target/apache-hive-1.1.0-cdh5.7.1-bin.tar.gz
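The first build downloads a large number of Maven dependencies, so it can take a while. A simple way to confirm the build produced the tarball:
ls -lh packaging/target/apache-hive-1.1.0-cdh5.7.1-bin.tar.gz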

7. Install the compiled Hive package, then test it
cd /usr/local/ 
tar -xf /tmp/apache-hive-1.1.0-cdh5.7.1-bin.tar.gz 
ln -s apache-hive-1.1.0-cdh5.7.1-bin hive 
chown -R hadoop:hadoop apache-hive-1.1.0-cdh5.7.1-bin 
chown -R hadoop:hadoop hive 
echo 'export HIVE_HOME=/usr/local/hive' >> /etc/profile
echo 'export PATH=$HIVE_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
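At this point the hive launcher should resolve through the symlink, though Hive will not start cleanly until the metastore is configured below (a quick check):
which hive
# expect: /usr/local/hive/bin/hive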

8. Edit the Hive configuration files
su - hadoop 
cd /usr/local/hive 
cd conf

1. hive-env.sh
cp hive-env.sh.template hive-env.sh && vi hive-env.sh
HADOOP_HOME=/usr/local/hadoop 
2. hive-site.xml
vi hive-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
</configuration>
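The URL above sets createDatabaseIfNotExist=true, so the metastore database is created automatically on first use. Optionally, the schema can also be initialized up front with the schematool utility shipped under $HIVE_HOME/bin (this assumes the MySQL driver from the next section is already in place):
schematool -dbType mysql -initSchema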

9. Copy the MySQL driver jar into $HIVE_HOME/lib
# hive-site.xml above points at the Java MySQL driver,
# so the driver jar has to be placed in Hive's lib directory.
# Unpack mysql-connector-java-5.1.45.zip and copy the jar over:
cd /tmp 
unzip mysql-connector-java-5.1.45.zip 
cd mysql-connector-java-5.1.45 
cp mysql-connector-java-5.1.45-bin.jar /usr/local/hive/lib/ 
Starting Hive without this jar fails with:
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
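With the driver jar in place, a minimal smoke test as the hadoop user (assuming HDFS is already running):
su - hadoop
hive
hive> show databases;
hive> create database test_db;
hive> show databases;
hive> quit;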

10. Install Hadoop

cd /usr/local
tar -xf /tmp/hadoop-2.6.0-cdh5.7.1.tar.gz
chown hadoop: hadoop-2.6.0-cdh5.7.1 -R
ln -s hadoop-2.6.0-cdh5.7.1/ hadoop
cd hadoop/etc/hadoop
echo 'export HADOOP_HOME=/usr/local/hadoop' >> /etc/profile
echo 'export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> /etc/profile
source /etc/profile
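A quick check that the Hadoop binaries resolve through the symlink:
hadoop version
# should print Hadoop 2.6.0-cdh5.7.1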

vi core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
    </property>
</configuration>

vi hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

vi hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_80  # match the JDK installed in section 2

vi slaves

hadoop

# mapred-site.xml only ships as a template, so copy it first
cp mapred-site.xml.template mapred-site.xml && vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

vi yarn-site.xml 

<?xml version="1.0"?>
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
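With all four config files in place, the usual first-run sequence is to format the NameNode once and start HDFS and YARN (a sketch of the standard steps; formatting erases HDFS metadata, so run it only on the very first start):
su - hadoop
hdfs namenode -format
start-dfs.sh
start-yarn.sh
jps
# expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager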

[From @若泽大数据]
