Hadoop, HBase, and Hive Cluster Installation

 
Part 1: Uninstall the default JDK shipped with Red Hat
1: Find the preinstalled JDK packages:
   rpm -qa | grep java
2: Remove the JDK:
   rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.21.b17.el6.x86_64
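
If several OpenJDK packages are installed, they can be removed in one pass. A minimal sketch (destructive; review the output of the listing command before running the removal):
   # list every Java-related package first
   rpm -qa | grep -E 'java|jdk'
   # then remove all OpenJDK packages, ignoring dependencies
   rpm -qa | grep openjdk | xargs -r rpm -e --nodeps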

Part 2: Install the Oracle JDK
1: Install as the root user.
2: Create the directory /usr/java.
3: Download the JDK installer into /usr/java: jdk-6u43-linux-x64.bin
4: Make the installer executable:
   chmod +x jdk-6u43-linux-x64.bin
5: Run the JDK installer:
   ./jdk-6u43-linux-x64.bin
6: Add the following environment variables to /etc/profile:
export JAVA_HOME=/usr/java/jdk1.6.0_43
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin

7: Apply the configuration by running:
source /etc/profile
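
To confirm the new JDK is active, check the version and JAVA_HOME (both should reflect jdk1.6.0_43):
   java -version
   echo $JAVA_HOME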

Part 3: Host assignment. Add the following entries to the /etc/hosts file on every machine:
192.168.205.23 inm1
192.168.205.24 inm2
192.168.205.25 inm3
192.168.205.26 inm4
192.168.205.27 inm5
192.168.205.28 inm6
192.168.205.29 inm7
192.168.205.30 inm8
192.168.205.31 inm9
192.168.205.32 inm10
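
Instead of editing every machine by hand, the file can be pushed out from one node. A sketch, assuming root SSH access to every host:
   for i in $(seq 1 10); do scp /etc/hosts root@inm$i:/etc/hosts; done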


Part 4: Disable the firewall on all machines
chkconfig iptables off
service iptables stop

Part 5: On every machine, create the hadoop group and hadoop user
1: Create the group: groupadd hadoop
2: Create the user: useradd -g hadoop hadoop
3: Set the password: passwd hadoop
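
As with the hosts file, these steps can be scripted across all nodes. A sketch, assuming root SSH access (replace CHANGE_ME with a real password; passwd --stdin is Red Hat specific):
   for i in $(seq 1 10); do
     ssh root@inm$i 'groupadd hadoop; useradd -g hadoop hadoop; echo "CHANGE_ME" | passwd --stdin hadoop'
   done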

Part 6: Configure SSH on the master machine (inm1)
[hadoop@master ~]$ ssh-keygen -t rsa -P ""
   Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): /home/hadoop/.ssh/id_rsa
[hadoop@master ~]$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
[hadoop@master ~]$ chmod 700 ~/.ssh/
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys
Verify:
[hadoop@master ~]$ ssh localhost
[hadoop@master ~]$ ssh inm1
Copy the SSH key to the other machines (all three slaves need it so the start scripts can log in):
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm2
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm3
[hadoop@master ~]$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@inm4
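
A quick check that passwordless login now works to every slave (each command should print the remote hostname without prompting for a password):
   for h in inm2 inm3 inm4; do ssh $h hostname; done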


Part 7: Install a three-node ZooKeeper cluster
1: Install ZooKeeper on three servers, under the hadoop user:
   192.168.205.24, 192.168.205.25, 192.168.205.26
2: Use the Cloudera build of ZooKeeper: zookeeper-3.4.5-cdh4.2.0.tar.gz
3: Unpack the archive and rename the directory:
   tar -zxf zookeeper-3.4.5-cdh4.2.0.tar.gz
   mv zookeeper-3.4.5-cdh4.2.0/ zookeeper
4: Configure ZooKeeper: create a zoo.cfg file in the conf directory with the following content:
   tickTime=2000
   initLimit=5
   syncLimit=2
   dataDir=/home/hadoop/storage/zookeeper/data
   dataLogDir=/home/hadoop/storage/zookeeper/logs
   clientPort=2181
   server.1=inm2:2888:3888
   server.2=inm3:2888:3888
   server.3=inm4:2888:3888
5: Create the ZooKeeper data and log directories:
   /home/hadoop/storage/zookeeper/data
   /home/hadoop/storage/zookeeper/logs
   In /home/hadoop/storage/zookeeper/data, create a file named myid containing the value 1.
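
As a sketch, both directories and the myid file can be created in one go (run on inm2; the myid value differs per node, as described in the next step):
   mkdir -p /home/hadoop/storage/zookeeper/data /home/hadoop/storage/zookeeper/logs
   echo 1 > /home/hadoop/storage/zookeeper/data/myid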
6: Copy the installed zookeeper and storage directories to the inm3 and inm4 machines:
   scp -r zookeeper inm3:/home/hadoop
   scp -r storage inm3:/home/hadoop
   scp -r zookeeper inm4:/home/hadoop
   scp -r storage inm4:/home/hadoop
   Change the value in the myid file on inm3 to 2.
   Change the value in the myid file on inm4 to 3.
7: Start the server on each of the three nodes:
   ./bin/zkServer.sh start
8: Verify the installation:
   ./bin/zkCli.sh -server inm3:2181
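
Each node's role can also be checked directly; with all three servers running, one node should report "leader" and the other two "follower":
   ./bin/zkServer.sh status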

Part 8: Install hadoop-2.0.0-cdh4.2.0
Log in as the hadoop user.
1: Unpack the archive: tar -xvzf hadoop-2.0.0-cdh4.2.0.tar.gz, then rename the directory: mv hadoop-2.0.0-cdh4.2.0 hadoop
2: Configure the Hadoop environment variables: edit ~/.bashrc and append the following:
export HADOOP_HOME=/home/hadoop/hadoop
export HIVE_HOME=/home/hadoop/hive
export HBASE_HOME=/home/hadoop/hbase

export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin

3: Apply the configuration:
   source ~/.bashrc
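
To confirm the environment is picked up:
   hadoop version          # should report 2.0.0-cdh4.2.0
   echo $HADOOP_CONF_DIR   # should print /home/hadoop/hadoop/etc/hadoop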
4: Edit the masters and slaves files in HADOOP_HOME/etc/hadoop:
   masters file content:
   inm1
   slaves file content:
   inm2
   inm3
   inm4
5: Edit HADOOP_HOME/etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://inm1:9000</value>
  </property>
  
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Size of read/write buffer used in SequenceFiles.</description>
  </property>
  
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
</configuration>

6: Edit HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/storage/hadoop/tmp</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/storage/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/storage/hadoop/data</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>67108864</value>
    <description>HDFS blocksize of 64MB for large file-systems.</description>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>inm1:50070</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

7: Edit HADOOP_HOME/etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>inm1:10020</value>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>inm1:19888</value>
  </property>
</configuration>

8: Edit HADOOP_HOME/etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>inm1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>inm1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>inm1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>inm1:8033</value>
  </property>
  <property>
     <name>yarn.resourcemanager.webapp.address</name>
     <value>inm1:8088</value>
   </property>
   <property>
      <description>Classpath for typical applications.</description>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
          $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
          $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
          $YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
          $YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce.shuffle</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>

  <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/home/hadoop/storage/yarn/local</value>
   </property>
   <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/home/hadoop/storage/yarn/logs</value>
   </property>
   <property>
      <description>Where to aggregate logs</description>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/home/hadoop/storage/yarn/logs</value>
   </property>

  <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/user</value>
  </property>
</configuration>

9: Sync the hadoop directory to the inm2, inm3, and inm4 machines:
scp -r hadoop inm2:/home/hadoop
scp -r hadoop inm3:/home/hadoop
scp -r hadoop inm4:/home/hadoop

10: Format the HDFS filesystem:
hadoop namenode -format

11: Start HDFS and YARN; the start scripts live in HADOOP_HOME/sbin:
./start-dfs.sh
./start-yarn.sh
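
A quick smoke test after startup: jps should show NameNode and ResourceManager on inm1, and DataNode and NodeManager on each slave; the HDFS commands confirm the filesystem is writable:
jps
hdfs dfs -mkdir -p /tmp
hdfs dfs -ls /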

Part 9: Install hbase-0.94.2-cdh4.2.0
1: Unpack the archive: tar -xvzf hbase-0.94.2-cdh4.2.0.tar.gz, then rename the directory: mv hbase-0.94.2-cdh4.2.0 hbase
2: Edit HBASE_HOME/conf/regionservers and add the hostnames of the machines that will run the HRegionServer process:
inm2
inm3
inm4

3: Edit HBASE_HOME/conf/hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://inm1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/storage/hbase</value>
  </property>
  
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>inm2,inm3,inm4</value>
  </property>
</configuration>

4: Sync the hbase directory to the inm2, inm3, and inm4 machines:
scp -r hbase inm2:/home/hadoop
scp -r hbase inm3:/home/hadoop
scp -r hbase inm4:/home/hadoop

5: Start the HBase cluster from inm1:
HBASE_HOME/bin/start-hbase.sh

6: Run hbase shell to enter the HBase console, then run the list command to verify the installation.
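
A slightly fuller check writes a row and reads it back; the table and column-family names here are just illustrative:
hbase shell
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'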

Part 10: Install hive-0.10.0-cdh4.2.0
1: Unpack the archive: tar -xvzf hive-0.10.0-cdh4.2.0.tar.gz, then rename the directory: mv hive-0.10.0-cdh4.2.0 hive
2: Edit HIVE_HOME/conf/hive-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.205.31:3306/hive?useUnicode=true&amp;characterEncoding=UTF-8</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>
  
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive2013</value>
    <description>password to use against metastore database</description>
  </property>
  
  <property>
   <name>mapred.job.tracker</name>
   <value>inm1:8031</value>
  </property>
  
  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
  </property>
  
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///home/hadoop/hive/lib/zookeeper-3.4.5-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/hive-hbase-handler-0.10.0-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/hbase-0.94.2-cdh4.2.0.jar,
      file:///home/hadoop/hive/lib/guava-11.0.2.jar</value>
  </property>
  
  <property>
    <name>hive.querylog.location</name>
    <value>/home/hadoop/storage/hive/querylog</value>
    <description>
      Location of Hive run time structured log file
    </description>
  </property>
  
  <property>
    <name>hive.support.concurrency</name>
    <description>Enable Hive's Table Lock Manager Service</description>
    <value>true</value>
  </property>
  
  <property>
    <name>hive.zookeeper.quorum</name>
    <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
    <value>inm2,inm3,inm4</value>
  </property>
  
  <property>
    <name>hive.hwi.listen.host</name>
    <value>inm1</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
  
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-0.10.0-cdh4.2.0.war</value>
    <description>This is the WAR file with the jsp content for Hive Web Interface</description>
  </property>
</configuration>
3: Copy the MySQL JDBC driver jar into the HIVE_HOME/lib directory.
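
The metastore database referenced in hive-site.xml has to exist on the MySQL host first. A sketch of that setup using the values from the config above (run on 192.168.205.31; the GRANT syntax shown is MySQL 5.x):
   mysql -u root -p -e "CREATE DATABASE hive DEFAULT CHARACTER SET utf8;
   GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive2013';
   FLUSH PRIVILEGES;"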
4: Enter the hive console and run show databases to verify the installation.
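
The same check can be run non-interactively from the shell; the table name below is just illustrative:
   hive -e "show databases;"
   hive -e "create table smoke_test (id int); show tables; drop table smoke_test;"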