First, copy the jline-x.x.jar from hive/lib into hadoop/share/hadoop/yarn/lib on every host.
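This copy step can be sketched as a dry-run loop. The hostnames below and the jline version (left as x.x, matching the placeholder above) are assumptions for your cluster; the function only prints the scp commands so the paths can be checked before running them for real:

```shell
#!/bin/sh
# Print the scp commands that would push Hive's jline jar to each node's
# YARN lib dir. Hostnames and the jar version (x.x) are placeholders.
push_jline() {
  for host in "$@"; do
    echo "scp /opt/hive/lib/jline-x.x.jar $host:/opt/hadoop/share/hadoop/yarn/lib/"
  done
}

push_jline CDH1 CDH2
```

Once the output looks right, replace the echo with the real scp (and remove any older jline jar already in the YARN lib directory, so the two versions do not clash on the classpath).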
This storage mode requires a MySQL server running on a remote host, and the metastore service must be started on the Hive server.
So place the MySQL JDBC driver, mysql-connector-java.jar, into hive/lib.
Create the hive user in MySQL (note that the IDENTIFIED BY clause inside GRANT below only works on MySQL 5.7 and earlier):
grant all privileges on *.* to hive@'%' identified by '123456' with grant option;
flush privileges;
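On MySQL 8 and later, GRANT can no longer create the user implicitly; a sketch of the equivalent, run from a shell on the MySQL host (the root login is an assumption):

```shell
# MySQL 8+ equivalent of the grant above: create the user first, then grant.
mysql -u root -p <<'SQL'
CREATE USER IF NOT EXISTS 'hive'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL
```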
Remote (combined)
CDH1 | CDH2 |
---|---|
hive | MySQL |
On CDH1
Edit hive-default.xml.template:
cd hive/conf
cp hive-default.xml.template hive-site.xml
vim hive-site.xml
Delete the original contents under <configuration> and add the following. Here /opt/hive is the Hive installation directory, hive.metastore.warehouse.dir is Hive's storage directory on HDFS, hive_rone is the name of the MySQL database that will be created to hold the metadata, and hive / 123456 are the username and password for connecting to it.
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive_rone/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://CDH2/hive_rone?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://CDH1:9083</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
</property>
<property>
<name>hive.exec.local.scratchdir</name>
<value>/opt/hive</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/opt/hive/hive-downloaded-addDir/</value>
<description>Temporary local directory for added resources in the remote file system. </description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/opt/hive/querylog-location-addDir/</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/opt/hive/hive-logging-operation-log-addDir/</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
Edit the environment variables:
vim /etc/profile
export HIVE_HOME=/opt/hive
export PATH=$HIVE_HOME/bin:$PATH
source /etc/profile
Point Hive's log output at a fixed directory:
cp hive-log4j2.properties.template hive-log4j2.properties
vim hive-log4j2.properties
property.hive.log.dir = /opt/hive/logs
If Hive errors out on startup as below, the jline version shipped with Hadoop does not match Hive's jline (this is what the jar copy at the top of this note fixes):
```text
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101)
```
Remote (separated)
CDH1 | CDH2 | CDH3 |
---|---|---|
hive-server | hive-client | MySQL |
On CDH1
Installation directory: /opt/hive-server
If the hive database referenced by jdbc:mysql://CDH3/hive already exists, change the database name here:
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://CDH3/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
vim hive-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
export HIVE_HOME=/opt/hive-server
export HADOOP_HOME=/opt/hadoop
export HIVE_CONF_DIR=/opt/hive-server/conf
On CDH2
Installation directory: /opt/hive-client
vim hive-site.xml
<property>
<name>hive.metastore.uris</name>
<value>thrift://CDH1:9083</value>
</property>
Startup
1. schematool -dbType mysql -initSchema: syncs the previously created metadata into MySQL and initializes the schema (needed only once).
2. hive --service metastore &: starts the metastore service (needed on every startup to connect to MySQL).
3. Then start hive.
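The three steps above can be sketched as a small script. The log path under /opt/hive/logs reuses the log directory configured earlier; the DRY_RUN guard is an addition of this sketch, so the script only prints the commands until you flip it to 0 on the real cluster:

```shell
#!/bin/sh
# Startup sketch for the three steps above. With DRY_RUN=1 (the default)
# each command is only printed, prefixed with "+"; DRY_RUN=0 runs it.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" -eq 1 ]; then echo "+ $*"; else eval "$*"; fi
}

run "schematool -dbType mysql -initSchema"                                  # step 1, once only
run "nohup hive --service metastore > /opt/hive/logs/metastore.log 2>&1 &"  # step 2, every boot
run "hive"                                                                  # step 3, the CLI
```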