Standalone Impala

Prerequisites: Hadoop and Hive must already be installed; see https://blog.csdn.net/jing_er_/article/details/106664707

1. Download the Impala RPM files from http://archive.cloudera.com/beta/impala-kudu/redhat/7/x86_64/impala-kudu/0/RPMS/x86_64/

2. Download the dependency package bigtop-utils:

http://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/5.9.0/RPMS/noarch/bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm
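The two download steps above can be scripted. A minimal sketch, assuming `wget` is available; the URLs are the ones listed in this post, and the individual Impala RPM file names are taken from the install commands in step 3 (the downloads themselves are gated behind `DO_DOWNLOAD=1` since the archive may no longer be reachable):

```shell
# URLs from steps 1-2; file names are taken straight from the URLs.
BASE="http://archive.cloudera.com/beta/impala-kudu/redhat/7/x86_64/impala-kudu/0/RPMS/x86_64"
BIGTOP_URL="http://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/5.9.0/RPMS/noarch/bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm"

# basename of the URL is the local file name each download is saved under.
echo "bigtop-utils will be saved as: $(basename "$BIGTOP_URL")"

# Set DO_DOWNLOAD=1 to actually fetch the files (requires wget and network access).
DO_DOWNLOAD=${DO_DOWNLOAD:-0}
if [ "$DO_DOWNLOAD" = "1" ] && command -v wget >/dev/null 2>&1; then
  wget -c "$BIGTOP_URL"
  for pkg in impala-kudu impala-kudu-catalog impala-kudu-state-store \
             impala-kudu-server impala-kudu-shell impala-kudu-udf-devel; do
    wget -c "$BASE/${pkg}-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm"
  done
fi
```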

3. Install

yum install mysql-connector-java

sudo yum -y install cyrus-sasl-plain lsb ntp

rpm -ivh bigtop-utils-0.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.30.el7.noarch.rpm

rpm -ivh impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm --nodeps

rpm -ivh impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm

rpm -ivh impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el7.x86_64.rpm
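After running the commands above, the installation can be verified with `rpm -q`, which exits nonzero for any package that is missing. A small sketch (the package list is exactly the one installed in step 3):

```shell
# Verify the packages from step 3; prints one status line per package.
PKGS="bigtop-utils impala-kudu impala-kudu-catalog impala-kudu-state-store \
impala-kudu-server impala-kudu-shell impala-kudu-udf-devel"
for p in $PKGS; do
  if rpm -q "$p" >/dev/null 2>&1; then
    echo "$p: installed"
  else
    echo "$p: NOT installed"
  fi
done
```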

 

4. Configure

vi /etc/default/bigtop-utils

           export JAVA_HOME=/usr/local/jdk1.8.0_211

vi /etc/default/impala

           IMPALA_CATALOG_SERVICE_HOST=192.168.2.111

           IMPALA_STATE_STORE_HOST=192.168.2.111

systemctl restart ntpd

 

Copy the configuration files

cp /home/data/hive/apache-hive-3.1.2-bin/conf/hive-site.xml /etc/impala/conf.dist/

cp /home/data/hadoop/hadoop-3.2.1/etc/hadoop/core-site.xml /etc/impala/conf.dist/

cp /home/data/hadoop/hadoop-3.2.1/etc/hadoop/hdfs-site.xml /etc/impala/conf.dist/
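The three copies above can equally be written as one loop, which also reports any source file that is missing. A sketch using the same paths as this post:

```shell
# Source and destination paths as used in this post.
HIVE_CONF=/home/data/hive/apache-hive-3.1.2-bin/conf
HADOOP_CONF=/home/data/hadoop/hadoop-3.2.1/etc/hadoop
IMPALA_CONF=/etc/impala/conf.dist

for f in "$HIVE_CONF/hive-site.xml" "$HADOOP_CONF/core-site.xml" "$HADOOP_CONF/hdfs-site.xml"; do
  # Copy only files that exist; report anything skipped or failed.
  [ -f "$f" ] && cp "$f" "$IMPALA_CONF/" || echo "copy skipped or failed: $f"
done
```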

 

Add the following settings

# hdfs-site.xml

<!--impala configuration -->

<property>

        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>

        <value>true</value>

</property>

<property>

        <name>dfs.block.local-path-access.user</name>

        <value>impala</value>

</property>

<property>

        <name>dfs.client.file-block-storage-locations.timeout.millis</name>

        <value>60000</value>

</property>

# core-site.xml

<!--impala configuration -->

<property>

        <name>dfs.client.read.shortcircuit</name>

        <value>true</value>

</property>

<property>

        <name>dfs.client.read.shortcircuit.skip.checksum</name>

        <value>false</value>

</property>

<property>

        <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>

        <value>true</value>

</property>
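Hand-edited XML breaks easily, so it is worth confirming the edited files still parse. A sketch assuming `xmllint` (from libxml2) is installed; files or tools that are absent are simply reported as skipped:

```shell
# Check that each edited config file is still well-formed XML.
for f in /etc/impala/conf.dist/hdfs-site.xml \
         /etc/impala/conf.dist/core-site.xml \
         /etc/impala/conf.dist/hive-site.xml; do
  if [ -f "$f" ] && command -v xmllint >/dev/null 2>&1; then
    # xmllint --noout exits nonzero and prints errors if the XML is broken.
    xmllint --noout "$f" && echo "$f: well-formed" || echo "$f: BROKEN"
  else
    echo "$f: skipped (file or xmllint missing)"
  fi
done
```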

Edit hive-site.xml

<property>

<name>hive.metastore.uris</name>

<value>thrift://192.168.2.111:9083</value>

</property>

<property>

<name>hive.metastore.client.socket.timeout</name>

<value>3600</value>

</property>
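Impala locates the metastore through hive.metastore.uris, so it helps to confirm that port 9083 is actually reachable before starting the Impala daemons. A sketch that parses the URI configured above and, when `nc` is available, probes the port:

```shell
# The metastore URI as configured in hive-site.xml above.
URI="thrift://192.168.2.111:9083"
HOSTPORT=${URI#thrift://}   # strip the scheme
HOST=${HOSTPORT%%:*}        # everything before the colon
PORT=${HOSTPORT##*:}        # everything after the colon
echo "metastore host=$HOST port=$PORT"

# Probe the port when nc is available (2-second timeout).
command -v nc >/dev/null 2>&1 && nc -z -w 2 "$HOST" "$PORT" \
  && echo "metastore reachable" \
  || echo "metastore not reachable (is 'hive --service metastore' running?)"
```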

Restart Hadoop

In /home/data/hadoop/hadoop-3.2.1/sbin, run stop-all.sh, then start-all.sh

# Start Hive (the metastore must be running before Impala can connect)

nohup hive --service metastore &

nohup hive --service hiveserver2 &

/etc/init.d/impala-state-store start

/etc/init.d/impala-catalog start

/etc/init.d/impala-server start
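Each of the three daemons started above exposes a debug web UI, which gives a quick liveness check. A sketch assuming the default ports were not changed in /etc/default/impala (statestore 25010, catalog 25020, impalad 25000):

```shell
# Probe each daemon's debug web UI on its default port (assumption:
# the defaults were not overridden in /etc/default/impala).
for entry in "impala-state-store:25010" "impala-catalog:25020" "impala-server:25000"; do
  name=${entry%%:*}
  port=${entry##*:}
  if command -v curl >/dev/null 2>&1 && curl -s -o /dev/null "http://127.0.0.1:$port/"; then
    echo "$name: web UI responding on port $port"
  else
    echo "$name: no response on port $port"
  fi
done
```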

Verification: run impala-shell
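Beyond simply launching the shell, a non-interactive query makes a quicker smoke test. A sketch assuming impalad's default client port 21000 on the host configured earlier:

```shell
# Assumption: impalad listens on its default client port 21000.
IMPALAD="192.168.2.111:21000"
echo "smoke test target: $IMPALAD"
# -q runs one query and exits; guarded so the line is a no-op if
# impala-shell is not on PATH.
command -v impala-shell >/dev/null 2>&1 \
  && impala-shell -i "$IMPALAD" -q "show databases;" \
  || echo "impala-shell not on PATH or query failed"
```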

Reference: https://blog.csdn.net/lukabruce/article/details/82970502#1%20%E7%8E%AF%E5%A2%83%E5%87%86%E5%A4%87

Problem: the shell prompt shows [Not connected] >

Not connected (use CONNECT to establish a connection)

Solution: start the required services

hive --service metastore &

 

service impala-state-store start

service impala-catalog start

service impala-server start