Impala Build and Deployment - 5 Single-Machine Deployment - 1

1.1  Copying the files

Copy the executables produced by the build, together with the files gathered alongside them, into a directory on this machine, e.g. /root/impala2.

1.2  Operating system

1.2.1 Installing the JDK

Install the JDK, using the same method as in the build section.
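Before moving on, it can save time to confirm that the JDK really is where the environment variables below expect it. A minimal check, as a sketch (jdk_ok is a hypothetical helper, and the path is the example install location used throughout this guide):

```shell
# jdk_ok: hypothetical helper that reports whether a JDK lives at the given path.
jdk_ok() {
    if [ -x "$1/bin/java" ]; then
        echo ok
    else
        echo missing
    fi
}

# Example: check the install path assumed later in this guide.
jdk_ok /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
```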

1.2.2 Environment variables

~/.bashrc

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

~/.bash_profile

export IMPALA_HOME=/root/impala2
source /etc/default/impala

1.3  Configuration

1.3.1 Impala configuration file

Put the configuration that Impala needs at runtime into the file /etc/default/impala; its contents look roughly like this:

 

IMPALA_STATE_STORE_HOST=127.0.0.1
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala
IMPALA_CATALOG_SERVICE_HOST=127.0.0.1

export IMPALA_STATE_STORE_ARGS=${IMPALA_STATE_STORE_ARGS:- \
-log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}}

export IMPALA_SERVER_ARGS=" \
-log_dir=${IMPALA_LOG_DIR} \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-state_store_port=${IMPALA_STATE_STORE_PORT} \
-use_statestore \
-state_store_host=${IMPALA_STATE_STORE_HOST} \
-be_port=${IMPALA_BACKEND_PORT}"

export ENABLE_CORE_DUMPS=${ENABLE_CORE_DUMPS:-false}

export IMPALA_CATALOG_ARGS=" \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-catalog_service_port=26000"

export HADOOP_HOME="${IMPALA_HOME}/hadoop/"
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$IMPALA_HOME/lib64/:$LD_LIBRARY_PATH

for f in $IMPALA_HOME/dependency/*.jar; do
    export CLASSPATH=$CLASSPATH:$f
done
export CLASSPATH=$CLASSPATH:/etc/hadoop/

export CATALOGCMD="${IMPALA_HOME}/be/catalog/catalogd ${IMPALA_CATALOG_ARGS}"
export STATESTORECMD="${IMPALA_HOME}/be/statestore/statestored ${IMPALA_STATE_STORE_ARGS}"
export IMPALADCMD="${IMPALA_HOME}/be/service/impalad ${IMPALA_SERVER_ARGS}"
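Two of the lines above rely on shell default-value expansion: ${VAR:-default} yields the default only when VAR is unset or empty, which is what lets an operator override IMPALA_STATE_STORE_ARGS or ENABLE_CORE_DUMPS in the environment before this file is sourced. A quick illustration:

```shell
# ${VAR:-default} expands to VAR's value when set, and to the default otherwise.
unset ENABLE_CORE_DUMPS
echo "${ENABLE_CORE_DUMPS:-false}"    # VAR unset -> prints: false

ENABLE_CORE_DUMPS=true
echo "${ENABLE_CORE_DUMPS:-false}"    # VAR set   -> prints: true
```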

 

 
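With /etc/default/impala in place, the three daemons can be launched using the STATESTORECMD, CATALOGCMD, and IMPALADCMD variables it exports. The statestore should come up first, then the catalog service, then impalad. A minimal start script, as a sketch (start_impala is a hypothetical wrapper; a real deployment would add health checks and pid handling):

```shell
# start_impala: source the given env file (default: /etc/default/impala)
# and launch the daemons in dependency order, logging into IMPALA_LOG_DIR.
start_impala() {
    envfile="${1:-/etc/default/impala}"
    . "$envfile"
    mkdir -p "${IMPALA_LOG_DIR}"
    nohup ${STATESTORECMD} > "${IMPALA_LOG_DIR}/statestored.out" 2>&1 &
    sleep 2    # give the statestore a head start before its clients register
    nohup ${CATALOGCMD} > "${IMPALA_LOG_DIR}/catalogd.out" 2>&1 &
    nohup ${IMPALADCMD} > "${IMPALA_LOG_DIR}/impalad.out" 2>&1 &
}
```

Running start_impala with no argument picks up /etc/default/impala exactly as written above.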

1.3.2 Hadoop configuration files

mv ${IMPALA_HOME}/etc/hadoop /etc

 

1.3.2.1 core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
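Once the files are under /etc/hadoop, it can be handy to confirm that a given property really is what HDFS and Impala will read. A crude text-matching reader, as a sketch (get_hadoop_prop is a hypothetical helper, not a real XML parser; it assumes the name and value appear within two lines of each other, as in the files shown here):

```shell
# get_hadoop_prop FILE NAME: print the <value> that follows the given <name>
# in a Hadoop-style XML config. Crude grep/sed matching, not real XML parsing.
get_hadoop_prop() {
    grep -A 2 "<name>$2</name>" "$1" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Example: get_hadoop_prop /etc/hadoop/core-site.xml fs.default.name
```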

 

1.3.2.2 hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout.millis</name>
    <value>10000</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
</configuration>
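The paths named in these two files do not create themselves: hadoop.tmp.dir, dfs.name.dir, dfs.data.dir, and the parent of dfs.domain.socket.path must all exist before HDFS starts, and the socket directory in particular must be accessible to the DataNode process for short-circuit reads to work. A sketch (prepare_hdfs_dirs is a hypothetical helper; the optional prefix argument only exists to make it easy to try under a scratch root):

```shell
# prepare_hdfs_dirs [PREFIX]: create the directories referenced by the
# core-site.xml / hdfs-site.xml above, under an optional path prefix.
prepare_hdfs_dirs() {
    base="${1:-}"
    mkdir -p "$base/usr/local/hadoop/tmp" \
             "$base/usr/local/hadoop/hdfs/name" \
             "$base/usr/local/hadoop/hdfs/data" \
             "$base/var/run/hdfs-sockets"
    # The DataNode creates the 'dn' socket inside this directory.
    chmod 755 "$base/var/run/hdfs-sockets"
}
```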

 

Reposted from: https://www.cnblogs.com/fangjx/p/6863316.html
