Hive and Spark configuration files on Hadoop

Environment variable configuration

# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
#jdk1.8.0_171
export JAVA_HOME=/opt/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
#hadoop
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
#spark
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH
#hive
export HIVE_HOME=/opt/hive
export PATH=$HIVE_HOME/bin:$PATH
#scala
export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$PATH
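
After saving /etc/profile, reload it in the current shell and sanity-check that each tool resolves on the PATH (the version output will match whatever releases are actually installed):

source /etc/profile
java -version          # expect 1.8.0_171
hadoop version
spark-submit --version
hive --version
scala -version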

Hive configuration

Add the following to hive-env.sh:

export JAVA_HOME=/opt/jdk1.8.0_171
export HIVE_HOME=/opt/hive

export HADOOP_HOME=/opt/hadoop

hive-site.xml configuration:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>

<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>000000</value>
    <description>password to use against metastore database</description>
  </property>

<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/hive/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

 <property>
    <name>hive.exec.scratchdir</name>
    <value>/hive/tmp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/hive/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/hive/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

<property>
    <name>hive.querylog.location</name>
    <value>/hive/logs</value>
    <description>Location of Hive run time structured log file</description>
  </property>
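
The paths referenced above have to exist before Hive is used: the scratch and warehouse directories live on HDFS, while the local scratch and log paths live on each node's local disk. A minimal preparation sketch, assuming the values configured above:

# HDFS: job scratch space (733 per the description above) and warehouse root
hdfs dfs -mkdir -p /hive/tmp /hive/warehouse
hdfs dfs -chmod 733 /hive/tmp
# local disk, on every node: local scratch and query/operation logs
mkdir -p /hive/tmp /hive/logs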

Configure the slave (slaver) nodes to connect to Hive via the Thrift metastore service:

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
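
On a slaver node the Hive client only needs to know where the metastore lives, so a minimal client-side hive-site.xml can consist of just this one property (a sketch, assuming the hostname master resolves from the slave nodes):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
  </property>
</configuration>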

Initialize the Hive metastore schema in MySQL:

./schematool -dbType mysql -initSchema
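
schematool is run from $HIVE_HOME/bin and needs the MySQL JDBC driver on Hive's classpath; a typical preparation step beforehand (the jar name/version here is illustrative, use whichever connector matches your MySQL install):

cp mysql-connector-java-5.1.46.jar /opt/hive/lib/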

Start the Hive metastore process:

 hive --service metastore &
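
A quick way to confirm the metastore came up (9083 being the Thrift port configured earlier):

jps                          # the metastore shows up as a RunJar process
netstat -nltp | grep 9083    # the Thrift port should be listening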

Spark setup and configuration

spark-env.sh configuration

#jdk1.8.0_171
export JAVA_HOME=/opt/jdk1.8.0_171
#hadoop
export HADOOP_HOME=/opt/hadoop
#spark
export SPARK_HOME=/opt/spark
#scala
export SCALA_HOME=/opt/scala
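
For a standalone cluster it is common to also pin the master host and point Spark at Hadoop's configuration; the two entries below are an assumption about a typical setup rather than part of the original file:

#standalone master (hostname taken from the slaves file below)
export SPARK_MASTER_HOST=master
#let Spark pick up HDFS settings
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop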

slaves configuration

master
slaver1
slaver2
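
start-all.sh reaches these hosts over SSH, so the names must resolve and passwordless SSH from master is assumed to be set up; a quick check:

ssh slaver1 hostname   # should print slaver1 with no password prompt
ssh slaver2 hostname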

Start Spark

cd /opt/spark/sbin/

./start-all.sh
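
After start-all.sh returns, jps should show a Master (plus a Worker, since master is also listed in slaves) on the master node and a Worker on each slaver node; the standalone master's web UI is served on port 8080 by default:

jps
# web UI: http://master:8080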
