Setting Up a Hadoop + Hive + Spark Environment on Windows 10

Prerequisite Setup

  1) JDK 1.8: install the development kit and configure its environment variables (see the JDK download and configuration guide)
  2) MySQL 5.7: download and install (see the MySQL download and installation guide)
  3) Scala 2.11.12: download and install from the official Scala website; you can verify all three installs as shown below
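
Once everything is installed, a quick check from a command prompt confirms that each dependency is on the PATH (the exact version strings printed will vary with your installs):

java -version
mysql --version
scala -version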


Hadoop Installation and Configuration
1. Download the Hadoop distribution and the Windows utilities package

  Download the distribution: hadoop-2.7.7.tar.gz
  Download the Windows utilities package (it provides the replacement etc and bin directories needed to run Hadoop on Windows)

2. Configure the Hadoop environment

  1) Extract the distribution and rename the directory to D:/soft/hadoop
  2) Extract the utilities package and overwrite the corresponding etc and bin directories under D:/soft/hadoop
  3) Edit the configuration files

1:etc/hadoop/core-site.xml
<configuration>
   <property>
       <name>fs.defaultFS</name>
       <value>hdfs://localhost:9000</value>
   </property>
</configuration>

2:etc/hadoop/mapred-site.xml
<configuration>
   <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
   </property>
</configuration>

3:etc/hadoop/hdfs-site.xml
<configuration>
   <property>
       <name>dfs.replication</name>
       <value>1</value>
   </property>
   <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/D:/hadoop/data/namenode</value>
   </property>
   <property>
       <name>dfs.datanode.data.dir</name>
     <value>file:/D:/hadoop/data/datanode</value>
   </property>
</configuration>

4:etc/hadoop/yarn-site.xml
<configuration>
    <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>
    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

5:etc/hadoop/hadoop-env.cmd
set JAVA_HOME=%JAVA_HOME%
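
If JAVA_HOME points at a path containing spaces (for example C:\Program Files\Java\...), Hadoop's cmd scripts fail to parse it; a common workaround is to use the 8.3 short form of the path instead. The JDK folder name below is illustrative:

set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_201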

  4) Configure the Hadoop environment variables: create HADOOP_HOME=D:/soft/hadoop and add it to Path, as sketched below
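
One way to do this from a command prompt (setx only takes effect in consoles opened afterwards; editing the variables in the System Properties dialog works just as well):

setx HADOOP_HOME "D:\soft\hadoop"
setx Path "%Path%;D:\soft\hadoop\bin;D:\soft\hadoop\sbin"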

3. Start Hadoop

  1) Format the NameNode

hdfs namenode -format

  2) Start Hadoop

D:/soft/hadoop/sbin/start-all.cmd

  3) Open the YARN web UI (http://localhost:8088) and the NameNode web UI (http://localhost:50070) to confirm everything is up
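
You can also check from the command line that all four daemons came up; jps ships with the JDK:

jps

On a healthy single-node setup the output should list NameNode, DataNode, ResourceManager, and NodeManager.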


Hive Installation and Configuration
1. Download the Hive distribution and the MySQL driver

  1) Download the apache-hive-2.1.1-bin.tar.gz distribution
  2) Download the mysql-connector-java.jar JDBC driver

2. Configure the Hive environment

  1) Extract the distribution and rename the directory to D:/soft/hive
  2) Place the MySQL driver jar in the D:/soft/hive/lib directory
  3) Create HIVE_HOME=D:/soft/hive and add it to Path
  4) Edit the configuration files

1:hive-site.xml (rename hive-default.xml.template under D:/soft/hive/conf to hive-site.xml, then set the following properties)
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
</property>

<property>
    <name>hive.exec.local.scratchdir</name>
    <value>D:/soft/hive/scratch_dir</value>
    <description>Local scratch space for Hive jobs</description>
</property>

<property>
    <name>hive.downloaded.resources.dir</name>
    <value>D:/soft/hive/resources_dir/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
    <name>hive.querylog.location</name>
    <value>D:/soft/hive/querylog_dir</value>
    <description>Location of Hive run time structured log file</description>
</property>

<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>D:/soft/hive/operation_dir</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
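
The four local directories referenced above do not exist yet; if Hive fails to create them on startup, create them ahead of time:

mkdir D:\soft\hive\scratch_dir
mkdir D:\soft\hive\resources_dir
mkdir D:\soft\hive\querylog_dir
mkdir D:\soft\hive\operation_dir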


2:hive-log4j2.properties (rename hive-log4j2.properties.template to hive-log4j2.properties and replace its contents with the following)

status = INFO
name = HiveLog4j2
packages = org.apache.hadoop.hive.ql.log

# list of properties
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = hive_log
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO

# list of all appenders
appenders = console, DRFA

# console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n

# daily rolling file appender
appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.name = DRFA
appender.DRFA.fileName = ${hive.log.dir}/${hive.log.file}
# Use %pid in the filePattern to append <process-id>@<host-name> to the filename if you want separate log files for different CLI session
appender.DRFA.filePattern = ${hive.log.dir}/${hive.log.file}.%d{yyyy-MM-dd}
appender.DRFA.layout.type = PatternLayout
appender.DRFA.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
appender.DRFA.policies.type = Policies
appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
appender.DRFA.policies.time.interval = 1
appender.DRFA.policies.time.modulate = true
appender.DRFA.strategy.type = DefaultRolloverStrategy
appender.DRFA.strategy.max = 30

# list of all loggers
loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX, PerfLogger

logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
logger.NIOServerCnxn.level = WARN

logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO
logger.ClientCnxnSocketNIO.level = WARN

logger.DataNucleus.name = DataNucleus
logger.DataNucleus.level = ERROR

logger.Datastore.name = Datastore
logger.Datastore.level = ERROR

logger.JPOX.name = JPOX
logger.JPOX.level = ERROR

logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = ${hive.perflogger.log.level}

# root logger
rootLogger.level = ${hive.log.level}
rootLogger.appenderRefs = root
rootLogger.appenderRef.root.ref = ${hive.root.logger}

3. Start the Hive services

  1) Create the required metastore schema tables

mysql -uroot -proot
create database if not exists hive;
use hive;
source D:\soft\hive\scripts\metastore\upgrade\mysql\hive-txn-schema-2.1.0.mysql.sql

  2) Initialize and start the metastore service (the command runs in the foreground, so leave it in its own console window)

hive --service metastore

  3) Start the Hive client and run a quick smoke test

hive
create database test;
show databases;
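
To exercise a full write/read path as well, you can create and query a table (the table name t1 is just an example):

use test;
create table t1 (id int, name string);
insert into t1 values (1, 'hive');
select * from t1;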

Spark Installation and Configuration

  1) Download the spark-2.4.5-bin-hadoop2.7.tgz distribution
  2) Extract it and rename the directory to D:/soft/spark
  3) Create the environment variable SPARK_HOME=D:/soft/spark and add it to Path
  4) Run the spark-shell command to test, as shown below
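
A minimal smoke test inside spark-shell (the res0 line shows what a successful run looks like):

spark-shell
scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050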
