Setting up a Big Data Environment on Win10: Elasticsearch, Presto, Hadoop, Hive

Setting up single-node Elasticsearch on Win10

Prerequisite: JDK 8 is already installed on Windows and its environment variables are configured.

Download Elasticsearch from the Huawei mirror: https://mirrors.huaweicloud.com/elasticsearch/

1. After downloading and extracting, add the following line to the elasticsearch.yml file in the config directory:

 xpack.ml.enabled: false  
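Disabling the X-Pack machine learning module is a common Windows workaround, since the ML native processes can fail to start there; whether it is strictly required depends on the Elasticsearch version you downloaded.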

2. Run elasticsearch.bat in the bin directory.

3. Visit 127.0.0.1:9200 in a browser. A response like the following indicates a successful start:

{
  "name" : "node-1",
  "cluster_name" : "compass",
  "cluster_uuid" : "Zuj5FBMUTjuHQXlAHreGvA",
  "version" : {
    "number" : "5.5.3",
    "build_hash" : "9305a5e",
    "build_date" : "2017-09-07T15:56:59.599Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
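If you want to script this check instead of opening a browser, a minimal sketch for JDK 8 using only HttpURLConnection (no third-party libraries) could look like this; host and port match the defaults above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsHealthCheck
{
    public static void main(String[] args) throws Exception
    {
        // Hit the ES root endpoint; HTTP 200 plus the JSON banner means the node is up.
        HttpURLConnection conn = (HttpURLConnection) new URL("http://127.0.0.1:9200").openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}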

Debugging Presto in IDEA

1. Download the project from https://github.com/prestosql/presto and open it in IDEA.

2. When running the project you may hit missing-code errors such as Cannot resolve symbol 'SqlBaseParser' in the presto-parser module. This is because the repository does not check in the ANTLR4-generated sources. In the presto-parser module, run the antlr4 Maven plugin to generate them.
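If you prefer the command line to the IDEA Maven panel, invoking the plugin goal directly should produce the same sources, assuming presto-parser uses the standard org.antlr:antlr4-maven-plugin binding: run mvn -pl presto-parser antlr4:antlr4 from the repository root.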

3. If the error persists after the command completes, check the project structure: File -> Project Structure -> Modules -> presto-parser, and mark target -> generated-sources -> antlr4 as Sources.

4. Create a run configuration with the following settings:

Main Class: com.facebook.presto.server.PrestoServer

VM Options: -ea -XX:+UseG1GC -XX:G1HeapRegionSize=32M -XX:+UseGCOverheadLimit -XX:+ExplicitGCInvokesConcurrent -Xmx2G -Dconfig=etc/config.properties -Dlog.levels-file=etc/log.properties

Working directory: $MODULE_DIR$

Use classpath of module: presto-main

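A note on the flags: -Dconfig=etc/config.properties is resolved against the working directory, i.e. the presto-main module, which already ships a development copy of this file. For reference, a minimal single-node coordinator configuration along the lines of the standard Presto deployment docs looks roughly like this (adjust the port if 8080 is taken):

coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
discovery-server.enabled=true
discovery.uri=http://localhost:8080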

5. In the presto-main module, comment out the following code in the PrestoSystemRequirements class (use IDEA's search to locate the snippets):

// failRequirement("Presto requires Linux or Mac OS X (found %s)", osName);

Also relax the file-descriptor limit check by hard-coding the count to 10000:

private static OptionalLong getMaxFileDescriptorCount()
    {
        try {
            MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
            // The MaxFileDescriptorCount attribute only exists on the Unix
            // OperatingSystemMXBean, so skip the JMX lookup on Windows and
            // report a fixed value instead.
            //Object maxFileDescriptorCount = mbeanServer.getAttribute(ObjectName.getInstance(OPERATING_SYSTEM_MXBEAN_NAME), "MaxFileDescriptorCount");
            Object maxFileDescriptorCount = 10000;
            return OptionalLong.of(((Number) maxFileDescriptorCount).longValue());
        }
        catch (Exception e) {
            return OptionalLong.empty();
        }
    }

Next, comment out the plugin-loading code in the PluginManager class:

/*for (File file : listFiles(installedPluginsDir)) {
    if (file.isDirectory()) {
        loadPlugin(file.getAbsolutePath());
    }
}

for (String plugin : plugins) {
    loadPlugin(plugin);
}*/

Then rename all of the configuration files under presto-main's etc/catalog directory to .properties.bak, so that no connector catalogs are loaded at startup.
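On Windows, a PowerShell one-liner run inside etc/catalog can do the renaming in bulk, for example Get-ChildItem *.properties | Rename-Item -NewName { $_.Name + '.bak' } (a convenience, not a step the Presto docs prescribe).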

6. Add the following dependency to the pom file of the presto-client module:

<dependency>
    <groupId>com.squareup.okio</groupId>
    <artifactId>okio</artifactId>
    <version>2.8.0</version>
</dependency>
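The likely reason this is needed: the OkHttp version used by presto-client resolves an older okio than it expects at runtime, so starting from IDEA can fail with NoClassDefFoundError or NoSuchMethodError on okio classes. Treat this as an assumption; the exact symptom depends on your dependency tree.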

Finally, run PrestoServer.
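Once PrestoServer is up, one way to confirm it accepts queries is a small JDBC smoke test. This is a sketch under two assumptions: the presto-jdbc artifact is on the classpath, and the coordinator listens on the default port 8080. Since the catalog files were renamed away, expect only the built-in system catalog in the output:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrestoSmokeTest
{
    public static void main(String[] args) throws Exception
    {
        // Presto requires a user name on the connection; no password is needed
        // when authentication is disabled (the default for a dev server).
        String url = "jdbc:presto://localhost:8080";
        try (Connection conn = DriverManager.getConnection(url, "dev", null);
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("SHOW CATALOGS")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}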

Setting up Hadoop

https://blog.csdn.net/weixin_43976602/article/details/91993229

Installing Hive

1. Download Hive from https://mirrors.tuna.tsinghua.edu.cn/apache/hive/ and extract it locally.

2. Download the full attachment bundle from https://www.bjjem.com/article-5545-1.html, extract it, and overwrite the bin directory under the Hive installation with its contents (the bundle supplies Windows launch scripts that the Apache release does not include).

3. Download mysql-connector-java-5.1.26-bin.jar (or another version of the jar) and put it in the lib folder under the Hive directory.

4. Configure Hive

Hive's configuration files live under $HIVE_HOME/conf, which contains four default template files:

hive-default.xml.template: the default configuration template

hive-env.sh.template: defaults for hive-env.sh

hive-exec-log4j.properties.template: default exec logging configuration

hive-log4j.properties.template: default logging configuration

Hive runs even without any changes: by default, its metadata is stored in an embedded Derby database. Since few people are familiar with Derby, we switch the metastore to MySQL, and we also want to control where data and logs are stored, so we configure the environment ourselves. The steps follow.

(1) Create the configuration files from the templates:

$HIVE_HOME/conf/hive-default.xml.template -> $HIVE_HOME/conf/hive-site.xml

$HIVE_HOME/conf/hive-env.sh.template -> $HIVE_HOME/conf/hive-env.sh

$HIVE_HOME/conf/hive-exec-log4j.properties.template -> $HIVE_HOME/conf/hive-exec-log4j.properties

$HIVE_HOME/conf/hive-log4j.properties.template -> $HIVE_HOME/conf/hive-log4j.properties
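These are plain copies of the templates. From a command prompt in %HIVE_HOME%\conf, copy hive-default.xml.template hive-site.xml handles the first one, and the other three follow the same pattern.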

(2) Edit hive-env.sh:

export HADOOP_HOME=F:\hadoop\hadoop-2.7.2
export HIVE_CONF_DIR=F:\hadoop\apache-hive-2.1.1-bin\conf
export HIVE_AUX_JARS_PATH=F:\hadoop\apache-hive-2.1.1-bin\lib

(3) Edit hive-site.xml:

<!-- Settings to change -->

<property>
    <name>hive.metastore.warehouse.dir</name>
    <!-- Hive's warehouse directory; this path lives on HDFS -->
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>

<property>
    <name>hive.exec.scratchdir</name>
    <!-- Hive's scratch directory; this path lives on HDFS -->
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>

<property>
    <name>hive.exec.local.scratchdir</name>
    <!-- local directory -->
    <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
    <description>Local scratch space for Hive jobs</description>
</property>

<property>
    <name>hive.downloaded.resources.dir</name>
    <!-- local directory -->
    <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
    <name>hive.querylog.location</name>
    <!-- local directory -->
    <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp</value>
    <description>Location of Hive run time structured log file</description>
</property>

<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>F:/hadoop/apache-hive-2.1.1-bin/hive/iotmp/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

<!-- MySQL settings -->

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
</property>

<!-- Fixes: Required table missing : "`VERSION`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.autoCreateTables" -->

<property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
</property>

<property>
    <name>datanucleus.autoCreateTables</name>
    <value>true</value>
</property>

<property>
    <name>datanucleus.autoCreateColumns</name>
    <value>true</value>
</property>

<!-- Fixes: Caused by: MetaException(message:Version information not found in metastore.) -->

<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
        Enforce metastore schema version consistency.
        True: Verify that version information stored in metastore matches with one from Hive jars. Also disable automatic
              schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
              proper metastore schema migration. (Default)
        False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
</property>
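An alternative to the datanucleus auto-create flags above is to initialize the metastore schema explicitly with Hive's schematool before the first start, e.g. schematool -dbType mysql -initSchema. Whether the Windows bin bundle from step 2 includes a working schematool wrapper is worth checking; the auto-create flags achieve a similar effect on first run either way.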


5. MySQL setup

Create the hive database in MySQL:

create database hive default character set latin1;

6. Startup

(1) Start Hadoop: start-all.cmd
(2) Start the metastore service: hive --service metastore
(3) Start Hive: hive

If Hive starts successfully, the local-mode installation is complete.
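As a quick smoke test in the Hive CLI (illustrative statements, not part of the original guide):

show databases;
create table smoke_test (id int);
show tables;
drop table smoke_test;

If these complete without errors, the metastore connection to MySQL is working.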