Pitfalls Encountered: Hadoop

Table of Contents

Hadoop

Initializing the NameNode

Starting

Error extracting the ".tar" file: "cannot create symbolic link"

Error: JAVA_HOME is incorrectly set

Could not locate executable null\bin\winutils.exe in the Hadoop binaries

java.lang.UnsatisfiedLinkError

Failed to access localhost:50070

HDFS-Failed to add storage directory

Windows configuration files

hdfs-site.xml

core-site.xml

HBase

Starting

Web page

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures

HBase shell fails to start on Windows

java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32

Web UI

Windows configuration files

hbase-site.xml


Hadoop 

Initializing the NameNode

hadoop namenode -format
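
On Hadoop 2.x and 3.x this form still works but prints a deprecation warning; the equivalent current command is:

hdfs namenode -format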

Starting

start-all
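
start-all is likewise deprecated in favor of starting HDFS and YARN separately; a sketch of the usual sequence on Windows, plus a quick check with jps (ships with the JDK) that the daemons came up:

start-dfs.cmd
start-yarn.cmd
jps

Expect NameNode, DataNode, ResourceManager and NodeManager among the processes jps lists.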

Error extracting the ".tar" file: "cannot create symbolic link"

Open CMD and change into the directory containing the archive,
then run: start winrar x -y hadoop-3.1.2.tar.gz

Error: JAVA_HOME is incorrectly set

JAVA_HOME has already been set, but the error above is still reported.

This is usually because the path contains spaces.

Solution

Edit E:\Hadoop2.7.7\hadoop-2.7.7\etc\hadoop\hadoop-env.cmd

Use the DOS 8.3 short-path form instead:

C:\PROGRA~1\Java\jdk1.8.0_91

PROGRA~1 is the DOS 8.3 short name for the C:\Program Files directory.

set JAVA_HOME=C:\PROGRA~1\Java\jdk1.8.0_91
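
If your JDK sits under a different path containing spaces, the 8.3 short name of each component can be looked up with dir /x on its parent directory (the Java path below is just an example):

dir /x "C:\"
dir /x "C:\Program Files\Java"

The short-name column shows, for example, PROGRA~1 for "Program Files"; use those short names when setting JAVA_HOME.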

Could not locate executable null\bin\winutils.exe in the Hadoop binaries

Check that the hadoop.dll and winutils.exe copied into Windows' System32 directory and into Hadoop's bin directory match your Hadoop version.

java.lang.UnsatisfiedLinkError

Again, check that the hadoop.dll and winutils.exe copied into System32 and into Hadoop's bin directory match your Hadoop version.
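
A quick way to narrow this down (a common check, not from the original post) is to run winutils.exe directly from Hadoop's bin directory, e.g. D:\hadoop\bin\winutils.exe (the path is a placeholder for your own Hadoop home). It should print its usage text; an error about a missing MSVCR/VCRUNTIME DLL means the matching Visual C++ redistributable needs to be installed first.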

Failed to access localhost:50070

hdfs-site.xml

Check whether it contains the following:

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>

Also note that the common default ports in Hadoop 3.x differ from those in 2.x:

Daemon             Property          Port (3.x)
namenode           rpc-address       8020
namenode           http-address      9870
namenode           https-address     9871
datanode           address           9866
datanode           http-address      9864
datanode           https-address     9865
resourcemanager    http-address      8088
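
In particular, the NameNode web UI moved from port 50070 in Hadoop 2.x to 9870 in 3.x, so with Hadoop 3.x try http://localhost:9870 instead. To confirm which port is actually listening (a quick sanity check, not from the original post):

netstat -ano | findstr "9870 50070"

A line in LISTENING state shows which of the two ports the NameNode web UI is bound to.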

HDFS-Failed to add storage directory

According to posts online, this is caused by formatting the namenode more than once.

Solution

1. Make the namenode's and datanode's clusterID and namespaceID consistent, as shown below.
2. Alternatively, simply delete the current folder under the datanode's data directory.
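
Both IDs live in the VERSION files under the directories configured in hdfs-site.xml; with the directories used later in this post (example paths matching that config) they are:

D:\hadoop\data\dfs\namenode\current\VERSION
D:\hadoop\data\dfs\datanode\current\VERSION

Copy the clusterID line (e.g. clusterID=CID-xxxxxxxx-...) from the namenode's VERSION into the datanode's VERSION, then restart HDFS.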

Windows configuration files

Many of the problems come down to the configuration files.

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>    
        <name>dfs.namenode.name.dir</name>    
        <value>/D:/hadoop/data/dfs/namenode</value>    
    </property>    
    <property>    
        <name>dfs.datanode.data.dir</name>    
        <value>/D:/hadoop/data/dfs/datanode</value>  
    </property>
    <property>
        <!-- note: the correct property name is dfs.permissions.enabled; "dfs.permission" is ignored -->
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
	<property>
  		<name>fs.default.name</name>
  		<value>hdfs://localhost:9000</value>
	</property>
	<property>
  		<name>hadoop.tmp.dir</name>
  		<value>/D:/hadoop/tmp</value>
	</property>
</configuration>
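
fs.default.name still works but has long been deprecated; the current key is fs.defaultFS, so a slightly more up-to-date version of the same property would be:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>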

HBase

Starting

start-hbase
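
On Windows this is the start-hbase.cmd script under HBase's bin directory; once it is up, jps should show an HMaster process (in standalone mode the master, region server and ZooKeeper all run inside that one JVM):

start-hbase.cmd
jps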

Web page

localhost:16010

java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures

Add the following to hbase-site.xml:

<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>

HBase shell fails to start on Windows

Error message:

This file has been superceded by packaging our ruby files into a jar and using jruby's bootstrapping to invoke them. If you need to source this file fo some reason it is now named 'jar-bootstrap.rb' and is located in the root of the file hbase-shell.jar and in the source tree at 'hbase-shell/src/main/ruby'.

I was using Hadoop 3.3.0 with HBase 2.4.0; switching HBase to 2.2.6 solved it.

java.lang.NoClassDefFoundError: Could not initialize class org.fusesource.jansi.internal.Kernel32

Download the jansi-1.4.jar package, put it into hbase-2.2.1\lib, and restart.

Web UI

The exact address depends on your configuration; mine is http://localhost:16010

Windows configuration files

Many of the problems come down to the configuration files.

hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
  <!--
    The following properties are set for running HBase as a single process on a
    developer workstation. With this configuration, HBase is running in
    "stand-alone" mode and without a distributed file system. In this mode, and
    without further configuration, HBase and ZooKeeper data are stored on the
    local filesystem, in a path under the value configured for `hbase.tmp.dir`.
    This value is overridden from its default value of `/tmp` because many
    systems clean `/tmp` on a regular basis. Instead, it points to a path within
    this HBase installation directory.

    Running against the `LocalFileSystem`, as opposed to a distributed
    filesystem, runs the risk of data integrity issues and data loss. Normally
    HBase will refuse to run in such an environment. Setting
    `hbase.unsafe.stream.capability.enforce` to `false` overrides this behavior,
    permitting operation. This configuration is for the developer workstation
    only and __should not be used in production!__

    See also https://hbase.apache.org/book.html#standalone_dist
  -->
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>D:/hadoop/hbase/tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>D:/hadoop/hbase/zoo</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.wal.provider</name>
        <value>filesystem</value>
    </property>
    <property> 
        <name>dfs.replication</name> 
        <value>1</value> 
    </property>
    <property>
        <name>hbase.unsafe.stream.capability.enforce</name>
        <value>false</value>
    </property>
</configuration>
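
Once HBase is running, a quick smoke test from the HBase shell confirms that the master, region server and ZooKeeper are all reachable (the table and column-family names below are just examples):

hbase shell
status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'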
