Hadoop
Articles by zhangjunli
hadoop-3.x, zookeeper-3.x, hbase-2.x, hive-3.x, sqoop-1.x, spark-3.x
Straight to the point. Step 1: edit /etc/profile and add: export JAVA_HOME=/usr/local/java/jdk1.8.0_271; export HADOOP_HOME=/usr/local/hadoop-3.3.0; export HBASE_HOME=/usr/local/hbase-2.3.3; export HIVE_HOME=/usr/local/hive-3.1.2; export SQOOP_HOME=/usr/local/sqoop-1.4.7; export SQOOP_SERVE… (original post, 2020-12-17)
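The excerpt above cuts off mid-line; a sketch of the /etc/profile block it appears to describe (the install paths are the ones the excerpt names; the truncated SQOOP_SERVE… variable is left out, and the PATH line is my assumption, not from the post):

```shell
# Environment for the hadoop/hbase/hive/sqoop stack, per the excerpt.
export JAVA_HOME=/usr/local/java/jdk1.8.0_271
export HADOOP_HOME=/usr/local/hadoop-3.3.0
export HBASE_HOME=/usr/local/hbase-2.3.3
export HIVE_HOME=/usr/local/hive-3.1.2
export SQOOP_HOME=/usr/local/sqoop-1.4.7
# PATH wiring is assumed; the excerpt is cut off before it.
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin
```

Apply with `source /etc/profile` after editing.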
Fixing "Unable to load native-hadoop library for your platform"
After starting Spark, running bin/spark-shell prints a warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable (source: 提君博客, http://www.cnblogs.com/tijun/). It does not affect operation, but it is unpleasant to look at. Below are the fixes I have collected. (original post, 2020-12-14)
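The excerpt is truncated before the actual fix; one commonly cited remedy (an assumption about what the post recommends) is to point the JVM at Hadoop's native library directory, e.g. in /etc/profile or spark-env.sh:

```shell
HADOOP_HOME=/usr/local/hadoop-3.3.0   # adjust to your install
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```

If the warning persists, `file $HADOOP_HOME/lib/native/libhadoop.so.*` shows whether the bundled library actually matches your platform.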
Using HBase 2.3 in standalone local mode (no HDFS needed)
1. Install standalone HBase. Download hbase-2.3.0-bin.tar.gz and jdk-8u151-linux-x64.tar.gz and unpack them to /software. Disable the firewall and IPv6, set the hostname to hbase, and set the timezone to China. Confirm the OS is CentOS 7: [root@hbase ~]# cat /etc/redhat-release → CentOS Linux release 7.8.2003 (Core); [root@hbase ~]# uname -a → Linux hbase… (original post, 2020-12-11)
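For reference, standalone HBase only needs a local-filesystem root in conf/hbase-site.xml; the snippet below is a sketch written to /tmp for illustration (the /software prefix follows the excerpt, the data-directory names are mine):

```shell
# Minimal standalone-mode hbase-site.xml: local rootdir, no HDFS.
cat > /tmp/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:///software/hbase-data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/software/zookeeper-data</value>
  </property>
</configuration>
EOF
grep -c '<property>' /tmp/hbase-site.xml
```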
Fixing hive "Dynamic partition strict mode requires at least one static partition column"
Error: FAILED: SemanticException [Error 10096]: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict. 1. Fix: set hive.exec.dynamic.partition.mode=nonstrict; 2. Re-run the statement. (original post, 2020-12-10)
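To make the fix permanent rather than per-session, the same property can go into hive-site.xml; sketched here against /tmp for illustration:

```shell
# Permanent equivalent of `set hive.exec.dynamic.partition.mode=nonstrict;`.
cat > /tmp/dynpart-property.xml <<'EOF'
<property>
  <name>hive.exec.dynamic.partition.mode</name>
  <value>nonstrict</value>
</property>
EOF
grep -q 'nonstrict' /tmp/dynpart-property.xml && echo ok
```

Note that strict mode exists to stop an unbounded partition fan-out by accident; disabling it globally trades that safety net for convenience.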
Fixing Hive drop table table_name hanging indefinitely
Method 1: replace the MySQL driver jar. Method 2: change the MySQL character set in my.cnf: [client] default-character-set=latin1; [mysql] default-character-set=latin1; [mysqld] default-character-set=latin1. Method 3: edit the config file (hive-site.xml): <property><name>hive.metastore.schema.verification… (original post, 2020-12-10)
ERROR: Attempting to operate on yarn resourcemanager as root ERROR: but there is no
ERROR: Attempting to operate on yarn resourcemanager as root / ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation. When this appears, edit start-yarn.sh and stop-yarn.sh in the sbin directory and add at the top of both files: YARN_RESOURCEMANAGER_USER=root, HADOOP_SECUR… (original post, 2020-12-02)
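The excerpt cuts off at "HADOOP_SECUR…"; the variables usually placed at the top of sbin/start-yarn.sh and sbin/stop-yarn.sh look like this (the last two names are my completion of the truncated text and should be verified against the full post):

```shell
# Added to the top of start-yarn.sh and stop-yarn.sh. Running daemons as root
# follows the post; a dedicated user is generally safer.
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn   # assumed completion of the truncated "HADOOP_SECUR..."
YARN_NODEMANAGER_USER=root   # assumed; commonly paired with the variables above
```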
Common Hive problems
When starting Hive: WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance w… (original post, 2020-12-02)
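The usual way to silence this warning (an assumption, since the excerpt stops mid-sentence) is to append useSSL=false to the metastore's JDBC URL in hive-site.xml; a sketch written to /tmp for illustration (host, port, and database name are placeholders):

```shell
# Inside XML, the & separating URL parameters must be escaped as &amp;.
cat > /tmp/metastore-url.xml <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
</property>
EOF
grep -q 'useSSL=false' /tmp/metastore-url.xml && echo ok
```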
Fixing sqoop-to-Hive import error "Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is …"
When importing into a Hive table with Sqoop, you may see: 19/12/06 00:09:41 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly. 19/12/06 00:09:41 ERROR tool.ImportTool: Import failed: java.io.IOException: java.lang.ClassNotFo… (original post, 2020-12-02)
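A commonly cited fix for this ClassNotFound (an assumption about the truncated post) is to expose Hive's jars to Sqoop through HADOOP_CLASSPATH, e.g. in /etc/profile:

```shell
export HIVE_HOME=/usr/local/hive-3.1.2            # adjust to your install
# The glob is left unexpanded here and resolved by the Hadoop scripts at run time.
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
```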
Hive insert statements stuck at 0% MapReduce progress
Edit yarn-site.xml and add: <property><name>yarn.nodemanager.resource.memory-mb</name><value>4096</value></property> (original post, 2020-12-02)
SafeModeException: Cannot delete **. Name node is in safe mode
This happens because the NameNode is in safe mode. To leave safe mode: hadoop dfsadmin -safemode leave (original post, 2020-12-02)
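The post's command uses the older `hadoop dfsadmin` entry point; on Hadoop 2.x/3.x the `hdfs` equivalent is preferred. Shown as a sketch only, since it needs a running cluster:

```shell
# Check the current state first, then leave safe mode.
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave
```

The NameNode enters safe mode on startup until enough block reports arrive; forcing it out manually only makes sense when it is stuck there.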
Hadoop 3.x setup error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password)
Starting the pseudo-distributed Hadoop cluster fails with: node1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). node2: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). Data nodes node1 and node2 did not start normally (see figure). Most reports of this error online blame passwordless SSH, but every server here could already log in without a password before the error appeared, so SSH was initially ruled out… (original post, 2020-12-02)
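For completeness, the standard passwordless-SSH setup this error usually points to (a sketch; node1 stands for each worker host, and note the post says SSH was not the cause in its case):

```shell
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair once on the master
ssh-copy-id root@node1                     # repeat for every node, including the master itself
```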
Error: could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
YARN cannot find the main class when running a MapReduce job: [2019-12-31 20:02:59.464] Container exited with a non-zero exit code 1. Error file: prelaunch.err. Last 4096 bytes of prelaunch.err: Last 4096 bytes of stderr: 错误: 找不到或无法加载主类 (Error: could not find or load main class) org.apache.hadoop.mapreduce.v2.app.MRAppMaster. Fix: in… (original post, 2020-12-01)
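The excerpt ends at "Fix: in…". A widely used fix for this error on Hadoop 3.x, which may or may not be what the post goes on to describe, is to tell the MapReduce containers where Hadoop lives via mapred-site.xml; sketched here against /tmp:

```shell
HADOOP_HOME=/usr/local/hadoop-3.3.0   # adjust to your install
cat > /tmp/mapred-env.xml <<EOF
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
EOF
grep -c 'HADOOP_MAPRED_HOME' /tmp/mapred-env.xml
```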
beeline startup error: User: root is not allowed to impersonate root
Did you write any code today? When starting beeline: User: root is not allowed to impersonate root. Error: beeline> !connect jdbc:hive2://192.168.33.01:10000 root root → Connecting to jdbc:hive2://192.168.33.01:10000 → Error: Failed to open new session: java.lang.RuntimeException: org.apa… (original post, 2020-11-27)
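The usual fix for this impersonation error (hedged; the excerpt is cut off before the post's solution) is a proxy-user rule in Hadoop's core-site.xml, followed by an HDFS/HiveServer2 restart; sketched here against /tmp:

```shell
# Allow the "root" user to be proxied from any host/group. The wildcard is
# convenient for a lab setup; tighten it on anything shared.
cat > /tmp/proxyuser.xml <<'EOF'
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
EOF
grep -c 'hadoop.proxyuser.root' /tmp/proxyuser.xml
```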
Hive startup error: java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument
Full error: Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357) at org.apache.hadoop.conf.Configu… (original post, 2020-11-26)
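This NoSuchMethodError is the well-known Guava clash between Hive 3.1.2 (which ships guava-19.0.jar) and Hadoop 3.x (which ships a much newer Guava). The standard fix, which the truncated excerpt likely goes on to describe, is to replace Hive's copy with Hadoop's (the jar versions below match the stock 3.1.2/3.3.0 tarballs; verify against your installs):

```shell
# Remove Hive's old Guava and copy in Hadoop's newer one.
rm $HIVE_HOME/lib/guava-19.0.jar
cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $HIVE_HOME/lib/
```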
Hive 3.1.2 single-node pseudo-distributed deployment
Download Hive: wget http://mirror.bit.edu.cn/apache/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz. Install: tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /root/apps (unpack); mv apache-hive-3.1.2-bin hive-3.1.2 (rename). Environment: export HIVE_HOME=/root/apps/hive-3.1.2; export PA… (original post, 2020-11-26)
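The environment block is cut off at "export PA"; a likely completion (my assumption) for /etc/profile:

```shell
export HIVE_HOME=/root/apps/hive-3.1.2
export PATH=$PATH:$HIVE_HOME/bin   # assumed completion of the truncated "export PA..." line
```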
Installing and running HBase-2.3.3
1. Download: cd /home/bigdata; wget http://archive.apache.org/dist/hbase/2.3.3/hbase-2.3.3-bin.tar.gz; tar -zxvf hbase-2.3.3-bin.tar.gz; chmod -R 777 hbase-2.3.3. 2. Configure: vim /home/bigdata/hbase-2.3.3/conf/hbase-env.sh and add: export JAVA_HOME=/usr/java/jdk1.… (original post, 2020-11-26)
HBase 2.3 single-node pseudo-distributed deployment
1. Install standalone HBase. Download hbase-2.3.0-bin.tar.gz and jdk-8u151-linux-x64.tar.gz and unpack them to /software. Disable the firewall and IPv6, set the hostname to hbase, and set the timezone to China. Confirm the OS is CentOS 7: [root@hbase ~]# cat /etc/redhat-release → CentOS Linux release 7.8.2003 (Core); [root@hbase ~]# uname -a → Linux hb… (original post, 2020-11-26)
Hadoop 3.x single-node pseudo-distributed deployment
Disable the firewall: systemctl stop firewalld.service; systemctl disable firewalld.service. Configure IP mappings: vim /etc/hosts → 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4; ::1 localhost localhost.localdomain localhost6 localhost6.local… (original post, 2020-11-26)
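After the host setup in the excerpt, a pseudo-distributed install minimally needs fs.defaultFS in core-site.xml and a replication factor of 1 in hdfs-site.xml; a sketch against /tmp (the hostname and port are illustrative, not from the truncated post):

```shell
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
cat > /tmp/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
grep -q 'fs.defaultFS' /tmp/core-site.xml && grep -q 'dfs.replication' /tmp/hdfs-site.xml && echo ok
```

Replication 1 is what makes a single-node HDFS report healthy: with only one DataNode, the default factor of 3 could never be satisfied.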
Hadoop-3.3.0 deployment and configuration
1. Download Hadoop from http://hadoop.apache.org/; install location: /usr/local/hadoop/hadoop-3.3.0. 2. Unpack Hadoop to the target folder: tar -zxf hadoop-3.3.0.tar.gz -C /usr/local/hadoop. 3. Check the Hadoop version: cd /usr/local/hadoop/hadoop-3.3.0; ./bin/hadoop version. 4. Hadoop configuration. 4.1 Create directories: in /usr… (original post, 2020-11-25)
NullPointerException in spark.storage.BlockManagerMaster.registerBlockManager
Symptom, reported on the Java side: 19/11/05 15:06:05 INFO SparkEnv: Registering OutputCommitCoordinator; 19/11/05 15:06:06 INFO Utils: Successfully started service 'SparkUI' on port 4040. 19/11/05 15:06:06 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://DD-HP5500:4… (original post, 2020-06-03)
Running and debugging Spark 2.x on a remote cluster from IDEA
Method 1: use your own machine as the driver and submit the jar directly to the cluster; Spark's Master and Workers then keep a connection to the local driver, which makes debugging convenient. import org.apache.spark.SparkContext; import org.apache.spark.SparkConf; object WordCount { def main(args: Array[String]): Unit = { val sparkConf = new SparkConf().se… (original post, 2020-05-24)
Fixing "Permission denied: user=administrator, access=WRITE"
The Hadoop cluster runs on several Linux servers, and I wanted to use a Java client on Windows to operate on HDFS files in the cluster, but the client failed at runtime with the authentication error below. After several frustrating days the problem was finally solved; this post records the process. (Skip to the end for the final fix; read top to bottom for my troubleshooting.) Problem description: the upload code: package com.cys.mapreduce; import java.io.IOException; import java.util.StringTokenizer;… (original post, 2020-05-24)
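Common workarounds for this error (hedged; the excerpt stops before the post's own final fix) tell the client which HDFS user to act as, so writes no longer run as the local Windows account:

```shell
# Option 1: environment variable read by the Hadoop client libraries.
export HADOOP_USER_NAME=root
# Option 2 (equivalent, for a Java client): pass -DHADOOP_USER_NAME=root
# as a JVM argument instead.
```

A heavier-handed alternative sometimes suggested is setting dfs.permissions.enabled=false in hdfs-site.xml, which disables HDFS permission checks entirely.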
CentOS 7: zookeeper 3.1.4 + hadoop 3.1.2 + HBase 2.1.3, fully distributed with high availability (HA)
Copyright notice: this is the blogger's original article, released under the CC 4.0 BY-SA license; reproduction must include a link to the original and this notice. … (original post, 2020-02-17)