Troubleshooting Notes
struggling_rong
Kafka fails to start: /usr/local/kafka/kafka_2.11-1.1.0/bin/kafka-run-class.sh: line 271
Problem: starting Kafka with kafka-server-start.sh config/server.properties fails with: /usr/local/kafka/kafka_2.11-1.1.0/bin/kafka-run-class.sh: line 271: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64//bin/j... (Original, 2018-09-02)
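Although the message is truncated, this shape of error (a dead path ending in //bin/java) usually means JAVA_HOME points at a JDK directory that no longer exists, for example after a package update replaced the versioned OpenJDK folder. A minimal sketch of the check and fix, assuming a typical OpenJDK install path (adjust to your machine):

```shell
# Assumption: the failure at kafka-run-class.sh line 271 is the script trying
# to execute "$JAVA_HOME/bin/java" at a path that no longer exists.
java_home_ok() {
  # True only if the argument is non-empty and holds an executable bin/java.
  [ -n "$1" ] && [ -x "$1/bin/java" ]
}

if ! java_home_ok "$JAVA_HOME"; then
  # Placeholder path; locate the real JDK with: readlink -f "$(which java)"
  export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
fi
```

After exporting a valid JAVA_HOME (in the shell or in /etc/profile), rerun kafka-server-start.sh.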
count(*) on a Hive table returns 0
Problem: select count(*) from t1; returns 0. Cause: the table was created with the Parquet storage format, so count(*) does not produce the real row count. Workaround: counting a single concrete column returns the correct number. (Repost, 2018-09-21)
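A sketch of the workaround, plus a likely underlying explanation: Hive can answer count(*) from table statistics rather than by scanning data, and those statistics stay at zero when files land in the table directory without going through Hive. The column name id below is a placeholder:

```sql
-- Workaround from the post: count a concrete column instead of *
SELECT COUNT(id) FROM t1;

-- Assumption about the root cause: COUNT(*) is served from table statistics
-- (hive.compute.query.using.stats), which are 0 for externally written files.
-- Recomputing the stats should also repair COUNT(*):
ANALYZE TABLE t1 COMPUTE STATISTICS;
SELECT COUNT(*) FROM t1;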
ERROR exec.DDLTask: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.xxx
Problem: importing MySQL data into Hive with Sqoop: sqoop import --connect jdbc:mysql://master:3306/mf --username root --password xxxx --table t_activity_page --split-by id --delete-target-dir --hive-import -m 1 fails with the error:... (Repost, 2018-09-19)
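The excerpt cuts off before the fix, but a NoSuchMethodError on com.fasterxml.jackson.databind during a --hive-import commonly means Sqoop's bundled jackson jars are older than the ones Hive needs. A frequently cited remedy (an assumption here, not confirmed by the truncated post) is to set Sqoop's jackson jars aside and copy in Hive's:

```shell
# Assumption: Sqoop's lib dir carries an older jackson-databind that shadows
# Hive's. Back up Sqoop's copies and substitute Hive's versions.
swap_jackson_jars() {
  # $1 = Sqoop install dir, $2 = Hive install dir (both placeholders).
  mkdir -p "$1/lib/jackson-bak" &&
  mv "$1"/lib/jackson-*.jar "$1/lib/jackson-bak/" &&
  cp "$2"/lib/jackson-*.jar "$1/lib/"
}

# Usage (paths are examples):
#   swap_jackson_jars /usr/local/sqoop /usr/local/hive
```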
MetaException(message:Hive Schema version 2.3.0 does not match metastore's schema version 1.2.0
Environment: Spark 2.2.0, Hive 2.3.3. Problem: after creating a Hive table from a Spark application, working with Hive from the hive shell fails with: MetaException(message:Hive Schema version 2.3.0 does not match metastore's schema version 1.2.0. Cause: when the Spark application created the table, the specified sc... (Repost, 2018-09-10)
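The excerpt is truncated, but this mismatch typically appears because Spark 2.x ships a Hive 1.2-era metastore client that stamps its own schema version into the metastore database. Two commonly used remedies, offered here as a sketch rather than the post's confirmed fix: upgrade the metastore schema with Hive's schematool ($HIVE_HOME/bin/schematool -dbType mysql -upgradeSchema), or relax the check in hive-site.xml:

```xml
<!-- hive-site.xml: stop Hive from rejecting the mismatched metastore schema.
     This hides the mismatch rather than repairing it; prefer schematool
     when you control the metastore database. -->
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
```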
MetaException(message:file:/user/hive/warehouse/xxx is not a directory or unable to create one)
Environment: Hadoop 2.7.6, Spark 2.2.0, Hive 2.3.3. Problem: a Spark application saving data to a Hive table that does not yet exist fails with: Caused by: MetaException(message:file:/user/hive/warehouse/t_spark_ncdc is not a directory or unable to create one)... (Original, 2018-09-10)
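The file: scheme in the message is the clue: Spark's metastore client is falling back to a local-filesystem warehouse path instead of HDFS, which usually means hive-site.xml is not on Spark's classpath. A common fix (an assumption, since the post's resolution is truncated) is to copy hive-site.xml into $SPARK_HOME/conf and make sure the warehouse property points at HDFS; hostname and port below are placeholders:

```xml
<!-- hive-site.xml (also copied into $SPARK_HOME/conf so the Spark app sees it).
     Replace master:9000 with your NameNode address. -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://master:9000/user/hive/warehouse</value>
</property>
```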
org.apache.hadoop.hbase.DoNotRetryIOException: Field is not a long, it's 19 bytes wide
Problem: calling the atomic-increment method from Java code fails with: Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Field is n... (Original, 2018-09-02)
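HBase increment operations require the target cell to contain exactly an 8-byte big-endian long; "it's 19 bytes wide" means the cell was previously written in some other encoding, typically a number rendered as a string. HBase's Bytes.toBytes(long) produces the required 8-byte form. The sketch below uses only the JDK (ByteBuffer yields the same big-endian layout) to show why the two encodings differ in width; the numeric value is an arbitrary example:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CounterBytes {
    public static void main(String[] args) {
        // An increment-able HBase cell must hold an 8-byte big-endian long,
        // which is what Bytes.toBytes(long) writes. ByteBuffer matches it:
        byte[] asLong = ByteBuffer.allocate(8).putLong(1535875200000L).array();

        // Writing the same number as a UTF-8 string stores one byte per digit,
        // producing the "Field is not a long, it's N bytes wide" failure
        // when an increment later touches the cell.
        byte[] asString = "1535875200000".getBytes(StandardCharsets.UTF_8);

        System.out.println("long encoding:   " + asLong.length + " bytes");   // 8
        System.out.println("string encoding: " + asString.length + " bytes"); // 13
    }
}
```

The fix, then, is to write counter columns with Bytes.toBytes(longValue) from the start, or delete and rewrite cells that were stored as strings.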
regionserver.HRegionServer: error telling master we are up
Problem: after editing the HBase config files hbase-env.sh and hbase-site.xml and restarting the cluster, slave2 fails to come up; its log shows: regionserver.HRegionServer: reportForDuty to master=master,16000,1535782912381 with port=16020, startcode=153578291440... (Original, 2018-09-02)
TaskSetManager: Lost task 0.0 in stage 9.0 (TID 18, localhost, executor driver): java.lang.NoSuchMet
Environment: Windows 10, IDEA, Scala 2.11, Spark 2.2.0. Problem: running Spark SQL code locally fails. The code: // 5. load data from an external source: val fileDogDF = spark.read.json(s"data/sql/te.json"); fileDogDF.show(). The reported exception: TaskSetManager: Lost task 0... (Original, 2018-09-02)
spark-submit cannot be killed
Problem: while spark-shell is running, an abnormal action such as closing the laptop lid to put the machine to sleep or suspending the VM leaves spark-shell broken afterwards; running kill -9 on the spark-submit process turns it into a zombie that cannot be killed. Cause: the abnormal suspend. Fix: the only remedy found so far is a reboot. (Original, 2018-09-02)
After saving a DataFrame as a table, reading the table back returns no data
Problem: personDF.write.saveAsTable("person") // save as a table; spark.catalog.listTables().show() // list tables; val personTable = spark.read.table("person"); personTable.show() // empty. Cause:... (Original, 2018-09-02)
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registe
Problem: submitting a Spark application produces the warning: 18/07/29 11:42:37 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources. Checking... (Original, 2018-09-02)
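This warning means the scheduler cannot place any task: either no workers are registered with the master (check the Master web UI, port 8080 in standalone mode), or every worker has less free memory or fewer free cores than the application requested. When the cluster is healthy but small, the usual fix is to shrink the request. An illustrative spark-defaults.conf fragment, with values that are assumptions to tune against what your workers actually offer:

```
# spark-defaults.conf -- illustrative values, not recommendations.
# Each executor must fit on a single worker's free memory.
spark.executor.memory   512m
# Cap the total cores the app asks for across the cluster.
spark.cores.max         2
```

The same settings can be passed per submission via --executor-memory and --total-executor-cores on spark-submit.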
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
Problem: running a Spark application fails with: Exception in thread "main" org.apache.spark.SparkException: Task not serializable at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298) at o... (Original, 2018-09-02)
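As the ClosureCleaner.ensureSerializable frame in the stack trace shows, Spark Java-serializes every task closure before shipping it to executors, and fails fast when the closure drags in a non-serializable object (a connection, a client handle, the enclosing class via an implicit this reference). The JDK-only sketch below reproduces that check outside Spark; the Handle/Task names are hypothetical stand-ins:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ClosureCheck {
    // Stand-in for any non-serializable resource a closure might capture.
    static class Handle {}

    // Fails Java serialization: the Handle field is dragged along with the task.
    static class BadTask implements Serializable {
        Handle handle = new Handle();
    }

    // Passes: transient excludes the field, mirroring the usual Spark fix of
    // not capturing the resource (or creating it inside the task instead).
    static class GoodTask implements Serializable {
        transient Handle handle = new Handle();
    }

    // Rough analogue of Spark's pre-flight ClosureCleaner.ensureSerializable.
    static boolean isSerializable(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("BadTask serializable?  " + isSerializable(new BadTask()));  // false
        System.out.println("GoodTask serializable? " + isSerializable(new GoodTask())); // true
    }
}
```

Typical fixes in the Spark code itself: make the captured class Serializable, mark unneeded fields transient, or copy member fields into local vals before the closure uses them.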
org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
Problem: executing a statement in the hbase shell raises: ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2379) at org.apache.had... (Original, 2018-10-15)