CDH 5.16.2: Exceptions and Pitfalls

```
20/05/05 23:52:24 WARN CorruptStatistics: Ignoring statistics because created_by could not be parsed (see PARQUET-251): parquet-mr
org.apache.parquet.VersionParser$VersionParseException: Could not parse created_by: parquet-mr using format: (.+) version ((.*) )?\(build ?(.*)\)
	at org.apache.parquet.VersionParser.parse(VersionParser.java:112)
	at org.apache.parquet.CorruptStatistics.shouldIgnoreStatistics(CorruptStatistics.java:60)
	at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetStatistics(ParquetMetadataConverter.java:263)
	at org.apache.parquet.format.converter.ParquetMetadataConverter.fromParquetMetadata(ParquetMetadataConverter.java:567)
	at org.apache.parquet.format.converter.ParquetMetadataConverter.readParquetMetadata(ParquetMetadataConverter.java:544)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:431)
	at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
	at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:107)
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:109)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:381)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:355)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:168)
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:109)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.sort_addToSorter$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
	at org.apache.spark.sql.execution.aggregate.SortAggregateExec$$anonfun$doExecute$1$$anonfun$3.apply(SortAggregateExec.scala:80)
	at org.apache.spark.sql.execution.aggregate.SortAggregateExec$$anonfun$doExecute$1$$anonfun$3.apply(SortAggregateExec.scala:77)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

1. This warning is only raised when the job is run from IDEA; it does not appear when the same job runs in the cluster environment.
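Since this is only a warning about ignoring Parquet statistics, nothing has to change for correctness. If it clutters the console during local development, one common workaround (not part of the original fix, and assuming the project loads `src/main/resources/log4j.properties`, i.e. log4j 1.x as used by Spark on CDH 5) is to raise the log level for the `org.apache.parquet` loggers:

```bash
# Hypothetical workaround for local IDEA runs only: silence the benign
# PARQUET-251 warning by raising the log level of the org.apache.parquet loggers.
# Assumes the project reads src/main/resources/log4j.properties.
cat >> src/main/resources/log4j.properties <<'EOF'
log4j.logger.org.apache.parquet=ERROR
EOF
```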

```
20/05/05 23:29:18 WARN ShellBasedUnixGroupsMapping: got exception trying to get groups for user atguigu
org.apache.hadoop.util.Shell$ExitCodeException: GetLocalGroupsForUser error (1332): ?????????????????
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
	at org.apache.hadoop.util.Shell.run(Shell.java:379)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
	at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
	at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
	at org.apache.hadoop.security.Groups.getGroups(Groups.java:89)
	at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1352)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:436)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetastoreClient(Hive.java:3005)
```

2. This happens when permissions are configured for a user that does not actually exist; manually create the user and then rerun the program.
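A minimal sketch of that fix, assuming the missing account is the Linux user `atguigu` from the log above and that the commands are run as root on the cluster/gateway host (on a Windows development machine the equivalent would be creating a local account with the same name):

```bash
# Sketch: create the missing OS account and give it an HDFS home directory.
# Run as root; user/group names follow the log above and may differ in your environment.
useradd atguigu
# create the HDFS home directory as the hdfs superuser and hand it to the new user
sudo -u hdfs hdfs dfs -mkdir -p /user/atguigu
sudo -u hdfs hdfs dfs -chown atguigu:atguigu /user/atguigu
```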

```
Cannot access: /user/admin. Note: you are a Hue admin but not a HDFS superuser, "hdfs" or part of HDFS supergroup, "supergroup".
StandbyException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error (error 403)
```

3. Cause: the HA NameNode pair consists of hadoop102 and hadoop104. Originally hadoop104 was the active NameNode, but the active role moved to hadoop102. Because the Hue configuration sets webhdfs_url=hadoop104, either manually switch the NameNodes so that hadoop104 is active again, or update the Hue configuration; Hue then works normally again.
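A sketch of both options, assuming placeholder NameNode service IDs `nn1` (hadoop102) and `nn2` (hadoop104); look up the real IDs with `hdfs getconf -confKey dfs.ha.namenodes.<nameservice>`:

```bash
# Option 1: fail the active role back so the NameNode on hadoop104 is active again.
# nn1/nn2 are placeholder service IDs -- substitute the IDs from
#   hdfs getconf -confKey dfs.ha.namenodes.<nameservice>
sudo -u hdfs hdfs haadmin -getServiceState nn1   # e.g. the NameNode on hadoop102
sudo -u hdfs hdfs haadmin -getServiceState nn2   # e.g. the NameNode on hadoop104
sudo -u hdfs hdfs haadmin -failover nn1 nn2      # transfer the active role to nn2 (hadoop104)

# Option 2: point Hue at the NameNode that is currently active
# (hue.ini snippet; 50070 is the default CDH 5 NameNode web port, adjust if customized):
# [hadoop]
#   [[hdfs_clusters]]
#     [[[default]]]
#       webhdfs_url=http://hadoop102:50070/webhdfs/v1
```

A more durable setup is to point `webhdfs_url` at an HttpFS role instead of a single NameNode, so Hue keeps working across failovers without configuration changes.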
