NullPointerException in WHERE conditions on Hive versions before 1.2
1. Background: Hive 1.1.0, ORC-format table, with the condition where name in ('支付金额','订单量','客单价','毛利率','全链路达成率','猫超重点商品在架率','基准价毛利率','商品缺货率')
2. The log:
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: org.apache.had…

Flume HDFS sink exception
1. The error message:
2016-08-26 14:19:17,704 (hdfs-sink1-call-runner-2) [ERROR - org.apache.flume.sink.hdfs.AbstractHDFSWriter.hflushOrSync(AbstractHDFSWriter.java:267)] Error while trying to hflushOrSync!
2016-…

Sqoop import into Hive fails
1. Sqoop error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. InvalidObjectException(message:There is no database named dw)
2. hive.log: 2016-08-23 16:1…

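The message points to a missing target database rather than a Sqoop problem. A minimal sketch of the usual fix, creating the database before rerunning the import (the dw name comes from the error; the Sqoop connect string and table are placeholders):

    # create the missing target database first
    hive -e "CREATE DATABASE IF NOT EXISTS dw;"
    # then rerun the import (connect string and table are placeholders)
    sqoop import --connect jdbc:mysql://host/source_db --table orders \
      --hive-import --hive-table dw.orders
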
DataNode: Bad connect ack with firstBadLink
1. Every job launch is slow and logs this exception:
ERROR - java.io.IOException: Bad connect ack with firstBadLink as 10.21.232.114:50010
23-08-2016 14:13:21 CST import_ucord01_order_discount ERROR - at org.apache.hadoop.hdfs.DF…

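firstBadLink typically means the client, or an upstream DataNode in the write pipeline, cannot reach that DataNode's data-transfer port. A quick connectivity check, assuming the host and port from the log (diagnostic commands are illustrative):

    # can we reach the flagged DataNode's transfer port?
    telnet 10.21.232.114 50010
    # if not, check the firewall and whether the DataNode process is alive
    iptables -L -n | grep 50010
    jps | grep DataNode
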
MapReduce jobs run very slowly
1. Application log:
2016-08-11 14:48:15,174 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Ramping down all scheduled reduces:0
2016-08-11 14:48:15,174 INFO […

Hive and HBase integration exceptions
I. Two exceptions appear intermittently:
Error: java.lang.IllegalArgumentException: Illegal character code:-1, at 0. User-space table qualifiers can only contain 'alphanumeric characters': i.e. [a-zA-Z_0-9-.]: � at org.apach…

Recurring Redis exceptions on a schedule
1. Lately the business keeps reporting Redis errors, always around 1 a.m.:
16/07/18 01:02:30 ERROR ShardedDaoImpl: sharded redis lrange:redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection timed ou…

MapReduce job hangs indefinitely
16/07/15 17:34:41 INFO input.FileInputFormat: Total input paths to process : 1
16/07/15 17:34:42 INFO mapreduce.JobSubmitter: number of splits:1
16/07/15 17:34:42 INFO mapreduce.JobSubmitter: Submitti…

StandbyException
16/07/15 16:16:48 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over hdfs01.beta1.fn/10.202.249.230:9000 after 2 fail over attempt…

hadoop-2.5.0-cdh5.3.0 HA online upgrade
This post upgrades to Hadoop 2.6.
1. Pre-upgrade preparation: for backing up the NameNode metadata and configuration, see the previous post: http://blog.csdn.net/linux_ja/article/details/51908466
2. Back up upgrade state:
hdfs dfsadmin -rollingUpgrade prepare
hdfs dfsadmin -rollingUpgrade quer…

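For context, a sketch of the full rolling-upgrade command sequence on an HA cluster (these are the standard hdfs dfsadmin subcommands; the binary-swap and restart steps in between are elided):

    # create the rollback fsimage before touching any binaries
    hdfs dfsadmin -rollingUpgrade prepare
    # poll until the rollback image is reported ready
    hdfs dfsadmin -rollingUpgrade query
    # ... restart NameNodes and DataNodes on the new version ...
    # once the cluster is verified on the new version
    hdfs dfsadmin -rollingUpgrade finalize
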
hdfs dfsadmin -rollingUpgrade explained
In the source, hdfs dfsadmin -rollingUpgrade prepare comes down to:
CheckpointSignature rollEditLog() throws IOException {
    getEditLog().rollEditLog();
    // Record this log segment ID in all of the storage directori…

Upgrading Hadoop 2.3.0-cdh5.0.2 to Hadoop 2.5.0-cdh5.3.1
I. Prepare the upgrade packages:
for line in `cat /home/hadoop/platform/hadoop.list|awk '{print $1}'`; do echo $line;ssh $line "/bin/mkdir /home/hadoop/platform";done
for line in `cat /home/hadoop/platform/hadoop.list|a…

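The second loop is cut off above; a sketch of how the distribution step presumably continues, pushing the new release to every host listed in hadoop.list (tarball name and paths are illustrative):

    # copy the new release to every node, then unpack it remotely
    for line in $(awk '{print $1}' /home/hadoop/platform/hadoop.list); do
      echo "$line"
      scp hadoop-2.5.0-cdh5.3.1.tar.gz "$line":/home/hadoop/platform/
      ssh "$line" "tar -zxf /home/hadoop/platform/hadoop-2.5.0-cdh5.3.1.tar.gz -C /home/hadoop/platform"
    done
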
Flume kafka-sink high CPU
When Flume sinks to Kafka, CPU usage climbs too high. The analysis:
I. Analyzing the high CPU in Flume's Kafka sink:
1. Get the Flume process id:
[root@datanode conf]$ top
top - 10:17:58 up 14 days, 18:09, 2 users, load average: 1.37, 1.11, 0.65
Tasks: 436 total,…

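The standard next step once the process id is known: find the hottest thread and map it to a Java stack frame. A sketch (pid and thread id are placeholders):

    PID=12345                                 # flume's pid, taken from top
    top -H -p "$PID"                          # find the thread burning CPU
    printf '%x\n' 12367                       # convert that thread id to hex -> 304f
    jstack "$PID" | grep -A 20 'nid=0x304f'   # locate it in the thread dump
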
org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection timed out
Tracking down an odd Storm problem. An exception that is reported frequently:
Exception in thread "main" org.apache.thrift7.transport.TTransportException: java.net.ConnectException: Connection timed out at org.apache.thrift7.transport.TSocket.open(TS…

Problems with Hive on Tez
I. Environment: Hive 1.3.1, Tez 0.5.0
II. Runtime exception:
return code -101 from org.apache.hadoop.hive.ql.exec.tez.TezTask. org.apache.tez.mapreduce.hadoop.MRHelpers.getBaseMRConfiguration(Lorg/apache/hadoop/conf/Configurat…

Integrating Hive on Tez
I. Downloads:
1. apache-tez-0.5.4-src.tar.gz
2. apache-maven-3.2.5-bin.tar
3. protobuf-2.5.0.tar.gz
II. Installation:
1. Install mvn (omitted).
2. Install protobuf:
a. tar -zxf protobuf-2.5.0.tar.gz unpacks the protobuf-2.5.0 sources;
b. …

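Step b is cut off above; a sketch of the usual protobuf 2.5.0 source build (standard autotools steps, installing to the default /usr/local prefix):

    cd protobuf-2.5.0
    ./configure              # add --prefix=... to install elsewhere
    make && make install
    protoc --version         # should print: libprotoc 2.5.0
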
Configuring JMX monitoring for Spark executors
JMX monitoring for Spark is a bit more tedious than for Storm: first add the line below to spark-defaults.conf. Port 8711 must not be reused, which means you cannot start two executors on one node without a port conflict, so it is less friendly than Storm:
spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dc…

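The option list is cut off above; a sketch of what the full line plausibly looks like, using the standard JVM jmxremote flags and the 8711 port mentioned in the post (everything after -Dcom.sun.management.jmxremote is an assumption):

    # append to $SPARK_HOME/conf/spark-defaults.conf
    echo 'spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8711 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false' >> "$SPARK_HOME/conf/spark-defaults.conf"
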
Configuring JMX monitoring for Storm workers
Configuring JMX monitoring lets you inspect GC, threads, and so on, which makes debugging easier. The parameters below go in storm.yaml. %ID% resolves to each worker's process number; since one node can run multiple workers, the dynamic placeholder is used so duplicate ports do not fail worker startup:
worker.childopts: "-Xmx2048m -Xms2048m -Xmn500m -XX:PermSize=256M -XX:MaxPermSi…

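The childopts value is cut off above; a sketch of a complete worker.childopts with JMX enabled, deriving a unique JMX port from %ID% (the flags beyond those shown in the post are assumptions):

    # append to storm.yaml; the 1%ID% pattern prefixes the worker id with 1
    # (e.g. id 6700 -> JMX port 16700) so workers on one node never collide
    echo 'worker.childopts: "-Xmx2048m -Xms2048m -Xmn500m -XX:PermSize=256M -XX:MaxPermSize=256M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=1%ID% -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"' >> storm.yaml
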
FairSync vs NonfairSync
Comparing the difference between the two through the source:
static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;
    final void lock() {
        acquire(1); // directly calls tryAcqui…

Fixing a SparkSQL-Hive integration exception
1. Error: Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
This happens because the metastore lookup defaults to Derby, so the fix is to supply the MySQL configuration: in spar…
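
A sketch of the usual wiring, assuming a MySQL-backed metastore is already configured on the Hive side (paths and driver jar version are illustrative):

    # let spark pick up hive's metastore configuration
    cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"
    # the mysql jdbc driver must be on the driver classpath
    spark-sql --driver-class-path /path/to/mysql-connector-java-5.1.38.jar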