Testing Flink Real-Time Streams, Part 5: A Roundup of Flink Streaming Test Problems

I. ZooKeeper Errors

1. After starting the ZK cluster, ./zkServer.sh status reports "Error contacting service. It is probably not running."

Problem description: In a three-node ZK cluster, after running ./zkServer.sh start on every node, zkServer.sh status reports the following:

$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/xxx/zk/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

Problem analysis: Any of the following can cause this error; check them one by one.

1) The myid file content is wrong, i.e. the id does not match the configured server.id; or the myid file is in the wrong place — it must live in the directory set by dataDir= in zoo.cfg.

2) The firewall is still up, or iptables is filtering the ports. For how to disable the firewall on Ubuntu, see Testing Flink Real-Time Streams, Part 1: Setting Up a ZK + Kafka Cluster.

3) zoo.cfg is incorrect, e.g. a wrong hostname.

4) Port 2181 is already in use (rare, but possible).

Solution:

I went through the items in turn (zoo.cfg, then the myid file), and after disabling the firewall the problem was resolved:

$ ./bin/zkServer.sh status
JMX enabled by default
Using config: /home/flink/zookeeper/bin/../conf/zoo.cfg
Mode: leader
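The myid check above can be sketched as follows. The dataDir path and server id here are illustrative assumptions, not this cluster's real values; substitute the dataDir= value from your zoo.cfg and the id of the node you are on.

```shell
# Sketch: create myid under the directory configured as dataDir= in zoo.cfg.
# /tmp/zk-demo/data and id 2 are assumptions for illustration only.
DATADIR=/tmp/zk-demo/data
mkdir -p "$DATADIR"
echo 2 > "$DATADIR/myid"     # must match this node's server.2=host:2888:3888 entry
cat "$DATADIR/myid"
```

On Ubuntu, also confirm the firewall is down (for example, `sudo ufw status` reporting inactive) before re-running `zkServer.sh status`.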

II. Kafka Errors

1. Kafka fails at startup with the following error (the same trace repeats continuously):

at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at org.apache.kafka.clients.producer.internals.BufferPool.allocate(Buffer...
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(R...
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.jav...
        at com.intel.hibench.datagen.streaming.util.KafkaSender.send(KafkaSender....
        at com.intel.hibench.datagen.streaming.util.RecordSendTask.run(RecordSend...
        at java.util.TimerThread.mainLoop(Timer.java:555)
        at java.util.TimerThread.run(Timer.java:505)

Problem description: This Kafka build ran fine on another server; after being copied to a similar server, it fails with the error above.

Problem analysis: The failure is in java.nio, which is part of the JDK itself, so a Java version mismatch was the first suspect.

Solution: Check whether this server's Java version matches the server where Kafka ran correctly. Sure enough, the working server ran Java 1.8, while this one ran Java 1.13. After aligning the Java version, the problem was resolved.
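A quick way to compare the two hosts is to print the local JDK version on each and diff the output. This is a minimal sketch; it only assumes `java` may or may not be on PATH.

```shell
# Print the local Java version string, or a note if java is not installed.
if command -v java >/dev/null 2>&1; then
  JV=$(java -version 2>&1 | head -n 1)
else
  JV="java not found on PATH"
fi
echo "$JV"    # run this on both servers and compare the results
```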

III. Flink Errors

1. At runtime, Flink throws java.lang.IllegalStateException with an Akka timeout: "Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka.tcp://flink@127.0.0.1:44163/user/taskmanager#-968101339]] after [10000 ms]"

java.lang.IllegalStateException: Update task on instance e8779c530879046deb259b6ddb2dab86 @ localhost - 4 slots - URL: akka.tcp://flink@127.0.0.1:44163/user/taskmanager failed due to:
        at org.apache.flink.runtime.executiongraph.Execution$6.onFailure(Execution.java:954)
        at akka.dispatch.OnFailure.internal(Future.scala:228)
        at akka.dispatch.OnFailure.internal(Future.scala:227)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:174)
        at akka.dispatch.japi$CallbackBridge.apply(Future.scala:171)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
        at scala.runtime.AbstractPartialFunction.applyOrElse(AbstractPartialFunction.scala:28)
        at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:136)
        at scala.concurrent.Future$$anonfun$onFailure$1.apply(Future.scala:134)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka.tcp://flink@127.0.0.1:44163/user/taskmanager#-968101339]] after [10000 ms]
        at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
        at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
        at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
        at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
        at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
        at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
        at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
        at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
        at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
        at java.lang.Thread.run(Thread.java:748)

Problem description: While the Flink job is running, large numbers of Akka timeout errors occur, causing Flink subtask threads to drop out of the RUNNING state.

Problem analysis: Communication between nodes goes through Akka messages. Under high load, messages are not processed in time, the default timeout is exhausted, and the ask times out.

Solution: Increase the timeouts.

# Add the following parameters to conf/flink-conf.yaml
akka.ask.timeout: 600 s
web.timeout: 100000
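Applying the two settings can be scripted as below. This is a sketch: FLINK_HOME falls back to a demo directory here, so point it at your actual install before using it.

```shell
# Sketch: append the timeout overrides to flink-conf.yaml.
# FLINK_HOME defaults to a demo directory here; set it to your real install.
FLINK_HOME=${FLINK_HOME:-/tmp/flink-demo}
mkdir -p "$FLINK_HOME/conf"
cat >> "$FLINK_HOME/conf/flink-conf.yaml" <<'EOF'
akka.ask.timeout: 600 s
web.timeout: 100000
EOF
grep timeout "$FLINK_HOME/conf/flink-conf.yaml"
```

Restart the cluster afterwards so the JobManager and TaskManagers pick up the new values.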

2. At startup, Flink throws java.lang.RuntimeException: Unable to get Cluster status from Application Client. This is also a timeout, but of a different kind: Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://flink/user/applicationClient#1856313309]] had already been terminated.

TaskManager status (0/10)
TaskManager status (0/10)
2019-10-30 14:29:47,227 INFO  org.apache.flink.yarn.ApplicationClient                       - Remote JobManager has been stopped successfully. Stopping local application client
2019-10-30 14:29:47,231 INFO  org.apache.flink.yarn.ApplicationClient                       - Stopped Application client.
2019-10-30 14:29:47,231 INFO  org.apache.flink.yarn.ApplicationClient                       - Disconnect from JobManager Actor[akka.tcp://flink@192.168.1.1:36673/user/jobmanager#1269146106].
2019-10-30 14:29:47,297 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Application application_1572416881582_0002 finished with state FINISHED and final state FAILED at 1572416987024

------------------------------------------------------------
 The program finished with the following exception:

java.lang.RuntimeException: Unable to get Cluster status from Application Client
        at org.apache.flink.yarn.FlinkYarnCluster.getClusterStatus(FlinkYarnCluster.java:307)
        at org.apache.flink.client.CliFrontend.getClient(CliFrontend.java:1062)
        at org.apache.flink.client.CliFrontend.run(CliFrontend.java:315)
        at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://flink/user/applicationClient#1856313309]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at akka.pattern.AskableActorRef$.$qmark$extension(AskSupport.scala:144)
        at akka.pattern.AskSupport$class.ask(AskSupport.scala:75)
        at akka.pattern.package$.ask(package.scala:43)
        at akka.pattern.Patterns$.ask(Patterns.scala:47)
        at akka.pattern.Patterns.ask(Patterns.scala)
        at org.apache.flink.yarn.FlinkYarnCluster.getClusterStatus(FlinkYarnCluster.java:302)
        ... 4 more
Shutting down YARN cluster
2019-10-30 14:29:47,708 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Sending shutdown request to the Application Master
2019-10-30 14:29:47,709 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Deleting files in hdfs://localhost:9000/user/yjiang2/.flink/application_1572416881582_0002
2019-10-30 14:29:48,206 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - YARN Client is shutting down

Problem description: Flink fails to start normally.

Problem analysis: Since the error says Unable to get Cluster status, I first checked whether HDFS and YARN on the Flink nodes had started correctly; no errors there. I then suspected a problem on the Flink side when running on YARN, and tested with yarn-session. Sure enough, the result page on the node's :8088 UI showed errors as well:

 $FLINK_HOME/bin/yarn-session.sh  -d -s 16 -tm 10240 -n 1 -jm 8192

Solution: Going through the Flink config file conf/flink-conf.yaml, I found that the directory set by taskmanager.tmp.dirs: /home/flink/mnt/nvme0n1/flink was wrong. After correcting it, the problem was resolved.
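A cheap sanity check for this class of error is to verify that every directory named in taskmanager.tmp.dirs exists and is writable before starting the cluster. The path below is a demo stand-in for the configured one, purely an assumption for illustration.

```shell
# Sketch: check (and optionally create) the taskmanager.tmp.dirs target.
TMPDIRS=/tmp/flink-demo/tmpdir       # stand-in for the path from flink-conf.yaml
mkdir -p "$TMPDIRS"
if [ -d "$TMPDIRS" ] && [ -w "$TMPDIRS" ]; then
  echo "tmp dir ok: $TMPDIRS"
else
  echo "tmp dir missing or not writable: $TMPDIRS"
fi
```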

3. At runtime, Flink throws org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.

Problem description: Flink fails at startup with: Unable to retrieve any partitions for the requested topics [identity]. Please check previous log entries

TaskManager status (9/10)
All TaskManagers are connected
Using the parallelism provided by the remote cluster (10). To use another parallelism, set it at the ./bin/flink client.
metrics is being written to kafka topic FLINK_identity_32_50000_50_1572960216986

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:520)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
        at org.apache.flink.client.program.Client.runBlocking(Client.java:248)
        at org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:866)
        at org.apache.flink.client.CliFrontend.run(CliFrontend.java:333)
        at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
Caused by: java.lang.RuntimeException: Unable to retrieve any partitions for the requested topics [identity].Please check previous log entries
        at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08.<init>(FlinkKafkaConsumer08.java:221)
        at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08.<init>(FlinkKafkaConsumer08.java:177)
        at com.intel.hibench.flinkbench.datasource.StreamBase.createDataStream(StreamBase.java:46)
        at com.intel.hibench.flinkbench.microbench.Identity.processStream(Identity.java:38)
        at com.intel.hibench.flinkbench.RunBench.runAll(RunBench.java:68)
        at com.intel.hibench.flinkbench.RunBench.main(RunBench.java:30)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:505)
        ... 6 more
The following messages were created by the YARN cluster while running the Job:
[Tue Nov 05 13:23:24 UTC 2019] Launching container (container_1572875276237_0007_01_000002 on host localhost).
[Tue Nov 05 13:23:25 UTC 2019] Launching container (container_1572875276237_0007_01_000003 on host localhost).
[Tue Nov 05 13:23:26 UTC 2019] Launching container (container_1572875276237_0007_01_000004 on host localhost).
[Tue Nov 05 13:23:27 UTC 2019] Launching container (container_1572875276237_0007_01_000005 on host localhost).
[Tue Nov 05 13:23:28 UTC 2019] Launching container (container_1572875276237_0007_01_000006 on host localhost).
[Tue Nov 05 13:23:29 UTC 2019] Launching container (container_1572875276237_0007_01_000007 on host localhost).
[Tue Nov 05 13:23:30 UTC 2019] Launching container (container_1572875276237_0007_01_000008 on host localhost).
[Tue Nov 05 13:23:31 UTC 2019] Launching container (container_1572875276237_0007_01_000009 on host localhost).
[Tue Nov 05 13:23:32 UTC 2019] Launching container (container_1572875276237_0007_01_000010 on host localhost).
[Tue Nov 05 13:23:33 UTC 2019] Launching container (container_1572875276237_0007_01_000011 on host localhost).
Shutting down YARN cluster
2019-11-05 13:23:37,266 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Sending shutdown request to the Application Master
2019-11-05 13:23:37,268 INFO  org.apache.flink.yarn.ApplicationClient                       - Sending StopYarnSession request to ApplicationMaster.
2019-11-05 13:23:37,450 INFO  org.apache.flink.yarn.ApplicationClient                       - Remote JobManager has been stopped successfully. Stopping local application client
2019-11-05 13:23:37,452 INFO  org.apache.flink.yarn.ApplicationClient                       - Stopped Application client.
2019-11-05 13:23:37,452 INFO  org.apache.flink.yarn.ApplicationClient                       - Disconnect from JobManager Actor[akka.tcp://flink@127.0.0.1:36249/user/jobmanager#1603241213].
2019-11-05 13:23:37,484 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Deleting files in hdfs://localhost:9000/user/donald/.flink/application_1572875276237_0007
2019-11-05 13:23:37,486 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Application application_1572875276237_0007 finished with state FINISHED and final state FAILED at 1572960217283
2019-11-05 13:23:38,304 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - YARN Client is shutting down

Problem analysis: The cause is stated plainly in the trace: Caused by: java.lang.RuntimeException: Unable to retrieve any partitions for the requested topics [identity]. Please check previous log entries. In other words, the Kafka cluster has no topic named identity.

Solution: Create the identity topic on the Kafka cluster, either with the ./bin/workloads/streaming/identity/prepare/dataGen.sh script or with the ./bin/kafka-topics.sh command.
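With the kafka-topics.sh route, the topic can be created and verified roughly as below. The ZooKeeper address and the partition/replication counts are assumptions; run it from the Kafka install directory, against the live cluster, with values matching your setup.

```shell
# Sketch: create the missing "identity" topic (older, ZooKeeper-based Kafka CLI).
# localhost:2181 and the partition/replication counts are assumptions.
KT=./bin/kafka-topics.sh
if [ -x "$KT" ]; then
  "$KT" --create --zookeeper localhost:2181 \
        --replication-factor 1 --partitions 1 --topic identity
  "$KT" --list --zookeeper localhost:2181 | grep identity
  STATUS="created"
else
  STATUS="kafka-topics.sh not found here; run from the Kafka install directory"
fi
echo "$STATUS"
```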

4. At startup, Flink reports org.apache.flink.client.program.ProgramInvocationException: The main method caused an error., caused by java.lang.NumberFormatException: null.

Problem description: The Flink client cannot start, reporting ERROR: Config file not found! Should not happen. together with ERROR: Unknown config key: hibench.streambench.kafka.brokerList

TaskManager status (31/32)
All TaskManagers are connected
Using the parallelism provided by the remote cluster (32). To use another parallelism, set it at the ./bin/flink client.
ERROR: Config file not found! Should not happen. Caused by:
ERROR: Unknown config key:hibench.streambench.kafka.brokerList
ERROR: Unknown config key:hibench.streambench.zkHost
ERROR: Unknown config key:hibench.streambench.testCase
ERROR: Unknown config key:hibench.streambench.kafka.topic
ERROR: Unknown config key:hibench.streambench.kafka.consumerGroup
ERROR: Unknown config key:hibench.streambench.flink.bufferTimeout

------------------------------------------------------------
 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:520)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
        at org.apache.flink.client.program.Client.runBlocking(Client.java:248)
        at org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:866)
        at org.apache.flink.client.CliFrontend.run(CliFrontend.java:333)
        at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1192)
        at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1243)
Caused by: java.lang.NumberFormatException: null
        at java.lang.Long.parseLong(Long.java:552)
        at java.lang.Long.parseLong(Long.java:631)
        at com.intel.hibench.flinkbench.RunBench.runAll(RunBench.java:47)
        at com.intel.hibench.flinkbench.RunBench.main(RunBench.java:30)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:505)
        ... 6 more
The following messages were created by the YARN cluster while running the Job:
[Wed Nov 06 03:22:02 UTC 2019] Launching container (container_1573010233403_0001_01_000002 on host localhost).
[Wed Nov 06 03:22:03 UTC 2019] Launching container 
。。。
[Wed Nov 06 03:22:32 UTC 2019] Launching container (container_1573010233403_0001_01_000032 on host localhost).
[Wed Nov 06 03:22:33 UTC 2019] Launching container (container_1573010233403_0001_01_000033 on host localhost).
Shutting down YARN cluster
2019-11-06 03:22:56,052 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Sending shutdown request to the Application Master
2019-11-06 03:22:56,054 INFO  org.apache.flink.yarn.ApplicationClient                       - Sending StopYarnSession request to ApplicationMaster.
2019-11-06 03:22:56,580 INFO  org.apache.flink.yarn.ApplicationClient                       - Remote JobManager has been stopped successfully. Stopping local application client
2019-11-06 03:22:56,582 INFO  org.apache.flink.yarn.ApplicationClient                       - Stopped Application client.
2019-11-06 03:22:56,582 INFO  org.apache.flink.yarn.ApplicationClient                       - Disconnect from JobManager Actor[akka.tcp://flink@127.0.0.1:37069/user/jobmanager#-870933217].
2019-11-06 03:22:56,605 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Deleting files in hdfs://localhost:9000/user/donald/.flink/application_1573010233403_0001
2019-11-06 03:22:56,610 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - Application application_1573010233403_0001 finished with state FINISHED and final state FAILED at 1573010576076
2019-11-06 03:22:56,815 INFO  org.apache.flink.yarn.FlinkYarnCluster                        - YARN Client is shutting down

Problem analysis: The errors ERROR: Config file not found! Should not happen. and ERROR: Unknown config key: hibench.streambench.kafka.brokerList point at the HiBench conf file being used. The Flink command was:

./bin/flink run -m yarn-cluster -yn 12 -ys 4 -ytm 6144 -yjm 8192 -c com.intel.hibench.flinkbench.RunBench /home/flink/HiBench-7.0/flinkbench/streaming/target/flinkbench-streaming-7.1-SNAPSHOT-jar-with-dependencies.jar /home/flink/HiBench-7.0/report/repartition/flink/conf/sparkbench/sparkbench.conf

Solution: Corrected the /home/flink/HiBench-7.0/report/repartition/flink/conf/sparkbench/sparkbench.conf config file; one parameter had been written incorrectly.
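One way to catch this earlier is to check that the conf file handed to flink run actually defines the keys the error lists. The snippet below writes a synthetic example conf (the key names come from the error output above; the values and the file path are assumptions) and then counts the streambench keys present.

```shell
# Sketch: write a minimal conf containing the keys from the error output,
# then verify they are all present. Path and values are illustrative.
CONF=/tmp/hibench-demo/sparkbench.conf
mkdir -p "$(dirname "$CONF")"
cat > "$CONF" <<'EOF'
hibench.streambench.kafka.brokerList     localhost:9092
hibench.streambench.zkHost               localhost:2181
hibench.streambench.testCase             identity
hibench.streambench.kafka.topic          identity
hibench.streambench.kafka.consumerGroup  HiBench
hibench.streambench.flink.bufferTimeout  5
EOF
grep -c '^hibench.streambench' "$CONF"
```

Running the same grep against the real sparkbench.conf shows at a glance whether a key is missing or misspelled.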
 

IV. HiBench Errors

1. Running the metrics script on the Kafka server to fetch test results fails

Problem description: Running HiBench's metrics_reader.sh script fails with: Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "448

Problem analysis: The error appeared only after a topic name was entered, so a wrongly formatted topic name was the suspect.

Solution: The topic metrics_reader.sh expects is the metrics topic, in the form FLINK_identity_32_50000_50_1572422746601; entering the raw data topic identity, as in the run below, causes the parse failure:

[yjiang2@hadoop3 HiBench-7.0]$ ./bin/workloads/streaming/identity/common/metrics_reader.sh
patching args=
Parsing conf: /home/yjiang2/HiBench-7.0/conf/hadoop.conf
Parsing conf: /home/yjiang2/HiBench-7.0/conf/hibench.conf
Parsing conf: /home/yjiang2/HiBench-7.0/conf/spark.conf
Parsing conf: /home/yjiang2/HiBench-7.0/conf/workloads/streaming/identity.conf
probe sleep jar: /home/yjiang2/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.1-tests.jar
start MetricsReader bench
FLINK_identity_32_50000_50_1572422746601
identity
Please input the topic:identity
log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkConnection).
log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkEventThread).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Starting MetricsReader for kafka topic: identity
Exception in thread "main" java.util.concurrent.ExecutionException: java.lang.NumberFormatException: For input string: "448   208.53.133.145,emuvzpsrrgptgkeloluknxttefaxprzutrtmbwtbrmaemejafpajhcnjokxmakrfbfqidslpjszqiqjsvftezilmswa...
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at com.intel.hibench.common.streaming.metrics.KafkaCollector$$anonfun$1.apply(KafkaCollector.scala:52)
        at com.intel.hibench.common.streaming.metrics.KafkaCollector$$anonfun$1.apply(KafkaCollector.scala:52)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at com.intel.hibench.common.streaming.metrics.KafkaCollector.start(KafkaCollector.scala:52)
        at com.intel.hibench.common.streaming.metrics.MetricsReader$.delayedEndpoint$com$intel$hibench$common$streaming$metrics$MetricsReader$1(MetricsReader.scala:32)
        at com.intel.hibench.common.streaming.metrics.MetricsReader$delayedInit$body.apply(MetricsReader.scala:19)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
        at scala.App$class.main(App.scala:76)
        at com.intel.hibench.common.streaming.metrics.MetricsReader$.main(MetricsReader.scala:19)
        at com.intel.hibench.common.streaming.metrics.MetricsReader.main(MetricsReader.scala)
Caused by: java.lang.NumberFormatException: For input string: "448  208.53.133.145,emuvzpsrrgptgkeloluknxttefaxprzutrtmbwtbrmaemejafpajhcnjokxmakrfbfqidslpjszqiqjsvftezilmswafob,1981-05-05,0.9478478,Mozilla/5.0 (iPhone; U; CPU like ...
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Long.parseLong(Long.java:589)
        at java.lang.Long.parseLong(Long.java:631)
        at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:251)
        at scala.collection.immutable.StringOps.toLong(StringOps.scala:30)
        at com.intel.hibench.common.streaming.metrics.FetchJob.call(FetchJob.scala:32)
        at com.intel.hibench.common.streaming.metrics.FetchJob.call(FetchJob.scala:24)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

finish MetricsReader bench
