AWS Glue job fails on large input CSV data on S3

The Glue ETL job works fine for small S3 input files (~10 GB), but fails for larger datasets (~200 GB).

Part of the ETL code is shown below.

# Convert the DynamicFrame to a Spark DataFrame

df = dropnullfields3.toDF()

# Create a new partition column from the UTC timestamp

partitioned_dataframe = df.withColumn('part_date', df['timestamp_utc'].cast('date'))

# Write the data to S3 in Parquet format, partitioned by date

partitioned_dataframe.write.partitionBy(['part_date']).format("parquet").save(output_lg_partitioned_dir, mode="append")

The job ran for four hours and then raised the following error:

File "script_2017-11-23-15-07-32.py", line 49, in <module>
    partitioned_dataframe.write.partitionBy(['part_date']).format("parquet").save(output_lg_partitioned_dir, mode="append")
File "/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/pyspark.zip/pyspark/sql/readwriter.py", line 550, in save
File "/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/mnt/yarn/usercache/root/appcache/application_1511449472652_0001/container_1511449472652_0001_02_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o172.save.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:147)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:121)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:101)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:492)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:198)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 3385 tasks (1024.1 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply$mcV$sp(FileFormatWriter.scala:127)
    ... 30 more
End of LogType:stdout
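The decisive part of the log is the final "Caused by" line: the serialized results that the 3385 write tasks sent back to the driver totalled 1024.1 MB, just over the 1024.0 MB `spark.driver.maxResultSize` limit. A minimal plain-Python sketch of that check (the helper name and the 1 GB default are illustrative assumptions based on the message, not Spark's actual internals):

```python
# Sketch of the limit check implied by the error message above.
# `results_exceed_limit` is a hypothetical helper for illustration;
# the real check lives inside Spark's DAGScheduler.

DEFAULT_MAX_RESULT_SIZE_MB = 1024.0  # spark.driver.maxResultSize default ("1g")

def results_exceed_limit(total_result_mb, max_result_size_mb=DEFAULT_MAX_RESULT_SIZE_MB):
    """Return True when the combined serialized task results sent back to
    the driver exceed spark.driver.maxResultSize, the condition that
    aborts a Spark job with the error shown above."""
    return total_result_mb > max_result_size_mb

# The failed job: 3385 tasks whose results totalled ~1024.1 MB.
print(results_exceed_limit(1024.1))  # True: over the 1024.0 MB limit
```

In practice the usual directions (assuming this is the only problem) are either to raise the limit, e.g. setting `spark.driver.maxResultSize` to a larger value such as `2g` in the Spark configuration if the Glue environment permits it, or to reduce the amount of per-task result data sent to the driver, for example by writing out fewer, larger partitions.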

I would appreciate any guidance on how to resolve this issue.
