Spark-ts time-series triple exponential smoothing occasionally fails with: trust region step has failed to reduce Q

org.apache.commons.math3.exception.MathIllegalStateException: trust region step has failed to reduce Q

The error shows up seemingly at random and is purely intermittent. My program loads a few thousand data points offline and runs a triple-exponential-smoothing (Holt-Winters) forecast point by point; after a few hundred iterations it throws the error below.

sparkts-0.4.0
HoltWinters model
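
For context, the fitting loop looks roughly like the sketch below. This is a reconstruction rather than the original tt1 code: it assumes the stock com.cloudera.sparkts.models.HoltWinters API from sparkts-0.4.0, a synthetic series, and a made-up period and window size; in the real job the same loop runs inside an RDD foreach on the executors.

import org.apache.spark.mllib.linalg.Vectors
import com.cloudera.sparkts.models.HoltWinters

// Synthetic series and seasonality, both assumed for illustration.
val period = 24
val history = Array.tabulate(500)(i =>
  10.0 + math.sin(i * 2 * math.Pi / period) + scala.util.Random.nextGaussian() * 0.1)

// One fresh Holt-Winters fit per sliding window, i.e. one BOBYQA
// optimization per point; in the real job this loop sits inside RDD.foreach.
history.sliding(period * 3).foreach { slice =>
  val ts = Vectors.dense(slice)
  // This is the call that intermittently throws
  // "trust region step has failed to reduce Q".
  val model = HoltWinters.fitModel(ts, period, "additive")
}

Each fit is independent, which is why a single bad optimization run is enough to kill the whole Spark stage even though thousands of earlier fits succeeded.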

org.apache.commons.math3.exception.MathIllegalStateException: trust region step has failed to reduce Q
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.bobyqb(BOBYQAOptimizer.java:867)
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.bobyqa(BOBYQAOptimizer.java:329)
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.doOptimize(BOBYQAOptimizer.java:241)
at org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer.doOptimize(BOBYQAOptimizer.java:49)
at org.apache.commons.math3.optim.BaseOptimizer.optimize(BaseOptimizer.java:153)
at org.apache.commons.math3.optim.BaseMultivariateOptimizer.optimize(BaseMultivariateOptimizer.java:65)
at org.apache.commons.math3.optim.nonlinear.scalar.MultivariateOptimizer.optimize(MultivariateOptimizer.java:63)
at tt1.HoltWinters$.fitModelWithBOBYQA(HoltWinters.scala:32)
at tt1.HoltWinters$.fitModel(HoltWinters.scala:13)
at tt1.TimeSeriesModel$$anonfun$1.apply(TimeSeriesModel.scala:40)
at tt1.TimeSeriesModel$$anonfun$1.apply(TimeSeriesModel.scala:37)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$34.apply(RDD.scala:919)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$34.apply(RDD.scala:919)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1881)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1881)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
18/04/02 15:09:59 WARN TaskSetManager: Lost task 127.0 in stage 798.0 (TID 32922, localhost): org.apache.commons.math3.exception.MathIllegalStateException: trust region step has failed to reduce Q

18/04/02 15:09:59 ERROR TaskSetManager: Task 127 in stage 798.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 127 in stage 798.0 failed 1 times, most recent failure: Lost task 127.0 in stage 798.0 (TID 32922, localhost): org.apache.commons.math3.exception.MathIllegalStateException: trust region step has failed to reduce Q

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1855)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1868)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1881)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1952)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:919)
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:917)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:323)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:917)
at tt1.TimeSeriesModel.holtWintersModelTrain(TimeSeriesModel.scala:60)
at tt1.RunModel$$anonfun$2.apply(RunModel.scala:171)
at tt1.RunModel$$anonfun$2.apply(RunModel.scala:100)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at tt1.RunModel$.main(RunModel.scala:99)
at tt1.RunModel.main(RunModel.scala)
Caused by: org.apache.commons.math3.exception.MathIllegalStateException: trust region step has failed to reduce Q
18/04/02 15:10:04 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
18/04/02 15:10:05 INFO SerialShutdownHooks: Successfully executed shutdown hook: Clearing session cache for C* connector

This was a real headache. After tracking the problem down several times, I found that when estimating the forecast confidence interval, HoltWinters is given three parameters a, b, c whose values are random numbers between 0 and 1; for some combinations the computation on a, b, c runs into a division by zero, and the program throws.

Fix: modify the fitModelWithBOBYQA() function in HoltWinters.scala to catch the exception and refit:

val initGuess = new InitialGuess(Array(a, b, c))
val maxIter = new MaxIter(30000)
val maxEval = new MaxEval(30000)
val goal = GoalType.MINIMIZE
val bounds = new SimpleBounds(Array(0.0, 0.0, 0.0), Array(1.0, 1.0, 1.0))
var optimal: PointValuePair = null
var params = Array(0D, 0D, 0D)
try {
  optimal = optimizer.optimize(objectiveFunction, goal, bounds, initGuess, maxIter, maxEval)
  params = optimal.getPoint
} catch {
  case ex: MathIllegalStateException =>
    print("MathIllegalStateException caught, refitting")
    // If the optimization fails, rerun the whole fit (with a new random
    // a, b, c) until it no longer throws, and return that result instead.
    return fitModelWithBOBYQA(ts, period, modelType)
}

The error only appears after many runs, with no discernible pattern!
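
One caveat about the patch above: the retry has no depth limit, so a series that never fits cleanly would recurse forever. Below is a more defensive variant of the same idea; it is my own sketch, not code from HoltWinters.scala, and the object/method names, the retry cap, and the 0.05–0.95 clamp on the random starting point are all invented for illustration.

import org.apache.commons.math3.exception.MathIllegalStateException
import org.apache.commons.math3.optim.{InitialGuess, MaxEval, MaxIter, SimpleBounds}
import org.apache.commons.math3.optim.nonlinear.scalar.{GoalType, ObjectiveFunction}
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer
import scala.util.Random

object BoundedRetry {
  // Retry the BOBYQA fit at most maxRetries times, drawing a fresh random
  // starting point on every attempt. `objective` is expected to wrap the
  // Holt-Winters SSE as a MultivariateFunction, as sparkts does.
  def optimizeWithRetry(objective: ObjectiveFunction, maxRetries: Int = 5): Array[Double] = {
    val bounds = new SimpleBounds(Array(0.0, 0.0, 0.0), Array(1.0, 1.0, 1.0))
    // Keep alpha/beta/gamma strictly inside (0, 1): starting exactly on the
    // box boundary is one place the degenerate (division-by-zero)
    // evaluations described above could occur.
    val guess = Array.fill(3)(0.05 + 0.9 * Random.nextDouble())
    val optimizer = new BOBYQAOptimizer(7)
    try {
      optimizer.optimize(objective, GoalType.MINIMIZE, bounds,
        new InitialGuess(guess), new MaxIter(30000), new MaxEval(30000)).getPoint
    } catch {
      case _: MathIllegalStateException if maxRetries > 0 =>
        // "trust region step has failed to reduce Q": try again from a new start.
        optimizeWithRetry(objective, maxRetries - 1)
      // Once the retries are exhausted the exception propagates as before.
    }
  }
}

Bounding the retries keeps a pathological series from looping forever, and because a new optimizer and a new starting point are created on every attempt, each retry is an independent optimization run.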

Some background on the Powell method and BOBYQA. The Powell method is an effective way of finding extrema and is normally unconstrained; Powell later developed a box-constrained variant called BOBYQA. In the zip archive, optimization.h is a class for unconstrained Powell optimization (found on Baidu a while ago). powell.h and powell.cpp are a C++ wrapper around the box-constrained Powell algorithm, BOBYQA; since BOBYQA only handles two or more dimensions, the one-dimensional routine is my own. When calling, use ExecuteBrent for one dimension and BoundedPowell for two or more. The BOBYQA directory contains the original FORTRAN code written by M. J. D. Powell himself, and powell.lib is a static library I built from that Fortran code. Because the lib was built with Fortran PowerStation 4.0, msfrt40.dll becomes an unavoidable dependency. If you program in Fortran you do not need the wrapper at all; use the Fortran code directly. For usage see main.f in the Fortran code, and contact baita00@yahoo.com.cn with questions. Linking also seems to need msfrt.lib from FPS 4.0, which was not included originally and can no longer be added; email me if you need it.

Since many people are unclear about the usage, here is a brief description of BoundedPowell's parameters, in order:

- Objective function pointer. The function must be declared _stdcall and takes three arguments: the number of optimization variables, the array of variables, and the function value; because of the Fortran calling convention, all three are passed by address. For example, for f(x) = x^2 with a single variable:

void _stdcall objfun(int *n, double *para, double *f)
{
    *f = para[0] * para[0];
}

- int n: the number of variables to optimize.
- double *x: the optimization variables, an array of length n.
- double *xlb: the lower bounds, an array.
- double *xub: the upper bounds, an array.
- double rhobeg and double rhoend: the two radii defined by Powell; I cannot explain them precisely myself, so look up Powell's papers if you want the details. Usually set rhobeg to 1 and rhoend to the accuracy you want, e.g. rhoend = 1e-4 for an accuracy of one part in ten thousand.
- int maxfun: the maximum number of objective-function evaluations.
- The last parameter receives BOBYQA's return code (IFLAG), with the following meanings:

IFLAG=1: return from BOBYQA because NPT is not in the required interval.
IFLAG=2: return from BOBYQA because one of the differences XU(I)-XL(I) is less than 2*RHOBEG.
IFLAG=3: return from BOBYQA because FCN has been called MAXFUN times.
IFLAG=4: return from BOBYQA because of much cancellation in a denominator.
IFLAG=5: return from BOBYQA because a trust region step has failed to reduce Q, the very message spark-ts surfaces through Commons Math.
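
For reference, the Java port of BOBYQA in Apache Commons Math (the one spark-ts calls into) exposes the same two radii as constructor arguments, so they can be tuned without touching the Fortran. A minimal sketch, with illustrative values rather than anything taken from the original post:

import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer

// numberOfInterpolationPoints: 2n + 1 = 7 for the three Holt-Winters parameters
// initialTrustRegionRadius:    plays the role of rhobeg; per the IFLAG=2 rule above
//                              it must be at most half the box width, so <= 0.5 here
// stoppingTrustRegionRadius:   plays the role of rhoend, i.e. the target accuracy
val optimizer = new BOBYQAOptimizer(7, 0.1, 1e-4)

Whether tuning the radii actually avoids the "failed to reduce Q" case here is untested; the retry shown earlier remains the simpler workaround.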