MindOpt Tuner is a parameter tuner, an automatic hyperparameter optimization tool, developed by the Decision Intelligence Lab of DAMO Academy on top of the MindOpt optimization solver. It helps operations research engineers automatically search for the best parameter combination when using a solver: it tries different parameter combinations, evaluates the performance of each, and then determines the best one based on those results. This greatly reduces the time and effort of manual parameter tuning and can significantly improve solving performance.
Calling from Python
The previous article explained how to create tasks and query results from the command line. Those interfaces are also wrapped as Python APIs. We can import the module with the following statement, and optionally run help() to view the corresponding Python API documentation:
import mtunerpy as mtuner
#help(mtuner)
OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
We can then submit a task as follows:
scenario_dict = {
'solver': 'cbc',
'problem': ['./model/nl_train_1.nl'],
'max_tuning_time': 600
}
mtuner.create_task(scenario_dict)
Problem file "nl_train_1.nl" uploaded successfully.
Task #438397484918644736 created succesfully.
Here we obtained task ID 438394637284024320.
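Before calling create_task, it can be useful to sanity-check the scenario dictionary locally. The sketch below is hypothetical and not part of the mtunerpy API; the required keys ('solver', 'problem', 'max_tuning_time') are taken from the example above, and the solver list and time bound mirror the CLI help shown later in this article.

```python
# Hypothetical pre-submission check for the scenario dict passed to
# mtuner.create_task. Key names come from the example in this article;
# the solver set {cbc, cplex} and the 1..3600000 second bound mirror
# the mindopt-tuner CLI help output.
SUPPORTED_SOLVERS = {"cbc", "cplex"}

def validate_scenario(scenario: dict) -> list:
    """Return a list of problems found; an empty list means the dict looks OK."""
    errors = []
    if scenario.get("solver") not in SUPPORTED_SOLVERS:
        errors.append(f"unsupported solver: {scenario.get('solver')!r}")
    problems = scenario.get("problem")
    if not isinstance(problems, list) or not problems:
        errors.append("'problem' must be a non-empty list of file paths")
    t = scenario.get("max_tuning_time")
    if not isinstance(t, int) or not (1 <= t <= 3600000):
        errors.append("'max_tuning_time' must be an integer in 1..3600000")
    return errors

scenario_dict = {
    "solver": "cbc",
    "problem": ["./model/nl_train_1.nl"],
    "max_tuning_time": 600,
}
print(validate_scenario(scenario_dict))  # → []
```

Catching an invalid field locally avoids waiting for the remote task to fail.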
Similarly, we can submit a group of optimization problem instances, mps_train_oss.txt, for tuning. As shown below, we obtained task ID 438394766997069824.
scenario_dict = {
'solver': 'cbc',
'problem': ['./model/mps_train_oss.txt'],
'max_tuning_time': 3600
}
mtuner.create_task(scenario_dict)
Task #438397553394851840 created succesfully.
After the tasks finish, we can fetch the results as follows. As with the command line, we obtain a substantial improvement in solving efficiency.
import mtunerpy as mtuner
#print("----Tuning status and files---")
#mtuner.task_status(['438394637284024320','438394766997069824'])
print("----Tuning result for nl_train_1.nl---")
mtuner.fetch_result('438394637284024320','result/performance.txt')
print("----Tuning result for the group of 3 problems---")
mtuner.fetch_result('438394766997069824','result/performance.txt', False)
----Tuning result for nl_train_1.nl---
Tuning finished, the best wallclock_time is 0.09 [** 36.33x improvement **]
----Tuning result for the group of 3 problems---
Tuning finished, the best avg_wallclock_time is 3.83 [** 33.32x improvement **]
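If you want to process these summaries programmatically (for example, to compare runs), the summary line can be parsed with a small regular expression. This helper is a sketch inferred from the output format shown above; it is not part of the mtunerpy API.

```python
import re

# Parse the summary line printed when fetching results, e.g.
# "Tuning finished, the best wallclock_time is 0.09 [** 36.33x improvement **]".
# The format is inferred from the sample output in this article.
LINE_RE = re.compile(
    r"the best (?P<metric>\w+) is (?P<value>[\d.]+) "
    r"\[\*\* (?P<speedup>[\d.]+)x improvement \*\*\]"
)

def parse_summary(line: str):
    """Return (metric, best_value, speedup) or None if the line does not match."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return m.group("metric"), float(m.group("value")), float(m.group("speedup"))

line = "Tuning finished, the best wallclock_time is 0.09 [** 36.33x improvement **]"
print(parse_summary(line))  # → ('wallclock_time', 0.09, 36.33)
```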
Advanced usage of the tuner
The sections above covered the basic usage of the tuner: when submitting a task, you simply choose a solver and MindOpt Tuner applies default settings for the tuning task. For example, when submitting via the web page, you can leave the parameter fields blank and use the defaults.
Beyond this, we can also configure the tuning task ourselves. For details, see the tuning task configuration section of the user documentation.
Let's first run the command with -h to view the interface description:
!mindopt-tuner create-task -h
MindOpt Tuner v0.9.0 (Build: 20230404)
Usage: mindopt-tuner create-task [-h] --solver {cbc,cplex} --problem <problem> [<problem> ...]
[--parameters <parameters> [<parameters> ...]] [--log-level {info,debug,error}]
[--verbose {0,1,2}] [--tuning-objective {wallclock_time,cpu_time}]
[--max-distinct-para-combos <number>] [--max-tuning-time <seconds>]
[--max-eval-time <seconds>]
Create a new tuning task, and then start it.
Arguments:
-h, --help Show help information.
--solver {cbc,cplex} (Required) Specify the solver to tune.
--problem <problem> [<problem> ...] (Required) Path to the optimization problem file(s), separated by space.
--parameters <parameters> [<parameters> ...] Specify the solver's tunable parameters (see documentation), separated
by space.
--log-level {info,debug,error} Log detail level.
--verbose {0,1,2} Set verbosity level.
--tuning-objective {wallclock_time,cpu_time} Select the tuning objective.
--max-distinct-para-combos <number> Tuning will be terminated after the number of distinct parameter
combinations reach this number.
--max-tuning-time <seconds> Tuning will be terminated after surpassing this time (in seconds) 1, 2,
..., 3600000.
--max-eval-time <seconds>                    Maximum time allowed for a single evaluation in seconds (1, 2, ...,
144000).
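For scripting, the flags listed in this help output can be assembled into a command line from Python. The helper below is a sketch that only builds the argument list (suitable for subprocess); it mirrors the flags shown above but is not an official wrapper.

```python
# Assemble a mindopt-tuner create-task command line from Python options,
# mirroring the flags in the CLI help output. This only builds the
# argument list; it does not invoke the CLI.
def build_create_task_cmd(solver, problems, max_tuning_time=None, parameters=None):
    cmd = ["mindopt-tuner", "create-task", "--solver", solver, "--problem", *problems]
    if max_tuning_time is not None:
        cmd += ["--max-tuning-time", str(max_tuning_time)]
    if parameters:
        cmd += ["--parameters", *parameters]
    return cmd

print(build_create_task_cmd("cbc", ["./model/nl_train_1.nl"], 400, ["cuts", "preprocess"]))
```

The resulting list can be passed to subprocess.run, avoiding shell-quoting issues with file paths.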
Now let's customize these parameters, using CBC as an example. Suppose the default parameter-definition file test_cbc_params.txt contains:
cuts {off, on, root, ifmove, forceOn}[on] # Switches all cut generators on or off. cat
preprocess {off, on, save, equal, sos, trysos, equalall, strategy, aggregate, forcesos}[sos] # Whether to use integer preprocessing. cat
heuristics [0, 1][1]i # Switches most primal heuristics on or off. boolean
strongBranching [0, 5][5]i # Number of variables to look at in strong branching. int
trustPseudoCosts [-3, 100][10]i # Number of branches before we trust pseudocosts. int
Each line follows the format: parameter name, value range, default value. The trailing i in [10]i indicates an integer parameter.
Note: unless you have a special need, do not modify this file; invalid values will cause the tuning task to fail.
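To illustrate the format just described, here is a sketch of a parser for one line of the parameter-definition file. The grammar is inferred from the sample lines above (name, categorical or numeric range, bracketed default, optional trailing i); it is not an official specification of the file format.

```python
import re

# Parse one line of a parameter-definition file like test_cbc_params.txt,
# e.g. "trustPseudoCosts [-3, 100][10]i # Number of branches ... int".
# Grammar inferred from the sample lines in this article, not an official spec.
PARAM_RE = re.compile(
    r"^(?P<name>\w+)\s+"
    r"(?P<range>\{[^}]*\}|\[[^\]]*\])"   # categorical {...} or numeric [lo, hi]
    r"\[(?P<default>[^\]]*)\]"           # default value in brackets
    r"(?P<int_flag>i)?\s*#"              # optional integer marker, then comment
)

def parse_param(line: str):
    """Return a dict describing the parameter, or None if the line does not match."""
    m = PARAM_RE.match(line)
    if m is None:
        return None
    return {
        "name": m.group("name"),
        "range": m.group("range"),
        "default": m.group("default"),
        "integer": m.group("int_flag") == "i",
    }

print(parse_param("trustPseudoCosts [-3, 100][10]i # Number of branches before we trust pseudocosts. int"))
```

A parser like this makes it easy to check a customized file for malformed lines before submitting a tuning task.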
From this file, we can select a subset of parameters to tune. For example, the following command customizes the tuning scope to only 2 parameters, which also reduces tuning time:
!mindopt-tuner create-task --solver cbc --problem './model/nl_train_1.nl' --max-tuning-time 400 --parameters cuts preprocess
Problem file "nl_train_1.nl" uploaded successfully.
Task #438413495201964032 created succesfully.
Here we obtained task ID 438413495201964032.
After the run finishes, we can fetch the results as follows, or view them directly in the job list on the web page.
The results show that even with a narrowed tuning scope there is still an improvement: a 32x speedup!
#print("-------Check status and results file link--------")
#!mindopt-tuner task-status --task-id 438413495201964032
print("-------Locally fetched result--------")
! mindopt-tuner fetch-result --task-id 438413495201964032 --file result/performance.txt
-------Locally fetched result--------
Tuning finished, the best wallclock_time is 0.1 [** 32.00x improvement **]