Home
Welcome to the roboptim-core-plugin-ipopt Wiki.
Like most NLP solvers, Ipopt is highly customizable. Even though its parameters and options are all described in the official documentation, it can be quite difficult to know which parameters really matter when trying to speed up the resolution of an optimization problem. The official Ipopt documentation provides a quick "Hints and Tricks" page that explains some key considerations, but this may not always be enough to achieve the desired performance.
Note that all the options described here should be prefixed by `ipopt.` in RobOptim, for instance:

```cpp
solver.parameters ()["ipopt.tol"].value = 1e-3;
solver.parameters ()["ipopt.linear_solver"].value = std::string ("ma57");
solver.parameters ()["ipopt.mu_strategy"].value = std::string ("adaptive");
```

Note that string parameters should be explicitly initialized as `std::string` (otherwise the `const char*` will be converted to `bool`).
TODO
Be warned that `bound_relax_factor` changes the bounds of the problem, and only the final solution is projected back onto the user bounds, provided `honor_original_bounds` is set to `yes` (the default behavior). This implies that intermediate iterates live in the relaxed problem, which may not be what you want if you rely on them in your callbacks.
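If your callbacks must always see iterates that satisfy the original bounds, one option is to disable the relaxation entirely. A minimal sketch, assuming a `solver` set up as in the first example:

```cpp
// Keep Ipopt from relaxing the bounds at all (Ipopt's default is 1e-8),
// so intermediate iterates stay within the user-specified bounds.
solver.parameters ()["ipopt.bound_relax_factor"].value = 0.;
// Project the final solution back onto the original bounds (default).
solver.parameters ()["ipopt.honor_original_bounds"].value = std::string ("yes");
```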
- `nlp_scaling_max_gradient`: if the maximum gradient is above this value, gradient-based scaling will be performed. This option is only used if `nlp_scaling_method` is set to `gradient-based`. In some cases (e.g. when the maximum gradient is lower than the default value), convergence can be greatly improved by lowering this value, which defaults to `100`. In some robotics-related problems, we ended up using `0.1` to force the use of gradient-based scaling, thus increasing our convergence rate. All of this depends on the scaling of your problem.
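The scaling values mentioned above could be set as follows (a sketch, assuming a `solver` configured as in the first example):

```cpp
// Use Ipopt's default gradient-based scaling strategy...
solver.parameters ()["ipopt.nlp_scaling_method"].value =
  std::string ("gradient-based");
// ...and lower the trigger threshold (default: 100) so that scaling
// is applied even when the maximum gradient is small.
solver.parameters ()["ipopt.nlp_scaling_max_gradient"].value = 0.1;
```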
Note that Ipopt may not start at the provided starting point if it is too close to the bounds. This can be controlled with the `bound_frac` and `bound_push` options.
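If you need the optimization to really start from your initial guess, you can reduce how far Ipopt pushes the starting point away from the bounds. A sketch, with illustrative values (both options default to `0.01` in Ipopt):

```cpp
// Reduce the relative (bound_frac) and absolute (bound_push) distance
// by which the starting point is moved inside the bounds.
solver.parameters ()["ipopt.bound_frac"].value = 1e-8;
solver.parameters ()["ipopt.bound_push"].value = 1e-8;
```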
- `start_with_resto`: this option may be worth activating if the initial point is highly infeasible.
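Activating it is a one-liner, assuming the same `solver` object as above:

```cpp
// Start directly in the restoration phase, which can help
// when the initial point is highly infeasible.
solver.parameters ()["ipopt.start_with_resto"].value = std::string ("yes");
```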
Ipopt relies on linear solvers for the computation of search directions. On small/simple problems, this should not have much of an impact, but larger problems can really benefit from efficient linear solvers. Here is a list of such solvers that can be used by setting the `linear_solver` parameter:

- `MUMPS`: public-domain solver. There should be a package for your Linux distribution; otherwise it is bundled with Ipopt.
- `MA27`, `MA57`: HSL solvers. Use the Academic Licence if you can for more recent versions. Prefer `MA57` to `MA27`.
Ipopt also offers several debugging options that should be disabled in release mode:

- `check_derivatives_for_naninf`: check the Jacobian and Hessian matrices for `NaN`/`Inf`.
- `derivative_test`: enable Ipopt's derivative checker. This is only done once before the optimization. For more extensive derivative testing, use RobOptim's finite-differences check instead.
- `print_level`: use Ipopt's logging feature. `output_file` should be set to save the logs to a file. The maximum value for this option is `12`, but `5` or `7` should be more than enough for most debugging tasks.
- `print_user_options`: print all options set by the user. This can be useful if you think that the options you're setting are not properly forwarded to Ipopt.
RobOptim's callback functions can also be used to:

- analyze data forwarded by Ipopt in the `intermediate_callback` function (e.g. `"ipopt.mode"`: regular or restoration),
- decide whether to stop the optimization process based on a user-defined stop criterion: one simply needs to set `"ipopt.stop"` to `true` and the optimization will end.
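A sketch of such a callback, assuming a solver state type exposing the same `parameters ()` map as the solver; the exact callback signature depends on your RobOptim version, and `userCriterionMet ()` is a hypothetical predicate standing in for your own stop criterion:

```cpp
template <typename Problem, typename SolverState>
void iterationCallback (const Problem& /* pb */, SolverState& state)
{
  // "ipopt.mode" indicates whether Ipopt is in the regular or the
  // restoration phase; analyze it here if your criterion depends on it.

  // Ask Ipopt to stop once the user-defined criterion is met
  // (userCriterionMet () is a placeholder for your own check).
  if (userCriterionMet ())
    state.parameters ()["ipopt.stop"].value = true;
}
```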
For a better understanding of Ipopt and its algorithms, you can refer to its related papers.