Class mmin_bfgs2 (o2scl)
template<class func_t = multi_funct,
         class vec_t = boost::numeric::ublas::vector<double>,
         class dfunc_t = grad_funct,
         class auto_grad_t = gradient<multi_funct, boost::numeric::ublas::vector<double>>,
         class def_auto_grad_t = gradient_gsl<multi_funct, boost::numeric::ublas::vector<double>>>
class o2scl::mmin_bfgs2 :
  public o2scl::mmin_base<multi_funct, grad_funct, boost::numeric::ublas::vector<double>>

Multidimensional minimization by the BFGS algorithm (GSL)
The functions mmin() and mmin_de() minimize a given function until the gradient is smaller than the value of mmin::tol_rel (which defaults to \( 10^{-4} \)). A minimal calling sketch is given after the notes below.
See the Multidimensional minimizer example for the usage of this class.
This class includes the optimizations from the GSL minimizer vector_bfgs2.

Default template arguments
- func_t - multi_funct
- vec_t - boost::numeric::ublas::vector<double>
- dfunc_t - grad_funct
- auto_grad_t - gradient<func_t, boost::numeric::ublas::vector<double>>
- def_auto_grad_t - gradient_gsl<func_t, boost::numeric::ublas::vector<double>>

Todo:
While BFGS does well in the ex_mmin example with the initial guess of \( (1,0,7\pi) \), it seems to converge more poorly for the spring function than the other minimizers do with other initial guesses, and I think this will happen in the GSL versions too. I need to examine this more closely with some code designed to clearly show this.
Idea for Future:
When the bfgs2 line minimizer returns a zero status, the minimization fails. When err_nonconv is false, the minimizer is not able to update the x vector, so the mmin() function does not return the best minimum obtained so far. This is a bit confusing and could be improved.
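For reference, here is a minimal calling sketch. The quadratic objective, the variable names, and the header choices are illustrative assumptions, not taken from the library's own example; the calling pattern follows the mmin() signature documented below.

#include <boost/numeric/ublas/vector.hpp>
#include <o2scl/multi_funct.h>
#include <o2scl/mmin_bfgs2.h>

using namespace o2scl;
typedef boost::numeric::ublas::vector<double> ubvector;

// Illustrative objective: a paraboloid with its minimum at (1,2)
double quad(size_t nv, const ubvector &x) {
  return (x[0]-1.0)*(x[0]-1.0)+(x[1]-2.0)*(x[1]-2.0);
}

int main(void) {
  mmin_bfgs2<> mb;
  multi_funct mf=quad;
  ubvector x(2);
  x[0]=0.0; x[1]=0.0;
  double fmin=0.0;
  // Iterates until the gradient is smaller than mb.tol_rel
  mb.mmin(2,x,fmin,mf);
  // x now holds the location of the minimum, fmin its value
  return 0;
}

Since no gradient is supplied here, the class falls back on the automatic gradient object agrad, which (per the template defaults above) is a gradient_gsl object computing a numerical gradient.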
The original variables from the GSL state structure
- int iter
- double step
- double g0norm
- double pnorm
- double delta_f
- double fp0
- mmin_wrapper_gsl<func_t, vec_t, dfunc_t, auto_grad_t> wrap
- double rho
- double sigma
- double tau1
- double tau2
- double tau3
- int order
- mmin_linmin_gsl lm: The line minimizer.
Store the arguments to set() so we can use them for iterate()
- double st_f
- size_t dim: Memory size.
- auto_grad_t *agrad: Automatic gradient object.
- double step_size: The size of the first trial step (default 0.01). See the sketch after this member list.
- double lmin_tol: The tolerance for the 1-dimensional minimizer.
- def_auto_grad_t def_grad: Default automatic gradient object.
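As a short sketch of tuning these parameters (the values shown are arbitrary, and this assumes step_size and lmin_tol are publicly accessible as listed above), both can be set directly before minimizing:

mmin_bfgs2<> mb;
mb.step_size=0.1;   // larger first trial step (default 0.01)
mb.lmin_tol=1.0e-5; // tighter tolerance for the 1-d line minimizer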
- mmin_bfgs2()
- ~mmin_bfgs2()
- int iterate(): Perform an iteration. (A manual set()/iterate() sketch follows this list.)
- const char *type(): Return a string denoting the type ("mmin_bfgs2").
- int allocate(size_t n): Allocate the memory.
- int free(): Free the allocated memory.
- int restart(): Reset the minimizer to use the current point as a new starting point.
- int set(vec_t &x, double u_step_size, double tol_u, func_t &ufunc): Set the function and initial guess.
- int set_de(vec_t &x, double u_step_size, double tol_u, func_t &ufunc, dfunc_t &udfunc): Set the function, the gradient, and the initial guess.
- int mmin(size_t nn, vec_t &xx, double &fmin, func_t &ufunc): Calculate the minimum fmin of ufunc with respect to the array xx of size nn.
- int mmin_de(size_t nn, vec_t &xx, double &fmin, func_t &ufunc, dfunc_t &udfunc): Calculate the minimum fmin of ufunc with respect to the array xx of size nn, using the user-specified gradient udfunc.
- mmin_bfgs2(const mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t> &)
- mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t> &operator=(const mmin_bfgs2<func_t, vec_t, dfunc_t, auto_grad_t, def_auto_grad_t> &)
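The following sketch shows the two lower-level usage patterns suggested by the members above: supplying an analytic gradient through mmin_de(), and stepping the minimizer manually with set() and iterate(). The objective, the gradient, and the allocate()/set()/iterate()/free() calling order are assumptions based on the member documentation, not a verbatim library example.

#include <boost/numeric/ublas/vector.hpp>
#include <o2scl/multi_funct.h>
#include <o2scl/mmin_bfgs2.h>

using namespace o2scl;
typedef boost::numeric::ublas::vector<double> ubvector;

// Illustrative objective with minimum at (1,2)
double quad(size_t nv, const ubvector &x) {
  return (x[0]-1.0)*(x[0]-1.0)+(x[1]-2.0)*(x[1]-2.0);
}

// Analytic gradient, written to match the grad_funct signature
int quad_grad(size_t nv, ubvector &x, ubvector &g) {
  g[0]=2.0*(x[0]-1.0);
  g[1]=2.0*(x[1]-2.0);
  return 0;
}

int main(void) {
  mmin_bfgs2<> mb;
  multi_funct mf=quad;
  grad_funct gf=quad_grad;
  ubvector x(2);
  x[0]=0.0; x[1]=0.0;
  double fmin=0.0;

  // High-level interface with a user-specified gradient
  mb.mmin_de(2,x,fmin,mf,gf);

  // Low-level interface: mmin() normally performs the
  // convergence test against tol_rel; here we simply take a
  // fixed number of iterate() steps after set().
  x[0]=0.0; x[1]=0.0;
  mb.allocate(2);
  mb.set(x,0.01,1.0e-4,mf);
  for(size_t i=0;i<20;i++) mb.iterate();
  mb.free();

  return 0;
}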