ROOT   6.13/01 Reference Guide
ROOT::Math::IpoptMinimizer::InternalTNLP Class Reference

Internal class that creates a TNLP object, required for Ipopt minimization in C++. Every method is overridden to pass the information to the Ipopt solvers.

Definition at line 99 of file IpoptMinimizer.h.

Public Member Functions

InternalTNLP (IpoptMinimizer *minimizer)

virtual ~InternalTNLP ()
default destructor More...

virtual bool eval_f (Index n, const Number *x, bool new_x, Number &obj_value)
Return the value of the objective function at the point $$x$$. More...

virtual bool eval_g (Index n, const Number *x, bool new_x, Index m, Number *g)
Return the value of the constraint function at the point $$x$$. More...

virtual bool eval_grad_f (Index n, const Number *x, bool new_x, Number *grad_f)
Return the gradient of the objective function at the point $$x$$. More...

virtual bool eval_h (Index n, const Number *x, bool new_x, Number obj_factor, Index m, const Number *lambda, bool new_lambda, Index nele_hess, Index *iRow, Index *jCol, Number *values)
Return either the sparsity structure of the Hessian of the Lagrangian, or the values of the Hessian of the Lagrangian (9) for the given values for $$x$$, $$\sigma_f$$, and $$\lambda$$. More...

virtual bool eval_jac_g (Index n, const Number *x, bool new_x, Index m, Index nele_jac, Index *iRow, Index *jCol, Number *values)
Return either the sparsity structure of the Jacobian of the constraints, or the values for the Jacobian of the constraints at the point $$x$$. More...

virtual void finalize_solution (SolverReturn status, Index n, const Number *x, const Number *z_L, const Number *z_U, Index m, const Number *g, const Number *lambda, Number obj_value, const IpoptData *ip_data, IpoptCalculatedQuantities *ip_cq)
This method is called by IPOPT after the algorithm has finished (successfully or even with most errors). More...

virtual bool get_bounds_info (Index n, Number *x_l, Number *x_u, Index m, Number *g_l, Number *g_u)
Give IPOPT the value of the bounds on the variables and constraints. More...

virtual bool get_nlp_info (Index &n, Index &m, Index &nnz_jac_g, Index &nnz_h_lag, IndexStyleEnum &index_style)
Give IPOPT the information about the size of the problem (and hence, the size of the arrays that it needs to allocate). More...

virtual bool get_starting_point (Index n, bool init_x, Number *x, bool init_z, Number *z_L, Number *z_U, Index m, bool init_lambda, Number *lambda)
Give IPOPT the starting point before it begins iterating. More...

Private Member Functions

Methods to block default compiler methods.

The compiler automatically generates the following three methods.

Since the default compiler implementation is generally not what you want (for all but the most simple classes), we usually put the declarations of these methods in the private section and never implement them. This prevents the compiler from implementing an incorrect "default" behavior without us knowing. (See Scott Meyers' book, "Effective C++".)

InternalTNLP (const InternalTNLP &)

InternalTNLP & operator= (const InternalTNLP &)

Private Attributes

IpoptMinimizer * fMinimizer

UInt_t fNNZerosHessian

UInt_t fNNZerosJacobian

Number nlp_lower_bound_inf

Number nlp_upper_bound_inf

Friends

class IpoptMinimizer

#include <Math/IpoptMinimizer.h>

Inheritance diagram for ROOT::Math::IpoptMinimizer::InternalTNLP:

◆ InternalTNLP() [1/2]

 ROOT::Math::IpoptMinimizer::InternalTNLP::InternalTNLP ( IpoptMinimizer * minimizer )

◆ ~InternalTNLP()

 virtual ROOT::Math::IpoptMinimizer::InternalTNLP::~InternalTNLP ( )
virtual

default destructor

◆ InternalTNLP() [2/2]

 ROOT::Math::IpoptMinimizer::InternalTNLP::InternalTNLP ( const InternalTNLP & )
private

◆ eval_f()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::eval_f ( Index n, const Number * x, bool new_x, Number & obj_value )
virtual

Return the value of the objective function at the point $$x$$.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the values for the primal variables, $$x$$, at which $$f(x)$$ is to be evaluated.
 new_x (in): false if any evaluation method was previously called with the same values in x, true otherwise.
 obj_value (out): the value of the objective function ( $$f(x)$$).

The boolean variable new_x will be false if the last call to any of the evaluation methods (eval_*) used the same $$x$$ values. This can be helpful when users have efficient implementations that calculate multiple outputs at once. IPOPT internally caches results from the TNLP and generally, this flag can be ignored. The variable n is passed in for your convenience; it will have the same value you specified in get_nlp_info.
Returns
true on success, false otherwise.
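As an illustration, a minimal standalone sketch of an eval_f implementation for a hypothetical two-variable problem with objective $$f(x) = (x_0 - 1)^2 + (x_1 - 2)^2$$. The typedefs stand in for Ipopt's Number and Index (normally from IpTypes.hpp), and in the real class this would be a member of InternalTNLP forwarding to fMinimizer's objective function:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Toy objective: f(x) = (x0 - 1)^2 + (x1 - 2)^2.
bool eval_f(Index n, const Number *x, bool /*new_x*/, Number &obj_value)
{
   if (n != 2) return false; // size mismatch: signal an error to Ipopt
   obj_value = (x[0] - 1.0) * (x[0] - 1.0)
             + (x[1] - 2.0) * (x[1] - 2.0);
   return true;
}
```

Since new_x can generally be ignored (IPOPT caches TNLP results internally), the sketch simply recomputes the objective on every call.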

◆ eval_g()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::eval_g ( Index n, const Number * x, bool new_x, Index m, Number * g )
virtual

Return the value of the constraint function at the point $$x$$.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the values for the primal variables, $$x$$, at which the constraint functions, $$g(x)$$, are to be evaluated.
 new_x (in): false if any evaluation method was previously called with the same values in x, true otherwise.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 g (out): the array of constraint function values, $$g(x)$$.

The values returned in g should be only the $$g(x)$$ values; do not add or subtract the bound values $$g^L$$ or $$g^U$$. The boolean variable new_x will be false if the last call to any of the evaluation methods (eval_*) used the same $$x$$ values. This can be helpful when users have efficient implementations that calculate multiple outputs at once. IPOPT internally caches results from the TNLP and generally, this flag can be ignored. The variables n and m are passed in for your convenience; they will have the same values you specified in get_nlp_info.
Returns
true on success, false otherwise.
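A matching sketch of eval_g for the same hypothetical problem, with a single constraint $$g(x) = x_0 + x_1$$ (so m = 1). Note that only the raw $$g(x)$$ value is written; the bounds $$g^L$$ and $$g^U$$ are reported separately in get_bounds_info:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Toy constraint: g(x) = x0 + x1, without applying the bounds g^L or g^U.
bool eval_g(Index n, const Number *x, bool /*new_x*/, Index m, Number *g)
{
   if (n != 2 || m != 1) return false; // sizes must match get_nlp_info
   g[0] = x[0] + x[1];
   return true;
}
```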

◆ eval_grad_f()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::eval_grad_f ( Index n, const Number * x, bool new_x, Number * grad_f )
virtual

Return the gradient of the objective function at the point $$x$$.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the values for the primal variables, $$x$$, at which $$\nabla f(x)$$ is to be evaluated.
 new_x (in): false if any evaluation method was previously called with the same values in x, true otherwise.
 grad_f (out): the array of values for the gradient of the objective function ( $$\nabla f(x)$$).

The gradient array is in the same order as the $$x$$ variables (i.e., the gradient of the objective with respect to x[2] should be put in grad_f[2]). The boolean variable new_x will be false if the last call to any of the evaluation methods (eval_*) used the same $$x$$ values. This can be helpful when users have efficient implementations that calculate multiple outputs at once. IPOPT internally caches results from the TNLP and generally, this flag can be ignored.

The variable n is passed in for your convenience. This variable will have the same value you specified in get_nlp_info.

Returns
true on success, false otherwise.
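A sketch of eval_grad_f for the same hypothetical objective $$f(x) = (x_0 - 1)^2 + (x_1 - 2)^2$$, filling grad_f in the same order as the $$x$$ variables:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Gradient of the toy objective f(x) = (x0 - 1)^2 + (x1 - 2)^2.
bool eval_grad_f(Index n, const Number *x, bool /*new_x*/, Number *grad_f)
{
   if (n != 2) return false;
   grad_f[0] = 2.0 * (x[0] - 1.0); // df/dx0, stored at the x0 position
   grad_f[1] = 2.0 * (x[1] - 2.0); // df/dx1, stored at the x1 position
   return true;
}
```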

◆ eval_h()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::eval_h ( Index n, const Number * x, bool new_x, Number obj_factor, Index m, const Number * lambda, bool new_lambda, Index nele_hess, Index * iRow, Index * jCol, Number * values )
virtual

Return either the sparsity structure of the Hessian of the Lagrangian, or the values of the Hessian of the Lagrangian (9) for the given values for $$x$$, $$\sigma_f$$, and $$\lambda$$.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the values for the primal variables, $$x$$, at which the Hessian is to be evaluated.
 new_x (in): false if any evaluation method was previously called with the same values in x, true otherwise.
 obj_factor (in): factor in front of the objective term in the Hessian, $$\sigma_f$$.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 lambda (in): the values for the constraint multipliers, $$\lambda$$, at which the Hessian is to be evaluated.
 new_lambda (in): false if any evaluation method was previously called with the same values in lambda, true otherwise.
 nele_hess (in): the number of nonzero elements in the Hessian (dimension of iRow, jCol, and values).
 iRow (out): the row indices of entries in the Hessian.
 jCol (out): the column indices of entries in the Hessian.
 values (out): the values of the entries in the Hessian.

The Hessian matrix that IPOPT uses is defined in (9). See Appendix A for a discussion of the sparse symmetric matrix format used in this method.

If the iRow and jCol arguments are not NULL, then IPOPT wants you to fill in the sparsity structure of the Hessian (the row and column indices for the lower or upper triangular part only). In this case, the x, lambda, and values arrays will be NULL.

If the x, lambda, and values arguments are not NULL, then IPOPT wants you to fill in the values of the Hessian as calculated from x and lambda (using the same order as you used when specifying the sparsity structure). At this time, the iRow and jCol arguments will be NULL.

Returns
true on success, false otherwise.
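A sketch of eval_h for the same hypothetical problem, showing the two calling modes. With $$f(x) = (x_0 - 1)^2 + (x_1 - 2)^2$$ and the linear constraint $$g(x) = x_0 + x_1$$, the Hessian of the Lagrangian is $$\sigma_f \cdot \mathrm{diag}(2, 2)$$ (the constraint contributes nothing), so only the two diagonal entries of the lower triangle are reported:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Hessian of the Lagrangian sigma_f * H(f) + lambda[0] * H(g); here
// H(f) = diag(2, 2) and H(g) = 0, so nele_hess = 2 diagonal entries.
bool eval_h(Index n, const Number *x, bool /*new_x*/, Number obj_factor,
            Index m, const Number *lambda, bool /*new_lambda*/, Index nele_hess,
            Index *iRow, Index *jCol, Number *values)
{
   if (n != 2 || m != 1 || nele_hess != 2) return false;
   if (values == nullptr) {
      // First mode: fill in the sparsity structure (lower triangle) only.
      iRow[0] = 0; jCol[0] = 0;
      iRow[1] = 1; jCol[1] = 1;
   } else {
      // Second mode: fill in the numerical values, same order as above.
      values[0] = obj_factor * 2.0;
      values[1] = obj_factor * 2.0;
      (void)x; (void)lambda; // unused here: the constraint Hessian is zero
   }
   return true;
}
```

The two entries here happen to be constant, but in general the values branch must recompute them from x and lambda on every call.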

◆ eval_jac_g()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::eval_jac_g ( Index n, const Number * x, bool new_x, Index m, Index nele_jac, Index * iRow, Index * jCol, Number * values )
virtual

Return either the sparsity structure of the Jacobian of the constraints, or the values for the Jacobian of the constraints at the point $$x$$.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the values for the primal variables, $$x$$, at which the constraint Jacobian, $$\nabla g(x)^T$$, is to be evaluated.
 new_x (in): false if any evaluation method was previously called with the same values in x, true otherwise.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 nele_jac (in): the number of nonzero elements in the Jacobian (dimension of iRow, jCol, and values).
 iRow (out): the row indices of entries in the Jacobian of the constraints.
 jCol (out): the column indices of entries in the Jacobian of the constraints.
 values (out): the values of the entries in the Jacobian of the constraints.

The Jacobian is the matrix of derivatives where the derivative of constraint $$g^{(i)}$$ with respect to variable $$x^{(j)}$$ is placed in row $$i$$ and column $$j$$. See Appendix A for a discussion of the sparse matrix format used in this method.

If the iRow and jCol arguments are not NULL, then IPOPT wants you to fill in the sparsity structure of the Jacobian (the row and column indices only). At this time, the x argument and the values argument will be NULL.

If the x argument and the values argument are not NULL, then IPOPT wants you to fill in the values of the Jacobian as calculated from the array x (using the same order as you used when specifying the sparsity structure). At this time, the iRow and jCol arguments will be NULL.

The boolean variable new_x will be false if the last call to any of the evaluation methods (eval_*) used the same $$x$$ values. This can be helpful when users have efficient implementations that calculate multiple outputs at once. IPOPT internally caches results from the TNLP and generally, this flag can be ignored.

The variables n, m, and nele_jac are passed in for your convenience. These arguments will have the same values you specified in get_nlp_info.

Returns
true on success, false otherwise.
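A sketch of eval_jac_g for the same hypothetical problem. The constraint $$g(x) = x_0 + x_1$$ has the dense 1x2 Jacobian row [1, 1], so nele_jac = 2, and the same structure/values two-mode protocol applies as for eval_h:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Jacobian of the toy constraint g(x) = x0 + x1: the dense row [1, 1].
bool eval_jac_g(Index n, const Number *x, bool /*new_x*/, Index m,
                Index nele_jac, Index *iRow, Index *jCol, Number *values)
{
   if (n != 2 || m != 1 || nele_jac != 2) return false;
   if (values == nullptr) {
      // Structure mode: row/column indices only, C_STYLE (0-based).
      iRow[0] = 0; jCol[0] = 0; // entry for dg0/dx0
      iRow[1] = 0; jCol[1] = 1; // entry for dg0/dx1
   } else {
      // Values mode: same ordering as the structure above.
      values[0] = 1.0; // dg0/dx0
      values[1] = 1.0; // dg0/dx1
      (void)x;         // unused: the constraint is linear
   }
   return true;
}
```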

◆ finalize_solution()

 virtual void ROOT::Math::IpoptMinimizer::InternalTNLP::finalize_solution ( SolverReturn status, Index n, const Number * x, const Number * z_L, const Number * z_U, Index m, const Number * g, const Number * lambda, Number obj_value, const IpoptData * ip_data, IpoptCalculatedQuantities * ip_cq )
virtual

This method is called by IPOPT after the algorithm has finished (successfully or even with most errors).

Parameters
 status (in): gives the status of the algorithm as specified in IpAlgTypes.hpp:
   SUCCESS: Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (which can be specified by options).
   MAXITER_EXCEEDED: Maximum number of iterations exceeded (can be specified by an option).
   CPUTIME_EXCEEDED: Maximum number of CPU seconds exceeded (can be specified by an option).
   STOP_AT_TINY_STEP: Algorithm proceeds with very little progress.
   STOP_AT_ACCEPTABLE_POINT: Algorithm stopped at a point that was converged, not to "desired" tolerances, but to "acceptable" tolerances (see the acceptable-... options).
   LOCAL_INFEASIBILITY: Algorithm converged to a point of local infeasibility. Problem may be infeasible.
   USER_REQUESTED_STOP: The user call-back function intermediate_callback (see Section 3.3.4) returned false, i.e., the user code requested a premature termination of the optimization.
   DIVERGING_ITERATES: It seems that the iterates diverge.
   RESTORATION_FAILURE: Restoration phase failed; the algorithm doesn't know how to proceed.
   ERROR_IN_STEP_COMPUTATION: An unrecoverable error occurred while IPOPT tried to compute the search direction.
   INVALID_NUMBER_DETECTED: Algorithm received an invalid number (such as NaN or Inf) from the NLP; see also the option check_derivatives_for_naninf.
   INTERNAL_ERROR: An unknown internal error occurred. Please contact the IPOPT authors through the mailing list.
 n (in): the number of variables in the problem (dimension of $$x$$).
 x (in): the final values for the primal variables, $$x_*$$.
 z_L (in): the final values for the lower bound multipliers, $$z^L_*$$.
 z_U (in): the final values for the upper bound multipliers, $$z^U_*$$.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 g (in): the final values of the constraint functions, $$g(x_*)$$.
 lambda (in): the final values of the constraint multipliers, $$\lambda_*$$.
 obj_value (in): the final value of the objective, $$f(x_*)$$.
 ip_data (in): provided for expert users.
 ip_cq (in): provided for expert users.

This method gives you the return status of the algorithm (SolverReturn) and the values of the variables, the objective, and the constraint function values when the algorithm exited.
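A sketch of what a finalize_solution override typically does: copy the final iterate out of IPOPT's buffers, since those pointers are not guaranteed to remain valid after the call returns. The Result struct, gResult storage, and the trimmed SolverReturn enum are illustrative stand-ins (the real signature also takes the ip_data and ip_cq expert-user pointers, omitted here for brevity):

```cpp
// Stand-ins for Ipopt's typedefs and status enum (assumption: real code
// includes IpTypes.hpp and IpAlgTypes.hpp; most status codes elided).
typedef double Number;
typedef int    Index;
enum SolverReturn { SUCCESS = 0, MAXITER_EXCEEDED = 1 };

// Hypothetical storage for the final point reported by IPOPT.
struct Result { SolverReturn status; Number x[2]; Number obj; };
static Result gResult;

// Copy the final primal values and objective; IPOPT's arrays are only
// valid for the duration of this call.
void finalize_solution(SolverReturn status, Index n, const Number *x,
                       const Number * /*z_L*/, const Number * /*z_U*/,
                       Index m, const Number * /*g*/, const Number * /*lambda*/,
                       Number obj_value)
{
   gResult.status = status;
   for (Index i = 0; i < n && i < 2; ++i)
      gResult.x[i] = x[i];
   gResult.obj = obj_value;
   (void)m;
}
```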

◆ get_bounds_info()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::get_bounds_info ( Index n, Number * x_l, Number * x_u, Index m, Number * g_l, Number * g_u )
virtual

Give IPOPT the value of the bounds on the variables and constraints.

The values of n and m that you specified in get_nlp_info are passed to you for debug checking. Setting a lower bound to a value less than or equal to the value of the option nlp_lower_bound_inf will cause IPOPT to assume no lower bound. Likewise, specifying the upper bound above or equal to the value of the option nlp_upper_bound_inf will cause IPOPT to assume no upper bound. These options, nlp_lower_bound_inf and nlp_upper_bound_inf, are set to $$-10^{19}$$ and $$10^{19}$$, respectively, by default.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 x_l (out): the lower bounds $$x^L$$ for $$x$$.
 x_u (out): the upper bounds $$x^U$$ for $$x$$.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 g_l (out): the lower bounds $$g^L$$ for $$g(x)$$.
 g_u (out): the upper bounds $$g^U$$ for $$g(x)$$.
Returns
true on success, false otherwise.
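A sketch of get_bounds_info for the same hypothetical problem: box bounds $$-5 \le x_i \le 5$$ on both variables and $$1 \le x_0 + x_1 \le 3$$ on the single constraint. An unbounded side would instead be set below nlp_lower_bound_inf (or above nlp_upper_bound_inf), e.g. -2e19:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Toy bounds: -5 <= x0, x1 <= 5 and 1 <= g(x) = x0 + x1 <= 3.
bool get_bounds_info(Index n, Number *x_l, Number *x_u,
                     Index m, Number *g_l, Number *g_u)
{
   if (n != 2 || m != 1) return false; // debug check against get_nlp_info
   for (Index i = 0; i < n; ++i) {
      x_l[i] = -5.0;
      x_u[i] =  5.0;
   }
   g_l[0] = 1.0;
   g_u[0] = 3.0;
   return true;
}
```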

◆ get_nlp_info()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::get_nlp_info ( Index & n, Index & m, Index & nnz_jac_g, Index & nnz_h_lag, IndexStyleEnum & index_style )
virtual

Give IPOPT the information about the size of the problem (and hence, the size of the arrays that it needs to allocate).

Parameters
 n (out): the number of variables in the problem (dimension of $$x$$).
 m (out): the number of constraints in the problem (dimension of $$g(x)$$).
 nnz_jac_g (out): the number of nonzero entries in the Jacobian.
 nnz_h_lag (out): the number of nonzero entries in the Hessian.
 index_style (out): the numbering style used for row/col entries in the sparse matrix format (C_STYLE: 0-based, FORTRAN_STYLE: 1-based). The default is C_STYLE.
Returns
true on success, false otherwise.
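A sketch of get_nlp_info for the same hypothetical two-variable, one-constraint problem. The nonzero counts must match what eval_jac_g and eval_h later report; the local IndexStyleEnum stands in for TNLP's:

```cpp
// Stand-ins for Ipopt's typedefs and TNLP::IndexStyleEnum.
typedef double Number;
typedef int    Index;
enum IndexStyleEnum { C_STYLE = 0, FORTRAN_STYLE = 1 };

// Problem sizes for the toy problem; IPOPT allocates its arrays from these.
bool get_nlp_info(Index &n, Index &m, Index &nnz_jac_g,
                  Index &nnz_h_lag, IndexStyleEnum &index_style)
{
   n = 2;                 // two primal variables
   m = 1;                 // one constraint, g(x) = x0 + x1
   nnz_jac_g = 2;         // the dense 1x2 Jacobian row [1, 1]
   nnz_h_lag = 2;         // the two diagonal entries of the Hessian
   index_style = C_STYLE; // 0-based row/col indices everywhere
   return true;
}
```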

◆ get_starting_point()

 virtual bool ROOT::Math::IpoptMinimizer::InternalTNLP::get_starting_point ( Index n, bool init_x, Number * x, bool init_z, Number * z_L, Number * z_U, Index m, bool init_lambda, Number * lambda )
virtual

Give IPOPT the starting point before it begins iterating.

The variables n and m are passed in for your convenience. These variables will have the same values you specified in get_nlp_info. Depending on the options that have been set, IPOPT may or may not require bounds for the primal variables $$x$$, the bound multipliers $$z^L$$ and $$z^U$$, and the constraint multipliers $$\lambda$$. The boolean flags init_x, init_z, and init_lambda tell you whether or not you should provide initial values for $$x$$, $$z^L$$, $$z^U$$, or $$\lambda$$ respectively. The default options only require an initial value for the primal variables $$x$$. Note that the initial values for bound multiplier components for "infinity" bounds ( $$x_L^{(i)}=-\infty$$ or $$x_U^{(i)}=\infty$$) are ignored.

Parameters
 n (in): the number of variables in the problem (dimension of $$x$$).
 init_x (in): if true, this method must provide an initial value for $$x$$.
 x (out): the initial values for the primal variables, $$x$$.
 init_z (in): if true, this method must provide an initial value for the bound multipliers $$z^L$$ and $$z^U$$.
 z_L (out): the initial values for the bound multipliers, $$z^L$$.
 z_U (out): the initial values for the bound multipliers, $$z^U$$.
 m (in): the number of constraints in the problem (dimension of $$g(x)$$).
 init_lambda (in): if true, this method must provide an initial value for the constraint multipliers, $$\lambda$$.
 lambda (out): the initial values for the constraint multipliers, $$\lambda$$.
Returns
true on success, false otherwise.
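A sketch of get_starting_point for the same hypothetical problem, supplying only a primal starting point (the default options require nothing else) and refusing requests for multiplier initial values it cannot provide:

```cpp
// Stand-ins for Ipopt's typedefs (assumption: real code includes IpTypes.hpp).
typedef double Number;
typedef int    Index;

// Provide x = (0, 0) as the starting point; under the default options
// init_z and init_lambda are false, so only x is filled in.
bool get_starting_point(Index n, bool init_x, Number *x,
                        bool init_z, Number * /*z_L*/, Number * /*z_U*/,
                        Index m, bool init_lambda, Number * /*lambda*/)
{
   if (n != 2 || m != 1) return false;    // debug check against get_nlp_info
   if (init_z || init_lambda) return false; // we only supply primal values
   if (init_x) {
      x[0] = 0.0;
      x[1] = 0.0;
   }
   return true;
}
```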

◆ operator=()

 InternalTNLP& ROOT::Math::IpoptMinimizer::InternalTNLP::operator= ( const InternalTNLP & )
private

◆ IpoptMinimizer

 friend class IpoptMinimizer
friend

Definition at line 100 of file IpoptMinimizer.h.

◆ fMinimizer

 IpoptMinimizer* ROOT::Math::IpoptMinimizer::InternalTNLP::fMinimizer
private

Definition at line 101 of file IpoptMinimizer.h.

◆ fNNZerosHessian

 UInt_t ROOT::Math::IpoptMinimizer::InternalTNLP::fNNZerosHessian
private

Definition at line 103 of file IpoptMinimizer.h.

◆ fNNZerosJacobian

 UInt_t ROOT::Math::IpoptMinimizer::InternalTNLP::fNNZerosJacobian
private

Definition at line 102 of file IpoptMinimizer.h.

◆ nlp_lower_bound_inf

 Number ROOT::Math::IpoptMinimizer::InternalTNLP::nlp_lower_bound_inf
private

Definition at line 104 of file IpoptMinimizer.h.

◆ nlp_upper_bound_inf

 Number ROOT::Math::IpoptMinimizer::InternalTNLP::nlp_upper_bound_inf
private

Definition at line 105 of file IpoptMinimizer.h.


The documentation for this class was generated from the following file: IpoptMinimizer.h