# The solver¶

## Problem statement¶

OSQP solves convex quadratic programs (QPs) of the form

\[\begin{aligned}
  \mbox{minimize} \quad & \tfrac{1}{2} x^{T} P x + q^{T} x \\
  \mbox{subject to} \quad & l \leq A x \leq u
\end{aligned}\]

where \(x\in\mathbf{R}^{n}\) is the optimization variable. The objective function is defined by a positive semidefinite matrix \(P \in \mathbf{S}^{n}_{+}\) and a vector \(q\in \mathbf{R}^{n}\). The linear constraints are defined by a matrix \(A\in\mathbf{R}^{m \times n}\) and vectors \(l\) and \(u\) such that \(l_i \in \mathbf{R} \cup \{-\infty\}\) and \(u_i \in \mathbf{R} \cup \{+\infty\}\) for all \(i \in \{1,\ldots,m\}\).
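As a concrete illustration, here is a tiny instance of this problem form in NumPy, with made-up data, together with helpers that evaluate the objective and check feasibility (these helpers are ours, not part of OSQP's API):

```python
import numpy as np

# A tiny, made-up instance of the QP above: two variables, three constraints.
P = np.array([[4.0, 1.0], [1.0, 2.0]])      # P in S^2_+
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])               # use -np.inf for an absent lower bound
u = np.array([1.0, 0.7, 0.7])               # use  np.inf for an absent upper bound

def objective(x):
    return 0.5 * x @ P @ x + q @ x          # (1/2) x^T P x + q^T x

def is_feasible(x, tol=1e-9):
    """Check l <= A x <= u elementwise, up to a small tolerance."""
    Ax = A @ x
    return bool(np.all(Ax >= l - tol) and np.all(Ax <= u + tol))
```

Note that the first constraint has \(l_1 = u_1\), i.e., it is an equality constraint: the interval form \(l \leq Ax \leq u\) covers equalities and one-sided inequalities uniformly.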

## Algorithm¶

The solver runs the following ADMM algorithm (for more details see the related papers in the Citing OSQP section):

\[\begin{aligned}
  (\tilde{x}^{k+1}, \nu^{k+1}) &\gets \text{solve linear system} \\
  \tilde{z}^{k+1} &\gets z^{k} + \rho^{-1}(\nu^{k+1} - y^{k}) \\
  x^{k+1} &\gets \alpha \tilde{x}^{k+1} + (1-\alpha) x^{k} \\
  z^{k+1} &\gets \Pi\left(\alpha \tilde{z}^{k+1} + (1-\alpha) z^{k} + \rho^{-1} y^{k}\right) \\
  y^{k+1} &\gets y^{k} + \rho \left(\alpha \tilde{z}^{k+1} + (1-\alpha) z^{k} - z^{k+1}\right)
\end{aligned}\]

where \(\Pi\) is the projection onto the hyperbox \([l,u]\), \(\rho > 0\) is the ADMM step-size, and \(\alpha \in (0,2)\) is the relaxation parameter.
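The iteration can be sketched in plain NumPy. This is a pedagogical re-implementation on a made-up problem with fixed, illustrative values of \(\rho\), \(\sigma\), and \(\alpha\), not OSQP's optimized C implementation:

```python
import numpy as np

# Illustrative problem data (same toy QP shapes as in the problem statement).
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
n, m = 2, 3
rho, sigma, alpha = 1.0, 1e-6, 1.6          # fixed here; OSQP adapts rho

# Quasi-definite KKT matrix of the direct method (see the next subsection);
# it stays constant across iterations, so a real implementation factors it once.
KKT = np.block([[P + sigma * np.eye(n), A.T],
                [A, -(1.0 / rho) * np.eye(m)]])

x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
for _ in range(10000):
    rhs = np.concatenate([sigma * x - q, z - y / rho])
    sol = np.linalg.solve(KKT, rhs)
    x_tilde, nu = sol[:n], sol[n:]
    z_tilde = z + (nu - y) / rho
    z_prev = z
    x = alpha * x_tilde + (1 - alpha) * x
    # projection onto [l, u] is just an elementwise clip
    z = np.clip(alpha * z_tilde + (1 - alpha) * z_prev + y / rho, l, u)
    y = y + rho * (alpha * z_tilde + (1 - alpha) * z_prev - z)

r_prim = np.linalg.norm(A @ x - z, np.inf)
r_dual = np.linalg.norm(P @ x + q + A.T @ y, np.inf)
```

On this toy problem the iterates approach the optimizer \(x^\star = (0.3, 0.7)\) with both residuals driven toward zero.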

### Linear system solution¶

The linear system solution is the core part of the algorithm.
It can be done using a **direct** or **indirect** method.

With a **direct linear system solver** we solve the following linear system with a quasi-definite matrix

\[\begin{bmatrix} P + \sigma I & A^{T} \\ A & -\rho^{-1} I \end{bmatrix} \begin{bmatrix} \tilde{x}^{k+1} \\ \nu^{k+1} \end{bmatrix} = \begin{bmatrix} \sigma x^{k} - q \\ z^{k} - \rho^{-1} y^{k} \end{bmatrix}\]

where \(\sigma > 0\) is a regularization parameter. Since the matrix does not change across iterations, it can be factored once and the factorization reused.

With an **indirect linear system solver** we solve the following linear system with a positive definite matrix, obtained by eliminating \(\nu^{k+1}\) from the system above,

\[\left(P + \sigma I + \rho A^{T} A\right) \tilde{x}^{k+1} = \sigma x^{k} - q + A^{T}\left(\rho z^{k} - y^{k}\right)\]

OSQP core is designed to support different linear system solvers; see the documentation sections on installing them and on selecting your favorite one.
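To see that the two formulations are interchangeable, the following sketch builds both systems on random, purely illustrative data and recovers the same \(\tilde{x}^{k+1}\):

```python
import numpy as np

# Illustrative data; rho and sigma are the ADMM parameters, (x, z, y) an iterate.
rng = np.random.default_rng(0)
n, m = 4, 6
M = rng.standard_normal((n, n))
P = M @ M.T                                  # positive semidefinite
A = rng.standard_normal((m, n))
q, x = rng.standard_normal(n), rng.standard_normal(n)
z, y = rng.standard_normal(m), rng.standard_normal(m)
rho, sigma = 0.1, 1e-6

# Direct method: one (n+m) x (n+m) quasi-definite solve.
KKT = np.block([[P + sigma * np.eye(n), A.T],
                [A, -(1.0 / rho) * np.eye(m)]])
sol = np.linalg.solve(KKT, np.concatenate([sigma * x - q, z - y / rho]))
x_direct = sol[:n]

# Indirect method: eliminate nu to get an n x n positive definite system,
# usually solved with conjugate gradients; here solved exactly for comparison.
H = P + sigma * np.eye(n) + rho * (A.T @ A)
x_indirect = np.linalg.solve(H, sigma * x - q + A.T @ (rho * z - y))
```

The trade-off: the direct method pays for one factorization up front and then performs cheap solves, while the indirect method avoids factorization entirely at the cost of an iterative solve per step.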

### Convergence¶

At each iteration \(k\) OSQP produces a tuple \((x^{k}, z^{k}, y^{k})\) with \(x^{k} \in \mathbf{R}^{n}\) and \(z^{k}, y^{k} \in \mathbf{R}^{m}\).

The primal and dual residuals associated with \((x^{k}, z^{k}, y^{k})\) are

\[\begin{aligned}
  r_{\rm prim}^{k} &= A x^{k} - z^{k} \\
  r_{\rm dual}^{k} &= P x^{k} + q + A^{T} y^{k}
\end{aligned}\]

Complementary slackness is satisfied by construction at machine precision. If the problem is feasible, the residuals converge to zero as \(k\to\infty\). The algorithm stops when the norms of \(r_{\rm prim}^{k}\) and \(r_{\rm dual}^{k}\) are within the specified tolerance levels \(\epsilon_{\rm prim}>0\) and \(\epsilon_{\rm dual}>0\), *i.e.*,

\[\|r_{\rm prim}^{k}\|_{\infty} \leq \epsilon_{\rm prim}, \qquad \|r_{\rm dual}^{k}\|_{\infty} \leq \epsilon_{\rm dual}.\]

We set the tolerance levels as

\[\begin{aligned}
  \epsilon_{\rm prim} &= \epsilon_{\rm abs} + \epsilon_{\rm rel} \max\{\|A x^{k}\|_{\infty}, \|z^{k}\|_{\infty}\} \\
  \epsilon_{\rm dual} &= \epsilon_{\rm abs} + \epsilon_{\rm rel} \max\{\|P x^{k}\|_{\infty}, \|A^{T} y^{k}\|_{\infty}, \|q\|_{\infty}\}
\end{aligned}\]

where \(\epsilon_{\rm abs}\) and \(\epsilon_{\rm rel}\) are the absolute and relative tolerance settings.
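The residuals and tolerance levels translate directly into a termination check; here is a sketch (the helper name is ours, not OSQP's):

```python
import numpy as np

def check_termination(P, q, A, x, z, y, eps_abs=1e-3, eps_rel=1e-3):
    """Sketch of the stopping criterion: compute the primal/dual residuals
    and tolerance levels for the iterate (x, z, y), and report whether both
    residual norms are within tolerance."""
    r_prim = np.linalg.norm(A @ x - z, np.inf)
    r_dual = np.linalg.norm(P @ x + q + A.T @ y, np.inf)
    eps_prim = eps_abs + eps_rel * max(np.linalg.norm(A @ x, np.inf),
                                       np.linalg.norm(z, np.inf))
    eps_dual = eps_abs + eps_rel * max(np.linalg.norm(P @ x, np.inf),
                                       np.linalg.norm(A.T @ y, np.inf),
                                       np.linalg.norm(q, np.inf))
    return bool(r_prim <= eps_prim and r_dual <= eps_dual), r_prim, r_dual
```

The relative term makes the criterion scale-invariant: multiplying the problem data by a constant rescales residuals and tolerance levels together.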

### \(\rho\) step-size¶

To ensure quick convergence of the algorithm we adapt \(\rho\) by balancing the residuals.
In default mode, the interval (*i.e.*, number of iterations) at which we update \(\rho\) is defined by a time measurement: when the cumulative iteration time exceeds a certain fraction of the setup time, *i.e.*, `adaptive_rho_fraction`, we set the current number of iterations as the interval at which to update \(\rho\).
The update happens as follows

\[\rho^{k+1} \gets \rho^{k} \sqrt{\frac{\|r_{\rm prim}^{k}\|_{\infty} / \max\{\|A x^{k}\|_{\infty}, \|z^{k}\|_{\infty}\}}{\|r_{\rm dual}^{k}\|_{\infty} / \max\{\|P x^{k}\|_{\infty}, \|A^{T} y^{k}\|_{\infty}, \|q\|_{\infty}\}}}\]

Note that \(\rho\) is updated only if the new value is sufficiently different from the current one; in particular, only if it is `adaptive_rho_tolerance` times larger or smaller than the current value.
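A sketch of this balancing rule (the helper name and the residual-scaling arguments are ours, not OSQP's API):

```python
import numpy as np

def adapt_rho(rho, r_prim, r_dual, prim_scale, dual_scale,
              adaptive_rho_tolerance=5.0):
    """Balance the scaled residuals, but keep the current rho unless the
    candidate differs by more than adaptive_rho_tolerance in either direction.

    prim_scale / dual_scale are the denominators used in the relative
    tolerances, e.g. max(||Ax||_inf, ||z||_inf) for the primal residual."""
    scaled_prim = r_prim / max(prim_scale, 1e-10)   # guard against division by 0
    scaled_dual = r_dual / max(dual_scale, 1e-10)
    rho_new = rho * np.sqrt(scaled_prim / scaled_dual)
    if rho_new > rho * adaptive_rho_tolerance or rho_new < rho / adaptive_rho_tolerance:
        return rho_new
    return rho
```

Intuitively, a large primal residual relative to the dual one means the constraints are enforced too weakly, so \(\rho\) is increased, and vice versa. Skipping small changes matters because a direct solver must re-factor the KKT matrix whenever \(\rho\) changes.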

## Infeasible problems¶

OSQP is able to detect whether the problem is primal or dual infeasible.

### Primal infeasibility¶

When the problem is primal infeasible, the algorithm generates a certificate of infeasibility, *i.e.*, a vector \(v\in\mathbf{R}^{m}\) such that

\[A^{T} v = 0, \qquad u^{T} v_{+} + l^{T} v_{-} < 0,\]

where \(v_+=\max(v,0)\) and \(v_-=\min(v,0)\).

The primal infeasibility check is then performed on \(\delta y^{k}\), the difference between two consecutive dual iterates:

\[\|A^{T} \delta y^{k}\|_{\infty} \leq \epsilon_{\rm pinf} \|\delta y^{k}\|_{\infty}, \qquad u^{T} (\delta y^{k})_{+} + l^{T} (\delta y^{k})_{-} \leq \epsilon_{\rm pinf} \|\delta y^{k}\|_{\infty}.\]
### Dual infeasibility¶

When the problem is dual infeasible, OSQP generates a vector \(s\in\mathbf{R}^{n}\) being a certificate of dual infeasibility, *i.e.*,

\[P s = 0, \qquad q^{T} s < 0, \qquad (A s)_{i} \begin{cases} = 0 & \text{if } l_{i}, u_{i} \in \mathbf{R} \\ \geq 0 & \text{if } u_{i} = +\infty \\ \leq 0 & \text{if } l_{i} = -\infty. \end{cases}\]

The dual infeasibility check is then performed on \(\delta x^{k}\), the difference between two consecutive primal iterates, by verifying the conditions above up to the tolerance \(\epsilon_{\rm dinf} \|\delta x^{k}\|_{\infty}\).
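A sketch of this check (helper name and the exact tolerance conventions are ours; the unbounded toy problem is made up):

```python
import numpy as np

def certifies_dual_infeasibility(P, q, A, l, u, dx, eps_dinf=1e-4):
    """Check whether dx (in the solver, the change in x between iterations)
    certifies dual infeasibility up to tolerance eps_dinf, i.e. whether dx
    is (approximately) a direction of unbounded descent."""
    norm_dx = np.linalg.norm(dx, np.inf)
    if norm_dx == 0.0:
        return False
    tol = eps_dinf * norm_dx
    if np.linalg.norm(P @ dx, np.inf) > tol or q @ dx > tol:
        return False
    for Adx_i, l_i, u_i in zip(A @ dx, l, u):
        if np.isinf(l_i) and np.isinf(u_i):
            continue                      # row is unconstrained
        if np.isinf(u_i):
            ok = Adx_i >= -tol            # only a lower bound: ray may increase
        elif np.isinf(l_i):
            ok = Adx_i <= tol             # only an upper bound: ray may decrease
        else:
            ok = abs(Adx_i) <= tol        # both bounds finite: ray must stay put
        if not ok:
            return False
    return True

# minimize x subject to x <= 0 is unbounded below: dx = (-1,) certifies.
P = np.zeros((1, 1))
q = np.array([1.0])
A = np.array([[1.0]])
l, u = np.array([-np.inf]), np.array([0.0])
```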

### Polishing¶

Polishing is an additional algorithm step where OSQP tries to compute a high-accuracy solution.
We can enable it by setting `polish` to 1.

Polishing works by guessing the active constraints at the optimum and solving an additional linear system.
If the guess is correct, OSQP returns a high accuracy solution.
Otherwise OSQP returns the ADMM solution.
The status of the polishing phase appears in the information field `status_polish`.

Note that polishing requires the solution of an additional linear system and thereby an additional factorization if the linear system solver is direct. However, this linear system is usually much smaller than the one solved during the ADMM iterations.
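The polishing step can be sketched as follows. Here the active constraints are guessed from the signs of the dual iterate \(y\) (negative: active at the lower bound, positive: active at the upper bound), and a reduced equality-constrained KKT system is solved; OSQP's actual implementation additionally regularizes this system and applies iterative refinement. All data, including the medium-accuracy \(y\), are illustrative:

```python
import numpy as np

# Toy QP data (illustrative).
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])
y = np.array([-2.9001, 0.0, 0.2002])     # ADMM dual iterate, roughly converged

lower, upper = y < 0, y > 0              # guessed active sets
A_act = np.vstack([A[lower], A[upper]])  # rows of A guessed active...
b_act = np.concatenate([l[lower], u[upper]])  # ...at these bound values

# Solve the equality-constrained QP restricted to the guessed active set.
n, k = P.shape[0], A_act.shape[0]
KKT = np.block([[P, A_act.T],
                [A_act, np.zeros((k, k))]])
sol = np.linalg.solve(KKT, np.concatenate([-q, b_act]))
x_polished = sol[:n]                     # high-accuracy primal solution
```

If the guess is correct, solving this reduced system pins the solution down to linear-solver accuracy, which is why polishing can return a much more accurate answer than the ADMM iterates alone.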

The chances of a successful polishing increase if the tolerances `eps_abs` and `eps_rel` are small.
However, low tolerances might require a very large number of iterations.