# Introduction

Presolving conventionally means the quick elimination of some variables and constraints prior to numerical solution of an optimization problem. Presented with constraints $a^{\mathrm T}x=0\,,~x\succeq0$, for example, a presolver is likely to check whether the constant vector $a$ is entrywise positive; for if so, variable $x$ can take only the trivial solution $x=0$. The effect of such tests is to reduce the problem dimensions.
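The positivity test above can be sketched in a few lines. This is a minimal illustration, not any particular solver's implementation; the function name and tolerance are assumptions.

```python
def presolve_positive_row(a, tol=0.0):
    """Hypothetical presolve test: if every entry of a is strictly
    positive, then a^T x = 0 together with x >= 0 forces x = 0,
    so every variable in this constraint can be fixed and removed."""
    if all(ai > tol for ai in a):
        return [0.0] * len(a)   # the unique feasible x
    return None                 # test inconclusive; no reduction

print(presolve_positive_row([1.0, 2.0, 3.0]))   # → [0.0, 0.0, 0.0]
print(presolve_positive_row([1.0, -1.0, 2.0]))  # → None (no conclusion)
```

When the test succeeds, the presolver deletes the constraint and fixes those variables at zero, shrinking both row and column dimensions of the problem.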

Most commercial optimization solvers incorporate presolving. Particular reductions may be proprietary or invisible, though some control over their selection may be given to the user. But all presolvers share the same motivation: to make an optimization problem smaller and (ideally) easier to solve. There is profit potential because a solver can then compete more effectively in the marketplace for large-scale problems.

We present a method for reducing variable dimension based upon the geometry of constraints in the problem statement:

$\begin{array}{rl} \mbox{minimize}_{x\in\mathbb{R}^n} & f(x)\\ \mbox{subject to} & Ax=b\\ & x\succeq0\\ & x_j\in\mathbb{Z}\,,\qquad j\in\mathcal{J} \end{array}$

where $A$ is a matrix of predetermined dimension, $\mathbb{Z}$ represents the integers, $\mathbb{R}$ the real numbers, and $\mathcal{J}$ is a possibly empty index set.
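A tiny concrete instance of this problem class may help fix ideas. Here $f$ is linear, all indices are integer-constrained ($\mathcal{J}=\{1,2,3\}$), and the data $c$, $A$, $b$ are made-up values; the brute-force search is only a sketch for illustration, not a method anyone would use at scale.

```python
import itertools

# Hypothetical data: minimize c^T x  subject to  A x = b,  x >= 0,  x integer.
c = [1.0, 1.0, 2.0]
A = [[1.0, 1.0, 1.0],
     [2.0, 0.0, 1.0]]
b = [4.0, 5.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

best, best_val = None, float("inf")
# Enumerate nonnegative integer vectors in a small box; A x = b
# bounds each coordinate by max(b) here, so range(6) suffices.
for x in itertools.product(range(6), repeat=3):
    if all(abs(dot(row, x) - bi) < 1e-9 for row, bi in zip(A, b)):
        val = dot(c, x)
        if val < best_val:
            best, best_val = list(x), val

print(best, best_val)  # → [2, 1, 1] 5.0
```

Even on this toy instance the search space grows exponentially with $n$, which is precisely why dimension-reducing presolve steps matter for large problems.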

The caveat to our proposed presolving method is that it is not fast. One would incorporate it only when a problem is otherwise too big to solve; that is, when solver software chronically exits with an error or hangs.